Knight of the round table

          wansong

How To Set Up A Loadbalanced High-Availability Apache Cluster
2007-8-31
Version 1.0
Falko Timme <ft [at] falkotimme [dot] com>
2006-4-26

This tutorial shows how to set up a two-node Apache web server cluster that provides high availability. In front of the Apache cluster we create a load balancer that splits up incoming requests between the two Apache nodes. Because we do not want the load balancer to become another "Single Point of Failure", we must provide high availability for the load balancer, too. Therefore our load balancer will in fact consist of two load balancer nodes that monitor each other using heartbeat, and if one load balancer fails, the other takes over silently.
           
          The advantage of using a load balancer compared to using round robin DNS is that it takes care of the load on the web server nodes and tries to direct requests to the node with less load, and it also takes care of connections/sessions. Many web applications (e.g. forum software, shopping carts, etc.) make use of sessions, and if you are in a session on Apache node 1, you would lose that session if suddenly node 2 served your requests. In addition to that, if one of the Apache nodes goes down, the load balancer realizes that and directs all incoming requests to the remaining node which would not be possible with round robin DNS.
           
          For this setup, we need four nodes (two Apache nodes and two load balancer nodes) and five IP addresses: one for each node and one virtual IP address that will be shared by the load balancer nodes and used for incoming HTTP requests.
           
          I will use the following setup here:
          ·      Apache node 1: webserver1.example.com (webserver1) - IP address: 192.168.0.101; Apache document root: /var/www
          ·      Apache node 2: webserver2.example.com (webserver2) - IP address: 192.168.0.102; Apache document root: /var/www
          ·      Load Balancer node 1: loadb1.example.com (loadb1) - IP address: 192.168.0.103
          ·      Load Balancer node 2: loadb2.example.com (loadb2) - IP address: 192.168.0.104
          ·      Virtual IP Address: 192.168.0.105 (used for incoming requests)
           
Have a look at the drawing on [url]http://www.linuxvirtualserver.org/docs/ha/ultramonkey.html[/url] to understand what this setup looks like.
           
          In this tutorial I will use Debian Sarge for all four nodes. I assume that you have installed a basic Debian installation on all four nodes, and that you have installed Apache on webserver1 and webserver2, with /var/www being the document root of the main web site.
           
          I want to say first that this is not the only way of setting up such a system. There are many ways of achieving this goal but this is the way I take. I do not issue any guarantee that this will work for you!
          1 Enable IPVS On The Load Balancers                           
First we must enable IPVS on our load balancers. IPVS (IP Virtual Server) implements transport-layer load balancing inside the Linux kernel, so-called Layer-4 switching.
           
          loadb1/loadb2:
          echo ip_vs_dh >> /etc/modules
          echo ip_vs_ftp >> /etc/modules
          echo ip_vs >> /etc/modules
          echo ip_vs_lblc >> /etc/modules
          echo ip_vs_lblcr >> /etc/modules
          echo ip_vs_lc >> /etc/modules
          echo ip_vs_nq >> /etc/modules
          echo ip_vs_rr >> /etc/modules
          echo ip_vs_sed >> /etc/modules
          echo ip_vs_sh >> /etc/modules
          echo ip_vs_wlc >> /etc/modules
          echo ip_vs_wrr >> /etc/modules
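The twelve echo lines above can equivalently be written as a loop. A sketch (using a temporary file here so it can be tried harmlessly; on loadb1/loadb2 the target would be /etc/modules itself):

```shell
# Sketch: same effect as the echo lines above, written as a loop.
# modfile is a temporary stand-in; on the load balancers it would be /etc/modules.
modfile=$(mktemp)
for mod in ip_vs_dh ip_vs_ftp ip_vs ip_vs_lblc ip_vs_lblcr ip_vs_lc \
           ip_vs_nq ip_vs_rr ip_vs_sed ip_vs_sh ip_vs_wlc ip_vs_wrr; do
    echo "$mod"
done >> "$modfile"
wc -l < "$modfile"    # 12 module names queued for loading at boot
```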
          Then we do this:
          loadb1/loadb2:
          modprobe ip_vs_dh
          modprobe ip_vs_ftp
          modprobe ip_vs
          modprobe ip_vs_lblc
          modprobe ip_vs_lblcr
          modprobe ip_vs_lc
          modprobe ip_vs_nq
          modprobe ip_vs_rr
          modprobe ip_vs_sed
          modprobe ip_vs_sh
          modprobe ip_vs_wlc
          modprobe ip_vs_wrr
          If you get errors, then most probably your kernel wasn't compiled with IPVS support, and you need to compile a new kernel with IPVS support (or install a kernel image with IPVS support) now.
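A quick way to see whether the modules actually loaded (a sketch; /proc/net/ip_vs only exists once the ip_vs module is in the kernel):

```shell
# Sketch: /proc/net/ip_vs appears only when the ip_vs module is loaded,
# so its presence is a simple yes/no test for IPVS support.
if [ -e /proc/net/ip_vs ]; then
    ipvs_status="loaded"
else
    ipvs_status="missing"
fi
echo "IPVS: $ipvs_status"
```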
2 Install Ultra Monkey On The Load Balancers
          Ultra Monkey is a project to create load balanced and highly available services on a local area network using Open Source components on the Linux operating system; the Ultra Monkey package provides heartbeat (used by the two load balancers to monitor each other and check if the other node is still alive) and ldirectord, the actual load balancer.
           
          To install Ultra Monkey, we must edit /etc/apt/sources.list now and add these two lines (don't remove the other repositories):
           
          loadb1/loadb2:
          vi /etc/apt/sources.list
          deb [url]http://www.ultramonkey.org/download/3/[/url] sarge main
          deb-src [url]http://www.ultramonkey.org/download/3[/url] sarge main
           
          Afterwards we do this:
           
          loadb1/loadb2:
          apt-get update
          and install Ultra Monkey:
           
          loadb1/loadb2:
          apt-get install ultramonkey
          If you see this warning:
           libsensors3 not functional                                   
                                                                  
           It appears that your kernel is not compiled with sensors support. As a
           result, libsensors3 will not be functional on your system.         
                                                                  
           If you want to enable it, have a look at "I2C Hardware Sensors Chip 
           support" in your kernel configuration.                        
           
          you can ignore it.
During the Ultra Monkey installation you will be asked a few questions. Answer as follows:
           Do you want to automatically load IPVS rules on boot?
          <-- No
          Select a daemon method.
          <-- none
          3 Enable Packet Forwarding On The Load Balancers 
          The load balancers must be able to route traffic to the Apache nodes. Therefore we must enable packet forwarding on the load balancers. Add the following lines to /etc/sysctl.conf:
          loadb1/loadb2:
          vi /etc/sysctl.conf
          # Enables packet forwarding
          net.ipv4.ip_forward = 1
          Then do this:
          loadb1/loadb2:
          sysctl -p
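To confirm the setting took effect, read the value back from /proc (a sketch; on the load balancers it should print 1 after sysctl -p):

```shell
# Sketch: the kernel mirrors net.ipv4.ip_forward here; 1 means
# forwarding is on, 0 means the sysctl has not been applied yet.
ip_forward=$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null || echo "unknown")
echo "net.ipv4.ip_forward = $ip_forward"
```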
4 Configure heartbeat And ldirectord
Now we have to create three configuration files for heartbeat. They must be identical on loadb1 and loadb2!
          loadb1/loadb2:
          vi /etc/ha.d/ha.cf
          logfacility        local0
          bcast        eth0                # Linux
          mcast eth0 225.0.0.1 694 1 0
          auto_failback off
          node        loadb1
          node        loadb2
          respawn hacluster /usr/lib/heartbeat/ipfail
          apiauth ipfail gid=haclient uid=hacluster
Important: As node names we must use the output of
          uname -n
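heartbeat silently ignores nodes whose names do not match, so it is worth checking on both machines before writing ha.cf. A small sketch:

```shell
# Sketch: the node lines in ha.cf must match `uname -n` exactly.
nodename=$(uname -n)
echo "ha.cf entry for this machine: node $nodename"
```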
          loadb1/loadb2:
          vi /etc/ha.d/haresources
          loadb1        \
                  ldirectord::ldirectord.cf \
                  LVSSyncDaemonSwap::master \
                  IPaddr2::192.168.0.105/24/eth0/192.168.0.255
          The first word is the output of
          uname -n
          on loadb1, no matter if you create the file on loadb1 or loadb2! After IPaddr2 we put our virtual IP address 192.168.0.105.
          loadb1/loadb2:
          vi /etc/ha.d/authkeys
          auth 3
          3 md5 somerandomstring
          somerandomstring is a password which the two heartbeat daemons on loadb1 and loadb2 use to authenticate against each other. Use your own string here. You have the choice between three authentication mechanisms. I use md5 as it is the most secure one.
          /etc/ha.d/authkeys should be readable by root only, therefore we do this:
          loadb1/loadb2:
          chmod 600 /etc/ha.d/authkeys
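Rather than inventing somerandomstring by hand, you can generate one. A sketch (any sufficiently random string works; md5sum here only hashes random bytes into a printable token):

```shell
# Sketch: derive a 32-character random token for the authkeys file.
secret=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | awk '{print $1}')
echo "auth 3"
echo "3 md5 $secret"
```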
          ldirectord is the actual load balancer. We are going to configure our two load balancers (loadb1.example.com and loadb2.example.com) in an active/passive setup, which means we have one active load balancer, and the other one is a hot-standby and becomes active if the active one fails. To make it work, we must create the ldirectord configuration file /etc/ha.d/ldirectord.cf which again must be identical on loadb1 and loadb2.
          loadb1/loadb2:
          vi /etc/ha.d/ldirectord.cf
          checktimeout=10
          checkinterval=2
          autoreload=no
          logfile="local0"
          quiescent=yes

          virtual=192.168.0.105:80
                  real=192.168.0.101:80 gate
                  real=192.168.0.102:80 gate
                  fallback=127.0.0.1:80 gate
                  service=http
                 request="ldirector.html"
                  receive="Test Page"
                  scheduler=rr
                  protocol=tcp
                  checktype=negotiate
          In the virtual= line we put our virtual IP address (192.168.0.105 in this example), and in the real= lines we list the IP addresses of our Apache nodes (192.168.0.101 and 192.168.0.102 in this example). In the request= line we list the name of a file on webserver1 and webserver2 that ldirectord will request repeatedly to see if webserver1 and webserver2 are still alive. That file (that we are going to create later on) must contain the string listed in the receive= line.
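What checktype=negotiate amounts to is fetching the request= file over HTTP and string-matching the body against receive=. A sketch of that logic with the fetch stubbed out ('page' stands in for the body retrieved from a node):

```shell
# Sketch: the string match behind ldirectord's negotiate check.
# 'page' stands in for the body fetched from http://<node>/ldirector.html.
page="Test Page"
case "$page" in
    *"Test Page"*) check="alive" ;;   # node stays in rotation
    *)             check="failed" ;;  # node's weight is set to 0
esac
echo "health check: $check"
```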
          Afterwards we create the system startup links for heartbeat and remove those of ldirectord because ldirectord will be started by the heartbeat daemon:
          loadb1/loadb2:
          update-rc.d heartbeat start 75 2 3 4 5 . stop 05 0 1 6 .
          update-rc.d -f ldirectord remove
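The update-rc.d call above translates into these symlinks: start priority 75 in runlevels 2-5, kill priority 05 in runlevels 0, 1 and 6. A sketch that just prints the names it creates:

```shell
# Sketch: the symlinks the update-rc.d invocation above creates.
links=""
for rl in 2 3 4 5; do links="$links /etc/rc$rl.d/S75heartbeat"; done
for rl in 0 1 6;   do links="$links /etc/rc$rl.d/K05heartbeat"; done
echo "$links"
```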
          Finally we start heartbeat (and with it ldirectord):
          loadb1/loadb2:
          /etc/init.d/ldirectord stop
          /etc/init.d/heartbeat start
5 Test The Load Balancers
          Let's check if both load balancers work as expected:
          loadb1/loadb2:
          ip addr sh eth0
          The active load balancer should list the virtual IP address (192.168.0.105):
          2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
              link/ether 00:16:3e:40:18:e5 brd ff:ff:ff:ff:ff:ff
              inet 192.168.0.103/24 brd 192.168.0.255 scope global eth0
              inet 192.168.0.105/24 brd 192.168.0.255 scope global secondary eth0
          The hot-standby should show this:
          2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
              link/ether 00:16:3e:50:e3:3a brd ff:ff:ff:ff:ff:ff
              inet 192.168.0.104/24 brd 192.168.0.255 scope global eth0
          loadb1/loadb2:
          ldirectord ldirectord.cf status
          Output on the active load balancer:
          ldirectord for /etc/ha.d/ldirectord.cf is running with pid: 1455
          Output on the hot-standby:
          ldirectord is stopped for /etc/ha.d/ldirectord.cf
          loadb1/loadb2:
          ipvsadm -L -n
          Output on the active load balancer:
          IP Virtual Server version 1.2.1 (size=4096)
          Prot LocalAddress:Port Scheduler Flags
           -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
          TCP 192.168.0.105:80 rr
           -> 192.168.0.101:80             Route   0      0          0
           -> 192.168.0.102:80             Route   0      0          0
           -> 127.0.0.1:80                 Local   1      0          0
          Output on the hot-standby:
          IP Virtual Server version 1.2.1 (size=4096)
          Prot LocalAddress:Port Scheduler Flags
           -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
          loadb1/loadb2:
          /etc/ha.d/resource.d/LVSSyncDaemonSwap master status
          Output on the active load balancer:
          master running
          (ipvs_syncmaster pid: 1591)
          Output on the hot-standby:
          master stopped
          If your tests went fine, you can now go on and configure the two Apache nodes.
6 Configure The Two Apache Nodes
          Finally we must configure our Apache cluster nodes webserver1.example.com and webserver2.example.com to accept requests on the virtual IP address 192.168.0.105.
          webserver1/webserver2:
          apt-get install iproute
          Add the following to /etc/sysctl.conf:
          webserver1/webserver2:
          vi /etc/sysctl.conf
          # Enable configuration of arp_ignore option
          net.ipv4.conf.all.arp_ignore = 1

          # When an arp request is received on eth0, only respond if that address is
          # configured on eth0. In particular, do not respond if the address is
          # configured on lo
          net.ipv4.conf.eth0.arp_ignore = 1

          # Ditto for eth1, add for all ARPing interfaces
          #net.ipv4.conf.eth1.arp_ignore = 1


          # Enable configuration of arp_announce option
          net.ipv4.conf.all.arp_announce = 2

          # When making an ARP request sent through eth0 Always use an address that
          # is configured on eth0 as the source address of the ARP request. If this
          # is not set, and packets are being sent out eth0 for an address that is on
          # lo, and an arp request is required, then the address on lo will be used.
          # As the source IP address of arp requests is entered into the ARP cache on
          # the destination, it has the effect of announcing this address. This is
# not desirable in this case as addresses on lo on the real-servers should
          # be announced only by the linux-director.
          net.ipv4.conf.eth0.arp_announce = 2

          # Ditto for eth1, add for all ARPing interfaces
          #net.ipv4.conf.eth1.arp_announce = 2
          Then run this:
          webserver1/webserver2:
          sysctl -p
          Add this section for the virtual IP address to /etc/network/interfaces:
          webserver1/webserver2:
          vi /etc/network/interfaces
          auto lo:0
          iface lo:0 inet static
           address 192.168.0.105
           netmask 255.255.255.255
           pre-up sysctl -p > /dev/null
          Then run this:
          webserver1/webserver2:
          ifup lo:0
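After ifup you can verify the alias (a sketch; ip comes from the iproute package installed above, and on the web servers the grep should succeed):

```shell
# Sketch: check whether the virtual IP is bound to the loopback alias.
if ip addr show lo 2>/dev/null | grep -q "192.168.0.105"; then
    vip_state="bound"
else
    vip_state="not bound"
fi
echo "VIP 192.168.0.105 on lo: $vip_state"
```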
          Finally we must create the file ldirector.html. This file is requested by the two load balancer nodes repeatedly so that they can see if the two Apache nodes are still running. I assume that the document root of the main apache web site on webserver1 and webserver2 is /var/www, therefore we create the file /var/www/ldirector.html:
          webserver1/webserver2:
          vi /var/www/ldirector.html
          Test Page
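Instead of editing with vi, the file can be written in one line. A sketch (using a temporary directory as a stand-in; on the web servers docroot would be /var/www):

```shell
# Sketch: create the health-check file non-interactively.
# docroot is a temporary stand-in; on webserver1/webserver2 use /var/www.
docroot=$(mktemp -d)
printf 'Test Page\n' > "$docroot/ldirector.html"
cat "$docroot/ldirector.html"    # must match the receive= line in ldirectord.cf
```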
7 Further Testing
You can now access the web site that is hosted by the two Apache nodes by typing [url]http://192.168.0.105[/url] in your browser.
          Now stop the Apache on either webserver1 or webserver2. You should then still see the web site on [url]http://192.168.0.105[/url] because the load balancer directs requests to the working Apache node. Of course, if you stop both Apaches, then your request will fail.
          Now let's assume that loadb1 is our active load balancer, and loadb2 is the hot-standby. Now stop heartbeat on loadb1:
          loadb1:
          /etc/init.d/heartbeat stop
          Wait a few seconds, and then try [url]http://192.168.0.105[/url] again in your browser. You should still see your web site because loadb2 has taken the active role now.
          Now start heartbeat again on loadb1:
          loadb1:
          /etc/init.d/heartbeat start
          loadb2 should still have the active role. Do the tests from chapter 5 again on loadb1 and loadb2, and you should see the inverse results as before.
          If you have also passed these tests, then your loadbalanced Apache cluster is working as expected. Have fun!
8 Further Reading
          This tutorial shows how to loadbalance two Apache nodes. It does not show how to keep the files in the Apache document root in sync or how to create a storage solution like an NFS server that both Apache nodes can use, nor does it provide a solution how to manage your MySQL database(s). You can find solutions for these issues here:
·      heartbeat / The High-Availability Linux Project: [url]http://linux-ha.org[/url]
          ·      The Linux Virtual Server Project: [url]http://www.linuxvirtualserver.org[/url]
          ·      Ultra Monkey: [url]http://www.ultramonkey.org[/url]

posted on 2011-08-07 14:01 by w@ns0ng, category: jboss
