If you are looking for Cacti's heartbeat option, see: RRDTool#Com_funciona.3F
heartbeat is the control instance of this whole setup. It will be installed on both server1 and server2, and each instance monitors the other server. If, for example, server1 goes down, heartbeat on server2 detects this and makes server2 take over. heartbeat also starts and stops the NFS server on both server1 and server2, and it provides NFS as a virtual service via the IP address 192.168.0.174 so that the web server cluster nodes see only one NFS server.
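The takeover principle can be sketched in a few lines of shell. This is an illustration of the idea only, not heartbeat's actual implementation; the deadtime value matches the ha.cf we create below, and the elapsed-time numbers are invented for the demo.

```shell
#!/bin/sh
# Illustration only (not heartbeat's real code): if no keepalive packet
# has arrived from the peer within `deadtime` seconds, the surviving
# node takes over the virtual IP and the services.
DEADTIME=10          # same value we set in ha.cf below
LAST_SEEN=0          # time (s) of the last keepalive from the peer
NOW=12               # pretend 12 seconds have passed without a packet

if [ $((NOW - LAST_SEEN)) -ge "$DEADTIME" ]; then
    ACTION="takeover"    # claim 192.168.0.174, start DRBD + NFS
else
    ACTION="standby"
fi
echo "peer silent for $((NOW - LAST_SEEN))s: $ACTION"
```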
First we install heartbeat:
$ sudo apt-get install heartbeat
Now we have to create three configuration files for heartbeat. They must be identical on server1 and server2!
$ cat /etc/heartbeat/ha.cf

logfacility local0
keepalive 2
#deadtime 30 # USE THIS!!!
deadtime 10
bcast eth0
node server1 server2
As node names we must use the output of uname -n on server1 and server2.
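A mismatch between uname -n and the node line is the most common mistake here. The following sanity check is my own addition, not part of the guide; the ha.cf path is the one used above.

```shell
#!/bin/sh
# Sanity check (my own addition): is this machine's uname -n listed
# on the `node` line of ha.cf?
NODE=$(uname -n)
if grep -q "^node.* $NODE" /etc/heartbeat/ha.cf 2>/dev/null; then
    echo "OK: $NODE is listed in ha.cf"
else
    echo "WARNING: $NODE not found on the node line of ha.cf"
fi
```

Run it on both servers; each should report OK.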
$ cat /etc/heartbeat/haresources

server1 IPaddr::192.168.0.174/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 nfs-kernel-server
The first word is the output of uname -n on server1, regardless of whether you create the file on server1 or server2. After IPaddr we put our virtual IP address 192.168.0.174, and after drbddisk we use the name of our DRBD resource, which is r0 here (remember, that is the resource name we use in /etc/drbd.conf; if you use another one, you must use it here, too).
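If you are unsure what your DRBD resource is called, you can read it straight out of drbd.conf. The snippet below parses a sample fragment so it runs anywhere; on the servers you would point the awk command at /etc/drbd.conf instead (the sample text is mine, not from the guide).

```shell
#!/bin/sh
# Extract the resource name from a drbd.conf-style fragment.
# On a real server: awk '/^resource/ {print $2}' /etc/drbd.conf
sample='resource r0 {
  protocol C;
}'
RES=$(printf '%s\n' "$sample" | awk '/^resource/ {print $2}')
echo "DRBD resource name: $RES"
```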
/etc/heartbeat/authkeys:

auth 3
3 md5 somerandomstring
somerandomstring is a password which the two heartbeat daemons on server1 and server2 use to authenticate against each other. Use your own string here. You have the choice between three authentication mechanisms (crc, md5, and sha1). I use md5 here; note that of the three, sha1 is actually the strongest, so you may prefer it.
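One way to come up with a decent somerandomstring (this command is my suggestion, not from the guide; any hard-to-guess string works):

```shell
#!/bin/sh
# Generate a 32-character hex secret from the kernel's random pool
# and print a ready-to-paste authkeys body.
SECRET=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | awk '{print $1}')
printf 'auth 3\n3 md5 %s\n' "$SECRET"
```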
/etc/heartbeat/authkeys should be readable by root only, therefore we do this:
chmod 600 /etc/heartbeat/authkeys
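You can verify the result with stat (GNU coreutils). The demo below uses a throwaway file so it is safe to run anywhere; on the servers, check /etc/heartbeat/authkeys itself.

```shell
#!/bin/sh
# Demonstrate the permission check on a scratch file (on the servers,
# replace the path with /etc/heartbeat/authkeys).
F=$(mktemp)
chmod 600 "$F"
MODE=$(stat -c '%a' "$F")
echo "mode of $F: $MODE"
rm -f "$F"
```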
Finally we start DRBD and heartbeat on server1 and server2:
/etc/init.d/drbd start
/etc/init.d/heartbeat start
9 First Tests
Now we can do our first tests. On server1, run:

ifconfig
In the output, the virtual IP address 192.168.0.174 should show up:
eth0      Link encap:Ethernet  HWaddr 00:0C:29:A1:C5:9B
          inet addr:192.168.0.172  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fea1:c59b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18992 errors:0 dropped:0 overruns:0 frame:0
          TX packets:24816 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2735887 (2.6 MiB)  TX bytes:28119087 (26.8 MiB)
          Interrupt:177 Base address:0x1400

eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:A1:C5:9B
          inet addr:192.168.0.174  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:177 Base address:0x1400

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:71 errors:0 dropped:0 overruns:0 frame:0
          TX packets:71 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5178 (5.0 KiB)  TX bytes:5178 (5.0 KiB)
Then run df -h on server1. You should see /data listed there now:
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5             4.6G  430M  4.0G  10% /
tmpfs                 126M     0  126M   0% /dev/shm
/dev/sda1              89M   11M   74M  13% /boot
/dev/drbd0             24G   33M   23G   1% /data
If you do the same (run ifconfig and df -h) on server2, you shouldn't see 192.168.0.174 or /data.
Now we create a test file in /data/export on server1 and then simulate a server failure of server1 (by stopping heartbeat):
$ touch /data/export/test1
$ /etc/init.d/heartbeat stop
If you run ifconfig and df -h on server2 now, you should see the IP address 192.168.0.174 and the /data partition, and
ls -l /data/export
should list the file test1 which you created on server1 before. So it has been mirrored to server2!
Now we create another test file on server2 and see if it gets mirrored to server1 when it comes up again:

touch /data/export/test2

Then we start heartbeat again on server1:

/etc/init.d/heartbeat start
(Wait a few seconds.)
ifconfig
df -h
ls -l /data/export
You should see 192.168.0.174 and /data again on server1, which means it has taken over again (because we listed it first in /etc/heartbeat/haresources), and you should also see the file /data/export/test2!