Dual-machine dual-master: in a plain master/backup setup, one nginx node serves traffic while the other sits idle as a standby. To run the two nodes as mutual master/backup (as with a MySQL dual-master or nginx dual-master), two VIPs are introduced, for example:
virtual_ipaddress {
192.168.179.199
192.168.179.188
}
A block like the above merely gives one instance two VIPs; a real dual-master uses two vrrp_instance blocks, one VIP each, as in the configuration below.
[root@localhost ~]# cat /etc/keepalived/keepalived.conf
global_defs {
router_id real-server1
script_user root
enable_script_security
}
vrrp_script chk_nginx {
script "/data/shell/check_nginx_status.sh"
interval 2
}
vrrp_instance VI_1 {
state MASTER
interface ens32
virtual_router_id 151
priority 100
advert_int 5
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.179.199
}
track_script {
chk_nginx
}
}
vrrp_instance VI_2 {
state BACKUP
interface ens32
virtual_router_id 152
priority 50
advert_int 5
authentication {
auth_type PASS
auth_pass 2222
}
virtual_ipaddress {
192.168.179.198
}
track_script {
chk_nginx
}
}
For more articles in this series on enterprise cluster operations, see the 玩转企业集群运维管理 column; the series is continuously updated.
[root@localhost ~]# cat /data/shell/check_nginx_status.sh
#!/bin/bash
nginx_status=$(ps -ef | grep nginx | grep -v grep | grep -v check | wc -l)
if [ "$nginx_status" -eq 0 ];then
systemctl restart keepalived.service
fi
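The ps|grep pipeline above counts every process whose command line mentions nginx, excluding the grep processes themselves. The same filtering can be pulled into a small reusable helper (a sketch; the helper name `count_procs` is mine, not part of the original script):

```shell
# Hypothetical helper, not from the original config: count processes whose
# command line contains a given name, excluding the grep processes that the
# pipeline itself spawns -- the same filtering the check script does inline.
count_procs() {
    ps -ef | grep -w "$1" | grep -v grep | wc -l
}

# A count of 0 means the service is down, which is the condition the
# check script above uses to restart keepalived.
count_procs "nginx"
```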
Start nginx first, then keepalived.
[root@localhost ~]# ip a
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:0e:1a:bf brd ff:ff:ff:ff:ff:ff
inet 192.168.179.102/24 brd 192.168.179.255 scope global ens32
valid_lft forever preferred_lft forever
inet 192.168.179.199/32 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::2d42:a0b1:1cdc:74c0/64 scope link
valid_lft forever preferred_lft forever
Configuration on 192.168.179.103:
[root@localhost ~]# cat /etc/keepalived/keepalived.conf
global_defs {
router_id real-server2
script_user root
enable_script_security
}
vrrp_script chk_nginx {
script "/data/shell/check_nginx_status.sh"
interval 2
}
vrrp_instance VI_1 {
state BACKUP
interface ens32
virtual_router_id 151
priority 50
advert_int 5
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.179.199
}
track_script {
chk_nginx
}
}
vrrp_instance VI_2 {
state MASTER
interface ens32
virtual_router_id 152
priority 100
advert_int 5
authentication {
auth_type PASS
auth_pass 2222
}
virtual_ipaddress {
192.168.179.198
}
track_script {
chk_nginx
}
}
[root@localhost ~]# ip a
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:61:90:c1 brd ff:ff:ff:ff:ff:ff
inet 192.168.179.103/24 brd 192.168.179.255 scope global ens32
valid_lft forever preferred_lft forever
inet 192.168.179.198/32 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::f54d:5639:6237:2d0e/64 scope link
valid_lft forever preferred_lft forever
Operations on 192.168.179.102, and its logs:
[root@localhost ~]# pkill nginx
[root@localhost ~]#
Nov 21 19:12:49 localhost Keepalived_vrrp[52348]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Nov 21 19:12:49 localhost Keepalived[52346]: Stopping
Nov 21 19:12:49 localhost systemd: Stopping LVS and VRRP High Availability Monitor...
Nov 21 19:12:49 localhost Keepalived_healthcheckers[52347]: Stopped
Nov 21 19:12:50 localhost Keepalived_vrrp[52348]: Stopped
Nov 21 19:12:50 localhost Keepalived[52346]: Stopped Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2
Nov 21 19:12:50 localhost systemd: start request repeated too quickly for keepalived.service
Nov 21 19:12:50 localhost systemd: Failed to start LVS and VRRP High Availability Monitor.
Nov 21 19:12:50 localhost systemd: Unit keepalived.service entered failed state.
Nov 21 19:12:50 localhost systemd: keepalived.service failed.
Checking the logs on 192.168.179.103 shows that VIP 192.168.179.199 has moved over to 103; below are the logs and IP addresses on 103.
Nov 21 19:12:45 localhost Keepalived_vrrp[7582]: VRRP_Instance(VI_1) Transition to MASTER STATE
Nov 21 19:12:50 localhost Keepalived_vrrp[7582]: VRRP_Instance(VI_1) Entering MASTER STATE
Nov 21 19:12:50 localhost Keepalived_vrrp[7582]: VRRP_Instance(VI_1) setting protocol VIPs.
Nov 21 19:12:50 localhost Keepalived_vrrp[7582]: Sending gratuitous ARP on ens32 for 192.168.179.199
Nov 21 19:12:50 localhost Keepalived_vrrp[7582]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens32 for 192.168.179.199
Nov 21 19:12:50 localhost Keepalived_vrrp[7582]: Sending gratuitous ARP on ens32 for 192.168.179.199
Nov 21 19:12:50 localhost Keepalived_vrrp[7582]: Sending gratuitous ARP on ens32 for 192.168.179.199
Nov 21 19:12:50 localhost Keepalived_vrrp[7582]: Sending gratuitous ARP on ens32 for 192.168.179.199
Nov 21 19:12:50 localhost Keepalived_vrrp[7582]: Sending gratuitous ARP on ens32 for 192.168.179.199
Nov 21 19:12:55 localhost Keepalived_vrrp[7582]: Sending gratuitous ARP on ens32 for 192.168.179.199
Nov 21 19:12:55 localhost Keepalived_vrrp[7582]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens32 for 192.168.179.199
[root@localhost ~]# ip a
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:61:90:c1 brd ff:ff:ff:ff:ff:ff
inet 192.168.179.103/24 brd 192.168.179.255 scope global ens32
valid_lft forever preferred_lft forever
inet 192.168.179.198/32 scope global ens32
valid_lft forever preferred_lft forever
inet 192.168.179.199/32 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::f54d:5639:6237:2d0e/64 scope link
valid_lft forever preferred_lft forever
Put simply, in preemptive mode the backup takes over the service when the master goes down; when the master later recovers, the VIP floats back and the master takes over again, adding an extra, unnecessary VIP switch. In real production this is usually unwanted: once the original master recovers, it should become BACKUP and not take the service back. That is non-preemptive mode.
Key point: for non-preemptive mode, both nodes must have state BACKUP, and nopreempt must be configured.
Note: with this configuration, the startup order matters; whichever node starts first becomes MASTER, regardless of priority.
Summary: in preemptive mode, a MASTER that recovers from failure grabs the VIP back from the BACKUP node; in non-preemptive mode, the recovered MASTER does not take the VIP back from the BACKUP that was promoted to MASTER.
[root@real-server1 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
router_id real-server1 #identifier of the machine running keepalived
script_user root
enable_script_security
}
vrrp_script chk_nginx {
script "/data/shell/check_nginx_status.sh" #health-check script; remember to grant it execute (x) permission
interval 2 #interval between script runs, in seconds; default 1s
}
vrrp_instance VI_1 {
state BACKUP
interface ens32 #network interface the VIP is bound to
virtual_router_id 151 #virtual router ID; must match the peer node
priority 100
nopreempt #do not preempt the VIP after recovering
advert_int 5 #VRRP advertisement (heartbeat) interval in seconds, default 1; this multicast interval must be the same on both nodes
authentication {
auth_type PASS #master/backup authentication password (change it for production); at most eight characters
auth_pass 1111
}
virtual_ipaddress { #virtual IP address
192.168.179.199
}
track_script {
chk_nginx
}
}
#You can test this script: start keepalived normally, then pkill nginx and watch what happens. Verify the script's behavior via /var/log/messages, or with systemctl status keepalived. Once it behaves correctly, wire it into your keepalived configuration file.
[root@real-server1 ~]# cat /data/shell/check_nginx_status.sh
#!/bin/bash
nginx_status=$(ps -ef | grep nginx | grep -v grep | grep -v check | wc -l)
if [ "$nginx_status" -eq 0 ];then
systemctl stop keepalived.service
fi
[root@real-server1 ~]# chmod o+x /data/shell/check_nginx_status.sh
#The script implements the health check: when the nginx process count drops to 0, keepalived is stopped so that a failover happens and the VIP floats away. Note the consequence: once nginx is down, keepalived can never stay up on this node, because the script runs systemctl stop keepalived.service; if keepalived is started while nginx is down, it immediately shuts itself down again.
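One common way around this catch-22 (my own suggestion, not part of the original setup) is to let the check script attempt one nginx restart before surrendering the VIP:

```shell
# Hypothetical variant of check_nginx_status.sh: try to revive nginx once;
# only stop keepalived (and release the VIP) if the revival fails.
check_and_recover() {
    if ! pgrep -x nginx >/dev/null 2>&1; then
        systemctl restart nginx.service        # one recovery attempt
        sleep 2
        if ! pgrep -x nginx >/dev/null 2>&1; then
            systemctl stop keepalived.service  # give up: let the backup take over
            return 1
        fi
    fi
    return 0
}
```

With this shape, a node whose nginx merely crashed is healed in place, and the VIP only floats when nginx genuinely cannot be started.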
#advert_int 5 sets the check interval: the VRRP heartbeat (multicast advertisement) period in seconds, default 1. The multicast packets carry the virtual router ID (virtual_router_id 151) and the priority (priority 100), as the capture below shows.
[root@localhost shell]# tcpdump -i ens32 -nn net 224.0.0.18
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens32, link-type EN10MB (Ethernet), capture size 262144 bytes
10:26:42.910170 IP 192.168.179.102 > 224.0.0.18: VRRPv2, Advertisement, vrid 151, prio 100, authtype simple, intvl 5s, length 20
10:26:47.911831 IP 192.168.179.102 > 224.0.0.18: VRRPv2, Advertisement, vrid 151, prio 100, authtype simple, intvl 5s, length 20
10:26:52.915502 IP 192.168.179.102 > 224.0.0.18: VRRPv2, Advertisement, vrid 151, prio 100, authtype simple, intvl 5s, length 20
[root@real-server1 ~]# ps -ef | grep keepalived | grep -v grep
root 70592 1 0 20:05 ? 00:00:00 /usr/sbin/keepalived -D
root 70593 70592 0 20:05 ? 00:00:00 /usr/sbin/keepalived -D
root 70594 70592 0 20:05 ? 00:00:00 /usr/sbin/keepalived -D
#When keepalived starts normally, three processes run: a parent that supervises its children, a VRRP child, and a checkers (healthchecker) child.
Both children are watched over by the parent's watchdog, and each handles its own duties. The healthchecker child checks the health of the services on its server, e.g. HTTP or LVS. If it finds that a service on the master has become unavailable, it notifies the local VRRP child, which stops sending advertisements, removes the virtual IP, and transitions to the BACKUP state.
Jul 31 20:05:38 real-server1 Keepalived[70591]: Starting Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2
Jul 31 20:05:38 real-server1 Keepalived[70591]: Opening file '/etc/keepalived/keepalived.conf'.
Jul 31 20:05:38 real-server1 systemd: PID file /var/run/keepalived.pid not readable (yet?) after start.
Jul 31 20:05:38 real-server1 Keepalived[70592]: Starting Healthcheck child process, pid=70593
Jul 31 20:05:38 real-server1 Keepalived[70592]: Starting VRRP child process, pid=70594
Jul 31 20:05:38 real-server1 systemd: Started LVS and VRRP High Availability Monitor.
Jul 31 20:05:38 real-server1 Keepalived_healthcheckers[70593]: Opening file '/etc/keepalived/keepalived.conf'.
Jul 31 20:05:38 real-server1 Keepalived_vrrp[70594]: Registering Kernel netlink reflector
Jul 31 20:05:38 real-server1 Keepalived_vrrp[70594]: Registering Kernel netlink command channel
Jul 31 20:05:38 real-server1 Keepalived_vrrp[70594]: Registering gratuitous ARP shared channel
Jul 31 20:05:38 real-server1 Keepalived_vrrp[70594]: Opening file '/etc/keepalived/keepalived.conf'.
Jul 31 20:05:38 real-server1 Keepalived_vrrp[70594]: VRRP_Instance(VI_1) removing protocol VIPs.
Jul 31 20:05:38 real-server1 Keepalived_vrrp[70594]: Using LinkWatch kernel netlink reflector...
Jul 31 20:05:38 real-server1 Keepalived_vrrp[70594]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jul 31 20:05:38 real-server1 Keepalived_vrrp[70594]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Jul 31 20:05:38 real-server1 Keepalived_vrrp[70594]: VRRP_Script(chk_nginx) succeeded
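The parent/child relationship described above can be inspected with plain ps. The helper below is a generic sketch (the function name is mine); applied to the keepalived parent PID from the ps output earlier, it prints the VRRP and healthchecker children:

```shell
# List the direct children of a PID using only ps: print every process
# whose PPID equals the given parent PID.
children_of() {
    ps -e -o pid= -o ppid= | awk -v p="$1" '$2 == p { print $1 }'
}

# e.g. children_of 70592 would list 70593 and 70594 from the log above
```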
#Test whether the health-check script works; the logs show that it does.
[root@real-server1 ~]# pkill nginx
Jul 31 20:06:50 real-server1 Keepalived[70592]: Stopping
Jul 31 20:06:50 real-server1 systemd: Stopping LVS and VRRP High Availability Monitor...
Jul 31 20:06:50 real-server1 Keepalived_vrrp[70594]: VRRP_Instance(VI_1) sent 0 priority
Jul 31 20:06:50 real-server1 Keepalived_vrrp[70594]: VRRP_Instance(VI_1) removing protocol VIPs.
Jul 31 20:06:50 real-server1 Keepalived_healthcheckers[70593]: Stopped
Jul 31 20:06:51 real-server1 Keepalived_vrrp[70594]: Stopped
Jul 31 20:06:51 real-server1 systemd: Stopped LVS and VRRP High Availability Monitor.
Jul 31 20:06:51 real-server1 Keepalived[70592]: Stopped Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2
[root@real-server2 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
router_id real-server2
script_user root
enable_script_security
}
vrrp_script chk_nginx {
script "/data/shell/check_nginx_status.sh"
interval 2
}
vrrp_instance VI_1 {
state BACKUP
interface ens32
virtual_router_id 151
priority 50
advert_int 5
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.179.199
}
track_script {
chk_nginx
}
}
#Test on the master node: kill the nginx process, and the health-check script makes the VIP float to the backup node.
[root@real-server1 ~]# ip a
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:61:90:c1 brd ff:ff:ff:ff:ff:ff
inet 192.168.179.103/24 brd 192.168.179.255 scope global ens32
valid_lft forever preferred_lft forever
inet 192.168.179.199/32 scope global ens32
valid_lft forever preferred_lft forever
inet 192.168.179.199/24 brd 192.168.179.255 scope global secondary ens32:1
valid_lft forever preferred_lft forever
inet6 fe80::f54d:5639:6237:2d0e/64 scope link
valid_lft forever preferred_lft forever
[root@real-server1 ~]# pkill nginx
#keepalived logs on the master node:
Jul 31 20:27:45 real-server1 Keepalived[72926]: Stopping
Jul 31 20:27:45 real-server1 systemd: Stopping LVS and VRRP High Availability Monitor...
Jul 31 20:27:45 real-server1 Keepalived_vrrp[72928]: VRRP_Instance(VI_1) sent 0 priority
Jul 31 20:27:45 real-server1 Keepalived_vrrp[72928]: VRRP_Instance(VI_1) removing protocol VIPs.
Jul 31 20:27:45 real-server1 Keepalived_healthcheckers[72927]: Stopped
Jul 31 20:27:46 real-server1 Keepalived_vrrp[72928]: Stopped
Jul 31 20:27:46 real-server1 Keepalived[72926]: Stopped Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2
Jul 31 20:27:46 real-server1 systemd: Stopped LVS and VRRP High Availability Monitor.
#Now check the backup node:
[root@real-server2 ~]# ip a
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:a7:ff:f7 brd ff:ff:ff:ff:ff:ff
inet 192.168.179.104/24 brd 192.168.179.255 scope global ens32
valid_lft forever preferred_lft forever
inet 192.168.179.199/32 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::831c:6df1:a633:742a/64 scope link
valid_lft forever preferred_lft forever
#The backup node's keepalived logs show that the failover succeeded:
Jul 31 08:27:45 real-server2 Keepalived_vrrp[8961]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jul 31 08:27:50 real-server2 Keepalived_vrrp[8961]: VRRP_Instance(VI_1) Entering MASTER STATE
Jul 31 08:27:50 real-server2 Keepalived_vrrp[8961]: VRRP_Instance(VI_1) setting protocol VIPs.
Jul 31 08:27:50 real-server2 Keepalived_vrrp[8961]: Sending gratuitous ARP on ens32 for 192.168.179.199
Jul 31 08:27:50 real-server2 Keepalived_vrrp[8961]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens32 for 192.168.179.199
Jul 31 08:27:50 real-server2 Keepalived_vrrp[8961]: Sending gratuitous ARP on ens32 for 192.168.179.199
Jul 31 08:27:50 real-server2 Keepalived_vrrp[8961]: Sending gratuitous ARP on ens32 for 192.168.179.199
Jul 31 08:27:50 real-server2 Keepalived_vrrp[8961]: Sending gratuitous ARP on ens32 for 192.168.179.199
Jul 31 08:27:50 real-server2 Keepalived_vrrp[8961]: Sending gratuitous ARP on ens32 for 192.168.179.199
Jul 31 08:27:55 real-server2 Keepalived_vrrp[8961]: Sending gratuitous ARP on ens32 for 192.168.179.199
Jul 31 08:27:55 real-server2 Keepalived_vrrp[8961]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens32 for 192.168.179.199
#Start nginx on the original master again and then start keepalived: it now enters the BACKUP state and does not preempt.
[root@real-server1 ~]# /usr/local/nginx/sbin/nginx
[root@real-server1 ~]# systemctl start keepalived
Jul 31 20:43:33 real-server1 Keepalived[75196]: Starting Healthcheck child process, pid=75197
Jul 31 20:43:33 real-server1 Keepalived[75196]: Starting VRRP child process, pid=75198
Jul 31 20:43:33 real-server1 systemd: Started LVS and VRRP High Availability Monitor.
Jul 31 20:43:33 real-server1 Keepalived_healthcheckers[75197]: Opening file '/etc/keepalived/keepalived.conf'.
Jul 31 20:43:33 real-server1 Keepalived_vrrp[75198]: Registering Kernel netlink reflector
Jul 31 20:43:33 real-server1 Keepalived_vrrp[75198]: Registering Kernel netlink command channel
Jul 31 20:43:33 real-server1 Keepalived_vrrp[75198]: Registering gratuitous ARP shared channel
Jul 31 20:43:33 real-server1 Keepalived_vrrp[75198]: Opening file '/etc/keepalived/keepalived.conf'.
Jul 31 20:43:33 real-server1 Keepalived_vrrp[75198]: VRRP_Instance(VI_1) removing protocol VIPs.
Jul 31 20:43:33 real-server1 Keepalived_vrrp[75198]: Using LinkWatch kernel netlink reflector...
Jul 31 20:43:33 real-server1 Keepalived_vrrp[75198]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jul 31 20:43:33 real-server1 Keepalived_vrrp[75198]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Jul 31 20:43:33 real-server1 Keepalived_vrrp[75198]: VRRP_Script(chk_nginx) succeeded
To wrap up, the configuration above reduces to:
#Load balancer A:
vrrp_instance VI_feng
{
....
state BACKUP
priority 100
nopreempt
....
}
#Load balancer B:
vrrp_instance VI_feng
{
....
state BACKUP
priority 70
nopreempt
....
}
Non-preemption is configured on the higher-priority machine, and its state must be BACKUP: to disable preemption inside the cluster, every node's state is set to BACKUP, while the priorities still differ as usual. The higher-priority node gets the nopreempt parameter, because it is the higher-priority node that would otherwise preempt the VIP. Every preemption means another switchover and VIP migration, and switching back and forth interrupts the service and hurts client traffic.
Split brain: in a high-availability (HA) system, when the two linked nodes lose contact with each other, a system that was one whole splits into two independent nodes, which then compete for the shared resources, leaving the system in chaos and the data corrupted. For the HA of stateless services, split brain hardly matters; for the HA of stateful services (such as MySQL), it must be strictly prevented.
In an HA system, when the "heartbeat link" between the two nodes breaks, what was a single coordinated whole (one VRRP group) splits into two independent individuals. Having lost contact, each assumes the other has failed; like conjoined twins, the two HA instances share one body but have two heads, and they fight over the shared "body": the shared resources and the application services on the servers. The consequences are severe: the shared resources may be carved up so that neither side's services come up, or both come up and both become MASTER, reading and writing the shared storage at the same time and corrupting data (a classic case is a database's online redo logs getting corrupted).
Generally speaking, split brain arises for a few reasons. One keepalived split-brain problem I once hit: if iptables is enabled and no rule lets the system accept the VRRP protocol, split brain occurs.
While running a keepalived+Nginx master/backup setup, after rebooting the backup machine I found both machines holding the VIP, i.e. a keepalived split brain. The network connectivity between the two hosts checked out fine, so I captured packets on the backup:
# tcpdump -i eth0|grep VRRP
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
22:10:17.146322 IP 192.168.1.54 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 160, authtype simple, intvl 1s, length 20
22:10:17.146577 IP 192.168.1.96 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 50, authtype simple, intvl 1s, length 20
22:10:17.146972 IP 192.168.1.54 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 160, authtype simple, intvl 1s, length 20
22:10:18.147136 IP 192.168.1.96 > vrrp.mcast.net: VRRPv2, Advertisement, vrid 51, prio 50, authtype simple, intvl 1s, length 20
The capture shows that the backup does receive the VRRP advertisements sent by the master, so why the split brain?
It turned out iptables was enabled, and a check of the firewall configuration showed that the system was not accepting the VRRP protocol. So I modified iptables, adding rules to accept VRRP traffic:
-A INPUT -s 192.168.1.0/24 -d 224.0.0.18 -j ACCEPT #allow traffic to the VRRP multicast address
-A INPUT -s 192.168.1.0/24 -p vrrp -j ACCEPT #allow VRRP (Virtual Router Redundancy Protocol) traffic
After restarting iptables, the VIP disappeared from the backup. The problem was solved, but note the lesson: the backup could clearly capture the master's VRRP advertisements yet could not change its own state. This is because the NIC (and tcpdump) sees packets before iptables processes them.
In a real production environment, split brain can be guarded against in several ways; one is to add an arbitration check to keepalived:
#vim /etc/keepalived/keepalived.conf
......
vrrp_script check_local {
script "/root/check_gateway.sh"
interval 5
}
......
track_script {
check_local
}
Script contents:
# cat /root/check_gateway.sh
#!/bin/sh
VIP=$1   # the VIP is expected as the first argument; pass it on the vrrp_script line
GATEWAY=192.168.1.1
/sbin/arping -I em1 -c 5 -s "$VIP" "$GATEWAY" >/dev/null 2>&1
check_gateway.sh is our arbitration logic: if the gateway cannot be reached, arping exits nonzero, the vrrp_script check fails, and keepalived gives up the VIP.
A better approach is to write your own watchdog script: a while loop that pings the gateway each round and accumulates the number of consecutive failures; once the failures reach a threshold, run service keepalived stop to shut keepalived down.
If the gateway becomes reachable again, restart the keepalived service. Finally, add a guard at the top of the script so that only one instance of it runs, and register the script in crontab.
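The consecutive-failure accounting that loop needs can be sketched as a small function (the name, threshold, and output format are illustrative; the ping and service keepalived stop/start plumbing from the text goes around it):

```shell
# Sketch of the watchdog's state machine: given the current consecutive
# failure count, whether this round's gateway ping succeeded (1) or not (0),
# and the failure threshold, print the new count and the action to take.
next_state() {
    fail=$1; ok=$2; max=$3
    if [ "$ok" -eq 1 ]; then
        echo "0 keep"                     # gateway reachable: reset the counter
    elif [ $((fail + 1)) -ge "$max" ]; then
        echo "$((fail + 1)) stop"         # too many failures: stop keepalived
    else
        echo "$((fail + 1)) keep"         # keep counting
    fi
}
```

The surrounding while loop would ping the gateway, feed the result into next_state, run service keepalived stop or start accordingly, and sleep a few seconds per round.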
Reference links:
https://blog.csdn.net/qq_34556414/article/details/109905166
https://blog.csdn.net/qq_34556414/article/details/107696010
https://blog.csdn.net/qq_34556414/article/details/107641766