corosync+drbd+mysql high availability_MySQL

Prerequisites:

1. The nodes must be able to communicate directly on the same network segment.

2. Each node's name must match the output of uname -n, and each node name must resolve to that node's IP address; configure the local /etc/hosts.

3. Key-based SSH trust between the nodes.

4. Time must be kept in sync.

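The four prerequisites above can be spot-checked with a small script before touching the cluster software. A hedged sketch only; the helper names are my own, and the peer name is whatever you put into /etc/hosts:

```shell
#!/bin/sh
# Pre-flight checks for the cluster prerequisites (sketch only).

# Prerequisite 2: the node name must equal `uname -n`.
hostname_matches() {
    [ "$(uname -n)" = "$1" ]
}

# Prerequisite 2: the name must resolve (e.g. via /etc/hosts).
name_resolves() {
    getent hosts "$1" > /dev/null 2>&1
}

# Prerequisite 3: key-based SSH to the peer must not prompt for a password.
ssh_trust_ok() {
    ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$1" true 2> /dev/null
}

hostname_matches "$(uname -n)" && echo "hostname consistent with uname -n"
```

Run it on both nodes; any check that fails points at the prerequisite to fix first.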
Environment preparation:

Configuration of test1, 192.168.10.55

1. Configure the IP address

    [root@test1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
2. Configure the hostname

    [root@test1 ~]# uname -n
    [root@test1 ~]# hostname master1.local              # takes effect immediately, not persistent
    [root@test1 ~]# vim /etc/sysconfig/network          # persistent across reboots
3. Configure hostname resolution

    [root@test1 ~]# vim /etc/hosts
    Add:
    192.168.10.55 master1.local
    192.168.10.56 master2.local

3.2. Test hostname connectivity

    [root@test1 ~]# ping master1.local
    [root@test1 ~]# ping master2.local
4. Configure SSH key-based trust

    [root@test1 ~]# ssh-keygen -t rsa -P ''
    [root@test1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@192.168.10.56
5. Synchronize time with ntp

Add a crontab entry that runs ntpdate every 5 minutes, to keep the servers' clocks in sync:

    [root@test1 ~]# crontab -e
    */5 * * * * /sbin/ntpdate 192.168.10.1 &> /dev/null
Configuration of test2, 192.168.10.56

1. Configure the IP address

    [root@test2 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
2. Configure the hostname

    [root@test2 ~]# uname -n
    [root@test2 ~]# hostname test2.local                # takes effect immediately, not persistent
    [root@test2 ~]# vim /etc/sysconfig/network          # persistent across reboots
3. Configure hostname resolution

    [root@test2 ~]# vim /etc/hosts
    Add:
    192.168.10.55 test1.local test1
    192.168.10.56 test2.local test2

3.2. Test hostname connectivity

    [root@test2 ~]# ping test1.local
    [root@test2 ~]# ping test1
4. Configure SSH key-based trust

    [root@test2 ~]# ssh-keygen -t rsa -P ''
    [root@test2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@192.168.10.55
5. Synchronize time with ntp

Add a crontab entry that runs ntpdate every 5 minutes, to keep the servers' clocks in sync:

    [root@test2 ~]# crontab -e
    */5 * * * * /sbin/ntpdate 192.168.10.1 &> /dev/null
Install and configure heartbeat

Installing heartbeat directly with yum on CentOS fails: no usable package is found.

Fix: install the EPEL repository first:

    [root@test1 src]# wget http://mirrors.sohu.com/fedora-epel/6/i386/epel-release-6-8.noarch.rpm
    [root@test1 src]# rpm -ivh epel-release-6-8.noarch.rpm
6.1. Install heartbeat:

    [root@test1 src]# yum install heartbeat

6.2. Copy the sample configuration files:

    [root@test1 src]# cp /usr/share/doc/heartbeat-3.0.4/{ha.cf,authkeys,haresources} /etc/ha.d/
6.3. Configure the authentication file:

    [root@test1 src]# dd if=/dev/random count=1 bs=512 | md5sum    # generate a random key
    [root@test1 src]# vim /etc/ha.d/authkeys
    auth 1
    1 md5 d0f70c79eeca5293902aiamheartbeat
    [root@test1 src]# chmod 600 authkeys

The heartbeat installation on node test2 is identical to test1 and is omitted here.
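The two manual steps in 6.3 (generate a random key, paste it into authkeys) can be folded into one step. A sketch, assuming the standard /etc/ha.d location; the function name is my own:

```shell
#!/bin/sh
# Generate a heartbeat authkeys file with a random md5 key (sketch).
# $1 = target directory, normally /etc/ha.d
make_authkeys() {
    dir=$1
    key=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | awk '{print $1}')
    printf 'auth 1\n1 md5 %s\n' "$key" > "$dir/authkeys"
    chmod 600 "$dir/authkeys"   # heartbeat refuses to start if authkeys is readable by others
}

# usage on a real node:  make_authkeys /etc/ha.d
```

Remember that the same authkeys file must end up on both nodes (it is copied over in 6.6).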

6.4. Main heartbeat configuration file parameters:

    [root@test2 ~]# vim /etc/ha.d/ha.cf
    #debugfile /var/log/ha-debug     # debug log
    logfile                          # log location
    keepalive 2            # heartbeat interval; defaults to 2 seconds, ms may also be used
    deadtime 30            # how long without a heartbeat before the peer is declared dead
    warntime 10            # how long without a heartbeat before a warning is issued
    initdead 120           # how long the first node waits for the other nodes after booting
    baud 19200             # baud rate of the serial link
    auto_failback on       # whether resources fail back once the failed node recovers
    ping 10.10.10.254      # ping node: which host to ping to verify our own connectivity
    ping_group group1 10.10.10.254 10.10.10.253    # ping node group: the group counts as reachable if any one member answers
    respawn hacluster /usr/lib/heartbeat/ipfail    # restart the listed program if it dies
    deadping 30            # how long the ping nodes may be unreachable before a real failure is assumed
    #serial serialportname...    # which serial device to use
    serial /dev/ttyS0      # Linux
    serial /dev/cuaa0      # FreeBSD
    serial /dev/cuad0      # FreeBSD 6.x
    serial /dev/cua/a      # Solaris
    # What interfaces to broadcast heartbeats over?
    # With Ethernet, choose unicast, multicast or broadcast heartbeats:
    bcast eth0             # broadcast
    mcast eth0 225.0.0.1 694 1 0    # multicast
    ucast eth0 192.168.1.2          # unicast; only useful with exactly two nodes
    # Define stonith hosts:
    stonith_host * baytech 10.0.0.3 mylogin mysecret password
    stonith_host ken3 rps10 /dev/ttyS1 kathy 0
    stonith_host kathy rps10 /dev/ttyS1 ken 30
    # Tell what machines are in the cluster: one "node <hostname>" line per node;
    # the hostname must match `uname -n`
    node ken3
    node kathy

Usually it is enough to define how heartbeats are sent and which nodes are in the cluster:

    bcast eth0
    node test1.local
    node test2.local
6.5. Define the haresources resource file:

    [root@test2 ~]# vim /etc/ha.d/haresources
    #node1 10.0.0.170 Filesystem::/dev/sda1::/data1::ext2
    # hostname of the default primary node (must match `uname -n`), then the VIP,
    # then which device to mount where and with which filesystem type;
    # a resource's parameters are separated by double colons
    #just.linux-ha.org 135.9.216.110 http
    # same as above; resources are looked up first in /etc/ha.d/resource.d/,
    # then in /etc/rc.d/init.d/
    master1.local IPaddr::192.168.10.2/24/eth0 mysqld
    master1.local IPaddr::192.168.10.2/24/eth0 drbddisk::data Filesystem::/dev/drbd1::/data::ext3 mysqld
    # the IPaddr script configures the VIP
6.6. Copy the configuration files from master1.local to master2.local:

    [root@test1 ~]# scp -p ha.cf haresources authkeys master2.local:/etc/ha.d/

7. Start heartbeat

    [root@test1 ~]# service heartbeat start
    [root@test1 ~]# ssh master2.local 'service heartbeat start'    # be sure to start test2's heartbeat via ssh from test1
7.1. Check the heartbeat startup log:

    [root@test1 ~]# tail -f /var/log/messages
    Feb 16 15:12:45 test-1 heartbeat: [16056]: info: Configuration validated. Starting heartbeat 3.0.4
    Feb 16 15:12:45 test-1 heartbeat: [16057]: info: heartbeat: version 3.0.4
    Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Heartbeat generation: 1455603909
    Feb 16 15:12:45 test-1 heartbeat: [16057]: info: glib: UDP Broadcast heartbeat started on port 694 (694) interface eth0
    Feb 16 15:12:45 test-1 heartbeat: [16057]: info: glib: UDP Broadcast heartbeat closed on port 694 interface eth0 - Status: 1
    Feb 16 15:12:45 test-1 heartbeat: [16057]: info: glib: ping heartbeat started.
    Feb 16 15:12:45 test-1 heartbeat: [16057]: info: G_main_add_TriggerHandler: Added signal manual handler
    Feb 16 15:12:45 test-1 heartbeat: [16057]: info: G_main_add_TriggerHandler: Added signal manual handler
    Feb 16 15:12:45 test-1 heartbeat: [16057]: info: G_main_add_SignalHandler: Added signal handler for signal 17
    Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Local status now set to: 'up'
    Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Link 192.168.10.1:192.168.10.1 up.
    Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Status update for node 192.168.10.1: status ping
    Feb 16 15:12:45 test-1 heartbeat: [16057]: info: Link test1.local:eth0 up.
    Feb 16 15:12:51 test-1 heartbeat: [16057]: info: Link test2.local:eth0 up.
    Feb 16 15:12:51 test-1 heartbeat: [16057]: info: Status update for node test2.local: status up
    Feb 16 15:12:51 test-1 harc(default)[16068]: info: Running /etc/ha.d//rc.d/status status
    Feb 16 15:12:52 test-1 heartbeat: [16057]: WARN: 1 lost packet(s) for [test2.local] [3:5]
    Feb 16 15:12:52 test-1 heartbeat: [16057]: info: No pkts missing from test2.local!
    Feb 16 15:12:52 test-1 heartbeat: [16057]: info: Comm_now_up(): updating status to active
    Feb 16 15:12:52 test-1 heartbeat: [16057]: info: Local status now set to: 'active'
    Feb 16 15:12:52 test-1 heartbeat: [16057]: info: Status update for node test2.local: status active
    Feb 16 15:12:52 test-1 harc(default)[16086]: info: Running /etc/ha.d//rc.d/status status
    Feb 16 15:13:02 test-1 heartbeat: [16057]: info: local resource transition completed.
    Feb 16 15:13:02 test-1 heartbeat: [16057]: info: Initial resource acquisition complete (T_RESOURCES(us))
    Feb 16 15:13:02 test-1 heartbeat: [16057]: info: remote resource transition completed.
    Feb 16 15:13:02 test-1 /usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_192.168.10.2)[16138]: INFO: Resource is stopped
    Feb 16 15:13:02 test-1 heartbeat: [16102]: info: Local Resource acquisition completed.
    Feb 16 15:13:02 test-1 harc(default)[16219]: info: Running /etc/ha.d//rc.d/ip-request-resp ip-request-resp
    Feb 16 15:13:02 test-1 ip-request-resp(default)[16219]: received ip-request-resp IPaddr::192.168.10.2/24/eth0 OK yes
    Feb 16 15:13:02 test-1 ResourceManager(default)[16238]: info: Acquiring resource group: test1.local IPaddr::192.168.10.2/24/eth0 mysqld
    Feb 16 15:13:02 test-1 /usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_192.168.10.2)[16264]: INFO: Resource is stopped
    Feb 16 15:13:03 test-1 ResourceManager(default)[16238]: info: Running /etc/ha.d/resource.d/IPaddr 192.168.10.2/24/eth0 start
    Feb 16 15:13:03 test-1 IPaddr(IPaddr_192.168.10.2)[16386]: INFO: Adding inet address 192.168.10.2/24 with broadcast address 192.168.10.255 to device eth0
    Feb 16 15:13:03 test-1 IPaddr(IPaddr_192.168.10.2)[16386]: INFO: Bringing device eth0 up
    Feb 16 15:13:03 test-1 IPaddr(IPaddr_192.168.10.2)[16386]: INFO: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-192.168.10.2 eth0 192.168.10.2 auto not_used not_used
    Feb 16 15:13:03 test-1 /usr/lib/ocf/resource.d//heartbeat/IPaddr(IPaddr_192.168.10.2)[16360]: INFO: Success
    Feb 16 15:13:03 test-1 ResourceManager(default)[16238]: info: Running /etc/init.d/mysqld start
    Feb 16 15:13:04 test-1 ntpd[1605]: Listen normally on 15 eth0 192.168.10.2 UDP 123
Source: PHP中文网 | Tags: MySQL, corosync+drbd+mysql | 2016-05-27 13:45

Notes:

1. "Link test1.local:eth0 up", "Link test2.local:eth0 up": the links to both nodes are up.

2. "Link 192.168.10.1:192.168.10.1 up": the ping node's IP is also up.

3. "info: Running /etc/init.d/mysqld start": mysql started successfully.

4. "Listen normally on 15 eth0 192.168.10.2 UDP 123": the VIP is up.

7.2. Check heartbeat's VIP:

    [root@test1 ha.d]# ip addr | grep "10.2"
    inet 192.168.10.55/24 brd 192.168.10.255 scope global eth0
    inet 192.168.10.2/24 brd 192.168.10.255 scope global secondary eth0

    [root@test-2 ha.d]# ip addr | grep "10.2"
    inet 192.168.10.56/24 brd 192.168.10.255 scope global eth0

Note: the VIP currently lives on master1.local; master2.local does not have it.
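The check above can be scripted by filtering the `ip addr` output for the VIP. A small sketch; the VIP and interface are this lab's values, and the helper name is my own:

```shell
#!/bin/sh
# Does this node currently hold the VIP? Reads `ip addr show` output on stdin.
vip_present() {
    grep -qw "inet $1"
}

# usage on a real node:
#   ip addr show eth0 | vip_present 192.168.10.2 && echo "VIP is on this node"
```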

8. Testing failover

8.1. Connect to mysql under normal conditions:

    [root@test3 ha.d]# mysql -uroot -h'192.168.10.2' -p
    Enter password:
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 2
    Server version: 5.5.44 Source distribution
    Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    mysql> show variables like 'server_id';
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | server_id     | 1     |
    +---------------+-------+
    1 row in set (0.00 sec)
    mysql>
8.2. Stop heartbeat on master1.local:

    [root@test1 ha.d]# service heartbeat stop
    Stopping High-Availability services: Done.
    [root@test1 ha.d]# ip addr | grep "192.168.10.2"
    inet 192.168.10.55/24 brd 192.168.10.255 scope global eth0
    [root@test2 ha.d]# ip addr | grep "192.168.10.2"
    inet 192.168.10.56/24 brd 192.168.10.255 scope global eth0
    inet 192.168.10.2/24 brd 192.168.10.255 scope global secondary eth0

Note: the VIP has now floated over to master2.local. Connect to mysql again and check server_id:

    [root@test2 ha.d]# mysql -uroot -h'192.168.10.2' -p
    Enter password:
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 2
    Server version: 5.5.44 Source distribution
    Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    mysql> show variables like 'server_id';
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | server_id     | 2     |
    +---------------+-------+
    1 row in set (0.00 sec)
    mysql>

Note: server_id has changed from 1 to 2, proving that we are now talking to the mysql service on master2.local.
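The manual test above (query server_id through the VIP, stop heartbeat, query again) can be turned into a loop that watches the failover happen. A hedged sketch only; the parsing helper is my own, and the mysql invocation assumes this article's VIP and credentials:

```shell
#!/bin/sh
# Watch which backend answers behind the VIP by polling server_id (sketch).

# Pull the numeric value out of mysql's "server_id<TAB>N" output line.
parse_server_id() {
    awk '$1 == "server_id" {print $2}'
}

# usage on a real client:
#   while true; do
#       mysql -uroot -p -h 192.168.10.2 -N -e "SHOW VARIABLES LIKE 'server_id'" | parse_server_id
#       sleep 2
#   done
```

When the printed value flips from 1 to 2, the VIP (and mysql) have moved to the other node.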

Failover testing is done. Next, configure drbd so that the two mysql servers share one replicated filesystem, making mysql writes highly available as well.

9. Configuring DRBD

DRBD (Distributed Replicated Block Device) is a distributed replicated block device implemented as a Linux kernel module. As a disk mirror, DRBD is inherently primary/secondary: it never allows both nodes to read and write at the same time; only one node may read and write, and the secondary can neither write nor mount the device. (DRBD does have a dual-primary concept, and the primary and secondary roles can be switched.) DRBD turns a disk or partition on each of two hosts into a mirrored pair: when a client program issues a write on the primary node, the data is replicated bit-for-bit over TCP/IP to the backup node at the same time, so the backup node always holds an identical copy of whatever is stored on the primary. Because this happens across two hosts, inside the kernel, it differs from RAID1, whose mirroring happens within a single host.

Dual-primary DRBD: when a node accesses data it loads the data and metadata into its own memory, and the locks its kernel takes on a file are invisible to the other node's kernel; dual-primary operation only works if each node can propagate its own locks to the peer's kernel. That requires a messaging layer (heartbeat or corosync both work), a cluster resource manager such as pacemaker (with DRBD defined as a resource), and formatting the mirrored devices with a cluster filesystem (GFS2/OCFS2). In other words, the dual-primary model is built on a Distributed Lock Manager (DLM) combined with a cluster filesystem. A DRBD cluster allows exactly two nodes, either dual-primary or primary/secondary.

9.1. DRBD's three replication protocols

A. The write is reported back to the application as soon as it completes on the local DRBD device: asynchronous. Fast and efficient, but the data is not safe.

B. The write is reported back once it completes locally and the data has been handed over TCP/IP to the peer DRBD: semi-synchronous. Rarely used.

C. The write is reported back only after it completes locally and the peer DRBD has also committed it to storage: synchronous. Slowest and least efficient, but the data is safe and reliable; this is the most commonly used protocol.
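In the configuration file this choice is a single keyword. A minimal fragment, matching the article's later use of the synchronous model:

```
common {
    protocol C;    # A = asynchronous, B = semi-synchronous, C = fully synchronous
}
```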

9.2. A DRBD resource consists of

1. A resource name: any ASCII string not containing spaces.

2. A DRBD device: the device file on both nodes, normally /dev/drbdN; the major number is always 147, and the minor number distinguishes the individual devices.

3. A disk configuration: the backing storage each node provides, which can be a partition, any other type of block device, or an LVM volume.

4. A network configuration: the network properties used when the two sides synchronize data.

9.3. Installing DRBD

drbd was merged into the mainline kernel in 2.6.33.

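Because drbd has been in mainline since 2.6.33, building the out-of-tree module is only needed on older kernels (CentOS 6 ships 2.6.32). A sketch of that decision; the helper name is my own:

```shell
#!/bin/sh
# True when version $1 >= $2 (dotted-decimal compare via sort -V).
kernel_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

if kernel_ge "$(uname -r | cut -d- -f1)" 2.6.33; then
    echo "kernel ships drbd; 'modprobe drbd' should just work"
else
    echo "older kernel; build the module (./configure --with-km), as below"
fi
```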
9.3.1. Download drbd:

    [root@test1 ~]# wget http://oss.linbit.com/drbd/8.4/drbd-8.4.3.tar.gz -O /usr/local/src/drbd-8.4.3.tar.gz
9.3.2. Build and install the drbd software:

    [root@test1 ~]# cd /usr/local/src
    [root@test1 src]# tar -zxvf drbd-8.4.3.tar.gz
    [root@test1 src]# cd /usr/local/src/drbd-8.4.3
    [root@test1 drbd-8.4.3]# ./configure --prefix=/usr/local/drbd --with-km
    [root@test1 drbd-8.4.3]# make
    [root@test1 drbd-8.4.3]# make install
    [root@test1 drbd-8.4.3]# mkdir -p /usr/local/drbd/var/run/drbd
    [root@test1 drbd-8.4.3]# cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/rc.d/init.d/
9.3.3. Install the drbd kernel module:

    [root@test1 drbd-8.4.3]# cd drbd/
    [root@test1 drbd]# make clean
    [root@test1 drbd]# make
    [root@test1 drbd]# cp drbd.ko /lib/modules/`uname -r`/kernel/lib/
    [root@test1 drbd]# modprobe drbd
    [root@test1 drbd]# lsmod | grep drbd
9.3.4. Create a new partition for drbd:

    [root@test1 drbd]# fdisk /dev/sdb
    WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
    switch off the mode (command 'c') and change display units to
    sectors (command 'u').
    Command (m for help): n
    Command action
    e   extended
    p   primary partition (1-4)
    p
    Partition number (1-4): 1
    First cylinder (1-1305, default 1):
    Using default value 1
    Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305): +9G
    Command (m for help): w
    The partition table has been altered!
    Calling ioctl() to re-read partition table.
    WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
    The kernel still uses the old table.
    The new table will be used at the next reboot.
    Syncing disks.
    [root@test1 drbd]# partprobe /dev/sdb

The drbd installation and partitioning on node test2 are the same as on test1 and are omitted here; make sure test2's drbd configuration files match test1's (just copy them over with scp).

10. Configure drbd

10.1. Configure drbd's common configuration file:

    [root@test1 ~]# cd /usr/local/drbd/etc/drbd.d
    [root@test1 drbd.d]# vim global_common.conf
    global {                       # global settings
        usage-count no;            # opt out of LINBIT's usage statistics
        # minor-count dialog-refresh disable-ip-verification
    }
    common {                       # defaults shared by all resources
        protocol C;                # protocol C, the synchronous model, is the default
        handlers {                 # handlers run on drbd failover/failure events
            # These are EXAMPLE handlers only.
            # They may have severe implications,
            # like hard resetting the node under certain circumstances.
            # Be careful when chosing your poison.
            pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
            pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";    # run after split brain
            local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";    # run after a local i/o error
            # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
            # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
            # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
            # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
            # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }
        startup {
            # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
            # wait times and timeouts for the two nodes to sync at startup
        }
        options {
            # cpu-mask on-no-data-accessible
        }
        disk {
            on-io-error detach;    # on an i/o error, detach the disk and stop syncing
            # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
            # disk-drain md-flushes resync-rate resync-after al-extents
            # c-plan-ahead c-delay-target c-fill-target c-max-rate
            # c-min-rate disk-timeout
        }
        net {                      # network buffers/cache sizes, init times, etc.
            # protocol timeout max-epoch-size max-buffers unplug-watermark
            # connect-int ping-int sndbuf-size rcvbuf-size ko-count
            # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
            # after-sb-1pri after-sb-2pri always-asbp rr-conflict
            # ping-timeout data-integrity-alg tcp-cork on-congestion
            # congestion-fill congestion-extents csums-alg verify-alg
            # use-rle
            cram-hmac-alg "sha1";             # algorithm used to authenticate the peers
            shared-secret "mydrbd1fa2jg8";    # shared secret
        }
        syncer {
            rate 200M;             # data synchronization rate
        }
    }
10.2. Configure the resource file; the file name must match the resource name defined inside it:

    [root@test1 drbd.d]# vim mydrbd.res
    resource mydrbd {                       # resource name: any ASCII string without spaces
        on test1.local {                    # node 1; each node must be able to resolve the peer by name
            device /dev/drbd0;              # name of the drbd device
            disk /dev/sdb1;                 # backing partition
            address 192.168.10.55:7789;     # node ip and listening port
            meta-disk internal;             # where drbd keeps its metadata; internal = inside the device
        }
        on test2.local {
            device /dev/drbd0;
            disk /dev/sdb1;
            address 192.168.10.56:7789;
            meta-disk internal;
        }
    }
10.3. Both nodes use identical configuration files, so copy them to the other node:

    [root@test1 drbd.d]# scp -r /usr/local/drbd/etc/drbd.* test2.local:/usr/local/drbd/etc/
10.4. Initialize the defined resource on each node:

    [root@test1 drbd.d]# drbdadm create-md mydrbd
    --== Thank you for participating in the global usage survey ==--
    The server's response is:
    Writing meta data...
    initializing activity log
    NOT initialized bitmap
    New drbd meta data block successfully created.
    [root@test1 drbd.d]#

    [root@test2 drbd.d]# drbdadm create-md mydrbd
    --== Thank you for participating in the global usage survey ==--
    The server's response is:
    Writing meta data...
    initializing activity log
    NOT initialized bitmap
    New drbd meta data block successfully created.
    [root@test2 drbd.d]#

10.5. Start the drbd service on both nodes:

    [root@test1 drbd.d]# service drbd start
    [root@test2 drbd.d]# service drbd start
11. Test drbd synchronization

11.1. Check drbd's startup state:

    [root@test1 drbd.d]# cat /proc/drbd
    version: 8.4.3 (api:1/proto:86-101)
    GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@test1.local, 2016-02-23 10:23:03
    0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

Both nodes are currently Secondary; one of them can be promoted to Primary later. Inconsistent means the devices have not yet been synchronized.
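The cs:/ro:/ds: fields of /proc/drbd are what all the later checks look at (connection state, roles, disk states). A small extraction sketch; the helper name is my own:

```shell
#!/bin/sh
# Extract connection state, roles and disk states from /proc/drbd output (sketch).
drbd_state() {
    grep -o 'cs:[^ ]*\|ro:[^ ]*\|ds:[^ ]*'
}

# usage on a real node:
#   cat /proc/drbd | drbd_state
```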
11.2. Promote one node to primary, overwriting the peer's drbd partition data. Run this on the node to be promoted:

    [root@test1 drbd.d]# drbdadm -- --overwrite-data-of-peer primary mydrbd

11.3. Watch the primary's synchronization progress:

    [root@test1 drbd.d]# watch -n 1 cat /proc/drbd
    Every 1.0s: cat /proc/drbd    Tue Feb 23 17:10:55 2016
    version: 8.4.3 (api:1/proto:86-101)
    GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@test1.local, 2016-02-23 10:23:03
    0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
    ns:619656 nr:0 dw:0 dr:627840 al:0 bm:37 lo:1 pe:8 ua:64 ap:0 ep:1 wo:b oos:369144
    [=============>.......] sync'ed: 10.3% (369144/987896)K
    finish: 0:00:12 speed: 25,632 (25,464) K/sec
11.4. Check the secondary's state:

    [root@test2 drbd]# cat /proc/drbd
    version: 8.4.3 (api:1/proto:86-101)
    GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@test2.local, 2016-02-22 16:05:34
    0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:4 nr:9728024 dw:9728028 dr:1025 al:1 bm:577 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
11.5. On the primary, format the partition, mount it, and write some test data:

    [root@test1 drbd]# mke2fs -j /dev/drbd0
    [root@test1 drbd]# mkdir /mydrbd
    [root@test1 drbd]# mount /dev/drbd0 /mydrbd
    [root@test1 drbd]# cd /mydrbd
    [root@test1 mydrbd]# touch drbd_test_file
    [root@test1 mydrbd]# ls /mydrbd/
    drbd_test_file  lost+found
11.6. Demote the primary to secondary, promote the secondary, and check that the data was replicated.

11.6.1. On the primary: unmount the partition and demote. Note: leave the mount directory before unmounting, or the device will be reported busy and cannot be unmounted:

    [root@test1 mydrbd]# cd ~
    [root@test1 ~]# umount /mydrbd
    [root@test1 ~]# drbdadm secondary mydrbd

11.6.2. Check the current drbd state:

    [root@test1 ~]# cat /proc/drbd
    version: 8.4.3 (api:1/proto:86-101)
    GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@test2.local, 2016-02-22 16:05:34
    0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:4 nr:9728024 dw:9728028 dr:1025 al:1 bm:577 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Note: both drbd nodes are now Secondary. Next, promote the former secondary.

11.6.3. On the secondary: promote it:

    [root@test2 ~]# drbdadm primary mydrbd

11.6.4. Mount the drbd partition:

    [root@test2 ~]# mkdir /mydrbd
    [root@test2 ~]# mount /dev/drbd0 /mydrbd/

11.6.5. Check that the data is there:

    [root@test2 ~]# ls /mydrbd/
    drbd_test_file  lost+found

Note: after the secondary was switched to primary, the data was already in sync. The drbd setup is complete. Next, combine it with corosync and mysql for a highly available database.

12. Combine corosync + drbd + mysql for database high availability across the two nodes

drbd will be configured as a resource in a two-node corosync high-availability cluster, which makes the primary/secondary role switch automatic. Note: any service managed as a cluster resource must not be started automatically at boot.

12.1. Disable drbd's automatic start at boot on both nodes.

12.1.1. On the primary:

    [root@test1 drbd.d]# chkconfig drbd off
    [root@test1 drbd.d]# chkconfig --list | grep drbd
    drbd 0:off 1:off 2:off 3:off 4:off 5:off 6:off

12.1.2. On the secondary:

    [root@test2 drbd.d]# chkconfig drbd off
    [root@test2 drbd.d]# chkconfig --list | grep drbd
    drbd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
12.2. Unmount the drbd filesystem and demote the primary back to secondary.

12.2.1. On test2 (note: this node was promoted to primary a moment ago; demote it now):

    [root@test2 drbd]# umount /mydrbd/
    [root@test2 drbd]# drbdadm secondary mydrbd
    [root@test2 drbd]# cat /proc/drbd
    version: 8.4.3 (api:1/proto:86-101)
    GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@test2.local, 2016-02-22 16:05:34
    0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
    ns:8 nr:9728024 dw:9728032 dr:1073 al:1 bm:577 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

Note: make sure both nodes are Secondary.

12.3. Stop the drbd service on both nodes.

12.3.1. On test2:

    [root@test2 drbd]# service drbd stop
    Stopping all DRBD resources: .
    [root@test2 drbd]#

12.3.2. On test1:

    [root@test1 drbd.d]# service drbd stop
    Stopping all DRBD resources: .
    [root@test1 drbd.d]#
12.4. Install corosync and create its log directory.

12.4.1. On test1:

    [root@test1 drbd.d]# wget -P /etc/yum.repos.d http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
    [root@test1 drbd.d]# yum install corosync pacemaker crmsh
    [root@test1 drbd.d]# mkdir /var/log/cluster

12.4.2. On test2:

    [root@test2 drbd.d]# wget -P /etc/yum.repos.d http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
    [root@test2 drbd.d]# mkdir /var/log/cluster
    [root@test2 drbd.d]# yum install corosync pacemaker crmsh

12.5. corosync configuration file.

12.5.1. On test1:

    [root@test1 drbd.d]# cd /etc/corosync/
    [root@test1 corosync]# cp corosync.conf.example corosync.conf

    12.6. Edit the configuration file on the primary node, generate the corosync auth key, and copy both files to the secondary node

    12.6.1. On the primary node
    [root@test1 corosync]# vim corosync.conf
    # Please read the corosync.conf.5 manual page
    compatibility: whitetank
    totem {
        version: 2
        # secauth: Enable mutual node authentication. If you choose to
        # enable this ("on"), then do remember to create a shared
        # secret with "corosync-keygen".
        secauth: on
        threads: 2
        # interface: define at least one interface to communicate
        # over. If you define more than one interface stanza, you must
        # also set rrp_mode.
        interface {
            # Rings must be consecutively numbered, starting at 0.
            ringnumber: 0
            # This is normally the *network* address of the
            # interface to bind to. This ensures that you can use
            # identical instances of this configuration file
            # across all your cluster nodes, without having to
            # modify this option.
            bindnetaddr: 192.168.10.0
            # However, if you have multiple physical network
            # interfaces configured for the same subnet, then the
            # network address alone is not sufficient to identify
            # the interface Corosync should bind to. In that case,
            # configure the *host* address of the interface
            # instead:
            # bindnetaddr: 192.168.10.0
            # When selecting a multicast address, consider RFC
            # 2365 (which, among other things, specifies that
            # 239.255.x.x addresses are left to the discretion of
            # the network administrator). Do not reuse multicast
            # addresses across multiple Corosync clusters sharing
            # the same network.
            mcastaddr: 239.212.16.19
            # Corosync uses the port you specify here for UDP
            # messaging, and also the immediately preceding
            # port. Thus if you set this to 5405, Corosync sends
            # messages over UDP ports 5405 and 5404.
            mcastport: 5405
            # Time-to-live for cluster communication packets. The
            # number of hops (routers) that this ring will allow
            # itself to pass. Note that multicast routing must be
            # specifically enabled on most network routers.
            ttl: 1    # cluster packets may not cross a router
        }
    }
    logging {
        # Log the source file and line where messages are being
        # generated. When in doubt, leave off. Potentially useful for
        # debugging.
        fileline: off
        # Log to standard error. When in doubt, set to no. Useful when
        # running in the foreground (when invoking "corosync -f")
        to_stderr: no
        # Log to a log file. When set to "no", the "logfile" option
        # must not be set.
        to_logfile: yes
        logfile: /var/log/cluster/corosync.log
        # Log to the system log daemon. When in doubt, set to yes.
        to_syslog: no
        # Log debug messages (very verbose). When in doubt, leave off.
        debug: off
        # Log messages with time stamps. When in doubt, set to on
        # (unless you are only logging to syslog, where double
        # timestamps can be annoying).
        timestamp: on
        logger_subsys {
            subsys: AMF
            debug: off
        }
    }
    service {
        ver: 0
        name: pacemaker
    }
    aisexec {
        user: root
        group: root
    }
    [root@test1 corosync]# corosync-keygen
    [root@test1 corosync]# scp -p authkey corosync.conf test2.local:/etc/corosync/
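    Corosync refuses to start if the authkey it reads is group- or world-readable, which is why `scp -p` is used above to preserve the 0400 mode. A small sketch of a permission check (the helper name `check_authkey_mode` is illustrative; a temp file stands in for /etc/corosync/authkey):

    ```shell
    # check_authkey_mode: succeed only if the file's mode is exactly 0400,
    # which is what corosync expects of its authkey
    check_authkey_mode() {
        [ "$(stat -c '%a' "$1")" = "400" ]
    }

    tmp=$(mktemp)
    chmod 400 "$tmp"
    check_authkey_mode "$tmp" && echo "authkey permissions OK"
    rm -f "$tmp"
    ```

    Running `check_authkey_mode /etc/corosync/authkey` on both nodes after the copy catches a mangled mode before it turns into a confusing startup failure.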

    12.7. Start corosync

    12.7.1. On the primary node (note: start corosync on both nodes from the primary)
    [root@test1 corosync]# service corosync start
    Starting Corosync Cluster Engine (corosync): [OK]
    [root@test1 corosync]# ssh test2.local 'service corosync start'
    root@test2.local's password:
    Starting Corosync Cluster Engine (corosync): [OK]

    12.8. Check the cluster status

    12.8.1. Problem: the crm command is missing after installation, so the interactive crm shell cannot be used
    [root@test1 corosync]# crm status
    -bash: crm: command not found
    Fix: install the ha-clustering yum repository, then install the crmsh package
    [root@test1 corosync]# wget -P /etc/yum.repos.d/ http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
    Reference: the "install crmsh" step in http://www.dwhd.org/20150530_014731.html

    Once that is installed, the crm command-line shell is available.

    12.8.2. Check the cluster node status
    [root@test1 corosync]# crm status
    Last updated: Wed Feb 24 13:47:17 2016
    Last change: Wed Feb 24 11:26:06 2016
    Stack: classic openais (with plugin)
    Current DC: test2.local - partition with quorum
    Version: 1.1.11-97629de
    2 Nodes configured, 2 expected votes
    0 Resources configured
    Online: [ test1.local test2.local ]
    Full list of resources:
    Note: both configured nodes are shown as online.
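    The `Online:` line of `crm status` is easy to check from a script, e.g. in a monitoring cron job. A minimal sketch that counts the online nodes from such a line (the helper name `online_count` and the sample line are illustrative):

    ```shell
    # online_count: given crm status output on stdin, print how many
    # node names appear inside the "Online: [ ... ]" brackets
    online_count() {
        sed -n 's/.*Online: \[ \(.*\) \].*/\1/p' | wc -w
    }

    echo 'Online: [ test1.local test2.local ]' | online_count    # prints 2
    ```

    On a live cluster you would pipe `crm status` (or `crm_mon -1`) into it and alert when the count drops below 2.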

    12.8.3. Check that pacemaker started
    [root@test1 corosync]# grep pcmk_startup /var/log/cluster/corosync.log
    Feb 24 11:05:15 corosync [pcmk] info: pcmk_startup: CRM: Initialized
    Feb 24 11:05:15 corosync [pcmk] Logging: Initialized pcmk_startup

    12.8.4. Check that the cluster engine started
    [root@test1 corosync]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
    Feb 24 11:04:16 corosync [MAIN] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
    Feb 24 11:04:16 corosync [MAIN] Successfully read main configuration file '/etc/corosync/corosync.conf'.
    Note: these checks confirm that the corosync service started cleanly.

    12.9. Configure corosync properties

    12.9.1. Disable STONITH so that verify does not complain (this setup has no fencing device)
    [root@test1 corosync]# crm configure
    crm(live)configure# property stonith-enabled=false
    crm(live)configure# verify
    crm(live)configure# commit

    12.9.2. Keep services running when the cluster loses quorum
    crm(live)configure# property no-quorum-policy=ignore
    crm(live)configure# verify
    crm(live)configure# commit

    12.9.3. Set the default resource stickiness
    crm(live)configure# rsc_defaults resource-stickiness=100
    crm(live)configure# verify
    crm(live)configure# commit

    12.9.4. Review the current configuration
    crm(live)configure# show
    node test1.local
    node test2.local
    property cib-bootstrap-options: \
        dc-version=1.1.11-97629de \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes=2 \
        stonith-enabled=false \
        no-quorum-policy=ignore
    rsc_defaults rsc-options: \
        resource-stickiness=100

    12.10. Configure the cluster resources

    12.10.1. Define the drbd resource
    crm(live)configure# primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mydrbd op start timeout=240 op stop timeout=100 op monitor role=Master interval=10 timeout=20 op monitor role=Slave interval=20 timeout=20

    Note: ocf is the resource agent class, and linbit provides the drbd agent. drbd_resource=mydrbd names the drbd resource. start timeout and stop timeout are the start and stop timeouts. monitor role=Master defines monitoring of the master role (interval: how often to check; timeout: how long to wait), and monitor role=Slave does the same for the slave role.
    crm(live)configure# show    # review the configuration
    node test1.local
    node test2.local
    primitive mysqldrbd ocf:linbit:drbd \
        params drbd_resource=mydrbd \
        op start timeout=240 interval=0 \
        op stop timeout=100 interval=0 \
        op monitor role=Master interval=10 timeout=20 \
        op monitor role=Slave interval=20 timeout=20
    property cib-bootstrap-options: \
        dc-version=1.1.11-97629de \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes=2 \
        stonith-enabled=false \
        no-quorum-policy=ignore
    rsc_defaults rsc-options: \
        resource-stickiness=100
    crm(live)configure# verify    # validate the configuration
    12.10.2. Define the drbd master/slave resource

    crm(live)configure# ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
    Note: ms ms_mysqldrbd mysqldrbd turns mysqldrbd into a master/slave resource named ms_mysqldrbd. meta sets metadata attributes: master-max=1 (at most one master instance), master-node-max=1 (at most one master per node), clone-max=2 (at most two clone instances in total), clone-node-max=1 (at most one clone instance per node).
    crm(live)configure# verify
    crm(live)configure# commit
    crm(live)configure# show
    node test1.local
    node test2.local
    primitive mysqldrbd ocf:linbit:drbd \
        params drbd_resource=mydrbd \
        op start timeout=240 interval=0 \
        op stop timeout=100 interval=0 \
        op monitor role=Master interval=10 timeout=20 \
        op monitor role=Slave interval=20 timeout=20
    ms ms_mysqldrbd mysqldrbd \
        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1
    property cib-bootstrap-options: \
        dc-version=1.1.11-97629de \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes=2 \
        stonith-enabled=false \
        no-quorum-policy=ignore
    rsc_defaults rsc-options: \
        resource-stickiness=100
    Note: the configuration now holds one primitive resource (mysqldrbd) and one master/slave resource (ms_mysqldrbd).

    12.10.3. Check the cluster node status
    crm(live)configure# cd
    crm(live)# status
    Last updated: Thu Feb 25 14:45:52 2016
    Last change: Thu Feb 25 14:44:44 2016
    Stack: classic openais (with plugin)
    Current DC: test1.local - partition with quorum
    Version: 1.1.11-97629de
    2 Nodes configured, 2 expected votes
    2 Resources configured
    Online: [ test1.local test2.local ]
    Full list of resources:
    Master/Slave Set: ms_mysqldrbd [mysqldrbd]
        Masters: [ test1.local ]
        Slaves: [ test2.local ]
    Note: this is the healthy state; the master/slave roles are visible and test1 is the master.

    A failed state looks like this:
    Master/Slave Set: ms_mydrbd [mydrbd]
        mydrbd (ocf::linbit:drbd): FAILED test2.local (unmanaged)
        mydrbd (ocf::linbit:drbd): FAILED test1.local (unmanaged)
    Failed actions:
        mydrbd_stop_0 on test2.local 'not configured' (6): call=87, status=complete, last-rc-change='Thu Feb 25 14:17:34 2016', queued=0ms, exec=34ms
        mydrbd_stop_0 on test1.local 'not configured' (6): call=72, status=complete, last-rc-change='Thu Feb 25 14:17:34 2016', queued=0ms, exec=34ms
    Fix: when defining the resource, drbd_resource=mydrbd must name the drbd resource itself, the master/slave resource must not share the drbd resource's name, and do not append "s" to the timeout values. It took a long time of testing to track this down; if you see it differently, corrections are welcome.
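    The author's workaround above includes stripping the "s" suffix from interval/timeout values. A hypothetical lint sketch that flags such suffixed values in a crm op definition before it is committed (the helper name `find_s_suffix` and the sample line are illustrative, not part of crmsh):

    ```shell
    # find_s_suffix: print every interval=...s / timeout=...s token
    # found in a crm resource definition read from stdin
    find_s_suffix() {
        grep -oE '(interval|timeout)=[0-9]+s'
    }

    echo 'op monitor role=Master interval=10s timeout=20s' | find_s_suffix
    ```

    An empty result means the definition carries only bare numeric values, matching the convention the author settled on.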

    12.10.4. Verify master/slave failover
    [root@test1 ~]# crm node standby test1.local    # take the master node offline
    [root@test1 ~]# crm status    # the master is no longer online and test2 has become the master
    Last updated: Thu Feb 25 14:51:58 2016
    Last change: Thu Feb 25 14:51:44 2016
    Stack: classic openais (with plugin)
    Current DC: test1.local - partition with quorum
    Version: 1.1.11-97629de
    2 Nodes configured, 2 expected votes
    2 Resources configured
    Node test1.local: standby
    Online: [ test2.local ]
    Full list of resources:
    Master/Slave Set: ms_mysqldrbd [mysqldrbd]
        Masters: [ test2.local ]
        Stopped: [ test1.local ]
    [root@test1 ~]# crm node online test1.local    # bring test1 back online
    [root@test1 ~]# crm status    # test2 is still the master and test1 is now a slave
    Last updated: Thu Feb 25 14:52:55 2016
    Last change: Thu Feb 25 14:52:39 2016
    Stack: classic openais (with plugin)
    Current DC: test1.local - partition with quorum
    Version: 1.1.11-97629de
    2 Nodes configured, 2 expected votes
    2 Resources configured
    Online: [ test1.local test2.local ]
    Full list of resources:
    Master/Slave Set: ms_mysqldrbd [mysqldrbd]
        Masters: [ test2.local ]
        Slaves: [ test1.local ]

    12.10.5. Define the filesystem resource
    [root@test1 ~]# crm
    crm(live)# configure
    crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mydrbd fstype=ext3 op start timeout=60 op stop timeout=60
    Note: primitive mystore defines a resource named mystore using the heartbeat Filesystem agent; params sets the drbd device, the mount point, the filesystem type, and the start/stop timeouts.
    crm(live)configure# verify

    12.10.6. Define a colocation constraint so the Filesystem stays with the master node
    crm(live)configure# colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master
    crm(live)configure# verify

    12.10.7. Define an order constraint so the master/slave resource is promoted before the filesystem is mounted
    crm(live)configure# order mystore_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mystore:start
    Note: mystore starts after ms_mysqldrbd; mandatory makes the order compulsory: ms_mysqldrbd starts and is promoted first, and only after the role switch succeeds does mystore start.
    crm(live)configure# verify
    crm(live)configure# commit
    crm(live)configure# cd
    crm(live)# status    # the filesystem has been mounted automatically on the master node test1.local
    Last updated: Thu Feb 25 15:29:39 2016
    Last change: Thu Feb 25 15:29:36 2016
    Stack: classic openais (with plugin)
    Current DC: test1.local - partition with quorum
    Version: 1.1.11-97629de
    2 Nodes configured, 2 expected votes
    3 Resources configured
    Online: [ test1.local test2.local ]
    Full list of resources:
    Master/Slave Set: ms_mysqldrbd [mysqldrbd]
        Masters: [ test1.local ]
        Slaves: [ test2.local ]
    mystore (ocf::heartbeat:Filesystem): Started test1.local

    [root@test1 ~]# ls /mydrbd/    # the files created earlier are there
    inittab lost+found

    12.10.8. Switch the master node and verify that the filesystem is remounted automatically
    [root@test1 ~]# crm node standby test1.local
    [root@test1 ~]# crm node online test1.local
    [root@test1 ~]# crm status
    Last updated: Thu Feb 25 15:32:39 2016
    Last change: Thu Feb 25 15:32:36 2016
    Stack: classic openais (with plugin)
    Current DC: test1.local - partition with quorum
    Version: 1.1.11-97629de
    2 Nodes configured, 2 expected votes
    3 Resources configured
    Online: [ test1.local test2.local ]
    Full list of resources:
    Master/Slave Set: ms_mysqldrbd [mysqldrbd]
        Masters: [ test2.local ]
        Slaves: [ test1.local ]
    mystore (ocf::heartbeat:Filesystem): Started test2.local
    [root@test2 ~]# ls /mydrbd/
    inittab lost+found

    12.11. Configure MySQL

    12.11.1. Disable mysqld autostart
    [root@test1 ~]# chkconfig mysqld off
    [root@test2 ~]# chkconfig mysqld off    # remember: a service managed as a cluster resource must never start at boot

    12.11.2. Configure the MySQL service on node 1 (MySQL installation is omitted; we go straight to configuration)
    [root@test1 mysql]# mkdir /mydrbd/data
    [root@test1 mysql]# chown -R mysql.mysql /mydrbd/data/
    [root@test1 mysql]# ./scripts/mysql_install_db --user=mysql --datadir=/mydrbd/data/ --basedir=/usr/local/mysql/
    Installing MySQL system tables...
    160225 16:07:12 [Note] /usr/local/mysql//bin/mysqld (mysqld 5.5.44) starting as process 18694 ...
    OK
    Filling help tables...
    160225 16:07:18 [Note] /usr/local/mysql//bin/mysqld (mysqld 5.5.44) starting as process 18701 ...
    OK
    To start mysqld at boot time you have to copy
    support-files/mysql.server to the right place for your system
    PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
    To do so, start the server, then issue the following commands:
    /usr/local/mysql//bin/mysqladmin -u root password 'new-password'
    /usr/local/mysql//bin/mysqladmin -u root -h test1.local password 'new-password'
    Alternatively you can run:
    /usr/local/mysql//bin/mysql_secure_installation
    which will also give you the option of removing the test
    databases and anonymous user created by default. This is
    strongly recommended for production servers.
    See the manual for more instructions.
    You can start the MySQL daemon with:
    cd /usr/local/mysql/ ; /usr/local/mysql//bin/mysqld_safe &
    You can test the MySQL daemon with mysql-test-run.pl
    cd /usr/local/mysql//mysql-test ; perl mysql-test-run.pl
    Please report any problems at

    [root@test1 mysql]# service mysqld start
    Starting MySQL.....[OK]


    [root@test1 mysql]# mysql -uroot
    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 2
    Server version: 5.5.44 Source distribution
    Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | mysql              |
    | performance_schema |
    | test               |
    +--------------------+
    4 rows in set (0.06 sec)
    mysql> create database drbd_mysql;
    Query OK, 1 row affected (0.00 sec)

    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | drbd_mysql         |
    | mysql              |
    | performance_schema |
    | test               |
    +--------------------+
    5 rows in set (0.00 sec)
    mysql>
    12.11.3. Configure MySQL on the secondary node

    Note: the MySQL data directory was already initialized on the shared storage from test1.local, so there is no need to initialize it again on test2.local.
    [root@test1 mysql]# crm status
    Last updated: Thu Feb 25 16:14:14 2016
    Last change: Thu Feb 25 15:35:16 2016
    Stack: classic openais (with plugin)
    Current DC: test1.local - partition with quorum
    Version: 1.1.11-97629de
    2 Nodes configured, 2 expected votes
    3 Resources configured
    Online: [ test1.local test2.local ]
    Full list of resources:
    Master/Slave Set: ms_mysqldrbd [mysqldrbd]
        Masters: [ test1.local ]
        Slaves: [ test2.local ]
    mystore (ocf::heartbeat:Filesystem): Started test1.local

    1. Make test2.local the master before continuing
    [root@test1 mysql]# crm node standby test1.local
    [root@test1 mysql]# crm node online test1.local
    [root@test1 mysql]# crm status
    Last updated: Thu Feb 25 16:14:46 2016
    Last change: Thu Feb 25 16:14:30 2016
    Stack: classic openais (with plugin)
    Current DC: test1.local - partition with quorum
    Version: 1.1.11-97629de
    2 Nodes configured, 2 expected votes
    3 Resources configured
    Node test1.local: standby
    Online: [ test2.local ]
    Full list of resources:
    Master/Slave Set: ms_mysqldrbd [mysqldrbd]
        Masters: [ test2.local ]
        Stopped: [ test1.local ]
    mystore (ocf::heartbeat:Filesystem): Started test2.local    # make sure test2.local is now the master

    2. MySQL configuration on test2.local
    [root@test2 ~]# vim /etc/my.cnf
    Add:
    datadir=/mydrbd/data
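    Both nodes must point mysqld at the same datadir on the drbd mount. A minimal sketch of reading it back out of a my.cnf-style file (the helper name `get_datadir` is illustrative; a temp file stands in for /etc/my.cnf):

    ```shell
    # get_datadir: print the value of the datadir= line in a my.cnf-style file
    get_datadir() {
        sed -n 's/^datadir=//p' "$1"
    }

    tmp=$(mktemp)
    printf '[mysqld]\ndatadir=/mydrbd/data\n' > "$tmp"
    get_datadir "$tmp"    # prints /mydrbd/data
    rm -f "$tmp"
    ```

    Running `get_datadir /etc/my.cnf` on both nodes and comparing the results is a cheap sanity check before the resource is handed to the cluster.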

    [root@test2 ~]# service mysqld start
    Starting MySQL.[OK]

    [root@test2 ~]# mysql -uroot
    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 1
    Server version: 5.5.44 Source distribution
    Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | drbd_mysql         |
    | mysql              |
    | performance_schema |
    | test               |
    +--------------------+
    5 rows in set (0.10 sec)
    mysql>

    12.12. Define the MySQL resource

    1. Stop MySQL
    [root@test2 ~]# service mysqld stop
    Shutting down MySQL.[OK]

    2. Define the mysqld resource
    [root@test1 mysql]# crm configure
    crm(live)configure# primitive mysqld lsb:mysqld
    crm(live)configure# verify

    3. Define the colocation constraint between mysqld and the master node
    crm(live)configure# colocation mysqld_with_mystore inf: mysqld mystore    # mystore is always on the master node, so constraining mysqld to mystore is enough
    crm(live)configure# verify

    4. Define the start order of mysqld and mystore
    crm(live)configure# order mysqld_after_mystore mandatory: mystore mysqld    # get the order right: mysqld starts after mystore
    crm(live)configure# verify
    crm(live)configure# commit
    crm(live)configure# cd
    crm(live)# status
    Last updated: Thu Feb 25 16:44:29 2016
    Last change: Thu Feb 25 16:42:16 2016
    Stack: classic openais (with plugin)
    Current DC: test1.local - partition with quorum
    Version: 1.1.11-97629de
    2 Nodes configured, 2 expected votes
    4 Resources configured
    Online: [ test1.local test2.local ]
    Full list of resources:
    Master/Slave Set: ms_mysqldrbd [mysqldrbd]
        Masters: [ test2.local ]
        Slaves: [ test1.local ]
    mystore (ocf::heartbeat:Filesystem): Started test2.local
    mysqld (lsb:mysqld): Started test2.local    # the master is currently test2.local

    5. Verify MySQL login on test2.local and behaviour after a role switch
    [root@test2 ~]# mysql -uroot
    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 2
    Server version: 5.5.44 Source distribution
    Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> show databases;    # mysql is running on test2.local and the drbd resource was mounted automatically
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | drbd_mysql         |
    | mysql              |
    | performance_schema |
    | test               |
    +--------------------+
    5 rows in set (0.07 sec)

    mysql> create database mydb;    # create a database, then switch to test1.local and check that it is still there
    Query OK, 1 row affected (0.00 sec)
    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | drbd_mysql         |
    | mydb               |
    | mysql              |
    | performance_schema |
    | test               |
    +--------------------+
    6 rows in set (0.00 sec)
    mysql> exit


    [root@test1 mysql]# crm node standby test2.local    # put test2.local into standby so that test1.local automatically becomes the master
    [root@test1 mysql]# crm node online test2.local
    [root@test1 mysql]# crm status

    Last updated: Thu Feb 25 16:53:24 2016
    Last change: Thu Feb 25 16:53:19 2016
    Stack: classic openais (with plugin)
    Current DC: test1.local - partition with quorum
    Version: 1.1.11-97629de
    2 Nodes configured, 2 expected votes
    4 Resources configured
    Online: [ test1.local test2.local ]
    Full list of resources:
    Master/Slave Set: ms_mysqldrbd [mysqldrbd]
        Masters: [ test1.local ]
        Slaves: [ test2.local ]
    mystore (ocf::heartbeat:Filesystem): Started test1.local    # test1.local is now the master
    mysqld (lsb:mysqld): Started test1.local

    [root@test1 mysql]# mysql -uroot    # log in to mysql on test1.local
    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 1
    Server version: 5.5.44 Source distribution
    Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | drbd_mysql         |
    | mydb               |    # the mydb database created on test2.local is here; everything is working
    | mysql              |
    | performance_schema |
    | test               |
    +--------------------+
    6 rows in set (0.15 sec)
    mysql>

    12.13. Define the VIP resource and its constraints
    crm(live)configure# primitive myip ocf:heartbeat:IPaddr params ip=192.168.10.3 nic=eth0 cidr_netmask=24    # a mistake here cost half a day: cidr_netmask must be the prefix length 24, not 255.255.255.0; when unsure of a command, lean on tab completion and help
    crm(live)configure# verify
    crm(live)configure# colocation myip_with_ms_mysqldrbd inf: ms_mysqldrbd:Master myip    # keep the VIP with the master of ms_mysqldrbd
    crm(live)configure# verify
    crm(live)configure# commit
    crm(live)configure# cd
    crm(live)# status    # check the status
    Last updated: Fri Feb 26 10:05:16 2016
    Last change: Fri Feb 26 10:05:12 2016
    Stack: classic openais (with plugin)
    Current DC: test1.local - partition with quorum
    Version: 1.1.11-97629de
    2 Nodes configured, 2 expected votes
    5 Resources configured
    Online: [ test1.local test2.local ]
    Full list of resources:
    Master/Slave Set: ms_mysqldrbd [mysqldrbd]
        Masters: [ test1.local ]
        Slaves: [ test2.local ]
    mystore (ocf::heartbeat:Filesystem): Started test1.local
    mysqld (lsb:mysqld): Started test1.local
    myip (ocf::heartbeat:IPaddr): Started test1.local    # the VIP is up on the test1.local node

    12.14. Test connecting through the VIP
    [root@test1 ~]# ip addr    # check that the VIP is bound on test1.local
    1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    valid_lft forever preferred_lft forever
    2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:34:7d:9f brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.55/24 brd 192.168.10.255 scope global eth0
    inet 192.168.10.3/24 brd 192.168.10.255 scope global secondary eth0    # the VIP is bound to the eth0 interface on test1.local
    inet6 fe80::20c:29ff:fe34:7d9f/64 scope link
    valid_lft forever preferred_lft forever
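    Checking for the VIP in `ip addr` output can be automated, e.g. from a post-failover test script. A minimal sketch with a sample output line embedded for illustration (the helper name `has_vip` is hypothetical):

    ```shell
    # has_vip: succeed if the given address appears as an "inet ADDR/..."
    # entry in ip-addr-style text read from stdin (-F: match literally)
    has_vip() {
        grep -qF "inet $1/"
    }

    sample='inet 192.168.10.3/24 brd 192.168.10.255 scope global secondary eth0'
    echo "$sample" | has_vip 192.168.10.3 && echo "VIP present"
    ```

    On a live node the call would be `ip addr | has_vip 192.168.10.3`, run on whichever node crm reports as the master.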


    [root@test-3 ~]# mysql -uroot -h192.168.10.3 -p    # connect to mysql through the VIP; the client host must be granted privileges first, or the login will fail
    Enter password:
    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 6
    Server version: 5.5.44 Source distribution
    Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    mysql> show databases;    # the two databases created earlier are listed
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | drbd_mysql         |
    | mydb               |
    | mysql              |
    | performance_schema |
    | test               |
    +--------------------+
    6 rows in set (0.08 sec)

    Simulate a failure of test1.local:
    [root@test1 mysql]# crm node standby test1.local    # demote test1.local and watch whether the VIP moves automatically
    [root@test1 mysql]# crm node online test1.local
    [root@test1 mysql]# crm status
    Last updated: Fri Feb 26 10:20:38 2016
    Last change: Fri Feb 26 10:20:35 2016
    Stack: classic openais (with plugin)
    Current DC: test1.local - partition with quorum
    Version: 1.1.11-97629de
    2 Nodes configured, 2 expected votes
    5 Resources configured
    Online: [ test1.local test2.local ]
    Full list of resources:
    Master/Slave Set: ms_mysqldrbd [mysqldrbd]
        Masters: [ test2.local ]
        Slaves: [ test1.local ]
    mystore (ocf::heartbeat:Filesystem): Started test2.local
    mysqld (lsb:mysqld): Started test2.local
    myip (ocf::heartbeat:IPaddr): Started test2.local    # the VIP is now on the test2.local node

    Check the IP information on test2:
    [root@test2 ~]# ip addr    # check the VIP binding on test2.local
    1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    valid_lft forever preferred_lft forever
    2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:fd:7f:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.56/24 brd 192.168.10.255 scope global eth0
    inet 192.168.10.3/24 brd 192.168.10.255 scope global secondary eth0    # the VIP is now on the eth0 interface of test2.local
    inet6 fe80::20c:29ff:fefd:7fe5/64 scope link
    valid_lft forever preferred_lft forever

    Test the MySQL connection:
    [root@test-3 ~]# mysql -uroot -h192.168.10.3 -p    # connect to mysql
    Enter password:
    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 1
    Server version: 5.5.44 Source distribution
    Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    mysql> show databases;    # everything is working
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | drbd_mysql         |
    | mydb               |
    | mysql              |
    | performance_schema |
    | test               |
    +--------------------+
    6 rows in set (0.06 sec)

    This completes the corosync+drbd+mysql setup. The write-up is not exhaustive: it does not explain how corosync, heartbeat, drbd, and MySQL fit together, nor how to prevent or recover from split-brain; those may be added later when time allows.

    That concludes the corosync+drbd+mysql high-availability setup for MySQL.

