A High Availability (HA) cluster keeps services continuously available over long periods of time. If one of the cluster nodes fails, Pacemaker starts the affected services on another node. Pacemaker and Corosync are open-source software widely used for service high availability in production environments.
This article describes how to configure a 3-node High Availability cluster with Pacemaker and Corosync on CentOS 7.
Step 1. Hostname entries: Edit the /etc/hosts file on all nodes and add the following entries so the nodes can resolve each other by name:
[root@lc-node1 ~]# vi /etc/hosts
……………
192.168.43.30 lc-node1.linuxcnf.local lc-node1
192.168.43.35 lc-node2.linuxcnf.local lc-node2
192.168.43.40 lc-node3.linuxcnf.local lc-node3
[root@lc-node1 ~]# hostnamectl set-hostname lc-node1.linuxcnf.local
[root@lc-node1 ~]#
[root@lc-node2 ~]# vi /etc/hosts
……………
192.168.43.30 lc-node1.linuxcnf.local lc-node1
192.168.43.35 lc-node2.linuxcnf.local lc-node2
192.168.43.40 lc-node3.linuxcnf.local lc-node3
[root@lc-node2 ~]# hostnamectl set-hostname lc-node2.linuxcnf.local
[root@lc-node2 ~]#
[root@lc-node3 ~]# vi /etc/hosts
……………
192.168.43.30 lc-node1.linuxcnf.local lc-node1
192.168.43.35 lc-node2.linuxcnf.local lc-node2
192.168.43.40 lc-node3.linuxcnf.local lc-node3
[root@lc-node3 ~]# hostnamectl set-hostname lc-node3.linuxcnf.local
[root@lc-node3 ~]#
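Optionally, verify that the nodes resolve each other by name before moving on. This quick check is not part of the original procedure, but it catches typos in /etc/hosts early:
[root@lc-node1 ~]# getent hosts lc-node1.linuxcnf.local lc-node2.linuxcnf.local lc-node3.linuxcnf.local
[root@lc-node1 ~]# ping -c 1 lc-node2.linuxcnf.local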
Step 2. Cluster package installation: Run the following command to install the cluster packages on all cluster nodes:
[root@lc-node1 ~]# yum install pcs fence-agents-all
Loaded plugins: fastestmirror
………………………..
Installed:
  fence-agents-all.x86_64 0:4.2.1-24.el7
  pcs.x86_64 0:0.9.167-3.el7.centos.1
Dependency Installed:
  OpenIPMI.x86_64 0:2.0.27-1.el7
  OpenIPMI-libs.x86_64 0:2.0.27-1.el7
  ……………………….
  trousers.x86_64 0:0.3.14-2.el7
  unbound-libs.x86_64 0:1.6.6-1.el7
Dependency Updated:
  audit.x86_64 0:2.8.5-4.el7
  audit-libs.x86_64 0:2.8.5-4.el7
  kpartx.x86_64 0:0.4.9-127.el7
  policycoreutils.x86_64 0:2.5-33.el7
Complete!
[root@lc-node1 ~]#
[root@lc-node2 ~]# yum install pcs fence-agents-all
Loaded plugins: fastestmirror
……………………………
Installed:
  fence-agents-all.x86_64 0:4.2.1-24.el7
  pcs.x86_64 0:0.9.167-3.el7.centos.1
Dependency Installed:
  OpenIPMI.x86_64 0:2.0.27-1.el7
  OpenIPMI-libs.x86_64 0:2.0.27-1.el7
  ………………………………
  trousers.x86_64 0:0.3.14-2.el7
  unbound-libs.x86_64 0:1.6.6-1.el7
Dependency Updated:
  audit.x86_64 0:2.8.5-4.el7
  audit-libs.x86_64 0:2.8.5-4.el7
  kpartx.x86_64 0:0.4.9-127.el7
  policycoreutils.x86_64 0:2.5-33.el7
Complete!
[root@lc-node2 ~]#
[root@lc-node3 ~]# yum install pcs fence-agents-all
Loaded plugins: fastestmirror
………………………..
Installed:
  fence-agents-all.x86_64 0:4.2.1-24.el7
  pcs.x86_64 0:0.9.167-3.el7.centos.1
Dependency Installed:
  OpenIPMI.x86_64 0:2.0.27-1.el7
  OpenIPMI-libs.x86_64 0:2.0.27-1.el7
  ……………………….
  trousers.x86_64 0:0.3.14-2.el7
  unbound-libs.x86_64 0:1.6.6-1.el7
Dependency Updated:
  audit.x86_64 0:2.8.5-4.el7
  audit-libs.x86_64 0:2.8.5-4.el7
  kpartx.x86_64 0:0.4.9-127.el7
  policycoreutils.x86_64 0:2.5-33.el7
Complete!
[root@lc-node3 ~]#
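As an optional verification (not part of the original output), confirm the packages are installed. Installing pcs normally pulls in corosync and pacemaker as dependencies, so they can be checked at the same time:
[root@lc-node1 ~]# rpm -q pcs fence-agents-all corosync pacemaker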
Step 3. Firewall configuration: Run the following commands to allow the high-availability service through the firewall so the cluster nodes can communicate with each other:
[root@lc-node1 ~]# firewall-cmd --permanent --add-service=high-availability
success
[root@lc-node1 ~]# firewall-cmd --reload
success
[root@lc-node1 ~]#
[root@lc-node2 ~]# firewall-cmd --permanent --add-service=high-availability
success
[root@lc-node2 ~]# firewall-cmd --reload
success
[root@lc-node2 ~]#
[root@lc-node3 ~]# firewall-cmd --permanent --add-service=high-availability
success
[root@lc-node3 ~]# firewall-cmd --reload
success
[root@lc-node3 ~]#
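To confirm the rule is active after the reload, you can list the allowed services; the high-availability service covers the ports used by pcsd and corosync (an optional check, not part of the original steps):
[root@lc-node1 ~]# firewall-cmd --list-services
[root@lc-node1 ~]# firewall-cmd --info-service=high-availability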
Step 4. User and pcsd service: Run the following commands to prepare the nodes for cluster authentication.
Set a password for the hacluster user on each cluster node:
[root@lc-node1 ~]# passwd hacluster
Changing password for user hacluster.
New password:
BAD PASSWORD: The password contains the user name in some form
Retype new password:
passwd: all authentication tokens updated successfully.
[root@lc-node1 ~]#
[root@lc-node2 ~]# passwd hacluster
Changing password for user hacluster.
New password:
BAD PASSWORD: The password contains the user name in some form
Retype new password:
passwd: all authentication tokens updated successfully.
[root@lc-node2 ~]#
[root@lc-node3 ~]# passwd hacluster
Changing password for user hacluster.
New password:
BAD PASSWORD: The password contains the user name in some form
Retype new password:
passwd: all authentication tokens updated successfully.
[root@lc-node3 ~]#
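If you are scripting the setup, the same password can be set non-interactively with passwd --stdin (available on CentOS/RHEL). The password below is only a placeholder; pick a strong one that avoids the BAD PASSWORD warning shown above:
[root@lc-node1 ~]# echo 'YourStrongPasswordHere' | passwd --stdin hacluster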
Enable and start the pcsd service on each cluster node:
[root@lc-node1 ~]# systemctl enable pcsd
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
[root@lc-node1 ~]# systemctl restart pcsd
[root@lc-node1 ~]#
[root@lc-node2 ~]# systemctl enable pcsd
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
[root@lc-node2 ~]# systemctl restart pcsd
[root@lc-node2 ~]#
[root@lc-node3 ~]# systemctl enable pcsd
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
[root@lc-node3 ~]# systemctl restart pcsd
[root@lc-node3 ~]#
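An optional check that pcsd is enabled and running on each node:
[root@lc-node1 ~]# systemctl is-enabled pcsd
[root@lc-node1 ~]# systemctl is-active pcsd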
Step 5. Create and start the cluster: Run the following commands on any one node to authenticate all cluster nodes and to configure Corosync.
Node authentication:
[root@lc-node1 ~]# pcs cluster auth lc-node1.linuxcnf.local lc-node2.linuxcnf.local lc-node3.linuxcnf.local
Username: hacluster
Password:
lc-node1.linuxcnf.local: Authorized
lc-node3.linuxcnf.local: Authorized
lc-node2.linuxcnf.local: Authorized
[root@lc-node1 ~]#
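If you prefer a non-interactive run (for example, in a script), pcs 0.9 also accepts the credentials on the command line; the password shown is only a placeholder:
[root@lc-node1 ~]# pcs cluster auth lc-node1.linuxcnf.local lc-node2.linuxcnf.local lc-node3.linuxcnf.local -u hacluster -p 'YourStrongPasswordHere'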
Cluster creation:
[root@lc-node1 ~]# pcs cluster setup --start --name my_cluster lc-node1.linuxcnf.local lc-node2.linuxcnf.local lc-node3.linuxcnf.local
Destroying cluster on nodes: lc-node1.linuxcnf.local, lc-node2.linuxcnf.local, lc-node3.linuxcnf.local...
lc-node2.linuxcnf.local: Stopping Cluster (pacemaker)...
lc-node3.linuxcnf.local: Stopping Cluster (pacemaker)...
lc-node1.linuxcnf.local: Stopping Cluster (pacemaker)...
lc-node1.linuxcnf.local: Successfully destroyed cluster
lc-node3.linuxcnf.local: Successfully destroyed cluster
lc-node2.linuxcnf.local: Successfully destroyed cluster
Sending 'pacemaker_remote authkey' to 'lc-node1.linuxcnf.local', 'lc-node2.linuxcnf.local', 'lc-node3.linuxcnf.local'
lc-node2.linuxcnf.local: successful distribution of the file 'pacemaker_remote authkey'
lc-node3.linuxcnf.local: successful distribution of the file 'pacemaker_remote authkey'
lc-node1.linuxcnf.local: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
lc-node1.linuxcnf.local: Succeeded
lc-node2.linuxcnf.local: Succeeded
lc-node3.linuxcnf.local: Succeeded
Starting cluster on nodes: lc-node1.linuxcnf.local, lc-node2.linuxcnf.local, lc-node3.linuxcnf.local...
lc-node1.linuxcnf.local: Starting Cluster (corosync)...
lc-node2.linuxcnf.local: Starting Cluster (corosync)...
lc-node3.linuxcnf.local: Starting Cluster (corosync)...
lc-node2.linuxcnf.local: Starting Cluster (pacemaker)...
lc-node1.linuxcnf.local: Starting Cluster (pacemaker)...
lc-node3.linuxcnf.local: Starting Cluster (pacemaker)...
Synchronizing pcsd certificates on nodes lc-node1.linuxcnf.local, lc-node2.linuxcnf.local, lc-node3.linuxcnf.local...
lc-node1.linuxcnf.local: Success
lc-node3.linuxcnf.local: Success
lc-node2.linuxcnf.local: Success
Restarting pcsd on the nodes in order to reload the certificates...
lc-node1.linuxcnf.local: Success
lc-node3.linuxcnf.local: Success
lc-node2.linuxcnf.local: Success
[root@lc-node1 ~]#
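The setup command generates the Corosync configuration (cluster name, node list, and quorum settings) and distributes it to every node, as shown by the "Sending cluster config files" lines above. If you are curious about what was written, you can inspect it with:
[root@lc-node1 ~]# cat /etc/corosync/corosync.conf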
Enable the cluster services (corosync and pacemaker) to start at boot on all nodes:
[root@lc-node1 ~]# pcs cluster enable --all
lc-node1.linuxcnf.local: Cluster Enabled
lc-node2.linuxcnf.local: Cluster Enabled
lc-node3.linuxcnf.local: Cluster Enabled
[root@lc-node1 ~]#
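Note that no fencing (STONITH) device has been configured yet, so Pacemaker will warn about it once you start adding resources. In a lab environment you can disable STONITH with the command below; in production, configure a proper fence agent from the fence-agents-all package instead:
[root@lc-node1 ~]# pcs property set stonith-enabled=false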
Step 6. Verification: Run the following command on each node to verify the cluster status:
[root@lc-node1 ~]# pcs cluster status
Cluster Status:
 Stack: corosync
 Current DC: lc-node1.linuxcnf.local (version 1.1.20-5.el7_7.1-3c4c782f70) - partition with quorum
 Last updated: Sun Oct 20 18:01:24 2019
 Last change: Sun Oct 20 17:59:57 2019 by hacluster via crmd on lc-node1.linuxcnf.local
 3 nodes configured
 0 resources configured
PCSD Status:
  lc-node2.linuxcnf.local: Online
  lc-node3.linuxcnf.local: Online
  lc-node1.linuxcnf.local: Online
[root@lc-node1 ~]#
[root@lc-node2 ~]# pcs cluster status
Cluster Status:
 Stack: corosync
 Current DC: lc-node1.linuxcnf.local (version 1.1.20-5.el7_7.1-3c4c782f70) - partition with quorum
 Last updated: Sun Oct 20 18:01:29 2019
 Last change: Sun Oct 20 17:59:57 2019 by hacluster via crmd on lc-node1.linuxcnf.local
 3 nodes configured
 0 resources configured
PCSD Status:
  lc-node1.linuxcnf.local: Online
  lc-node3.linuxcnf.local: Online
  lc-node2.linuxcnf.local: Online
[root@lc-node2 ~]#
[root@lc-node3 ~]# pcs cluster status
Cluster Status:
 Stack: corosync
 Current DC: lc-node1.linuxcnf.local (version 1.1.20-5.el7_7.1-3c4c782f70) - partition with quorum
 Last updated: Sun Oct 20 18:01:33 2019
 Last change: Sun Oct 20 17:59:57 2019 by hacluster via crmd on lc-node1.linuxcnf.local
 3 nodes configured
 0 resources configured
PCSD Status:
  lc-node2.linuxcnf.local: Online
  lc-node1.linuxcnf.local: Online
  lc-node3.linuxcnf.local: Online
[root@lc-node3 ~]#
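For a more detailed view than pcs cluster status, the following optional commands show the corosync membership and quorum state:
[root@lc-node1 ~]# pcs status corosync
[root@lc-node1 ~]# corosync-quorumtool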
Done! The 3-node PCS cluster has been created and configured successfully.