High Availability (HA)
An active-passive cluster provides continued availability of services over long periods without interruption: if one of the cluster nodes fails, the Pacemaker service starts the services on another node. Pacemaker and Corosync are open-source and widely used software for service high availability in production.
This article describes how to configure a two-node Apache web server high availability cluster with Pacemaker and Corosync, using iSCSI shared storage, on CentOS 7.
Shared Storage Configuration:
Step 1. Shared storage configuration: Follow the article How to configure iSCSI Target and Initiator on CentOS 7 to configure the iSCSI target and map the shared volume on all the nodes. Here the iSCSI target is mapped to /dev/sdb.
[root@lc-node1 ~]# lsscsi
……………..
[3:0:0:0] disk LIO-ORG iscsi-disk1 4.0 /dev/sdb
[root@lc-node1 ~]#
[root@lc-node2 ~]# lsscsi
……………..
[3:0:0:0] disk LIO-ORG iscsi-disk1 4.0 /dev/sdb
[root@lc-node2 ~]#
Step 2. LVM on the shared volume: The iSCSI initiator mapped the volume to the /dev/sdb disk on all the cluster nodes. Create an LVM volume on /dev/sdb and create a filesystem from any one cluster node as shown below:
[root@lc-node1 ~]# pvcreate /dev/sdb; vgcreate vg_data /dev/sdb; lvcreate -n lv_apache -L 5G vg_data; lvs
  Physical volume "/dev/sdb" successfully created.
  Volume group "vg_data" successfully created
  Logical volume "lv_apache" created.
  LV        VG      Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  root      centos  -wi-ao---- 21.99g
  swap      centos  -wi-ao----  2.00g
  lv_apache vg_data -wi-ao----  5.00g
[root@lc-node1 ~]#
Create an XFS file system on the volume from one node:
[root@lc-node1 ~]# mkfs.xfs /dev/mapper/vg_data-lv_apache
meta-data=/dev/mapper/vg_data-lv_apache isize=512    agcount=4, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1310720, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@lc-node1 ~]#
Run the following command to access the shared volume on the other cluster node:
[root@lc-node2 ~]# iscsiadm --mode node -R ; pvscan ; vgscan ; lvscan
Rescanning session [sid: 1, target: iqn.2019-10.local.linuxcnf.iscsi-target:iscsi-disk1, portal: 192.168.43.45,3260]
  PV /dev/sdb    VG vg_data   lvm2 [<10.00 GiB / <5.00 GiB free]
  PV /dev/sda2   VG centos    lvm2 [<24.00 GiB / 4.00 MiB free]
  Total: 2 [33.99 GiB] / in use: 2 [33.99 GiB] / in no VG: 0 [0 ]
  Reading volume groups from cache.
  Found volume group "vg_data" using metadata type lvm2
  Found volume group "centos" using metadata type lvm2
  inactive          '/dev/vg_data/lv_apache' [5.00 GiB] inherit
  ACTIVE            '/dev/centos/swap' [2.00 GiB] inherit
  ACTIVE            '/dev/centos/root' [21.99 GiB] inherit
[root@lc-node2 ~]#
Note: In case you face issues scanning LVM, run the following command to restart the LVM metadata service. It does not affect the existing LVM configuration.
[root@lc-node2 ~]# systemctl restart lvm2-lvmetad.service
[root@lc-node2 ~]#
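In the scan output above, lv_apache shows as inactive on lc-node2. The cluster's Filesystem resource agent activates and mounts the volume when it starts the resource, so no manual action is normally needed; but for a one-off sanity check you can activate and mount it yourself. The sketch below is illustrative (the `run` wrapper and `DRY_RUN` flag are additions for safe experimentation, not part of the article's procedure); deactivate the volume again before handing control to the cluster:

```shell
#!/bin/sh
# Manually activate, mount, and release the shared LV for a quick check.
# DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0
# on a real cluster node to execute them.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run lvchange -ay vg_data/lv_apache      # activate the logical volume
run mount /dev/vg_data/lv_apache /mnt   # mount it temporarily
run umount /mnt                         # release the mount point
run lvchange -an vg_data/lv_apache      # deactivate before cluster use
```

Mount the volume on only one node at a time: XFS is not a cluster filesystem, and concurrent mounts would corrupt it.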
Cluster Configuration:
Step 1. Cluster Configuration: Next, follow the article How to Configure High Availability Cluster with Pacemaker and Corosync on CentOS 7; the cluster should look like the output below:
[root@lc-node1 ~]# pcs cluster status
Cluster Status:
 Stack: corosync
 Current DC: lc-node2.linuxcnf.local (version 1.1.20-5.el7_7.1-3c4c782f70) - partition with quorum
 Last updated: Thu Oct 10 03:04:55 2019
 Last change: Thu Oct 10 01:52:55 2019 by hacluster via crmd on lc-node1.linuxcnf.local
 2 nodes configured
 0 resources configured

PCSD Status:
  lc-node2.linuxcnf.local: Online
  lc-node1.linuxcnf.local: Online
[root@lc-node1 ~]#
Note: I am not using any fencing device in this cluster, so fencing has to be disabled. Run the following command to disable it:
[root@lc-node1 ~]# pcs property set stonith-enabled=false
[root@lc-node1 ~]#
Step 2. Apache Package Installation: Follow the article How to Install Apache/httpd Web Server on CentOS 7 to install the Apache web server on all the cluster nodes, and prepare the web server document root before adding the service into the cluster.
Step 3. Apache Server Configuration: Mount the shared storage and create the directories on any one of the cluster nodes:
[root@lc-node1 ~]# mount /dev/mapper/vg_data-lv_apache /var/www/
[root@lc-node1 ~]# mkdir -p /var/www/html
[root@lc-node1 ~]# mkdir -p /var/www/cgi-bin
[root@lc-node1 ~]# mkdir -p /var/www/error
Configure SELinux to allow access to the contents of the newly created directories on the shared storage:
[root@lc-node1 ~]# restorecon -R /var/www/
[root@lc-node1 ~]#
Create a sample page by putting the contents below into the index.html file, then unmount the /var/www directory:
[root@lc-node1 ~]# vi /var/www/html/index.html
<html>
<body>
<center><b>Welcome!!!
This is a sample page for testing the Red Hat High Availability Cluster.</b></center>
</body>
</html>
[root@lc-node1 ~]# umount /var/www/
[root@lc-node1 ~]#
Add the contents below at the end of the Apache configuration file /etc/httpd/conf/httpd.conf on both nodes, so that the cluster can check the Apache server status.
[root@lc-node1 ~]# vi /etc/httpd/conf/httpd.conf
…………………….
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
[root@lc-node2 ~]# vi /etc/httpd/conf/httpd.conf
…………………….
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
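Instead of editing the file by hand on each node, the same stanza can be appended with a heredoc. This is a hedged sketch: the `HTTPD_CONF` variable is an addition for illustration and defaults to a file in the current directory, so point it at /etc/httpd/conf/httpd.conf when running it on the cluster nodes:

```shell
#!/bin/sh
# Append the mod_status stanza that the apache resource agent polls via
# its statusurl parameter. Set HTTPD_CONF=/etc/httpd/conf/httpd.conf on
# the cluster nodes; it defaults to ./httpd.conf here for safe testing.
HTTPD_CONF="${HTTPD_CONF:-httpd.conf}"

cat >> "$HTTPD_CONF" <<'EOF'
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
EOF

grep -c 'server-status' "$HTTPD_CONF"   # sanity check: stanza was appended
```

Run it once on every cluster node; the status page only needs to answer on 127.0.0.1, which is why access is restricted to localhost.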
Step 4. Create Resources: Run the following commands to create the resources for the pcs cluster from any one of the cluster nodes. First, create a file system resource to hold the web server data:
[root@lc-node1 ~]# pcs resource create apache_fs Filesystem device="/dev/mapper/vg_data-lv_apache" directory="/var/www" fstype="xfs" --group webserver
Assumed agent name 'ocf:heartbeat:Filesystem' (deduced from 'Filesystem')
[root@lc-node1 ~]#
Create an IP address resource to use as a virtual IP address. Clients reach whichever cluster node is active through this VIP:
[root@lc-node1 ~]# pcs resource create apache_vip IPaddr2 ip=192.168.43.50 cidr_netmask=24 --group webserver
Assumed agent name 'ocf:heartbeat:IPaddr2' (deduced from 'IPaddr2')
[root@lc-node1 ~]#
Create an httpd service resource to serve the Apache web contents:
[root@lc-node1 ~]# pcs resource create web_server apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status" --group webserver
Assumed agent name 'ocf:heartbeat:apache' (deduced from 'apache')
[root@lc-node1 ~]#
Step 5. Verify the Cluster Status: Run the following command to verify the cluster status:
[root@lc-node1 ~]# pcs status
Cluster name: apache_cluster
Stack: corosync
Current DC: lc-node2.linuxcnf.local (version 1.1.20-5.el7_7.1-3c4c782f70) - partition with quorum
Last updated: Mon Oct 14 03:59:23 2019
Last change: Mon Oct 14 03:37:57 2019 by root via cibadmin on lc-node1.linuxcnf.local

2 nodes configured
3 resources configured

Online: [ lc-node1.linuxcnf.local lc-node2.linuxcnf.local ]

Full list of resources:

 Resource Group: webserver
     apache_fs  (ocf::heartbeat:Filesystem):    Started lc-node2.linuxcnf.local
     apache_vip (ocf::heartbeat:IPaddr2):       Started lc-node2.linuxcnf.local
     web_server (ocf::heartbeat:apache):        Started lc-node2.linuxcnf.local

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@lc-node1 ~]#
Hit the virtual IP (192.168.43.50) from any browser to access the Apache/httpd web server through the PCS high availability cluster services.
Done!!! The two-node PCS cluster configuration is finished.
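To confirm that failover actually works, you can run a standby drill: drain the node currently running the webserver group, watch the resources move, then restore the node. A hedged sketch follows (the `run` wrapper and `DRY_RUN` flag are illustrative additions; the node name and VIP match this article's setup, where the group is currently on lc-node2):

```shell
#!/bin/sh
# Failover drill for the webserver resource group.
# DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0
# on a real cluster node to execute them.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run pcs cluster standby lc-node2.linuxcnf.local   # drain the active node
run pcs status                                    # group should restart on lc-node1
run curl -s http://192.168.43.50/                 # VIP should still answer
run pcs cluster unstandby lc-node2.linuxcnf.local # bring the node back online
```

After unstandby, the group stays where it is by default; set a resource stickiness or location preference if you want it to fail back automatically.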