Ceph Storage is a free, open-source, and widely used software-defined storage solution that provides file system, object, and block-level storage.
This article describes how to install and configure Ceph Storage on CentOS 7.
Step 1. Hostname Setup: Set the hostname on all Ceph cluster nodes and add the hostnames and IP addresses to the /etc/hosts file for name resolution:
[root@localhost ~]# hostnamectl set-hostname LC-Ceph-MGMT;exec bash
[root@lc-ceph-mgmt ~]#

[root@localhost ~]# hostnamectl set-hostname LC-Storage1;exec bash
[root@lc-storage1 ~]#

[root@localhost ~]# hostnamectl set-hostname LC-Storage2;exec bash
[root@lc-storage2 ~]#
[root@lc-ceph-mgmt ~]# vi /etc/hosts
...................
192.168.43.15 lc-ceph-mgmt
192.168.43.20 lc-storage1
192.168.43.25 lc-storage2
[root@lc-ceph-mgmt ~]#

[root@lc-storage1 ~]# vi /etc/hosts
...................
192.168.43.15 lc-ceph-mgmt
192.168.43.20 lc-storage1
192.168.43.25 lc-storage2
[root@lc-storage1 ~]#

[root@lc-storage2 ~]# vi /etc/hosts
...................
192.168.43.15 lc-ceph-mgmt
192.168.43.20 lc-storage1
192.168.43.25 lc-storage2
[root@lc-storage2 ~]#
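To confirm name resolution is working, ping each host by name from the other nodes; for example:

[root@lc-ceph-mgmt ~]# ping -c 2 lc-storage1
[root@lc-ceph-mgmt ~]# ping -c 2 lc-storage2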
Step 2. NTP Configuration: Follow the article Howto Install and Configure NTP Server on CentOS 7 to configure the NTP server and clients for time synchronization between all the cluster nodes.
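If you do not need a dedicated NTP server, a minimal alternative sketch is to use chrony, which ships with CentOS 7 and points at public pool servers by default; run this on every cluster node:

[root@lc-ceph-mgmt ~]# yum install chrony -y
[root@lc-ceph-mgmt ~]# systemctl enable chronyd
[root@lc-ceph-mgmt ~]# systemctl start chronyd
[root@lc-ceph-mgmt ~]# chronyc sources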
Step 3. User Creation: Create a user with sudo access on all storage nodes for deployment and cluster management:
[root@lc-storage1 ~]# useradd ceph; passwd ceph
Changing password for user ceph.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
[root@lc-storage1 ~]# echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph;chmod 0440 /etc/sudoers.d/ceph
[root@lc-storage1 ~]#

[root@lc-storage2 ~]# useradd ceph; passwd ceph
Changing password for user ceph.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.
[root@lc-storage2 ~]# echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph;chmod 0440 /etc/sudoers.d/ceph
[root@lc-storage2 ~]#
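To verify the sudoers rule, switch to the new user and run a command as root; sudo should not prompt for a password:

[root@lc-storage1 ~]# su - ceph
[ceph@lc-storage1 ~]$ sudo whoami
root
[ceph@lc-storage1 ~]$ exit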
Configure Passwordless Authentication: Follow the article How to Configure Password Less Authentication on CentOS 7 to enable passwordless authentication from the Ceph MGMT node to all other cluster nodes (lc-storage1 and lc-storage2) for cluster deployment and management.
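In short, this amounts to generating an SSH key pair on the MGMT node and copying the public key to each storage node. A minimal sketch, assuming the ceph user created above is the remote deployment user; the ~/.ssh/config entries let ceph-deploy log in as that user without extra flags:

[root@lc-ceph-mgmt ~]# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
[root@lc-ceph-mgmt ~]# ssh-copy-id ceph@lc-storage1
[root@lc-ceph-mgmt ~]# ssh-copy-id ceph@lc-storage2
[root@lc-ceph-mgmt ~]# cat >> ~/.ssh/config << 'EOF'
Host lc-storage1
    User ceph
Host lc-storage2
    User ceph
EOF
[root@lc-ceph-mgmt ~]# chmod 600 ~/.ssh/config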
Step 4. Repository Configuration: Follow the article Howto install EPEL repository on Centos7 to set up the EPEL repository on the Ceph MGMT node, as some Ceph dependencies will be installed from it.
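Alternatively, the EPEL release package can usually be installed directly from the CentOS Extras repository; a minimal sketch:

[root@lc-ceph-mgmt ~]# yum install epel-release -y
[root@lc-ceph-mgmt ~]# yum repolist enabled | grep -i epel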
Ceph Repository Configuration: Ceph packages can be installed from Ceph's official repository. Configure the Ceph repository on the Ceph MGMT node as shown below:
[root@lc-ceph-mgmt ~]# yum install https://download.ceph.com/rpm-mimic/el7/noarch/ceph-release-1-1.el7.noarch.rpm yum-plugin-priorities -y
Loaded plugins: fastestmirror
..................
Installed:
  ceph-release.noarch 0:1-1.el7          yum-plugin-priorities.noarch 0:1.1.31-52.el7
Complete!
[root@lc-ceph-mgmt ~]#
Step 5. Install ceph-deploy: Install the ceph-deploy tool on the Ceph MGMT node; it will be used to deploy the Ceph Storage Cluster on the lc-storage1 and lc-storage2 nodes:
[root@lc-ceph-mgmt ~]# yum install ceph-deploy -y
Loaded plugins: fastestmirror
.............
Installed:
  ceph-deploy.noarch 0:2.0.1-0
Complete!
[root@lc-ceph-mgmt ~]#
Step 6. Ceph Storage Cluster Deployment: Run the commands below to install the Ceph cluster packages. First, create a directory to hold the cluster deployment configuration and log files:
[root@lc-ceph-mgmt ~]# mkdir ~/ceph-cluster
[root@lc-ceph-mgmt ~]# cd ~/ceph-cluster
[root@lc-ceph-mgmt ceph-cluster]#
Run the command below to generate the initial cluster configuration file:
[root@lc-ceph-mgmt ceph-cluster]# ceph-deploy new lc-storage1 lc-storage2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy new lc-storage1 lc-storage2
..............
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[root@lc-ceph-mgmt ceph-cluster]#
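Optionally, before installing the packages you can append cluster-wide settings to the generated ceph.conf. For example, on hosts with multiple network interfaces the upstream documentation recommends pinning the public network; the subnet below matches the /etc/hosts entries from Step 1 and is an assumption about your environment:

[root@lc-ceph-mgmt ceph-cluster]# echo "public network = 192.168.43.0/24" >> ceph.conf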
Run the following command to install the Ceph packages on all the nodes:
[root@lc-ceph-mgmt ceph-cluster]# ceph-deploy install lc-storage1 lc-storage2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy install lc-storage1 lc-storage2
..............
[lc-storage1][DEBUG ] Complete!
[lc-storage1][INFO  ] Running command: sudo ceph --version
[lc-storage1][DEBUG ] ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)
[ceph_deploy.install][DEBUG ] Detecting platform for host lc-storage2 ...
[lc-storage2][DEBUG ] connection detected need for sudo
[lc-storage2][DEBUG ] connected to host: lc-storage2
..............
[lc-storage2][DEBUG ] Complete!
[lc-storage2][INFO  ] Running command: sudo ceph --version
[lc-storage2][DEBUG ] ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)
[root@lc-ceph-mgmt ceph-cluster]#
Step 7. Initial Monitor and Keys: Run the following command to create the initial monitors and gather the keys:
[root@lc-ceph-mgmt ceph-cluster]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mon create-initial
..............
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmp5Mr6KO
[root@lc-ceph-mgmt ceph-cluster]#
Step 8. Copy Admin Keys: Run the following command to copy the configuration file and admin key from the Ceph MGMT node to all other cluster nodes:
[root@lc-ceph-mgmt ceph-cluster]# ceph-deploy admin lc-storage1 lc-storage2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy admin lc-storage1 lc-storage2
................
[lc-storage2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[root@lc-ceph-mgmt ceph-cluster]#
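Note that the copied admin keyring is readable by root only. If you later want to run ceph commands as the non-root ceph user on the storage nodes, the upstream quick start suggests relaxing its permissions:

[root@lc-storage1 ~]# chmod +r /etc/ceph/ceph.client.admin.keyring
[root@lc-storage2 ~]# chmod +r /etc/ceph/ceph.client.admin.keyring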
Step 9. Deploy Manager Daemon: Run the following command to deploy the manager daemon (ceph-mgr) on both storage nodes:
[root@lc-ceph-mgmt ceph-cluster]# ceph-deploy mgr create lc-storage1 lc-storage2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mgr create lc-storage1 lc-storage2
..................
[lc-storage2][INFO  ] Running command: sudo systemctl start ceph-mgr@lc-storage2
[lc-storage2][INFO  ] Running command: sudo systemctl enable ceph.target
[root@lc-ceph-mgmt ceph-cluster]#
Step 10. Hard Disk List: Run the following commands to list the hard disks available on the storage nodes for creating OSDs:
[root@lc-ceph-mgmt ceph-cluster]# ceph-deploy disk list lc-storage1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy disk list lc-storage1
.....................
[lc-storage1][INFO  ] Running command: sudo fdisk -l
[lc-storage1][INFO  ] Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors
[lc-storage1][INFO  ] Disk /dev/vdb: 5368 MB, 5368709120 bytes, 10485760 sectors
[root@lc-ceph-mgmt ceph-cluster]#

[root@lc-ceph-mgmt ceph-cluster]# ceph-deploy disk list lc-storage2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy disk list lc-storage2
....................
[lc-storage2][INFO  ] Running command: sudo fdisk -l
[lc-storage2][INFO  ] Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors
[lc-storage2][INFO  ] Disk /dev/vdb: 5368 MB, 5368709120 bytes, 10485760 sectors
[root@lc-ceph-mgmt ceph-cluster]#
Step 11. Add OSD Hard Disks: Run the following commands to create OSDs and add the disks to the Ceph storage cluster (make sure the disks are unused; all data on them will be erased permanently):
[root@lc-ceph-mgmt ceph-cluster]# ceph-deploy osd create --data /dev/vdb lc-storage1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy osd create --data /dev/vdb lc-storage1
....................
[lc-storage1][DEBUG ] --> ceph-volume lvm create successful for: /dev/vdb
[lc-storage1][INFO  ] checking OSD status...
[lc-storage1][DEBUG ] find the location of an executable
[lc-storage1][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host lc-storage1 is now ready for osd use.
[root@lc-ceph-mgmt ceph-cluster]#

[root@lc-ceph-mgmt ceph-cluster]# ceph-deploy osd create --data /dev/vdb lc-storage2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy osd create --data /dev/vdb lc-storage2
....................
[lc-storage2][DEBUG ] --> ceph-volume lvm create successful for: /dev/vdb
[lc-storage2][INFO  ] checking OSD status...
[lc-storage2][DEBUG ] find the location of an executable
[lc-storage2][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host lc-storage2 is now ready for osd use.
[root@lc-ceph-mgmt ceph-cluster]#
Step 12. Validate Installation: Log in to any one storage node and run the following commands to check and verify the installation and configuration status of the Ceph Storage Cluster:
[root@lc-storage1 ~]# ceph
ceph> health
HEALTH_OK

ceph> status
  cluster:
    id:     143918ca-8ba4-4176-8636-9ad0a6d004d8
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum lc-storage2,lc-storage1
    mgr: lc-storage1(active), standbys: lc-storage2
    osd: 2 osds: 2 up, 2 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   2.0 GiB used, 8.0 GiB / 10 GiB avail
    pgs:

ceph> q
[root@lc-storage1 ~]#
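The same information is available non-interactively, which is convenient for scripts and monitoring; for example:

[root@lc-storage1 ~]# ceph -s          # cluster status, same as 'status' above
[root@lc-storage1 ~]# ceph osd tree    # OSD layout and up/in state
[root@lc-storage1 ~]# ceph df          # cluster and pool usage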
Done!!! The Ceph Storage Cluster configuration is complete.