Install and Configure Inktank Ceph storage

Ceph is an open source storage platform designed for modern storage needs. It scales to the exabyte level and is designed to have no single point of failure, making it ideal for applications that require highly available, flexible storage.

The demonstration layout below is a five-node cluster with Ceph storage, plus two Ceph clients. Two network interfaces are used on each node to increase bandwidth and redundancy, which helps maintain sufficient bandwidth for storage traffic without affecting client applications.

Before getting started with setting up the Ceph cluster, let's review the hardware and software requirements.

Because this is a demonstration and not intended for production, I suggest the following:

* ceph-admin (node to manage all Ceph nodes within the cluster):

  • 1 GB RAM or more
  • 40 GB disk space
  • 2 NICs (nic1 as public, nic2 as private internal communication among the Ceph nodes)
  • nic1 -> 192.168.100.20, nic2 -> 192.168.101.1

* ceph-mon (monitor node)

  • 1 GB RAM or more
  • 40 GB disk space (for production, use considerably more)
  • 2 NICs (nic1 as public, nic2 as private internal communication among the Ceph nodes)
  • Note: you must use an odd number of monitor nodes (1, 3, 5, ...) to keep quorum and prevent a single point of failure
  • nic1 -> 192.168.100.21, nic2 -> 192.168.101.2

* ceph-osd (object store daemon, the storage cluster)

  • 1 GB RAM or more
  • 40 GB disk space (for production, use considerably more)
  • An additional 50 GB disk for storage purposes
  • 2 NICs (nic1 as public, nic2 as private internal communication among the Ceph nodes)
  • ceph-osd-01: nic1 -> 192.168.100.22, nic2 -> 192.168.101.3
  • ceph-osd-02: nic1 -> 192.168.100.23, nic2 -> 192.168.101.4
  • ceph-osd-03: nic1 -> 192.168.100.24, nic2 -> 192.168.101.5

* ceph-client (mount point for Ceph storage)

  • 1 GB RAM or more
  • 40 GB disk space (for production, use considerably more)
  • 2 NICs (nic1 as public, nic2 as private internal communication among the Ceph nodes)
  • ceph-client-01: nic1 -> 192.168.100.25, nic2 -> 192.168.101.6
  • ceph-client-02: nic1 -> 192.168.100.26, nic2 -> 192.168.101.7

* ceph-mds: I install it on the ceph-admin node; you could install it separately.

* Operating system -> RHEL 7

* Ceph Software:

On the ceph-admin node, add these two repository files, ceph.repo and ceph-el7.repo (contents shown below):

Ceph.repo

[Ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-firefly/rhel7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/rhel7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://ceph.com/rpm-firefly/rhel7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
priority=1

Ceph-el7.repo

[Ceph-el7]
name=Ceph-el7
baseurl=http://eu.ceph.com/rpms/rhel7/noarch/
enabled=1
gpgcheck=0


* Note: I add two repos because the package 'python-jinja' is missing from the main 'firefly' repo; the second repo, ceph-el7.repo, must be present on all nodes.
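
Since ceph-el7.repo has to be present on every node, you can push it out from ceph-admin. A minimal sketch, assuming you saved the file as /etc/yum.repos.d/ceph-el7.repo and can still SSH as root at this stage (otherwise distribute it however you prefer):

~~~
# copy the extra repo file to every node
for host in ceph-mon ceph-osd-01 ceph-osd-02 ceph-osd-03 ceph-client-01 ceph-client-02; do
    scp /etc/yum.repos.d/ceph-el7.repo root@$host:/etc/yum.repos.d/
done
yum repolist | grep -i ceph   # run on each node to verify the Ceph repos are active
~~~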

Let's prepare the ceph-admin node:
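
Before anything else, make sure every node can resolve the other nodes by name, since ceph-deploy and the SSH configuration below rely on hostnames. A minimal /etc/hosts sketch based on the public addresses listed above (copy it to all nodes, or use DNS instead) might look like this:

~~~
# /etc/hosts - public-network entries for the demo cluster (sketch)
192.168.100.20   ceph-admin
192.168.100.21   ceph-mon
192.168.100.22   ceph-osd-01
192.168.100.23   ceph-osd-02
192.168.100.24   ceph-osd-03
192.168.100.25   ceph-client-01
192.168.100.26   ceph-client-02
~~~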

  • We need to add a user 'ceph' on all nodes; follow these steps:
useradd -d /home/ceph -m ceph
passwd ceph
  • Add sudo privileges for the user on each Ceph Node.
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
chmod 0440 /etc/sudoers.d/ceph
  • Configure your ceph-deploy admin node with password-less SSH access to each Ceph Node.
ssh-keygen // don't set a passphrase

Then use ssh-copy-id to copy the key to the other nodes (ceph-mon, ceph-osd-01..03, ceph-client-01..02); see the loop sketch at the end of this section.
  • Configure /etc/ssh/ssh_config on the ceph-admin node and add these lines:
Host ceph-mon
     Hostname ceph-mon
     User ceph

Host ceph-osd-01
     Hostname ceph-osd-01
     User ceph

Host ceph-osd-02
     Hostname ceph-osd-02
     User ceph

Host ceph-osd-03
     Hostname ceph-osd-03
     User ceph

Host ceph-client-01
     Hostname ceph-client-01
     User ceph

Host ceph-client-02
     Hostname ceph-client-02
     User ceph
  • Update your repository and install ceph-deploy
    yum update -y && yum install ceph-deploy // done only once, on the ceph-admin node
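
The loop referenced above for copying the SSH key is a small convenience sketch; run it as the ceph user on ceph-admin (the hostnames are the ones used throughout this guide):

~~~
# push the ceph user's public key to every other node
for host in ceph-mon ceph-osd-01 ceph-osd-02 ceph-osd-03 ceph-client-01 ceph-client-02; do
    ssh-copy-id ceph@$host
done
~~~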

Create the Ceph storage cluster:

On the ceph-admin node, run these steps:

  • Log in to the ceph-admin node as the ceph user and create a folder to hold the cluster files.
mkdir ceph-cluster
cd ceph-cluster
  • Create the cluster and designate the initial Ceph monitor node by executing:
ceph-deploy new ceph-mon // generates ceph.conf and the monitor keyring in the current directory
  • Change the number of replicas for the default data pool (a ceph.conf sketch for setting the cluster-wide defaults instead is shown at the end of this section):
ceph osd pool set data size 2 // this means store two copies of every object

ceph osd pool set data min_size 2
  • Install Ceph on all nodes.
ceph-deploy install ceph-admin ceph-mon ceph-osd-01 ceph-osd-02 ceph-osd-03 ceph-client-01 ceph-client-02
If something goes wrong, you can start over:
ceph-deploy purgedata ceph-admin ceph-mon ceph-osd-01 ceph-osd-02 ceph-osd-03 ceph-client-01 ceph-client-02 // wipe the Ceph data from the nodes
ceph-deploy purge ceph-admin ceph-mon ceph-osd-01 ceph-osd-02 ceph-osd-03 ceph-client-01 ceph-client-02 // remove the installed packages from one or more nodes
  • Add the initial monitor(s) and gather the keys.
ceph-deploy mon create-initial // this means only ceph-mon will act as the Ceph monitor node

Note: if you need more monitor nodes, install Ceph on them first (ceph-deploy install mon-01 mon-02, etc.) and then add them as monitors with ceph-deploy mon create.
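
The cluster-wide defaults mentioned earlier can also be set in the ceph.conf that ceph-deploy new generated, before the monitors and OSDs are deployed. A minimal sketch of the [global] section, assuming the replica counts and the two networks used in this guide:

~~~
[global]
# store two copies of every object (mirrors the pool settings used above)
osd pool default size = 2
osd pool default min size = 2
# keep client traffic and replication traffic on separate networks
public network = 192.168.100.0/24
cluster network = 192.168.101.0/24
~~~

If you change ceph.conf after deployment, redistribute it (for example with ceph-deploy --overwrite-conf config push <node>) and restart the Ceph daemons.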

Prepare OSD nodes:

As I mentioned before, each OSD node has an additional 50 GB disk, /dev/vdb, so follow these steps to create the OSDs:

  • List all disks on the OSD nodes:
ceph-deploy disk list ceph-osd-01 ceph-osd-02 ceph-osd-03
  • Zap a disk (wipe all data on that disk)
ceph-deploy disk zap ceph-osd-01:vdb
ceph-deploy disk zap ceph-osd-02:vdb
ceph-deploy disk zap ceph-osd-03:vdb 
  • Create the OSDs:
ceph-deploy osd create ceph-osd-01:vdb
ceph-deploy osd create ceph-osd-02:vdb
ceph-deploy osd create ceph-osd-03:vdb

* Note: the ceph-deploy osd create command prepares and activates the OSD daemon on that node automatically.

  • Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.
ceph-deploy admin ceph-admin ceph-mon  ceph-osd-01 ceph-osd-02 ceph-osd-03 ceph-client-01 ceph-client-02 
sudo chmod +r /etc/ceph/ceph.client.admin.keyring // on each node
  • Check the Ceph health and status:
ceph health // should return HEALTH_OK
ceph -w // shows the total storage, the cluster status, and a live stream of cluster events
  • Add an MDS server on the ceph-admin node:
ceph-deploy mds create ceph-admin
Note: the MDS (Metadata Server) is only needed for CephFS; if you don't plan to mount the filesystem (for example via NFS, Samba/CIFS, or ceph-fuse), there is no need to install it.
  • How to start and stop Ceph services:
sudo service ceph -a start // -a means all daemons
sudo service ceph -a stop // stop them the same way
sudo service ceph -a start osd // start only the OSD daemons

  • Create a storage pool to use later.
    ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated]
    PG-num has a default value of 100
    PGP-num has a default value of 100
    To calculate your placement group count, multiply the number of OSDs you have by 100 and divide by the number of replicas (the number of times each piece of data is stored). The default here is to store each piece of data twice, which means that if a disk fails you won't lose the data.

3 OSDs * 100 = 300
Divided by 2 replicas, 300 / 2 = 150
ceph osd pool create datastore 150 150
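
To confirm that the pool exists and picked up the intended settings, you can query it (the pool name and values below come from the command above):

~~~
ceph osd lspools                     # the new datastore pool should appear in the list
ceph osd pool get datastore pg_num   # expect 150
ceph osd pool get datastore size     # number of replicas kept for this pool
~~~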


For now, the Ceph storage cluster is ready and operational. In the next section I show you how to mount the Ceph storage on a client server using a block device.

Preparing Ceph clients:

* Check if the rbd kernel module is loaded.

modprobe rbd

Note: if loading the module fails, you need to install the rbd kernel module packages:

sudo yum install kmod-rbd kmod-libceph -y
Then run modprobe rbd again.

* Create an image or block device.

rbd create vol1 --size 4096 --pool datastore

sudo rbd map vol1 --pool datastore

rbd ls -p datastore // list the block device images in pool datastore

sudo mkfs.ext4 -m0 /dev/rbd/datastore/vol1 // create an ext4 filesystem on the block device

sudo mkdir /mnt/vol1

sudo mount /dev/rbd/datastore/vol1 /mnt/vol1
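
A quick way to double-check the mapping and the mount before moving on (rbd showmapped and df are standard tools; the exact output depends on your environment):

~~~
sudo rbd showmapped   # shows the image-to-/dev/rbdX mapping created by 'rbd map'
df -h /mnt/vol1       # the 4 GB volume should now be mounted here
~~~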

Add an entry to /etc/fstab so the volume is mounted at boot, like any other block device (a sketch follows below).
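
A sketch of such an fstab entry, assuming the device path used above. Note that the image has to be mapped before the mount can happen at boot, for example by listing it in /etc/ceph/rbdmap so the rbdmap service maps it early; both lines below are illustrative and should be adapted to your setup:

~~~
# /etc/fstab (sketch)
/dev/rbd/datastore/vol1   /mnt/vol1   ext4   defaults,noatime,_netdev   0 0

# /etc/ceph/rbdmap (sketch) - maps the image at boot
datastore/vol1   id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
~~~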

At this point the Ceph storage cluster is operating properly and the client is mounted to the Ceph cluster.

So what's next? How to install the Calamari GUI for Ceph storage.

Useful links:

Ceph installation

Ceph block device

Pools

PG (placement group)
