Install and Configure Inktank Ceph storage


Ceph is an open source storage platform designed for modern storage needs. It scales to the exabyte level and has no single point of failure, making it ideal for applications that require highly available, flexible storage.

The diagram below shows the layout of a demonstration five-node Ceph storage cluster plus two Ceph clients. Two network interfaces can be used to increase bandwidth and redundancy; this helps maintain sufficient bandwidth for storage traffic without affecting client applications.

Before getting started with setting up the Ceph cluster, let's review the hardware and software requirements:

Because this is a demonstration and not meant for production, I suggest the following:

* ceph-admin (node used to manage all Ceph nodes within the cluster):

  • 1 GB RAM or more
  • 40 GB disk space
  • 2 NICs (nic1 for the public network, nic2 for private internal communication among the Ceph nodes)
  • nic1 -> 192.168.100.20, nic2 -> 192.168.101.1

* ceph-mon (monitor node)

  • 1 GB RAM or more
  • 40 GB disk space (use considerably more for production)
  • 2 NICs (nic1 for the public network, nic2 for private internal communication among the Ceph nodes)
  • Note: use an odd number of monitor nodes (1, 3, 5, ...) to keep quorum and avoid a single point of failure
  • nic1 -> 192.168.100.21, nic2 -> 192.168.101.2

* ceph-osd (object storage daemon, the storage cluster)

  • 1 GB RAM or more
  • 40 GB disk space (use considerably more for production)
  • 50 GB additional disk space for storage
  • 2 NICs (nic1 for the public network, nic2 for private internal communication among the Ceph nodes)
  • ceph-osd-01: nic1 -> 192.168.100.22, nic2 -> 192.168.101.3
  • ceph-osd-02: nic1 -> 192.168.100.23, nic2 -> 192.168.101.4
  • ceph-osd-03: nic1 -> 192.168.100.24, nic2 -> 192.168.101.5

* ceph-client (mount point for Ceph storage)

  • 1 GB RAM or more
  • 40 GB disk space (use considerably more for production)
  • 2 NICs (nic1 for the public network, nic2 for private internal communication among the Ceph nodes)
  • ceph-client-01: nic1 -> 192.168.100.25, nic2 -> 192.168.101.6
  • ceph-client-02: nic1 -> 192.168.100.26, nic2 -> 192.168.101.7

* ceph-mds: I install it on ceph-admin, but you could install it separately.

* Operating system -> RHEL 7

* Ceph Software:

On the ceph-admin node, add the repository files ceph.repo and ceph-el7.repo (see the attached files):

Ceph.repo

[Ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-firefly/rhel7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/rhel7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://ceph.com/rpm-firefly/rhel7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
priority=1

Ceph-el7.repo

[Ceph-el7]
name=Ceph-el7
baseurl=http://eu.ceph.com/rpms/rhel7/noarch/
enabled=1
gpgcheck=0


* Note: I add two repos because the package 'python-jinja2' is missing from the main "firefly" repo; the second repo, "ceph-el7.repo", must be present on all nodes.
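
The steps that follow refer to every node by hostname, so each host must be able to resolve the others. A minimal sketch of the name resolution setup, assuming you are not using DNS and rely on /etc/hosts instead, with the public addresses listed above:

~~~
# Run on every node: append the demo cluster's public addresses to /etc/hosts.
# Adjust the addresses if your layout differs.
sudo tee -a /etc/hosts <<'EOF'
192.168.100.20  ceph-admin
192.168.100.21  ceph-mon
192.168.100.22  ceph-osd-01
192.168.100.23  ceph-osd-02
192.168.100.24  ceph-osd-03
192.168.100.25  ceph-client-01
192.168.100.26  ceph-client-02
EOF
~~~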

Let's prepare the ceph-admin node:

  • We need to add the user 'ceph' on all nodes; follow these steps:
useradd -d /home/ceph -m ceph
passwd ceph
  • Add sudo privileges for the user on each Ceph node.
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
  • Configure your ceph-deploy admin node with password-less SSH access to each Ceph node (a loop sketch follows after this list).
ssh-keygen // do not set a passphrase

ssh-copy-id to the other nodes (ceph-mon, ceph-osd-x, ceph-client-x)
  • Configure /etc/ssh/ssh_config and add these lines:
Host ceph-mon
     Hostname ceph-mon
     User ceph

Host ceph-osd-01
     Hostname ceph-osd-01
     User ceph

Host ceph-osd-02
     Hostname ceph-osd-02
     User ceph

Host ceph-osd-03
     Hostname ceph-osd-03
     User ceph

Host ceph-client-01
     Hostname ceph-client-01
     User ceph

Host ceph-client-02
     Hostname ceph-client-02
     User ceph
  • Update your repositories and install ceph-deploy.
    yum update -y && yum install -y ceph-deploy // this is done only once, on the ceph-admin node
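
For the password-less SSH step referenced above, here is a minimal sketch that generates the key and copies it to every node in one loop (run as the ceph user on ceph-admin; it assumes the 'ceph' user already exists on all nodes and the hostnames resolve):

~~~
# Generate a key pair without a passphrase (skip if ~/.ssh/id_rsa already exists).
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

# Copy the public key to each node; you are prompted for the ceph user's password once per node.
for node in ceph-mon ceph-osd-01 ceph-osd-02 ceph-osd-03 ceph-client-01 ceph-client-02; do
    ssh-copy-id ceph@"$node"
done
~~~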

Create the Ceph storage cluster:

On the ceph-admin node, run these steps:

  • Log in as the ceph user on the ceph-admin node and create a folder to hold the cluster files.
mkdir ceph-cluster
cd ceph-cluster
  • Create the cluster and initialize the Ceph monitor node by executing this command:
ceph-deploy new ceph-mon
  • Change the default number of replicas in the Ceph configuration (a ceph.conf sketch follows after this list).
ceph osd pool set data size 2 // this means two copies of each object are stored

ceph osd pool set data min_size 2
  • Install Ceph on all nodes.
ceph-deploy install ceph-admin ceph-mon ceph-osd-01 ceph-osd-02 ceph-osd-03 ceph-client-01 ceph-client-02
If something goes wrong, you can clean up and start over:
ceph-deploy purgedata ceph-admin ceph-mon ceph-osd-01 ceph-osd-02 ceph-osd-03 ceph-client-01 ceph-client-02 // remove the data only
ceph-deploy purge ceph-admin ceph-mon ceph-osd-01 ceph-osd-02 ceph-osd-03 ceph-client-01 ceph-client-02 // remove the installed packages from one or more nodes
  • Add the initial monitor(s) and gather the keys.
ceph-deploy mon create-initial // this indicates that only ceph-mon will act as the Ceph monitor node

Note: if you need more monitor nodes, run ceph-deploy install mon-01 mon-02 followed by ceph-deploy mon create mon-01 mon-02, etc.
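
As referenced in the replica step above, the same defaults can also be placed in the cluster configuration file so that newly created pools inherit them. A sketch, assuming ceph.conf is the file generated by ceph-deploy new in ~/ceph-cluster and still contains only the [global] section:

~~~
# On ceph-admin: append the replica defaults to the generated ceph.conf
# (the values mirror the pool settings used above).
cd ~/ceph-cluster
printf '%s\n' 'osd pool default size = 2' 'osd pool default min size = 2' >> ceph.conf
~~~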

Prepare OSD nodes:

As I mentioned before, each OSD node has an additional 50 GB disk, /dev/vdb, so follow these steps to create the OSDs:

  • List all disks on the OSD nodes.
ceph-deploy disk list ceph-osd-01 ceph-osd-02 ceph-osd-03
  • Zap a disk (wipe all data on that disk)
ceph-deploy disk zap ceph-osd-01:vdb
ceph-deploy disk zap ceph-osd-02:vdb
ceph-deploy disk zap ceph-osd-03:vdb 
  • Create the OSDs:
ceph-deploy osd create ceph-osd-01:vdb
ceph-deploy osd create ceph-osd-02:vdb
ceph-deploy osd create ceph-osd-03:vdb

* Note: the ceph-deploy osd create command automatically prepares and activates the OSD daemon on that node.

  • Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph nodes, so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.
ceph-deploy admin ceph-admin ceph-mon  ceph-osd-01 ceph-osd-02 ceph-osd-03 ceph-client-01 ceph-client-02 
sudo chmod +r /etc/ceph/ceph.client.admin.keyring // on each node
  • Check the Ceph health and status.
ceph health // should return HEALTH_OK
ceph -w // shows the total storage and other cluster information
  • Adding an MDS server on the ceph-admin node:
ceph-deploy mds create ceph-admin
Note: if you don't need to mount CephFS (via NFS, Samba/CIFS, or ceph-fuse), there is no need to install the MDS (metadata server).
  • How to start and stop Ceph services
sudo service ceph -a start // -a means all
sudo service ceph -a start osd // only the OSD daemons

  • Create a storage pool to use later.
    ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated]
    pg-num has a default value of 100
    pgp-num has a default value of 100
    To calculate your placement group count, multiply the number of OSDs you have by 100 and divide by the number of replicas (the number of times each piece of data is stored). Here we store each piece of data twice, which means that if a disk fails you won't lose the data.

3 OSDs * 100 = 300
Divided by 2 replicas, 300 / 2 = 150
ceph osd pool create datastore 150 150
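
Once the OSDs and the pool exist, a few read-only commands confirm that everything registered as expected. A quick verification sketch, run from any node that has the admin keyring (for example ceph-admin):

~~~
ceph -s                              # overall cluster status; should report HEALTH_OK
ceph osd tree                        # the three OSDs should be listed and 'up'
ceph osd lspools                     # the new 'datastore' pool should appear here
ceph osd pool get datastore pg_num   # confirm the placement group count (150)
~~~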


For now, the Ceph storage cluster is ready and operational. In the next section I show you how to mount the Ceph storage on a client server using a block device.

Preparing the Ceph clients:

  • Check if the rbd kernel module is loaded.

modprobe rbd

Note: if loading the module fails, you must install the rbd kernel modules:

sudo yum install kmod-rbd kmod-libceph -y
Then run modprobe rbd again.

* Create an image or block device.

rbd create vol1 --size 4096 --pool datastore

sudo rbd map vol1 --pool datastore

rbd ls -p datastore // list block device in pool datastore

sudo mkfs.ext4 -m0 /dev/rbd/datastore/vol1 // make fs on block device

sudo mkdir /mnt/vol1

sudo mount /dev/rbd/datastore/vol1 /mnt/vol1

Add an entry to /etc/fstab to mount it on boot, as with any other block device (see the sketch below).
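
For the fstab step above, keep in mind that the RBD image has to be mapped before the filesystem can be mounted at boot. A sketch, assuming the rbdmap helper shipped with the Ceph client packages and the default admin keyring (adjust the pool, image, and credentials for your setup):

~~~
# Tell rbdmap which image to map at boot (format: pool/image plus cephx credentials).
echo 'datastore/vol1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring' | sudo tee -a /etc/ceph/rbdmap

# fstab entry; _netdev defers the mount until the network (and the mapping) is available.
echo '/dev/rbd/datastore/vol1  /mnt/vol1  ext4  defaults,noatime,_netdev  0 0' | sudo tee -a /etc/fstab

# Make sure the rbdmap service is enabled for your init system so mapping happens at boot.
~~~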

At this point the Ceph storage cluster is working properly and the client has mounted storage from the cluster.

So what's next? How to install the Calamari GUI for Ceph storage.

Useful links:

Installation Ceph

Ceph block device

Pools

PG, placement group


Responses

Amjad, did you intend to include a diagram with this post?

I suppose the article is now finished.

I see the diagram now, thanks for completing it!

I am still facing the python-jinja2 error

Error: Package: 1:python-flask-0.10.1-3.el7.noarch (Ceph-noarch)
Requires: python-jinja2

Could you please help

Hi,

You must include both repos on all the systems you need to configure, even the clients.

Thanks

Hi,

I am just installing it on one system. "yum install -y ceph"

Are there any other settings worth looking at?

Thanks

Hi,

1- Which OS are you using, RHEL 6 or RHEL 7?
2- What is that system supposed to be?
3- Did you install these packages using ceph-deploy install <node>, or directly on the host itself?
4- The right command is yum install ceph ceph-common, or better, manage it from ceph-admin.

Hi,

1) I am using RHEL 7 Evaluation version
2) I want to make sure all repo's are working fine before deploying into production
3) This node will act as an OSD and mon node. ceph-deploy fails at "yum install ceph".
I tried both installing using ceph-deploy and yum install ceph - NO LUCK

4) In the production deployment I will use ceph-deploy. But as mentioned before, I wanted to check whether all the repos are working fine on RHEL 7 and firefly.

Please let me know if you need more details

Ok.
Clear the Yum cache:
yum clean all && rm -Rf /var/cache/yum/*

From my side I checked all the repos and they work fine.
Could you please reproduce your steps exactly and explain how many nodes you used?

Thanks

I just want to install ceph on one of my server

Here are the contents of the files:

cat ceph.repo

[Ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-firefly/rhel7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/rhel7/noarch

baseurl=http://eu.ceph.com/rpms/rhel7/x86_64/

enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://ceph.com/rpm-firefly/rhel7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
priority=1

cat ceph.el7.repo

[Ceph-el7]
name=Ceph-el7
baseurl=http://eu.ceph.com/rpms/rhel7/x86_64/
enabled=1
gpgcheck=0

yum clean all
Loaded plugins: product-id, subscription-manager
Cleaning repos: Ceph Ceph-el7 Ceph-noarch ceph-source rhel-7-server-rpms rhel-ha-for-rhel-7-server-rpms
Cleaning up everything

yum update
Loaded plugins: product-id, subscription-manager
Ceph | 951 B 00:00:00
Ceph-el7 | 951 B 00:00:00
Ceph-noarch | 951 B 00:00:00
ceph-source | 951 B 00:00:00
rhel-7-server-rpms | 3.7 kB 00:00:00
rhel-ha-for-rhel-7-server-rpms | 3.7 kB 00:00:00
(1/2): rhel-ha-for-rhel-7-server-rpms/7Server/x86_64/primary_db | 37 kB 00:00:00
(2/2): rhel-7-server-rpms/7Server/x86_64/primary_db | 6.0 MB 00:00:02
(1/8): Ceph-noarch/primary | 3.6 kB 00:00:00
(2/8): ceph-source/primary | 2.5 kB 00:00:00
(3/8): Ceph/x86_64/primary | 24 kB 00:00:00
(4/8): Ceph-el7/primary | 7.0 kB 00:00:00
(5/8): rhel-7-server-rpms/7Server/x86_64/group_gz | 133 kB 00:00:00
(6/8): rhel-ha-for-rhel-7-server-rpms/7Server/x86_64/group_gz | 3.5 kB 00:00:00
(7/8): rhel-7-server-rpms/7Server/x86_64/updateinfo | 67 kB 00:00:00
(8/8): rhel-ha-for-rhel-7-server-rpms/7Server/x86_64/updateinfo | 2.7 kB 00:00:00
Ceph 79/79
Ceph-el7 18/18
Ceph-noarch 15/15
ceph-source 16/16
No packages marked for update

Yum install ceph

>

Error: Package: 1:python-flask-0.10.1-3.el7.noarch (Ceph-noarch)
Requires: python-jinja2
You could try using --skip-broken to work around the problem

For Ceph.repo:

[Ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-firefly/rhel7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/rhel7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://ceph.com/rpm-firefly/rhel7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
priority=1

And for Ceph-el7 (to fix the dependency):

[Ceph-el7]
name=Ceph-el7
baseurl=http://eu.ceph.com/rpms/rhel7/x86_64/
enabled=1
gpgcheck=0

Try installing only python-flask and check.

Hi

still no luck. Same error

yum install python-flask
Loaded plugins: product-id, subscription-manager
Repository Ceph-el7 is listed more than once in the configuration
Resolving Dependencies
--> Running transaction check
---> Package python-flask.noarch 1:0.10.1-3.el7 will be installed
--> Processing Dependency: python-itsdangerous for package: 1:python-flask-0.10.1-3.el7.noarch
--> Processing Dependency: python-werkzeug for package: 1:python-flask-0.10.1-3.el7.noarch
--> Processing Dependency: python-jinja2 for package: 1:python-flask-0.10.1-3.el7.noarch
--> Running transaction check
---> Package python-flask.noarch 1:0.10.1-3.el7 will be installed
--> Processing Dependency: python-jinja2 for package: 1:python-flask-0.10.1-3.el7.noarch
---> Package python-itsdangerous.noarch 0:0.23-1.el7 will be installed
---> Package python-werkzeug.noarch 0:0.9.1-1.el7 will be installed
--> Finished Dependency Resolution
Error: Package: 1:python-flask-0.10.1-3.el7.noarch (Ceph-noarch)
Requires: python-jinja2
You could try using --skip-broken to work around the problem

I tried the same thing before. It was not working.

Hi,

I simulated the issue, and it is now fixed by updating Ceph-el7.repo; please download the updated file or copy it from here:

[Ceph-el7]
name=Ceph-el7
baseurl=http://eu.ceph.com/rpms/rhel7/noarch/
enabled=1
gpgcheck=0

I am sorry for any inconvenience.

Thanks

Thanks a lot

Its working now :-)

Hi Amjad, hope this message finds you well. What about Calamari, do you have the link?

Thanks!

I have found the link =) but it seems like I need a username and password from Inktank to get the RPMs for Calamari. Do you know who I can speak with about it?

Thanks!

""ceph-deploy mds create ceph-admin"" has not worked yet with RHCS.

[ceph_deploy.mds][ERROR ] RHEL RHCS systems do not have the ability to deploy MDS yet [ceph_deploy][ERROR ] GenericError: Failed to create 1 MDSs
