Persistent storage for OpenShift using a static Ceph RBD
Environment
- Red Hat Ceph Storage 2.x or higher
- Ceph RADOS Block Device (RBD)
- Red Hat OpenShift Container Platform 3.1 or higher
Issue
- I want to use a static Ceph RADOS Block Device (RBD) as persistent storage for my OpenShift cluster. How do I do this?
Resolution
- Any client that connects to the Ceph cluster should have the same ceph-common package version as the Ceph cluster itself. Ensure the ceph-common package on the OpenShift cluster matches the version on the Ceph cluster; failing to do so can cause a feature mismatch when communicating with the Ceph cluster. If the versions do not match, you may need to run # yum update ceph-common.
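As a quick sanity check, you can compare the two versions before proceeding (a sketch; run the first command on an OpenShift node and the second on a Ceph cluster node, and expect the exact versions reported to vary by environment):
~~~
# rpm -q ceph-common
# ceph --version
~~~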
- Create the Ceph pool on the Ceph cluster:
# ceph osd pool create <poolname> <pgcount>
Example:
# ceph osd pool create openshift 128
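To confirm the pool exists with the expected placement group count, something like the following can be used (a quick check against the example pool above):
~~~
# ceph osd lspools
# ceph osd pool get openshift pg_num
~~~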
- Create the OpenShift client user on the Ceph cluster with the proper privileges and a keyring:
# ceph auth get-or-create <client.name> mon 'allow r' osd 'allow rwx pool=<poolname>' -o /etc/ceph/<name>.keyring
Example:
# ceph auth get-or-create client.openshift mon 'allow r' osd 'allow rwx pool=openshift' -o /etc/ceph/openshift.keyring
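You can verify that the user was created with the intended capabilities (a quick check using the example user above):
~~~
# ceph auth get client.openshift
~~~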
- On the Ceph cluster, create the rbd image with the layering feature and the desired size:
# rbd create <pool-name>/<image-name> -s <size> --image-feature=layering
Example:
# rbd create openshift/ceph-image-test -s 2048 --image-feature=layering
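To confirm the image was created with the requested size and only the layering feature enabled, rbd info can be used (shown with the example image above):
~~~
# rbd info openshift/ceph-image-test
~~~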
- On the Ceph cluster, gather the client information and convert it to a base64 key. This output will be needed to insert into the secret.yml on the OpenShift cluster in the next step. Note that either the client.admin or the newly created client.openshift key can be used. For the rest of this document we will use the client.openshift key, as it carries the lowest level of privileges. Example:
~~~
# ceph auth list
.......
client.admin
    key: AQAa/KtacWl3GxAAG90+vdWtTdx7l6Grg0zpZQ==
    caps: [mds] allow
    caps: [mon] allow *
    caps: [osd] allow *
.......
client.openshift
    key: AQAVX7Fa5vWbGRAAkorZCZKHUDO95XZA6E0B/g==
    caps: [mon] allow r
    caps: [osd] allow rwx pool=openshift
~~~
~~~
# echo AQAVX7Fa5vWbGRAAkorZCZKHUDO95XZA6E0B/g== | base64
QVFBVlg3RmE1dldiR1JBQWtvclpDWktIVURPOTVYWkE2RTBCL2c9PQo=
~~~
- On the OpenShift master node, create the secret definition and insert the base64 key. Example:
# vi ceph-secret-test.yaml
~~~
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-test
data:
  key: QVFBVlg3RmE1dldiR1JBQWtvclpDWktIVURPOTVYWkE2RTBCL2c9PQo=
~~~
~~~
# oc create -f ceph-secret-test.yaml
# oc get secret ceph-secret-test
NAME               TYPE      DATA      AGE
ceph-secret-test   Opaque    1         1h
~~~
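To double-check that the stored key decodes back to the original Ceph key (a quick check; base64 --decode is the long form of -d):
~~~
# oc get secret ceph-secret-test -o yaml
# echo QVFBVlg3RmE1dldiR1JBQWtvclpDWktIVURPOTVYWkE2RTBCL2c9PQo= | base64 --decode
~~~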
- On the OpenShift master node, create the Persistent Volume, updating the necessary information. Example:
# vi ceph-pv-test.yaml
~~~
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv-test
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 192.168.122.133:6789
    pool: openshift
    image: ceph-image-test
    user: openshift
    secretRef:
      name: ceph-secret-test
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
~~~
~~~
# oc create -f ceph-pv-test.yaml
# oc get pv
NAME           LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM     REASON    AGE
ceph-pv-test   <none>    2147483648   RWO           Available                       2s
~~~
- Note that in the ceph-pv-test.yaml example above, the user is user: openshift and not user: client.openshift, as Ceph implies the "client." prefix; including it here could cause authorization issues.
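To confirm the PV picked up the intended RBD parameters (monitors, pool, image, and user), oc describe can be used:
~~~
# oc describe pv ceph-pv-test
~~~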
- Create the Persistent Volume Claim. Example:
# vi ceph-claim-test.yaml
~~~
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
~~~
~~~
# oc create -f ceph-claim-test.yaml
# oc get pvc
NAME              STATUS    VOLUME         CAPACITY   ACCESSMODES   STORAGECLASS   AGE
ceph-claim-test   Bound     ceph-pv-test   2Gi        RWO                          5m
~~~
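If the claim stays in a Pending status instead of Bound, oc describe shows the binding events, which usually point to a capacity or access mode mismatch with the PV:
~~~
# oc describe pvc ceph-claim-test
~~~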
- Create the pod, ensuring the name: metadata, containers, and volumes sections are updated to match the appropriate names. Example:
# vi ceph-pod-test.yaml
~~~
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod-test
spec:
  containers:
    - name: ceph-busybox
      image: busybox
      command: ["sleep", "60000"]
      volumeMounts:
        - name: ceph-vol-test
          mountPath: /usr/share/busybox
          readOnly: false
  volumes:
    - name: ceph-vol-test
      persistentVolumeClaim:
        claimName: ceph-claim-test
~~~
~~~
# oc create -f ceph-pod-test.yaml
# oc get pod
NAME                       READY     STATUS    RESTARTS   AGE
ceph-pod-test              1/1       Running   1          5m
docker-registry-1-mkrp6    1/1       Running   0          1d
registry-console-1-h499l   1/1       Running   0          1d
router-1-srr79             1/1       Running   0          1d
~~~
- Note that the transition from the ContainerCreating status to the Running status can sometimes take several minutes, as the container has to be created and the volume mounted.
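Once the pod reaches the Running status, a simple write test from inside the pod confirms the volume is writable (a sketch using the example names above; the test file name is arbitrary):
~~~
# oc exec ceph-pod-test -- touch /usr/share/busybox/rbd-write-test
# oc exec ceph-pod-test -- ls /usr/share/busybox
~~~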
- Now that the pod is in a Running status, log in to the node where your RBD image is mapped and run the following commands to check that the RBD image is mounted properly:
~~~
# rbd showmapped
id pool      image           snap device
0  openshift ceph-image-test -    /dev/rbd0
~~~
~~~
# df -h | grep ceph
/dev/rbd0  1998672  6144  1871288  1%  /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/rbd/rbd/openshift-image-ceph-image-test
~~~
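If you are not sure which node to log in to for the checks above, the wide output of oc get pods includes the node name:
~~~
# oc get pods -o wide
~~~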
Root Cause
- For further information, please refer to the Installation and Configuration Guide for your version of OpenShift.
- Note that this article was created to provide supplementary information on this process from the Ceph perspective that may not be covered on the OpenShift side.
- For information on creating dynamic Ceph RBDs to use as storage for OpenShift, please refer to Ceph - Storage for OpenShift using dynamic Ceph RBDs.
Diagnostic Steps
- After telling the master node to create the pod, run # oc get events on the master node to identify the status. This command will also display the node that is tasked with running the pod.
- Once you identify the tasked node, you can monitor /var/log/messages on that node.
- The combination of the entries in these two locations can be used to troubleshoot issues during this process.
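For example, to follow events as the pod is scheduled and then look for RBD activity on the tasked node (a sketch; the grep pattern is only illustrative):
~~~
# oc get events --watch
# grep -i rbd /var/log/messages
~~~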