Ceph - Storage for OpenShift using dynamic Ceph RBDs
Environment
- Red Hat Ceph Storage 2.x or higher
- OpenShift 3.1 or higher
Issue
- I want to use dynamic Ceph RADOS Block Devices (RBDs) as storage for my OpenShift cluster. How do I do this?
Resolution
- Step 1. Any client that connects to the Ceph cluster should have the same ceph-common package version as the Ceph cluster. Ensure the ceph-common package on the OpenShift cluster matches the version running on the Ceph cluster; a mismatch can cause feature incompatibilities when communicating with the Ceph cluster. If the versions do not match, update the package with a command similar to:
# yum update ceph-common
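As a quick check, you can compare the installed versions with commands similar to the following (run the first on an OpenShift node and the second on a node in the Ceph cluster):
# rpm -q ceph-common
# ceph --version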
- Step 2. Create the Ceph pool on the Ceph cluster.
# ceph osd pool create <poolname> <pgcount>
Example:
# ceph osd pool create openshift-dynamic 128
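You can confirm that the pool was created by listing the pools on the cluster:
# ceph osd lspools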
- Step 3. Create the OpenShift client user on the Ceph cluster with proper privileges and a keyring:
# ceph auth get-or-create <client.name> mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=<poolname>' -o /etc/ceph/<name>.keyring
Example:
# ceph auth get-or-create client.openshift-dynamic mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=openshift-dynamic' -o ceph.client.dynamic.keyring
- Step 4. Generate the base64-encoded key for client.admin.
Example:
# ceph auth get client.admin
exported keyring for client.admin
[client.admin]
key = AQCQjRZbxCtTDxAAmx4jjiChSPFyPNWzvDDI8g==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
[root@mons-0 ceph]# echo AQCQjRZbxCtTDxAAmx4jjiChSPFyPNWzvDDI8g== |base64
QVFDUWpSWmJ4Q3RURHhBQW14NGpqaUNoU1BGeVBOV3p2RERJOGc9PQo=
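As an alternative to copying the key out of the keyring output by hand, the key can be extracted and encoded in a single step (the key values shown in this article are examples only):
# ceph auth get-key client.admin | base64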
- Step 5. On the OpenShift Master node, create the ceph-secret for client.admin.
Example:
# vi ceph-secret-admin.yaml
apiVersion: v1
kind: Secret
metadata:
name: ceph-secret-admin
namespace: kube-system
data:
key: QVFDUWpSWmJ4Q3RURHhBQW14NGpqaUNoU1BGeVBOV3p2RERJOGc9PQo=
type: kubernetes.io/rbd
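The file above only defines the secret; create it on the Master node in the same way the user secret is created in Step 7:
# oc create -f ceph-secret-admin.yaml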
- Step 6. From the Ceph Monitor node, generate the base64-encoded key for the user.
Example:
# ceph auth get client.openshift-dynamic
exported keyring for client.openshift-dynamic
[client.openshift-dynamic]
key = AQD7IRhbu5zVHBAAXe/V3rZZH0f+czKCqr1lgw==
caps mon = "allow r"
caps osd = "allow class-read object_prefix rbd_children, allow rwx pool=openshift-dynamic"
# echo AQD7IRhbu5zVHBAAXe/V3rZZH0f+czKCqr1lgw== |base64
QVFEN0lSaGJ1NXpWSEJBQVhlL1YzclpaSDBmK2N6S0NxcjFsZ3c9PQo=
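As with the admin key, this can be done in a single step:
# ceph auth get-key client.openshift-dynamic | base64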
- Step 7. From the OpenShift Master node, create the ceph-secret for the user.
Example:
# vi ceph-secret-user.yaml
apiVersion: v1
kind: Secret
metadata:
name: ceph-user-secret
namespace: default
data:
key: QVFEN0lSaGJ1NXpWSEJBQVhlL1YzclpaSDBmK2N6S0NxcjFsZ3c9PQo=
type: kubernetes.io/rbd
# oc create -f ceph-secret-user.yaml
secret "ceph-user-secret" created
- Step 8. Create the storage class.
Example:
# vi ceph-storageclass.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
name: dynamic
annotations:
storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
monitors: 10.10.92.77:6789,10.10.93.77:6789,10.10.94.77:6789
adminId: admin
adminSecretName: ceph-secret-admin
adminSecretNamespace: kube-system
pool: openshift-dynamic
userId: openshift-dynamic
userSecretName: ceph-user-secret
# oc create -f ceph-storageclass.yaml
storageclass "dynamic" created
# oc get storageclasses
NAME TYPE
dynamic (default) kubernetes.io/rbd
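To review the parameters the provisioner will use, you can inspect the newly created storage class:
# oc describe storageclass dynamic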
- Step 9. Define the persistent volume claim (PVC) and create it.
Example:
# vi ceph-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: ceph-claim-dynamic
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
# oc create -f ceph-pvc.yaml
persistentvolumeclaim "ceph-claim-dynamic" created
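The claim should bind to a dynamically provisioned volume; you can confirm this with:
# oc get pvc ceph-claim-dynamic
A STATUS of Bound indicates that a persistent volume was provisioned from the storage class.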
- Step 10. Create the pod.
Example:
# vi ceph-pod-dynamic.yaml
apiVersion: v1
kind: Pod
metadata:
name: ceph-pod1-dynamic
spec:
containers:
- name: ceph-busybox
image: busybox
command: ["sleep", "60000"]
volumeMounts:
- name: ceph-vol1-dynamic
mountPath: /usr/share/busybox
readOnly: false
volumes:
- name: ceph-vol1-dynamic
persistentVolumeClaim:
claimName: ceph-claim-dynamic
# oc create -f ceph-pod-dynamic.yaml
pod "ceph-pod1-dynamic" created
# oc get pod
NAME READY STATUS RESTARTS AGE
ceph-pod1-dynamic 1/1 Running 0 42s
docker-registry-1-7grll 1/1 Running 0 1d
registry-console-1-572zv 1/1 Running 0 1d
router-1-0bf9l 1/1 Running 0 1d
- Note that the transition from the ContainerCreating status to the Running status can sometimes take several minutes, as the container has to be created and the volume mounted.
- Now that the pod is in the Running status, log in to the node where your RBD image is mapped and run the following commands to verify that the RBD image is mounted properly:
# rbd showmapped
id pool image snap device
0 openshift-joe kubernetes-dynamic-pvc-ae6d3fff-69b9-11e8-8a28-fa163ee58524 - /dev/rbd0
# df -h | grep dev/rbd0
/dev/rbd0 2.0G 6.0M 1.8G 1% /var/lib/origin/openshift.local.volumes/pods/b5524088-69b9-11e8-8a28-fa163ee58524/volumes/kubernetes.io~rbd/pvc-ae6b6521-69b9-11e8-8a28-fa163ee58524
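If you are not sure which node the pod was scheduled on (and therefore where the image is mapped), the wide output of oc get pod shows it:
# oc get pod ceph-pod1-dynamic -o wide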
Root Cause
- For more information on this, please refer to the "Installation and Configuration" guide for your version of OpenShift.
- Note that this KCS article was created to provide supplementary information on this process from the Ceph perspective that may not be covered on the OpenShift side.
- For information on creating a static Ceph RBD to use as persistent storage for OpenShift, please refer to Persistent storage for OpenShift using a static Ceph RBD.
Diagnostic Steps
- After instructing the Master node to create the pod, you can run
# oc get events
on the Master node to check its status. This command also displays the node that has been tasked with running the pod.
- Once you have identified that node, you can monitor
/var/log/messages
on it.
- The combination of the entries in these two locations can be used to troubleshoot issues during this process.