Chapter 23. Persistent Storage Examples

23.1. Overview

The following sections provide detailed, comprehensive instructions on setting up and configuring common storage use cases. These examples cover both the administration of persistent volumes and their security, and how to claim against the volumes as a user of the system.

23.2. Sharing an NFS mount across two persistent volume claims

23.2.1. Overview

The following use case describes how a cluster administrator can configure shared storage for use by two separate containers. This example highlights the use of NFS, but it can easily be adapted to other shared storage types, such as GlusterFS. In addition, this example shows how to configure pod security as it relates to shared storage.

Persistent Storage Using NFS provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using NFS as persistent storage. This topic shows an end-to-end example of using an existing NFS cluster as an OpenShift Container Platform persistent store, and assumes that an NFS server and exports already exist in your OpenShift Container Platform infrastructure.

Note

All oc commands are executed on the OpenShift Container Platform master host.

23.2.2. Creating the Persistent Volume

Before creating the PV object in OpenShift Container Platform, the persistent volume (PV) file is defined:

Example 23.1. Persistent Volume Object Definition Using NFS

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv 1
spec:
  capacity:
    storage: 1Gi 2
  accessModes:
    - ReadWriteMany 3
  persistentVolumeReclaimPolicy: Retain 4
  nfs: 5
    path: /opt/nfs 6
    server: nfs.f22 7
    readOnly: false
1
The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2
The amount of storage allocated to this volume.
3
accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
4
The volume reclaim policy Retain indicates that the volume will be preserved after the pods accessing it terminate.
5
This defines the volume type being used, in this case the NFS plug-in.
6
This is the NFS mount path.
7
This is the NFS server. This can also be specified by IP address.

Save the PV definition to a file, for example nfs-pv.yaml, and create the persistent volume:

# oc create -f nfs-pv.yaml
persistentvolume "nfs-pv" created

Verify that the persistent volume was created:

# oc get pv
NAME         LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
nfs-pv       <none>    1Gi        RWX           Available                       37s

23.2.3. Creating the Persistent Volume Claim

A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC. This is the use case we are highlighting in this example.

Example 23.2. PVC Object Definition

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc  1
spec:
  accessModes:
  - ReadWriteMany      2
  resources:
     requests:
       storage: 1Gi    3
1
The claim name is referenced by the pod under its volumes section.
2
As mentioned above for PVs, the accessModes do not enforce access rights, but rather act as labels to match a PV to a PVC.
3
This claim will look for PVs offering 1Gi or greater capacity.

Save the PVC definition to a file, for example nfs-pvc.yaml, and create the PVC:

# oc create -f nfs-pvc.yaml
persistentvolumeclaim "nfs-pvc" created

Verify that the PVC was created and bound to the expected PV:

# oc get pvc
NAME            LABELS    STATUS    VOLUME       CAPACITY   ACCESSMODES   AGE
nfs-pvc         <none>    Bound     nfs-pv       1Gi        RWX           24s
                                    1
1
The claim, nfs-pvc, was bound to the nfs-pv PV.

23.2.4. Ensuring NFS Volume Access

Access to the NFS server node is necessary. On this node, examine the NFS export mount:

[root@nfs nfs]# ls -lZ /opt/nfs/
total 8
-rw-r--r--. 1 root 100003  system_u:object_r:usr_t:s0     10 Oct 12 23:27 test2b
              1
                     2
1
The owner has ID 0.
2
The group has ID 100003.

In order to access the NFS mount, the container must match the SELinux label, and either run with a UID of 0, or with 100003 in its supplemental groups range. Gain access to the volume by matching the NFS mount’s groups, which will be defined in the pod definition below.

By default, SELinux does not allow writing from a pod to a remote NFS server. To enable writing to NFS volumes with SELinux enforcing on each node, run:

# setsebool -P virt_use_nfs on
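
To confirm that the boolean is enabled (a quick check, assuming the standard SELinux utilities are present on the node):

# getsebool virt_use_nfs
virt_use_nfs --> on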

23.2.5. Creating the Pod

A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the NFS volume for read-write access:

Example 23.3. Pod Object Definition

apiVersion: v1
kind: Pod
metadata:
  name: hello-openshift-nfs-pod 1
  labels:
    name: hello-openshift-nfs-pod
spec:
  containers:
    - name: hello-openshift-nfs-pod
      image: openshift/hello-openshift 2
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: nfsvol 3
          mountPath: /usr/share/nginx/html 4
  securityContext:
      supplementalGroups: [100003] 5
      privileged: false
  volumes:
    - name: nfsvol
      persistentVolumeClaim:
        claimName: nfs-pvc 6
1
The name of this pod as displayed by oc get pod.
2
The image run by this pod.
3
The name of the volume. This name must be the same in both the containers and volumes sections.
4
The mount path as seen in the container.
5
The group ID to be assigned to the container.
6
The PVC that was created in the previous step.

Save the pod definition to a file, for example nfs.yaml, and create the pod:

# oc create -f nfs.yaml
pod "hello-openshift-nfs-pod" created

Verify that the pod was created:

# oc get pods
NAME                          READY     STATUS    RESTARTS   AGE
hello-openshift-nfs-pod       1/1       Running   0          4s

More details are shown in the oc describe pod command:

[root@ose70 nfs]# oc describe pod hello-openshift-nfs-pod
Name:				hello-openshift-nfs-pod
Namespace:			default 1
Image(s):			fedora/S3
Node:				ose70.rh7/192.168.234.148 2
Start Time:			Mon, 21 Mar 2016 09:59:47 -0400
Labels:				name=hello-openshift-nfs-pod
Status:				Running
Reason:
Message:
IP:				10.1.0.4
Replication Controllers:	<none>
Containers:
  hello-openshift-nfs-pod:
    Container ID:	docker://a3292104d6c28d9cf49f440b2967a0fc5583540fc3b062db598557b93893bc6f
    Image:		fedora/S3
    Image ID:		docker://403d268c640894cbd76d84a1de3995d2549a93af51c8e16e89842e4c3ed6a00a
    QoS Tier:
      cpu:		BestEffort
      memory:		BestEffort
    State:		Running
      Started:		Mon, 21 Mar 2016 09:59:49 -0400
    Ready:		True
    Restart Count:	0
    Environment Variables:
Conditions:
  Type		Status
  Ready 	True
Volumes:
  nfsvol:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	nfs-pvc 3
    ReadOnly:	false
  default-token-a06zb:
    Type:	Secret (a secret that should populate this volume)
    SecretName:	default-token-a06zb
Events: 4
  FirstSeen	LastSeen	Count	From			SubobjectPath				                      Reason		Message
  ─────────	────────	─────	────			─────────────				                      ──────		───────
  4m		4m		1	{scheduler }							                                      Scheduled	Successfully assigned hello-openshift-nfs-pod to ose70.rh7
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	          Pulled		Container image "openshift3/ose-pod:v3.1.0.4" already present on machine
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	          Created		Created with docker id 866a37108041
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	          Started		Started with docker id 866a37108041
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{hello-openshift-nfs-pod}		Pulled		Container image "fedora/S3" already present on machine
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{hello-openshift-nfs-pod}		Created		Created with docker id a3292104d6c2
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{hello-openshift-nfs-pod}		Started		Started with docker id a3292104d6c2
1
The project (namespace) name.
2
The IP address of the OpenShift Container Platform node running the pod.
3
The PVC name used by the pod.
4
The list of events resulting in the pod being launched and the NFS volume being mounted. The container will not start correctly if the volume cannot mount.
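
If the volume cannot mount, one quick check is whether the export is actually visible from the node. The server name below comes from the PV definition above; showmount is provided by the nfs-utils package:

# showmount -e nfs.f22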

There is more internal information, including the SCC used to authorize the pod, the pod’s user and group IDs, the SELinux label, and more, shown in the oc get pod <name> -o yaml command:

[root@ose70 nfs]# oc get pod hello-openshift-nfs-pod -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: restricted 1
  creationTimestamp: 2016-03-21T13:59:47Z
  labels:
    name: hello-openshift-nfs-pod
  name: hello-openshift-nfs-pod
  namespace: default 2
  resourceVersion: "2814411"
  selfLink: /api/v1/namespaces/default/pods/hello-openshift-nfs-pod
  uid: 2c22d2ea-ef6d-11e5-adc7-000c2900f1e3
spec:
  containers:
  - image: fedora/S3
    imagePullPolicy: IfNotPresent
    name: hello-openshift-nfs-pod
    ports:
    - containerPort: 80
      name: web
      protocol: TCP
    resources: {}
    securityContext:
      privileged: false
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /usr/share/S3/html
      name: nfsvol
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-a06zb
      readOnly: true
  dnsPolicy: ClusterFirst
  host: ose70.rh7
  imagePullSecrets:
  - name: default-dockercfg-xvdew
  nodeName: ose70.rh7
  restartPolicy: Always
  securityContext:
    supplementalGroups:
    - 100003 3
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: nfsvol
    persistentVolumeClaim:
      claimName: nfs-pvc 4
  - name: default-token-a06zb
    secret:
      secretName: default-token-a06zb
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2016-03-21T13:59:49Z
    status: "True"
    type: Ready
  containerStatuses:
  - containerID: docker://a3292104d6c28d9cf49f440b2967a0fc5583540fc3b062db598557b93893bc6f
    image: fedora/S3
    imageID: docker://403d268c640894cbd76d84a1de3995d2549a93af51c8e16e89842e4c3ed6a00a
    lastState: {}
    name: hello-openshift-nfs-pod
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2016-03-21T13:59:49Z
  hostIP: 192.168.234.148
  phase: Running
  podIP: 10.1.0.4
  startTime: 2016-03-21T13:59:47Z
1
The SCC used by the pod.
2
The project (namespace) name.
3
The supplemental group ID for the pod (all containers).
4
The PVC name used by the pod.

23.2.6. Creating an Additional Pod to Reference the Same PVC

This pod definition, created in the same namespace, uses a different container. However, we can use the same backing storage by specifying the claim name in the volumes section below:

Example 23.4. Pod Object Definition

apiVersion: v1
kind: Pod
metadata:
  name: busybox-nfs-pod 1
  labels:
    name: busybox-nfs-pod
spec:
  containers:
  - name: busybox-nfs-pod
    image: busybox 2
    command: ["sleep", "60000"]
    volumeMounts:
    - name: nfsvol-2 3
      mountPath: /usr/share/busybox  4
      readOnly: false
  securityContext:
    supplementalGroups: [100003] 5
    privileged: false
  volumes:
  - name: nfsvol-2
    persistentVolumeClaim:
      claimName: nfs-pvc 6
1
The name of this pod as displayed by oc get pod.
2
The image run by this pod.
3
The name of the volume. This name must be the same in both the containers and volumes sections.
4
The mount path as seen in the container.
5
The group ID to be assigned to the container.
6
The PVC that was created earlier and is also being used by a different container.

Save the pod definition to a file, for example nfs-2.yaml, and create the pod:

# oc create -f nfs-2.yaml
pod "busybox-nfs-pod" created

Verify that the pod was created:

# oc get pods
NAME                READY     STATUS    RESTARTS   AGE
busybox-nfs-pod     1/1       Running   0          3s

More details are shown in the oc describe pod command:

[root@ose70 nfs]# oc describe pod busybox-nfs-pod
Name:				busybox-nfs-pod
Namespace:			default
Image(s):			busybox
Node:				ose70.rh7/192.168.234.148
Start Time:			Mon, 21 Mar 2016 10:19:46 -0400
Labels:				name=busybox-nfs-pod
Status:				Running
Reason:
Message:
IP:				10.1.0.5
Replication Controllers:	<none>
Containers:
  busybox-nfs-pod:
    Container ID:	docker://346d432e5a4824ebf5a47fceb4247e0568ecc64eadcc160e9bab481aecfb0594
    Image:		busybox
    Image ID:		docker://17583c7dd0dae6244203b8029733bdb7d17fccbb2b5d93e2b24cf48b8bfd06e2
    QoS Tier:
      cpu:		BestEffort
      memory:		BestEffort
    State:		Running
      Started:		Mon, 21 Mar 2016 10:19:48 -0400
    Ready:		True
    Restart Count:	0
    Environment Variables:
Conditions:
  Type		Status
  Ready 	True
Volumes:
  nfsvol-2:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	nfs-pvc
    ReadOnly:	false
  default-token-32d2z:
    Type:	Secret (a secret that should populate this volume)
    SecretName:	default-token-32d2z
Events:
  FirstSeen	LastSeen	Count	From			SubobjectPath				Reason		Message
  ─────────	────────	─────	────			─────────────				──────		───────
  4m		4m		1	{scheduler }							Scheduled	Successfully assigned busybox-nfs-pod to ose70.rh7
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	Pulled		Container image "openshift3/ose-pod:v3.1.0.4" already present on machine
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	Created		Created with docker id 249b7d7519b1
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	Started		Started with docker id 249b7d7519b1
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{busybox-nfs-pod}	Pulled		Container image "busybox" already present on machine
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{busybox-nfs-pod}	Created		Created with docker id 346d432e5a48
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{busybox-nfs-pod}	Started		Started with docker id 346d432e5a48

As you can see, both containers are using the same storage claim that is attached to the same NFS mount on the back end.
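
A simple way to confirm the sharing, assuming shell access to the NFS server, is to create a file directly under the export and read it back through the busybox pod. The file name shared-file is arbitrary, and the check uses only the busybox container because the hello-openshift image may not include a shell:

[root@nfs nfs]# echo 'Hello from NFS' > /opt/nfs/shared-file

# oc exec busybox-nfs-pod -- cat /usr/share/busybox/shared-file
Hello from NFS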

23.3. Complete Example Using Ceph RBD

23.3.1. Overview

This topic provides an end-to-end example of using an existing Ceph cluster as an OpenShift Container Platform persistent store. It is assumed that a working Ceph cluster is already set up. If not, consult the Overview of Red Hat Ceph Storage.

Persistent Storage Using Ceph Rados Block Device provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using Ceph RBD as persistent storage.

Note

All oc commands are executed on the OpenShift Container Platform master host.

23.3.2. Installing the ceph-common Package

The ceph-common library must be installed on all schedulable OpenShift Container Platform nodes:

Note

The OpenShift Container Platform all-in-one host is not often used to run pod workloads and, thus, is not included as a schedulable node.

# yum install -y ceph-common
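
There is no single required way to run this on every node. One minimal approach, assuming SSH access from the master and placeholder host names, is:

# for node in node1.example.com node2.example.com; do ssh root@$node 'yum install -y ceph-common'; done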

23.3.3. Creating the Ceph Secret

The ceph auth get-key command is run on a Ceph MON node to display the key value for the client.admin user:

Example 23.5. Ceph Secret Definition

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ== 1
1
This base64 key is generated on one of the Ceph MON nodes using the ceph auth get-key client.admin | base64 command, then copying the output and pasting it as the secret key’s value.
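
For example, on a MON node (the value shown is simply the example key from the secret definition above; yours will differ):

# ceph auth get-key client.admin | base64
QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ==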

Save the secret definition to a file, for example ceph-secret.yaml, then create the secret:

$ oc create -f ceph-secret.yaml
secret "ceph-secret" created

Verify that the secret was created:

# oc get secret ceph-secret
NAME          TYPE      DATA      AGE
ceph-secret   Opaque    1         23d

23.3.4. Creating the Persistent Volume

Next, before creating the PV object in OpenShift Container Platform, define the persistent volume file:

Example 23.6. Persistent Volume Object Definition Using Ceph RBD

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv     1
spec:
  capacity:
    storage: 2Gi    2
  accessModes:
    - ReadWriteOnce 3
  rbd:              4
    monitors:       5
      - 192.168.122.133:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret 6
    fsType: ext4        7
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
1
The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2
The amount of storage allocated to this volume.
3
accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control. All block storage is defined to be single user (non-shared storage).
4
This defines the volume type being used. In this case, the rbd plug-in is defined.
5
This is an array of Ceph monitor IP addresses and ports.
6
This is the Ceph secret, defined above. It is used to create a secure connection from OpenShift Container Platform to the Ceph server.
7
This is the file system type mounted on the Ceph RBD block device.
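
The PV references an RBD image named ceph-image in the rbd pool. If the image does not already exist, it can be created from a Ceph administration host. The 2048 MB size below matches the 2Gi capacity in the example and is otherwise an arbitrary choice; on newer Ceph releases you may also need to limit the image features (for example, --image-feature layering) so that the kernel client can map the image:

# rbd create ceph-image --size 2048 --pool rbd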

Save the PV definition to a file, for example ceph-pv.yaml, and create the persistent volume:

# oc create -f ceph-pv.yaml
persistentvolume "ceph-pv" created

Verify that the persistent volume was created:

# oc get pv
NAME                     LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM     REASON    AGE
ceph-pv                  <none>    2147483648   RWO           Available                       2s

23.3.5. Creating the Persistent Volume Claim

A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.

Example 23.7. PVC Object Definition

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:     1
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi 2
1
As mentioned above for PVs, the accessModes do not enforce access right, but rather act as labels to match a PV to a PVC.
2
This claim will look for PVs offering 2Gi or greater capacity.

Save the PVC definition to a file, for example ceph-claim.yaml, and create the PVC:

# oc create -f ceph-claim.yaml
persistentvolumeclaim "ceph-claim" created

Verify that the PVC was created and bound to the expected PV:

# oc get pvc
NAME         LABELS    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
ceph-claim   <none>    Bound     ceph-pv   2Gi        RWO           21s
                                 1
1
The claim was bound to the ceph-pv PV.

23.3.6. Creating the Pod

A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Ceph RBD volume for read-write access:

Example 23.8. Pod Object Definition

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1           1
spec:
  containers:
  - name: ceph-busybox
    image: busybox          2
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1       3
      mountPath: /usr/share/busybox 4
      readOnly: false
  volumes:
  - name: ceph-vol1         5
    persistentVolumeClaim:
      claimName: ceph-claim 6
1
The name of this pod as displayed by oc get pod.
2
The image run by this pod. In this case, we are telling busybox to sleep.
3 5
The name of the volume. This name must be the same in both the containers and volumes sections.
4
The mount path as seen in the container.
6
The PVC that is bound to the Ceph RBD cluster.

Save the pod definition to a file, for example ceph-pod1.yaml, and create the pod:

# oc create -f ceph-pod1.yaml
pod "ceph-pod1" created

Verify that the pod was created:

# oc get pod
NAME        READY     STATUS    RESTARTS   AGE
ceph-pod1   1/1       Running   0          2m
                      1
1
After a minute or so, the pod will be in the Running state.

23.3.7. Defining Group and Owner IDs (Optional)

When using block storage, such as Ceph RBD, the physical block storage is managed by the pod. The group ID defined in the pod becomes the group ID of both the Ceph RBD mount inside the container and of the actual storage itself. Thus, it is usually unnecessary to define a group ID in the pod specification. However, if a group ID is desired, it can be defined using fsGroup, as shown in the following pod definition fragment:

Example 23.9. Group ID Pod Definition

...
spec:
  containers:
    - name:
    ...
  securityContext: 1
    fsGroup: 7777  2
...
1
securityContext must be defined at the pod level, not under a specific container.
2
All containers in the pod will have the same fsGroup ID.
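
To confirm that the group was applied, you can check the container's identity in a running pod created from such a definition (replace <pod-name> with the name of your pod); the fsGroup value appears among the supplementary groups in the output:

# oc exec <pod-name> -- id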

23.3.8. Setting ceph-user-secret as Default for Projects

To make the persistent storage available to every project, you must modify the default project template. Adding the ceph-user-secret object to your default project template gives every user who can create a project access to the Ceph cluster. See modifying the default project template for more information.

Example 23.10. Default Project Example

...
apiVersion: v1
kind: Template
metadata:
  creationTimestamp: null
  name: project-request
objects:
- apiVersion: v1
  kind: Project
  metadata:
    annotations:
      openshift.io/description: ${PROJECT_DESCRIPTION}
      openshift.io/display-name: ${PROJECT_DISPLAYNAME}
      openshift.io/requester: ${PROJECT_REQUESTING_USER}
    creationTimestamp: null
    name: ${PROJECT_NAME}
  spec: {}
  status: {}
- apiVersion: v1
  kind: Secret
  metadata:
    name: ceph-user-secret
  data:
    key: yoursupersecretbase64keygoeshere 1
  type:
    kubernetes.io/rbd
...
1
Place your Ceph user key here in base64 format. See Creating the Ceph Secret.
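
The overall workflow is to generate the bootstrap project template, edit it to add the Secret object shown above, load it into the default project, and point the master configuration at it. A hedged outline on the master host (the file name is arbitrary):

# oc adm create-bootstrap-project-template -o yaml > template.yaml
# oc create -f template.yaml -n default

Then set projectRequestTemplate: "default/project-request" under projectConfig in /etc/origin/master/master-config.yaml and restart the master services; the exact file location and service names depend on your installation.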

23.4. Using Ceph RBD for dynamic provisioning

23.4.1. Overview

This topic provides a complete example of using an existing Ceph cluster for OpenShift Container Platform persistent storage. It is assumed that a working Ceph cluster is already set up. If not, consult the Overview of Red Hat Ceph Storage.

Persistent Storage Using Ceph Rados Block Device provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and how to use Ceph Rados Block Device (RBD) as persistent storage.

Note
  • Run all oc commands on the OpenShift Container Platform master host.
  • The OpenShift Container Platform all-in-one host is not often used to run pod workloads and, thus, is not included as a schedulable node.

23.4.2. Creating a pool for dynamic volumes

  1. Install the latest ceph-common package:

    yum install -y ceph-common
    Note

    The ceph-common library must be installed on all schedulable OpenShift Container Platform nodes.

  2. From an administrator or MON node, create a new pool for dynamic volumes, for example:

    $ ceph osd pool create kube 1024
    $ ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
    Note

    Using the default pool, rbd, is an option, but it is not recommended.

23.4.3. Using an existing Ceph cluster for dynamic persistent storage

To use an existing Ceph cluster for dynamic persistent storage:

  1. Generate the client.admin base64-encoded key:

    $ ceph auth get-key client.admin | base64

    Ceph secret definition example

    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret
      namespace: kube-system
    data:
      key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ== 1
    type: kubernetes.io/rbd 2

    1
    This base64 key is generated on one of the Ceph MON nodes using the ceph auth get-key client.admin | base64 command, then copying the output and pasting it as the secret key’s value.
    2
    This value is required for Ceph RBD to work with dynamic provisioning.
  2. Create the Ceph secret for the client.admin:

    $ oc create -f ceph-secret.yaml
    secret "ceph-secret" created
  3. Verify that the secret was created:

    $ oc get secret ceph-secret
    NAME          TYPE                DATA      AGE
    ceph-secret   kubernetes.io/rbd   1         5d
  4. Create the storage class:

    $ oc create -f ceph-storageclass.yaml
    storageclass "dynamic" created

    Ceph storage class example

    apiVersion: storage.k8s.io/v1beta1
    kind: StorageClass
    metadata:
      name: dynamic
      annotations:
         storageclass.beta.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/rbd
    parameters:
      monitors: 192.168.1.11:6789,192.168.1.12:6789,192.168.1.13:6789 1
      adminId: admin 2
      adminSecretName: ceph-secret 3
      adminSecretNamespace: kube-system 4
      pool: kube  5
      userId: kube  6
      userSecretName: ceph-user-secret 7

    1
    A comma-delimited list of IP addresses of the Ceph monitors. This value is required.
    2
    The Ceph client ID that is capable of creating images in the pool. The default is admin.
    3
    The secret name for adminId. This value is required. The secret that you provide must have the type kubernetes.io/rbd.
    4
    The namespace for adminSecret. The default is default.
    5
    The Ceph RBD pool. The default is rbd, but using the default pool is not recommended.
    6
    The Ceph client ID used to map the Ceph RBD image. The default is the same as the secret name for adminId.
    7
    The name of the Ceph secret for userId to map the Ceph RBD image. It must exist in the same namespace as the PVCs. Unless you set the Ceph secret as the default in new projects, you must provide this parameter value.
  5. Verify that the storage class was created:

    $ oc get storageclasses
    NAME                TYPE
    dynamic (default)   kubernetes.io/rbd
  6. Create the PVC object definition:

    PVC object definition example

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: ceph-claim-dynamic
    spec:
      accessModes:  1
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi 2

    1
    The accessModes do not enforce access rights but instead act as labels to match a PV to a PVC.
    2
    This claim looks for PVs that offer 2Gi or greater capacity.
  7. Create the PVC:

    $ oc create -f ceph-pvc.yaml
    persistentvolumeclaim "ceph-claim-dynamic" created
  8. Verify that the PVC was created and bound to the expected PV:

    $ oc get pvc
    NAME                 STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
    ceph-claim-dynamic   Bound     pvc-f548d663-3cac-11e7-9937-0024e8650c7a   2Gi        RWO           1m
  9. Create the pod object definition:

    Pod object definition example

    apiVersion: v1
    kind: Pod
    metadata:
      name: ceph-pod1 1
    spec:
      containers:
      - name: ceph-busybox
        image: busybox 2
        command: ["sleep", "60000"]
        volumeMounts:
        - name: ceph-vol1 3
          mountPath: /usr/share/busybox 4
          readOnly: false
      volumes:
      - name: ceph-vol1
        persistentVolumeClaim:
          claimName: ceph-claim-dynamic 5

    1
    The name of this pod as displayed by oc get pod.
    2
    The image run by this pod. In this case, busybox is set to sleep.
    3
    The name of the volume. This name must be the same in both the containers and volumes sections.
    4
    The mount path in the container.
    5
    The PVC that is bound to the Ceph RBD cluster.
  10. Create the pod:

    $ oc create -f ceph-pod1.yaml
    pod "ceph-pod1" created
  11. Verify that the pod was created:

    $ oc get pod
    NAME        READY     STATUS   RESTARTS   AGE
    ceph-pod1   1/1       Running  0          2m

After a minute or so, the pod status changes to Running.

23.4.4. Setting ceph-user-secret as the default for projects

To make persistent storage available to every project, you must modify the default project template. Adding the secret to your default project template gives every user who can create a project access to the Ceph cluster. See modifying the default project template for more information.

Default project example

...
apiVersion: v1
kind: Template
metadata:
  creationTimestamp: null
  name: project-request
objects:
- apiVersion: v1
  kind: Project
  metadata:
    annotations:
      openshift.io/description: ${PROJECT_DESCRIPTION}
      openshift.io/display-name: ${PROJECT_DISPLAYNAME}
      openshift.io/requester: ${PROJECT_REQUESTING_USER}
    creationTimestamp: null
    name: ${PROJECT_NAME}
  spec: {}
  status: {}
- apiVersion: v1
  kind: Secret
  metadata:
    name: ceph-user-secret
  data:
    key: QVFCbEV4OVpmaGJtQ0JBQW55d2Z0NHZtcS96cE42SW1JVUQvekE9PQ== 1
  type:
    kubernetes.io/rbd
...

1
Place your Ceph user key here in base64 format.
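
If you prefer not to modify the project template, the same secret can instead be created manually in each project that uses the storage class. A minimal sketch, assuming the client.kube user created earlier and a project named myproject (the key value is the example value from above):

$ ceph auth get-key client.kube | base64

$ cat ceph-user-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-user-secret
data:
  key: QVFCbEV4OVpmaGJtQ0JBQW55d2Z0NHZtcS96cE42SW1JVUQvekE9PQ==
type: kubernetes.io/rbd

$ oc create -f ceph-user-secret.yaml -n myproject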

23.5. Complete Example Using GlusterFS

23.5.1. Overview

This topic provides an end-to-end example of how to use an existing Gluster cluster as an OpenShift Container Platform persistent store. It is assumed that a working Gluster cluster is already set up. If not, consult the Red Hat Gluster Storage Administration Guide.

Persistent Storage Using GlusterFS provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using GlusterFS as persistent storage.

For an end-to-end example of how to dynamically provision GlusterFS volumes, see Complete Example of Dynamic Provisioning Using GlusterFS. The persistent volume (PV) and endpoints are both created dynamically by GlusterFS.

Note

All oc commands are executed on the OpenShift Container Platform master host.

23.5.2. Installing the glusterfs-fuse Package

The glusterfs-fuse library must be installed on all schedulable OpenShift Container Platform nodes:

# yum install -y glusterfs-fuse
Note

The OpenShift Container Platform all-in-one host is often not used to run pod workloads and, thus, is not included as a schedulable node.

23.5.3. Creating the Gluster Endpoints and Gluster Service for Persistence

The named endpoints define each node in the Gluster-trusted storage pool:

Example 23.11. GlusterFS Endpoint Definition

apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-cluster 1
subsets:
- addresses:              2
  - ip: 192.168.122.21
  ports:                  3
  - port: 1
    protocol: TCP
- addresses:
  - ip: 192.168.122.22
  ports:
  - port: 1
    protocol: TCP
1
The endpoints name. If using a service, then the endpoints name must match the service name.
2
An array of IP addresses for each node in the Gluster pool. Currently, host names are not supported.
3
The port numbers are ignored, but must be legal port numbers. The value 1 is commonly used.
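
If you are unsure of the node addresses, they can be listed from any host in the trusted storage pool, assuming the gluster CLI is available there:

# gluster pool list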

Save the endpoints definition to a file, for example gluster-endpoints.yaml, then create the endpoints object:

# oc create -f gluster-endpoints.yaml
endpoints "gluster-cluster" created

Verify that the endpoints were created:

# oc get endpoints gluster-cluster
NAME                ENDPOINTS                           AGE
gluster-cluster     192.168.122.21:1,192.168.122.22:1   1m
Note

To persist the Gluster endpoints, you also need to create a service.

Note

Endpoints are name-spaced. Each project accessing the Gluster volume needs its own endpoints.

Example 23.12. GlusterFS Service Definition

apiVersion: v1
kind: Service
metadata:
  name: gluster-cluster 1
spec:
  ports:
  - port: 1 2
1
The name of the service. If using a service, then the endpoints name must match the service name.
2
The port should match the same port used in the endpoints.

Save the service definition to a file, for example gluster-service.yaml, then create the service:

# oc create -f gluster-service.yaml
endpoints "gluster-cluster" created

Verify that the service was created:

# oc get service gluster-cluster
NAME                CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
gluster-cluster     10.0.0.130   <none>        1/TCP     9s

23.5.4. Creating the Persistent Volume

Next, before creating the PV object, define the persistent volume in OpenShift Container Platform:

Persistent Volume Object Definition Using GlusterFS

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv   1
spec:
  capacity:
    storage: 1Gi     2
  accessModes:
  - ReadWriteMany    3
  glusterfs:         4
    endpoints: gluster-cluster 5
    path: /HadoopVol 6
    readOnly: false
  persistentVolumeReclaimPolicy: Retain 7

1
The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2
The amount of storage allocated to this volume.
3
accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
4
This defines the volume type being used. In this case, the glusterfs plug-in is defined.
5
This references the endpoints named above.
6
This is the Gluster volume name, preceded by /.
7
The volume reclaim policy Retain indicates that the volume will be preserved after the pods accessing it terminate. For GlusterFS, the accepted values include Retain and Delete.

Save the PV definition to a file, for example gluster-pv.yaml, and create the persistent volume:

# oc create -f gluster-pv.yaml
persistentvolume "gluster-pv" created

Verify that the persistent volume was created:

# oc get pv
NAME         LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
gluster-pv   <none>    1Gi        RWX           Available                       37s

23.5.5. Creating the Persistent Volume Claim

A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.

Example 23.13. PVC Object Definition

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim  1
spec:
  accessModes:
  - ReadWriteMany      2
  resources:
     requests:
       storage: 1Gi    3
1
The claim name is referenced by the pod under its volumes section.
2
As mentioned above for PVs, the accessModes do not enforce access rights, but rather act as labels to match a PV to a PVC.
3
This claim will look for PVs offering 1Gi or greater capacity.

Save the PVC definition to a file, for example gluster-claim.yaml, and create the PVC:

# oc create -f gluster-claim.yaml
persistentvolumeclaim "gluster-claim" created

Verify the PVC was created and bound to the expected PV:

# oc get pvc
NAME            LABELS    STATUS    VOLUME       CAPACITY   ACCESSMODES   AGE
gluster-claim   <none>    Bound     gluster-pv   1Gi        RWX           24s
                                    1
1
The claim was bound to the gluster-pv PV.

23.5.6. Defining GlusterFS Volume Access

Access to a node in the Gluster-trusted storage pool is necessary. On this node, examine the glusterfs-fuse mount:

# ls -lZ /mnt/glusterfs/
drwxrwx---. yarn hadoop system_u:object_r:fusefs_t:s0    HadoopVol

# id yarn
uid=592(yarn) gid=590(hadoop) groups=590(hadoop)
    1
                  2
1
The owner has ID 592.
2
The group has ID 590.

In order to access the HadoopVol volume, the container must match the SELinux label, and either run with a UID of 592, or with 590 in its supplemental groups. It is recommended to gain access to the volume by matching the Gluster mount’s groups, which are defined in the pod definition below.

By default, SELinux does not allow writing from a pod to a remote Gluster server. To enable writing to GlusterFS volumes with SELinux enforcing on each node, run:

# setsebool -P virt_sandbox_use_fusefs on
Note

The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.
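
A quick way to check whether the boolean is defined and enabled on a node:

# getsebool virt_sandbox_use_fusefs
virt_sandbox_use_fusefs --> on

If getsebool reports that the boolean is unknown, install the docker-selinux package and run the check again.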

23.5.7. Creating the Pod using NGINX Web Server image

A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Gluster volume for read-write access:

Note

The NGINX image may need to run in privileged mode to create the mount and run properly. An easy way to accomplish this is to add your user to the privileged Security Context Constraint (SCC):

$ oc adm policy add-scc-to-user privileged myuser

Then, add privileged: true to the container's securityContext section of the YAML file (as seen in the example below).

Managing Security Context Constraints provides additional information regarding SCCs.

Example 23.14. Pod Object Definition using NGINX image

apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod1
  labels:
    name: gluster-pod1   1
spec:
  containers:
  - name: gluster-pod1
    image: nginx       2
    ports:
    - name: web
      containerPort: 80
    securityContext:
      privileged: true
    volumeMounts:
    - name: gluster-vol1 3
      mountPath: /usr/share/nginx/html 4
      readOnly: false
  securityContext:
    supplementalGroups: [590]       5
  volumes:
  - name: gluster-vol1   6
    persistentVolumeClaim:
      claimName: gluster-claim      7
1
The name of this pod as displayed by oc get pod.
2
The image run by this pod. In this case, we are using a standard NGINX image.
3 6
The name of the volume. This name must be the same in both the containers and volumes sections.
4
The mount path as seen in the container.
5
The supplementalGroups ID (a Linux group) to be assigned at the pod level. As discussed above, this should match the POSIX group on the Gluster volume.
7
The PVC that is bound to the Gluster cluster.

Save the pod definition to a file, for example gluster-pod1.yaml, and create the pod:

# oc create -f gluster-pod1.yaml
pod "gluster-pod1" created

Verify the pod was created:

# oc get pod
NAME           READY     STATUS    RESTARTS   AGE
gluster-pod1   1/1       Running   0          31s

                         1
1
After a minute or so, the pod will be in the Running state.

More details are shown in the oc describe pod command:

# oc describe pod gluster-pod1
Name:			gluster-pod1
Namespace:		default  1
Security Policy:	privileged
Node:			ose1.rhs/192.168.122.251
Start Time:		Wed, 24 Aug 2016 12:37:45 -0400
Labels:			name=gluster-pod1
Status:			Running
IP:			172.17.0.2  2
Controllers:		<none>
Containers:
  gluster-pod1:
    Container ID:	docker://e67ed01729e1dc7369c5112d07531a27a7a02a7eb942f17d1c5fce32d8c31a2d
    Image:		nginx
    Image ID:		docker://sha256:4efb2fcdb1ab05fb03c9435234343c1cc65289eeb016be86193e88d3a5d84f6b
    Port:		80/TCP
    State:		Running
      Started:		Wed, 24 Aug 2016 12:37:52 -0400
    Ready:		True
    Restart Count:	0
    Volume Mounts:
      /usr/share/nginx/html/test from glustervol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-1n70u (ro)
    Environment Variables:	<none>
Conditions:
  Type		Status
  Initialized 	True
  Ready 	True
  PodScheduled 	True
Volumes:
  glustervol:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	gluster-claim  3
    ReadOnly:	false
  default-token-1n70u:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-1n70u
QoS Tier:	BestEffort
Events:    4
  FirstSeen	LastSeen	Count	From			SubobjectPath			Type		Reason		Message
  ---------	--------	-----	----			-------------			--------	------		-------
  10s		10s		1	{default-scheduler }					Normal		Scheduled	Successfully assigned gluster-pod1 to ose1.rhs
  9s		9s		1	{kubelet ose1.rhs}	spec.containers{gluster-pod1}	Normal		Pulling		pulling image "nginx"
  4s		4s		1	{kubelet ose1.rhs}	spec.containers{gluster-pod1}	Normal		Pulled		Successfully pulled image "nginx"
  3s		3s		1	{kubelet ose1.rhs}	spec.containers{gluster-pod1}	Normal		Created		Created container with docker id e67ed01729e1
  3s		3s		1	{kubelet ose1.rhs}	spec.containers{gluster-pod1}	Normal		Started		Started container with docker id e67ed01729e1
1
The project (namespace) name.
2
The IP address of the OpenShift Container Platform node running the pod.
3
The PVC name used by the pod.
4
The list of events resulting in the pod being launched and the Gluster volume being mounted.

There is more internal information, including the SCC used to authorize the pod, the pod’s user and group IDs, the SELinux label, and more, shown in the oc get pod <name> -o yaml command:

# oc get pod gluster-pod1 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: privileged  1
  creationTimestamp: 2016-08-24T16:37:45Z
  labels:
    name: gluster-pod1
  name: gluster-pod1
  namespace: default  2
  resourceVersion: "482"
  selfLink: /api/v1/namespaces/default/pods/gluster-pod1
  uid: 15afda77-6a19-11e6-aadb-525400f7256d
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: gluster-pod1
    ports:
    - containerPort: 80
      name: web
      protocol: TCP
    resources: {}
    securityContext:
      privileged: true  3
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: glustervol
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-1n70u
      readOnly: true
  dnsPolicy: ClusterFirst
  host: ose1.rhs
  imagePullSecrets:
  - name: default-dockercfg-20xg9
  nodeName: ose1.rhs
  restartPolicy: Always
  securityContext:
    supplementalGroups:
    - 590   4
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: glustervol
    persistentVolumeClaim:
      claimName: gluster-claim  5
  - name: default-token-1n70u
    secret:
      secretName: default-token-1n70u
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2016-08-24T16:37:45Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2016-08-24T16:37:53Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2016-08-24T16:37:45Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://e67ed01729e1dc7369c5112d07531a27a7a02a7eb942f17d1c5fce32d8c31a2d
    image: nginx
    imageID: docker://sha256:4efb2fcdb1ab05fb03c9435234343c1cc65289eeb016be86193e88d3a5d84f6b
    lastState: {}
    name: gluster-pod1
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2016-08-24T16:37:52Z
  hostIP: 192.168.122.251
  phase: Running
  podIP: 172.17.0.2
  startTime: 2016-08-24T16:37:45Z
1
The SCC used by the pod.
2
The project (namespace) name.
3
The security context level requested, in this case privileged.
4
The supplemental group ID for the pod (all containers).
5
The PVC name used by the pod.

23.6. Complete Example of Dynamic Provisioning Using Containerized GlusterFS

23.6.1. Overview

Note

This example assumes a functioning OpenShift Container Platform cluster along with Heketi and GlusterFS. All oc commands are executed on the OpenShift Container Platform master host.

This topic provides an end-to-end example of how to dynamically provision GlusterFS volumes. In this example, a simple NGINX HelloWorld application is deployed using the Red Hat Container Native Storage (CNS) solution. CNS hyper-converges GlusterFS storage by containerizing it into the OpenShift Container Platform cluster.

The Red Hat Gluster Storage Administration Guide can also provide additional information about GlusterFS.

To get started, follow the gluster-kubernetes quickstart guide for an easy Vagrant-based installation and deployment of a working OpenShift Container Platform cluster with Heketi and GlusterFS containers.

23.6.2. Verify the Environment and Gather Needed Information

Note

At this point, there should be a working OpenShift Container Platform cluster deployed, and a working Heketi server with GlusterFS.

  1. Verify and view the cluster environment, including nodes and pods:

    $ oc get nodes,pods
    NAME      STATUS    AGE
    master    Ready     22h
    node0     Ready     22h
    node1     Ready     22h
    node2     Ready     22h
    NAME                               READY     STATUS    RESTARTS   AGE
    glusterfs-node0-2509304327-vpce1   1/1       Running   0          1d        192.168.10.100   node0
    glusterfs-node1-3290690057-hhq92   1/1       Running   0          1d        192.168.10.101   node1 1
    glusterfs-node2-4072075787-okzjv   1/1       Running   0          1d        192.168.10.102   node2
    heketi-3017632314-yyngh            1/1       Running   0          1d        10.42.0.0        node0 2
    1
    Example of GlusterFS storage pods running. There are three in this example.
    2
    Heketi server pod.
  2. If not already set in the environment, export the HEKETI_CLI_SERVER:

    $ export HEKETI_CLI_SERVER=$(oc describe svc/heketi | grep "Endpoints:" | awk '{print "http://"$2}')
  3. Identify the Heketi REST URL and server IP address:

    $ echo $HEKETI_CLI_SERVER
    http://10.42.0.0:8080
  4. Identify the Gluster endpoints that need to be passed in as a parameter to the storage class, which is used in a later step (heketi-storage-endpoints).

    $ oc get endpoints
    NAME                       ENDPOINTS                                            AGE
    heketi                     10.42.0.0:8080                                       22h
    heketi-storage-endpoints   192.168.10.100:1,192.168.10.101:1,192.168.10.102:1   22h 1
    kubernetes                 192.168.10.90:6443                                   23h
    1
    The defined GlusterFS endpoints. In this example, they are called heketi-storage-endpoints.
Note

By default, user_authorization is disabled. If enabled, you may need to find the rest user and rest user secret key. (This is not applicable for this example, as any values will work).

23.6.3. Create a Storage Class for Your GlusterFS Dynamic Provisioner

Storage classes manage and enable persistent storage in OpenShift Container Platform. Below is an example of a storage class that will be used to request 5GB of on-demand storage for your HelloWorld application.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi  1
provisioner: kubernetes.io/glusterfs  2
parameters:
  endpoint: "heketi-storage-endpoints"  3
  resturl: "http://10.42.0.0:8080"  4
  restuser: "joe"  5
  restuserkey: "My Secret Life"  6
1
Name of the storage class.
2
The provisioner.
3
The GlusterFS-defined endpoint (oc get endpoints).
4
Heketi REST URL, taken from the verification steps above (echo $HEKETI_CLI_SERVER).
5
Rest username. This can be any value since authorization is turned off.
6
Rest user key. This can be any value.
  1. Create the Storage Class YAML file, save it, then submit it to OpenShift Container Platform:

    $ oc create -f gluster-storage-class.yaml
    storageclass "gluster-heketi" created
  2. View the storage class:

    $ oc get storageclass
    NAME              TYPE
    gluster-heketi    kubernetes.io/glusterfs

23.6.4. Create a PVC to Request Storage for Your Application

  1. Create a persistent volume claim (PVC) requesting 5GB of storage.

    When the claim is created, the dynamic provisioning framework and Heketi automatically provision a new GlusterFS volume and generate the persistent volume (PV) object:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: gluster1
spec:
 accessModes:
  - ReadWriteOnce
 storageClassName: gluster-heketi  1
 resources:
   requests:
     storage: 5Gi 2
1
The name of the storage class.
2
The amount of storage requested.
  2. Create the PVC YAML file, save it, then submit it to OpenShift Container Platform:

    $ oc create -f gluster-pvc.yaml
    persistentvolumeclaim "gluster1" created
  3. View the PVC:

    $ oc get pvc
    NAME       STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
    gluster1   Bound     pvc-7d37c7bd-bb5b-11e6-b81e-525400d87180   5Gi        RWO           14h

    Notice that the PVC is bound to a dynamically created volume.

  4. View the persistent volume (PV):

    $ oc get pv
    NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM              REASON    AGE
    pvc-7d37c7bd-bb5b-11e6-b81e-525400d87180   5Gi        RWO           Delete          Bound     default/gluster1             14h

23.6.5. Create a NGINX Pod That Uses the PVC

At this point, you have a dynamically created GlusterFS volume, bound to a PVC. Now, you can use this claim in a pod. Create a simple NGINX pod:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    name: nginx-pod
spec:
  containers:
  - name: nginx-pod
    image: gcr.io/google_containers/nginx-slim:0.8
    ports:
    - name: web
      containerPort: 80
    securityContext:
      privileged: true
    volumeMounts:
    - name: gluster-vol1
      mountPath: /usr/share/nginx/html
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster1 1
1
The name of the PVC created in the previous step.
  1. Create the Pod YAML file, save it, then submit it to OpenShift Container Platform:

    $ oc create -f nginx-pod.yaml
    pod "gluster-pod1" created
  2. View the pod:

    $ oc get pods -o wide
    NAME                               READY     STATUS    RESTARTS   AGE       IP               NODE
    nginx-pod                          1/1       Running   0          9m        10.38.0.0        node1
    glusterfs-node0-2509304327-vpce1   1/1       Running   0          1d        192.168.10.100   node0
    glusterfs-node1-3290690057-hhq92   1/1       Running   0          1d        192.168.10.101   node1
    glusterfs-node2-4072075787-okzjv   1/1       Running   0          1d        192.168.10.102   node2
    heketi-3017632314-yyngh            1/1       Running   0          1d        10.42.0.0        node0
    Note

    This may take a few minutes, as the pod may need to download the image if it does not already exist.

  3. oc exec into the container and create an index.html file in the mountPath definition of the pod:

    $ oc exec -ti nginx-pod /bin/sh
    $ cd /usr/share/nginx/html
    $ echo 'Hello World from GlusterFS!!!' > index.html
    $ ls
    index.html
    $ exit
  4. Using the curl command from the master node, curl the URL of the pod:

    $ curl http://10.38.0.0
    Hello World from GlusterFS!!!
  5. Check your Gluster pod to ensure that the index.html file was written. Choose any of the Gluster pods:

    $ oc exec -ti glusterfs-node1-3290690057-hhq92 /bin/sh
    $ mount | grep heketi
    /dev/mapper/VolGroup00-LogVol00 on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
    /dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_1e730a5462c352835055018e1874e578 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_1e730a5462c352835055018e1874e578 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
    /dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_d8c06e606ff4cc29ccb9d018c73ee292 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
    
    $ cd /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292/brick
    $ ls
    index.html
    $ cat index.html
    Hello World from GlusterFS!!!

23.7. Complete Example of Dynamic Provisioning Using Dedicated GlusterFS

23.7.1. Overview

Note

This example assumes a functioning OpenShift Container Platform cluster along with Heketi and GlusterFS. All oc commands are executed on the OpenShift Container Platform master host.

Container Native Storage (CNS) using GlusterFS and Heketi is a great way to perform dynamic provisioning for shared filesystems in a Kubernetes-based cluster like OpenShift Container Platform. However, if an existing, dedicated Gluster cluster is available external to the OpenShift Container Platform cluster, you can also provision storage from it rather than a containerized GlusterFS implementation.

This example:

  • Shows how simple it is to install and configure a Heketi server to work with OpenShift Container Platform to perform dynamic provisioning.
  • Assumes some familiarity with Kubernetes and the Kubernetes Persistent Storage model.
  • Assumes you have access to an existing, dedicated GlusterFS cluster that has raw devices available for consumption and management by a Heketi server. If you do not have this, you can create a three-node cluster using your virtual machine solution of choice. Ensure that you create a few raw devices and give plenty of space (at least 100GB recommended). See Red Hat Gluster Storage Installation Guide.

23.7.2. Environment and Prerequisites

This example uses the following environment and prerequisites:

  • GlusterFS cluster running Red Hat Gluster Storage (RHGS) 3.1. Three nodes, each with at least two 100GB RAW devices:

    • gluster23.rhs (192.168.1.200)
    • gluster24.rhs (192.168.1.201)
    • gluster25.rhs (192.168.1.202)
  • Heketi service/client node running Red Hat Enterprise Linux (RHEL) 7.x or RHGS 3.1. Heketi can be installed on one of the Gluster nodes:

    • glusterclient2.rhs (192.168.1.203)
  • OpenShift Container Platform node. This example uses an all-in-one OpenShift Container Platform cluster (master and node on a single host), though it can work using a standard, multi-node cluster as well.

    • k8dev2.rhs (192.168.1.208)

23.7.3. Installing and Configuring Heketi

Heketi is used to manage the Gluster cluster storage (adding volumes, removing volumes, etc.). As stated, this can be RHEL or RHGS, and can be installed on one of the existing Gluster storage nodes. This example uses a stand-alone RHGS 3.1 node running Heketi.

The Red Hat Gluster Storage Administration Guide can be used as a reference during this process.

  1. Install Heketi and the Heketi client. From the host designated to run Heketi and the Heketi client, run:

    # yum install heketi heketi-client -y
    Note

    The Heketi server can be any of the existing hosts, though typically this will be the OpenShift Container Platform master host. This example, however, uses a separate host not part of the GlusterFS or OpenShift Container Platform cluster.

  2. Create and install Heketi private keys on each GlusterFS cluster node. From the host that is running Heketi:

    # ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
    # ssh-copy-id -i /etc/heketi/heketi_key.pub root@gluster23.rhs
    # ssh-copy-id -i /etc/heketi/heketi_key.pub root@gluster24.rhs
    # ssh-copy-id -i /etc/heketi/heketi_key.pub root@gluster25.rhs
    # chown heketi:heketi /etc/heketi/heketi_key*
  3. Edit the /etc/heketi/heketi.json file to set up the SSH executor. Below is an excerpt from the /etc/heketi/heketi.json file; the parts to configure are the executor and SSH sections:

    	"executor": "ssh", 1
    
    	"_sshexec_comment": "SSH username and private key file information",
    	"sshexec": {
      	  "keyfile": "/etc/heketi/heketi_key", 2
      	  "user": "root", 3
      	  "port": "22", 4
      	  "fstab": "/etc/fstab" 5
    	},
    1
    Change executor from mock to ssh.
    2
    Add the path to the private key file created in the previous step.
    3
    Update user to a user that has sudo or root access.
    4
    Set port to 22 and remove all other text.
    5
    Set fstab to the default, /etc/fstab and remove all other text.
  4. Restart and enable the heketi service:

    # systemctl restart heketi
    # systemctl enable heketi
  5. Test the connection to Heketi:

    # curl http://glusterclient2.rhs:8080/hello
    Hello from Heketi
  6. Set an environment variable for the Heketi server:

    # export HEKETI_CLI_SERVER=http://glusterclient2.rhs:8080

23.7.4. Loading Topology

Topology is used to tell Heketi about the environment and what nodes and devices it will manage.

Note

Heketi is currently limited to managing raw devices only. If a device is already a Gluster volume, it will be skipped and ignored.

  1. Create and load the topology file. A sample file is located in /usr/share/heketi/topology-sample.json by default, or in /etc/heketi, depending on how Heketi was installed.

    {
      "clusters": [
        {
          "nodes": [
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "gluster23.rhs"
                  ],
                  "storage": [
                    "192.168.1.200"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/sde",
                "/dev/sdf"
              ]
            },
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "gluster24.rhs"
                  ],
                  "storage": [
                    "192.168.1.201"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/sde",
                "/dev/sdf"
              ]
            },
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "gluster25.rhs"
                  ],
                  "storage": [
                    "192.168.1.202"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/sde",
                "/dev/sdf"
              ]
            }
          ]
        }
      ]
    }
  2. Using heketi-cli, run the following command to load the topology of your environment.

    # heketi-cli topology load --json=topology.json
    
        	Found node gluster23.rhs on cluster bdf9d8ca3fa269ff89854faf58f34b9a
       		Adding device /dev/sde ... OK
       	 	Adding device /dev/sdf ... OK
        	Creating node gluster24.rhs ... ID: 8e677d8bebe13a3f6846e78a67f07f30
       	 	Adding device /dev/sde ... OK
       	 	Adding device /dev/sdf ... OK
    ...
    ...
  3. Create a Gluster volume to verify Heketi:

    # heketi-cli volume create --size=50
  4. View the volume information from one of the Gluster nodes:

    # gluster volume info
    
    	Volume Name: vol_335d247ac57ecdf40ac616514cc6257f 1
    	Type: Distributed-Replicate
    	Volume ID: 75be7940-9b09-4e7f-bfb0-a7eb24b411e3
    	Status: Started
    ...
    ...
    1
    Volume created by heketi-cli.

23.7.5. Dynamically Provision a Volume

  1. Create a StorageClass object definition. The definition below is based on the minimum requirements needed for this example to work with OpenShift Container Platform. See Dynamic Provisioning and Creating Storage Classes for additional parameters and specification definitions.

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: gluster-dyn
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://glusterclient2.rhs:8080" 1
      restauthenabled: "false" 2
    1
    The Heketi server from the HEKETI_CLI_SERVER environment variable.
    2
    Since authentication is not turned on in this example, set to false.
  2. From the OpenShift Container Platform master host, create the storage class:

    # oc create -f glusterfs-storageclass1.yaml
    storageclass "gluster-dyn" created
  3. Create a persistent volume claim (PVC), requesting the newly-created storage class. For example:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
     name: gluster-dyn-pvc
    spec:
     accessModes:
      - ReadWriteMany
     resources:
       requests:
            storage: 30Gi
     storageClassName: gluster-dyn
  4. From the OpenShift Container Platform master host, create the PVC:

    # oc create -f glusterfs-pvc-storageclass.yaml
    persistentvolumeclaim "gluster-dyn-pvc" created
  5. View the PVC to see that the volume was dynamically created and bound to the PVC:

    # oc get pvc
    NAME          	STATUS	VOLUME                                 		CAPACITY   	ACCESSMODES   	STORAGECLASS   	AGE
    gluster-dyn-pvc Bound	pvc-78852230-d8e2-11e6-a3fa-0800279cf26f   	30Gi   		RWX       	gluster-dyn	42s
  6. Verify and view the new volume on one of the Gluster nodes:

    # gluster volume info
    
    	Volume Name: vol_335d247ac57ecdf40ac616514cc6257f 1
    	Type: Distributed-Replicate
    	Volume ID: 75be7940-9b09-4e7f-bfb0-a7eb24b411e3
    	Status: Started
            ...
    	Volume Name: vol_f1404b619e6be6ef673e2b29d58633be 2
    	Type: Distributed-Replicate
    	Volume ID: 7dc234d0-462f-4c6c-add3-fb9bc7e8da5e
    	Status: Started
    	Number of Bricks: 2 x 2 = 4
    	...
    1
    Volume created by heketi-cli.
    2
    New dynamically created volume triggered by Kubernetes and the storage class.

23.7.6. Creating a NGINX Pod That Uses the PVC

At this point, you have a dynamically created GlusterFS volume bound to a PVC. You can now use this PVC in a pod. In this example, create a simple NGINX pod.

  1. Create the pod object definition:

    apiVersion: v1
    kind: Pod
    metadata:
      name: gluster-pod1
      labels:
        name: gluster-pod1
    spec:
      containers:
      - name: gluster-pod1
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - name: web
          containerPort: 80
        securityContext:
          privileged: true
        volumeMounts:
        - name: gluster-vol1
          mountPath: /usr/share/nginx/html
      volumes:
      - name: gluster-vol1
        persistentVolumeClaim:
          claimName: gluster-dyn-pvc 1
    1
    The name of the PVC created in the previous step.
  2. From the OpenShift Container Platform master host, create the pod:

    # oc create -f nginx-pod.yaml
    pod "gluster-pod1" created
  3. View the pod. Give it a few minutes, as it might need to download the image if it does not already exist:

    # oc get pods -o wide
    NAME                               READY     STATUS    RESTARTS   AGE       IP               NODE
    gluster-pod1                       1/1       Running   0          9m        10.38.0.0        node1
  4. Now remote into the container with oc exec and create an index.html file:

    # oc exec -ti gluster-pod1 /bin/sh
    $ cd /usr/share/nginx/html
    $ echo 'Hello World from GlusterFS!!!' > index.html
    $ ls
    index.html
    $ exit
  5. Now curl the URL of the pod:

    # curl http://10.38.0.0
    Hello World from GlusterFS!!!

23.8. Example: Containerized Heketi for managing dedicated GlusterFS storage

23.8.1. Overview

This example provides information about the integration, deployment, and management of GlusterFS containerized storage nodes by using Heketi running on OpenShift Container Platform.

This example:

  • Shows how to install and configure a Heketi server on OpenShift to perform dynamic provisioning.
  • Assumes you have familiarity with Kubernetes and the Kubernetes Persistent Storage model.
  • Assumes you have access to an existing, dedicated GlusterFS cluster that has raw devices available for consumption and management by a Heketi server. If you do not have this, you can create a three-node cluster using your virtual machine solution of choice. Ensure that you create a few raw devices and allocate plenty of space (at least 100GB recommended). See the Red Hat Gluster Storage Installation Guide.

23.8.2. Environment and Prerequisites

This example uses the following environment and prerequisites:

  • GlusterFS cluster running Red Hat Gluster Storage (RHGS) 3.1. Three nodes, each with at least two 100GB RAW devices:

    • gluster23.rhs (192.168.1.200)
    • gluster24.rhs (192.168.1.201)
    • gluster25.rhs (192.168.1.202)
  • This example uses an all-in-one OpenShift Container Platform cluster (master and node on a single host), though it can work using a standard, multi-node cluster as well.

    • k8dev2.rhs (192.168.1.208)

23.8.3. Installing and Configuring Heketi

Heketi is used to manage the Gluster cluster storage (adding volumes, removing volumes, etc.). Download the deploy-heketi-template to install Heketi on OpenShift Container Platform.

Note

This template file places the database in an EmptyDir volume. Adjust the database storage accordingly for reliable persistent storage.
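
For example, one way to make the database persistent is to replace the EmptyDir volume in the processed template with a persistent volume claim. The following is a minimal sketch only; the volume name (db) and mount path (/var/lib/heketi) are assumptions that must be verified against the template you downloaded, and heketi-db-pvc is a hypothetical claim created separately:

volumes:
- name: db                      # assumed volume name from the template; verify before use
  persistentVolumeClaim:
    claimName: heketi-db-pvc    # hypothetical PVC created beforehand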

  1. Create a new project:

    $ oc new-project <project-name>
  2. Enable privileged containers in the new project:

    $ oc adm policy add-scc-to-user privileged -z default
  3. Register the deploy-heketi template:

    $ oc create -f <template-path>/deploy-heketi-template
  4. Deploy the bootstrap Heketi container:

    $ oc process deploy-heketi -v \
             HEKETI_KUBE_NAMESPACE=<project-name> \
             HEKETI_KUBE_APIHOST=<master-url-and-port> \
             HEKETI_KUBE_INSECURE=y \
             HEKETI_KUBE_USER=<cluster-admin-username> \
             HEKETI_KUBE_PASSWORD=<cluster-admin-password> | oc create -f -
  5. Wait until the deploy-heketi pod starts and all services are running. Then get Heketi service details:

    $ oc get svc
    NAME            CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    deploy-heketi   172.30.96.173   <none>        8080/TCP   2m
  6. Check that the Heketi service is running properly; it must return Hello from Heketi:

    $ curl http://<cluster-ip>:8080/hello
    Hello from Heketi
  7. Set an environment variable for the Heketi server:

    $ export HEKETI_CLI_SERVER=http://<cluster-ip>:8080

23.8.4. Loading Topology

Topology is used to tell Heketi about the environment and what nodes and devices it will manage.

Note

Heketi is currently limited to managing raw devices only. If a device is already a Gluster volume, it is skipped and ignored.

  1. Create and load the topology file. A sample file is located in /usr/share/heketi/topology-sample.json by default, or in /etc/heketi, depending on how Heketi was installed.

    Note

    Depending on your method of installation, this file might not exist. If it is missing, manually create the topology-sample.json file, as shown in the following example.

    {
      "clusters": [
        {
          "nodes": [
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "gluster23.rhs"
                  ],
                  "storage": [
                    "192.168.1.200"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/sde",
                "/dev/sdf"
              ]
            },
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "gluster24.rhs"
                  ],
                  "storage": [
                    "192.168.1.201"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/sde",
                "/dev/sdf"
              ]
            },
            {
              "node": {
                "hostnames": {
                  "manage": [
                    "gluster25.rhs"
                  ],
                  "storage": [
                    "192.168.1.202"
                  ]
                },
                "zone": 1
              },
              "devices": [
                "/dev/sde",
                "/dev/sdf"
              ]
            }
          ]
        }
      ]
    }
  2. Run the following command to load the topology of your environment.

    $ heketi-cli topology load --json=topology-sample.json
    
        	Found node gluster23.rhs on cluster bdf9d8ca3fa269ff89854faf58f34b9a
       		Adding device /dev/sde ... OK
       	 	Adding device /dev/sdf ... OK
        	Creating node gluster24.rhs ... ID: 8e677d8bebe13a3f6846e78a67f07f30
       	 	Adding device /dev/sde ... OK
       	 	Adding device /dev/sdf ... OK
    ...
  3. Create a Gluster volume to verify Heketi:

    $ heketi-cli volume create --size=50
  4. View the volume information from one of the Gluster nodes:

    $ gluster volume info
    
    	Volume Name: vol_335d247ac57ecdf40ac616514cc6257f 1
    	Type: Distributed-Replicate
    	Volume ID: 75be7940-9b09-4e7f-bfb0-a7eb24b411e3
    	Status: Started
    ...
    1
    Volume created by heketi-cli.

23.8.5. Dynamically Provision a Volume

Note

If you installed OpenShift Container Platform by using the BYO (Bring Your Own) OpenShift Ansible inventory configuration files for either a native or an external GlusterFS instance, the GlusterFS StorageClass is created automatically during the installation. In that case, you can skip the following storage class creation steps and proceed directly to creating a persistent volume claim.

  1. Create a StorageClass object definition. The following definition is based on the minimum requirements needed for this example to work with OpenShift Container Platform. See Dynamic Provisioning and Creating Storage Classes for additional parameters and specification definitions.

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: gluster-dyn
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://glusterclient2.rhs:8080" 1
      restauthenabled: "false" 2
    1
    The Heketi server from the HEKETI_CLI_SERVER environment variable.
    2
    Since authentication is not turned on in this example, set to false.
  2. From the OpenShift Container Platform master host, create the storage class:

    $ oc create -f glusterfs-storageclass1.yaml
    storageclass "gluster-dyn" created
  3. Create a persistent volume claim (PVC), requesting the newly-created storage class. For example:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
     name: gluster-dyn-pvc
    spec:
     accessModes:
      - ReadWriteMany
     resources:
       requests:
            storage: 30Gi
     storageClassName: gluster-dyn
  4. From the OpenShift Container Platform master host, create the PVC:

    $ oc create -f glusterfs-pvc-storageclass.yaml
    persistentvolumeclaim "gluster-dyn-pvc" created
  5. View the PVC to see that the volume was dynamically created and bound to the PVC:

    $ oc get pvc
    NAME          	STATUS	VOLUME                                 		CAPACITY   	ACCESSMODES   	STORAGECLASS   	AGE
    gluster-dyn-pvc Bound	pvc-78852230-d8e2-11e6-a3fa-0800279cf26f   	30Gi   		RWX       	gluster-dyn	42s
  6. Verify and view the new volume on one of the Gluster nodes:

    $ gluster volume info
    
    	Volume Name: vol_335d247ac57ecdf40ac616514cc6257f 1
    	Type: Distributed-Replicate
    	Volume ID: 75be7940-9b09-4e7f-bfb0-a7eb24b411e3
    	Status: Started
            ...
    	Volume Name: vol_f1404b619e6be6ef673e2b29d58633be 2
    	Type: Distributed-Replicate
    	Volume ID: 7dc234d0-462f-4c6c-add3-fb9bc7e8da5e
    	Status: Started
    	Number of Bricks: 2 x 2 = 4
    	...
    1
    Volume created by heketi-cli.
    2
    New dynamically created volume triggered by Kubernetes and the storage class.

23.8.6. Creating a NGINX Pod That Uses the PVC

At this point, you have a dynamically created GlusterFS volume bound to a PVC. You can now use this PVC in a pod. In this example, create a simple NGINX pod.

  1. Create the pod object definition:

    apiVersion: v1
    kind: Pod
    metadata:
      name: gluster-pod1
      labels:
        name: gluster-pod1
    spec:
      containers:
      - name: gluster-pod1
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - name: web
          containerPort: 80
        securityContext:
          privileged: true
        volumeMounts:
        - name: gluster-vol1
          mountPath: /usr/share/nginx/html
      volumes:
      - name: gluster-vol1
        persistentVolumeClaim:
          claimName: gluster-dyn-pvc 1
    1
    The name of the PVC created in the previous step.
  2. From the OpenShift Container Platform master host, create the pod:

    $ oc create -f nginx-pod.yaml
    pod "gluster-pod1" created
  3. View the pod. Give it a few minutes, as it might need to download the image if it does not already exist:

    $ oc get pods -o wide
    NAME                               READY     STATUS    RESTARTS   AGE       IP               NODE
    gluster-pod1                       1/1       Running   0          9m        10.38.0.0        node1
  4. Now remote into the container with oc exec and create an index.html file:

    $ oc exec -ti gluster-pod1 /bin/sh
    $ cd /usr/share/nginx/html
    $ echo 'Hello World from GlusterFS!!!' > index.html
    $ ls
    index.html
    $ exit
  5. Now curl the URL of the pod:

    $ curl http://10.38.0.0
    Hello World from GlusterFS!!!

23.9. Mounting Volumes on Privileged Pods

23.9.1. Overview

Persistent volumes can be mounted to pods with the privileged security context constraint (SCC) attached.

Note

While this topic uses GlusterFS as a sample use-case for mounting volumes onto privileged pods, it can be adapted to use any supported storage plug-in.

23.9.2. Prerequisites

23.9.3. Creating the Persistent Volume

Creating the PersistentVolume makes the storage accessible to users, regardless of project.

  1. As the admin, create the service, endpoint object, and persistent volume (minimal example definitions are sketched after this procedure):

    $ oc create -f gluster-endpoints-service.yaml
    $ oc create -f gluster-endpoints.yaml
    $ oc create -f gluster-pv.yaml
  2. Verify that the objects were created:

    $ oc get svc
    NAME              CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR   AGE
    gluster-cluster   172.30.151.58   <none>        1/TCP     <none>     24s
    $ oc get ep
    NAME              ENDPOINTS                           AGE
    gluster-cluster   192.168.59.102:1,192.168.59.103:1   2m
    $ oc get pv
    NAME                     LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
    gluster-default-volume   <none>    2Gi        RWX           Available                       2d
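
If you do not already have these definition files from your GlusterFS persistent storage setup, the following sketches show what they typically contain. The endpoint IP addresses, Gluster volume path (myVol1), PV name, and capacity are illustrative assumptions and must match your own Gluster trusted pool; the names shown here correspond to the output above.

Example gluster-endpoints-service.yaml (sketch):

apiVersion: v1
kind: Service
metadata:
  name: gluster-cluster
spec:
  ports:
  - port: 1

Example gluster-endpoints.yaml (sketch):

apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-cluster
subsets:
  - addresses:
      - ip: 192.168.59.102
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.59.103
    ports:
      - port: 1

Example gluster-pv.yaml (sketch):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-default-volume
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: myVol1
    readOnly: false
  persistentVolumeReclaimPolicy: Retain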

23.9.4. Creating a Regular User

Adding a regular user to the privileged SCC (or to a group given access to the SCC) allows them to run privileged pods:

  1. As the admin, add a user to the SCC:

    $ oc adm policy add-scc-to-user privileged <username>
  2. Log in as the regular user:

    $ oc login -u <username> -p <password>
  3. Then, create a new project:

    $ oc new-project <project_name>

23.9.5. Creating the Persistent Volume Claim

  1. As a regular user, create the PersistentVolumeClaim to access the volume:

    $ oc create -f gluster-pvc.yaml -n <project_name>
  2. Define your pod to access the claim:

    Example 23.15. Pod Definition

    apiVersion: v1
    id: gluster-S3-pvc
    kind: Pod
    metadata:
      name: gluster-nginx-priv
    spec:
      containers:
        - name: gluster-nginx-priv
          image: fedora/nginx
          volumeMounts:
            - mountPath: /mnt/gluster 1
              name: gluster-volume-claim
          securityContext:
            privileged: true
      volumes:
        - name: gluster-volume-claim
          persistentVolumeClaim:
            claimName: gluster-claim 2
    1
    Volume mount within the pod.
    2
    The claimName must match the name of the PersistentVolumeClaim created in the previous step (gluster-claim).
  3. Upon pod creation, the mount directory is created and the volume is attached to that mount point.

    As the regular user, create a pod from the definition:

    $ oc create -f gluster-S3-pod.yaml
  4. Verify that the pod was created successfully:

    $ oc get pods
    NAME                 READY     STATUS    RESTARTS   AGE
    gluster-S3-pod   1/1       Running   0          36m

    It can take several minutes for the pod to be created.

23.9.6. Verifying the Setup

23.9.6.1. Checking the Pod SCC

  1. Export the pod configuration:

    $ oc export pod <pod_name>
  2. Examine the output. Check that openshift.io/scc has the value of privileged:

    Example 23.16. Export Snippet

    metadata:
      annotations:
        openshift.io/scc: privileged

23.9.6.2. Verifying the Mount

  1. Access the pod and check that the volume is mounted:

    $ oc rsh <pod_name>
    [root@gluster-S3-pvc /]# mount
  2. Examine the output for the Gluster volume:

    Example 23.17. Volume Mount

    192.168.59.102:gv0 on /mnt/gluster type fuse.gluster (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

23.10. Backing Docker Registry with GlusterFS Storage

23.10.1. Overview

This topic reviews how to attach a GlusterFS persistent volume to the Docker Registry.

It is assumed that the Docker registry service has already been started and the Gluster volume has been created.

23.10.2. Prerequisites

Note

All oc commands are executed on the master node as the admin user.

23.10.3. Create the Gluster Persistent Volume

First, make the Gluster volume available to the registry.

$ oc create -f gluster-endpoints-service.yaml
$ oc create -f gluster-endpoints.yaml
$ oc create -f gluster-pv.yaml
$ oc create -f gluster-pvc.yaml

Check to make sure the PV and PVC were created and bound successfully. The expected output should resemble the following. Note that the PVC status is Bound, indicating that it has bound to the PV.

$ oc get pv
NAME         LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
gluster-pv   <none>    1Gi        RWX           Available                       37s
$ oc get pvc
NAME            LABELS    STATUS    VOLUME       CAPACITY   ACCESSMODES   AGE
gluster-claim   <none>    Bound     gluster-pv   1Gi        RWX           24s
Note

If either the PVC or PV failed to create or the PVC failed to bind, refer back to the GlusterFS Persistent Storage guide. Do not proceed until they initialize and the PVC status is Bound.

23.10.4. Attach the PVC to the Docker Registry

Before moving forward, ensure that the docker-registry service is running.

$ oc get svc
NAME              CLUSTER_IP       EXTERNAL_IP   PORT(S)                 SELECTOR                  AGE
docker-registry   172.30.167.194   <none>        5000/TCP                docker-registry=default   18m
Note

If either the docker-registry service or its associated pod is not running, refer back to the docker-registry setup instructions for troubleshooting before continuing.

Then, attach the PVC:

$ oc volume deploymentconfigs/docker-registry --add --name=registry-storage -t pvc \
     --claim-name=gluster-claim --overwrite

Deploying a Docker Registry provides more information on using the Docker registry.

23.10.5. Known Issues

23.10.5.1. Pod Cannot Resolve the Volume Host

In non-production cases where the dnsmasq server is located on the same node as the OpenShift Container Platform master service, pods might not be able to resolve the host machines when mounting the volume, causing errors in the docker-registry-1-deploy pod. This can happen when dnsmasq.service fails to start because of a collision with OpenShift Container Platform DNS on port 53. To run the DNS server on the master host, some configuration needs to be changed.

In /etc/dnsmasq.conf, add:

# Reverse DNS record for master
host-record=master.example.com,<master-IP>
# Wildcard DNS for OpenShift Applications - Points to Router
address=/apps.example.com/<master-IP>
# Forward .local queries to SkyDNS
server=/local/127.0.0.1#8053
# Forward reverse queries for service network to SkyDNS.
# This is for default OpenShift SDN - change as needed.
server=/17.30.172.in-addr.arpa/127.0.0.1#8053

With these settings, dnsmasq will pull from the /etc/hosts file on the master node.

Add the appropriate host names and IPs for all necessary hosts.
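
For example, the entries added to /etc/hosts on the master might look like the following; the host names and IP addresses are placeholders for your environment:

<master-IP>    master.example.com
<node-IP>      node1.example.com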

In master-config.yaml, change bindAddress to:

dnsConfig:
 bindAddress: 127.0.0.1:8053

When pods are created, they receive a copy of /etc/resolv.conf, which typically contains only the master DNS server so they can resolve external DNS requests. To enable internal DNS resolution, insert the dnsmasq server at the top of the server list. This way, dnsmasq will attempt to resolve requests internally first.

In /etc/resolv.conf on all scheduled nodes:

nameserver 192.168.1.100  1
nameserver 192.168.1.1    2
1
Add the internal DNS server.
2
Pre-existing external DNS server.

Once the configurations are changed, restart the OpenShift Container Platform master and dnsmasq services.

$ systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
$ systemctl restart dnsmasq

23.11. Binding Persistent Volumes by Labels

23.11.1. Overview

This topic provides an end-to-end example for binding persistent volume claims (PVCs) to persistent volumes (PVs), by defining labels in the PV and matching selectors in the PVC. This feature is available for all storage options. It is assumed that an OpenShift Container Platform cluster contains persistent storage resources that are available for binding by PVCs.

A Note on Labels and Selectors

Labels are an OpenShift Container Platform feature that support user-defined tags (key-value pairs) as part of an object’s specification. Their primary purpose is to enable the arbitrary grouping of objects by defining identical labels among them. These labels can then be targeted by selectors to match all objects with specified label values. It is this functionality we will take advantage of to enable our PVC to bind to our PV. For a more in-depth look at labels, see Pods and Services.

Note

For this example, we will be using modified GlusterFS PV and PVC specifications. However, the implementation of selectors and labels is generic across all storage options. See the relevant storage option for your volume provider to learn more about its unique configuration.

23.11.1.1. Assumptions

It is assumed that you have:

  • An existing OpenShift Container Platform cluster with at least one master and one node
  • At least one supported storage volume
  • A user with cluster-admin privileges

23.11.2. Defining Specifications

Note

These specifications are tailored to GlusterFS. Consult the relevant storage option for your volume provider to learn more about its unique configuration.

23.11.2.1. Persistent Volume with Labels

Example 23.18. glusterfs-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-volume
  labels: 1
    storage-tier: gold
    aws-availability-zone: us-east-1
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster 2
    path: myVol1
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
1
Use labels to identify common attributes or characteristics shared among volumes. In this case, we defined the Gluster volume to have a custom attribute (key) named storage-tier with a value of gold assigned. A claim will be able to select a PV with storage-tier=gold to match this PV.
2
Endpoints define the Gluster trusted pool and are discussed below.

23.11.2.2. Persistent Volume Claim with Selectors

A claim with a selector stanza (see the example below) attempts to match existing, unclaimed, and non-prebound PVs. When a PVC specifies a selector, the PV’s capacity is ignored during matching; however, accessModes are still considered in the matching criteria.

It is important to note that a claim must match all of the key-value pairs included in its selector stanza. If no PV matches the claim, then the PVC will remain unbound (Pending). A PV can subsequently be created and the claim will automatically check for a label match.

Example 23.19. glusterfs-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
     requests:
       storage: 1Gi
  selector: 1
    matchLabels:
      storage-tier: gold
      aws-availability-zone: us-east-1
1
The selector stanza defines all labels necessary in a PV in order to match this claim.

23.11.2.3. Volume Endpoints

To attach the PV to the Gluster volume, configure the endpoints before creating the objects.

Example 23.20. glusterfs-ep.yaml

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.168.122.221
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.122.222
    ports:
      - port: 1

23.11.2.4. Deploy the PV, PVC, and Endpoints

For this example, run the oc commands as a cluster-admin privileged user. In a production environment, cluster clients might be expected to define and create the PVC.

# oc create -f glusterfs-ep.yaml
endpoints "glusterfs-cluster" created
# oc create -f glusterfs-pv.yaml
persistentvolume "gluster-volume" created
# oc create -f glusterfs-pvc.yaml
persistentvolumeclaim "gluster-claim" created

Lastly, confirm that the PV and PVC bound successfully.

# oc get pv,pvc
NAME              CAPACITY   ACCESSMODES      STATUS     CLAIM                     REASON    AGE
gluster-volume    2Gi        RWX              Bound      gfs-trial/gluster-claim             7s
NAME              STATUS     VOLUME           CAPACITY   ACCESSMODES               AGE
gluster-claim     Bound      gluster-volume   2Gi        RWX                       7s
Note

PVCs are local to a project, whereas PVs are a cluster-wide, global resource. Developers and non-administrator users may not have access to see all (or any) of the available PVs.

23.12. Using Storage Classes for Dynamic Provisioning

23.12.1. Overview

In these examples, we walk through a few scenarios of various configurations of StorageClasses and Dynamic Provisioning using Google Cloud Platform Compute Engine (GCE). These examples assume some familiarity with Kubernetes, GCE, and Persistent Disks, and that OpenShift Container Platform is installed and properly configured to use GCE.

23.12.2. Scenario 1: Basic Dynamic Provisioning with Two Types of StorageClasses

StorageClasses can be used to differentiate and delineate storage levels and usages. In this case, the cluster-admin or storage-admin sets up two distinct classes of storage in GCE.

  • slow: Cheap, efficient, and optimized for sequential data operations (slower reading and writing)
  • fast: Optimized for higher rates of random IOPS and sustained throughput (faster reading and writing)

By creating these StorageClasses, the cluster-admin or storage-admin allows users to create claims requesting a particular level or service of StorageClass.

Example 23.21. StorageClass Slow Object Definitions

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow 1
provisioner: kubernetes.io/gce-pd 2
parameters:
  type: pd-standard 3
  zone: us-east1-d  4
1
Name of the StorageClass.
2
The provisioner plug-in to be used. This is a required field for StorageClasses.
3
PD type. This example uses pd-standard, which has a slightly lower cost, rate of sustained IOPS, and throughput versus pd-ssd, which carries more sustained IOPS and throughput.
4
The zone is required.

Example 23.22. StorageClass Fast Object Definition

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zone: us-east1-d

As a cluster-admin or storage-admin, save both definitions as YAML files. For example, slow-gce.yaml and fast-gce.yaml. Then create the StorageClasses.

# oc create -f slow-gce.yaml
storageclass "slow" created

# oc create -f fast-gce.yaml
storageclass "fast" created

# oc get storageclass
NAME       TYPE
fast       kubernetes.io/gce-pd
slow       kubernetes.io/gce-pd
Important

cluster-admin or storage-admin users are responsible for relaying the correct StorageClass name to the correct users, groups, and projects.

As a regular user, create a new project:

# oc new-project rh-eng

Create the claim YAML definition, save it to a file (pvc-fast.yaml):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: pvc-engineering
spec:
 accessModes:
  - ReadWriteMany
 resources:
   requests:
     storage: 10Gi
 storageClassName: fast

Add the claim with the oc create command:

# oc create -f pvc-fast.yaml
persistentvolumeclaim "pvc-engineering" created

Check to see if your claim is bound:

# oc get pvc
NAME              STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
pvc-engineering   Bound     pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           2m
Important

Since this claim was created and bound in the rh-eng project, it can be shared by any user in the same project.

As a cluster-admin or storage-admin user, view the recent dynamically provisioned Persistent Volume (PV).

# oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                     REASON    AGE
pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           Delete          Bound     rh-eng/pvc-engineering              5m
Important

Notice the RECLAIMPOLICY is Delete by default for all dynamically provisioned volumes. This means the volume only lasts as long as the claim still exists in the system. If you delete the claim, the volume is also deleted and all data on the volume is lost.

Finally, check the GCE console. The new disk has been created and is ready for use.

kubernetes-dynamic-pvc-e9b4fef7-8bf7-11e6-9962-42010af00004 	SSD persistent disk 	10 GB 	us-east1-d

Pods can now reference the persistent volume claim and start using the volume.
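
For example, a minimal pod definition that mounts the claim might look like the following sketch; the pod name, container image, and mount path are illustrative only:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-engineering-pod
spec:
  containers:
  - name: web
    image: gcr.io/google_containers/nginx-slim:0.8
    volumeMounts:
    - name: gce-vol1
      mountPath: /usr/share/nginx/html
  volumes:
  - name: gce-vol1
    persistentVolumeClaim:
      claimName: pvc-engineering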

23.12.3. Scenario 2: How to enable Default StorageClass behavior for a Cluster

In this example, a cluster-admin or storage-admin enables a default storage class for all other users and projects that do not implicitly specify a StorageClass in their claim. This is useful for a cluster-admin or storage-admin to provide easy management of a storage volume without having to set up or communicate specialized StorageClasses across the cluster.

This example builds upon Section 23.12.2, “Scenario 1: Basic Dynamic Provisioning with Two Types of StorageClasses”. The cluster-admin or storage-admin creates another StorageClass to designate as the default StorageClass.

Example 23.23. Default StorageClass Object Definition

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: generic 1
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" 2
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-east1-d
1
Name of the StorageClass, which needs to be unique in the cluster.
2
Annotation that marks this StorageClass as the default class. You must use "true" quoted in this version of the API. Without this annotation, OpenShift Container Platform does not consider this the default StorageClass.

As a cluster-admin or storage-admin, save the definition to a YAML file (generic-gce.yaml), then create the StorageClass:

# oc create -f generic-gce.yaml
storageclass "generic" created

# oc get storageclass
NAME       TYPE
generic    kubernetes.io/gce-pd
fast       kubernetes.io/gce-pd
slow       kubernetes.io/gce-pd
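
The same annotation can also be applied to an existing StorageClass with oc annotate rather than recreating the object. For example, to mark (or re-mark) the generic class as the default:

# oc annotate storageclass generic storageclass.kubernetes.io/is-default-class="true" --overwrite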

As a regular user, create a new claim definition without any StorageClass requirement and save it to a file (generic-pvc.yaml).

Example 23.24. Default Storage Claim Object Definition

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: pvc-engineering2
spec:
 accessModes:
  - ReadWriteMany
 resources:
   requests:
     storage: 5Gi

Create the claim and check that it is bound:

# oc create -f generic-pvc.yaml
persistentvolumeclaim "pvc-engineering2" created
# oc get pvc
NAME               STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
pvc-engineering    Bound     pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           41m
pvc-engineering2   Bound     pvc-a9f70544-8bfd-11e6-9962-42010af00004   5Gi        RWX           7s  1
1
pvc-engineering2 is bound to a dynamically provisioned Volume by default.

As a cluster-admin or storage-admin, view the Persistent Volumes defined so far:

# oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                     REASON    AGE
pvc-a9f70544-8bfd-11e6-9962-42010af00004   5Gi        RWX           Delete          Bound     rh-eng/pvc-engineering2             5m 1
pvc-ba4612ce-8b4d-11e6-9962-42010af00004   5Gi        RWO           Delete          Bound     mytest/gce-dyn-claim1               21h
pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           Delete          Bound     rh-eng/pvc-engineering              46m 2
1
This PV was bound to our default dynamic volume from the default StorageClass.
2
This PV was bound to our first PVC from Section 23.12.2, “Scenario 1: Basic Dynamic Provisioning with Two Types of StorageClasses”, with our fast StorageClass.

Create a manually provisioned disk using GCE (not dynamically provisioned). Then create a Persistent Volume that connects to the new GCE disk (pv-manual-gce.yaml).

Example 23.25. Manual PV Object Definition

apiVersion: v1
kind: PersistentVolume
metadata:
 name: pv-manual-gce
spec:
 capacity:
   storage: 35Gi
 accessModes:
   - ReadWriteMany
 gcePersistentDisk:
   readOnly: false
   pdName: the-newly-created-gce-PD
   fsType: ext4

Execute the object definition file:

# oc create -f pv-manual-gce.yaml

Now view the PVs again. Notice that a pv-manual-gce volume is Available.

# oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                     REASON    AGE
pv-manual-gce                              35Gi       RWX           Retain          Available                                       4s
pvc-a9f70544-8bfd-11e6-9962-42010af00004   5Gi        RWX           Delete          Bound       rh-eng/pvc-engineering2             12m
pvc-ba4612ce-8b4d-11e6-9962-42010af00004   5Gi        RWO           Delete          Bound       mytest/gce-dyn-claim1               21h
pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           Delete          Bound       rh-eng/pvc-engineering              53m

Now create another claim identical to the generic-pvc.yaml PVC definition but change the name and do not set a storage class name.

Example 23.26. Claim Object Definition

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: pvc-engineering3
spec:
 accessModes:
  - ReadWriteMany
 resources:
   requests:
     storage: 15Gi

Because a default StorageClass is enabled in this instance, the manually created PV does not satisfy the claim request. The user receives a new dynamically provisioned Persistent Volume.

# oc get pvc
NAME               STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
pvc-engineering    Bound     pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           1h
pvc-engineering2   Bound     pvc-a9f70544-8bfd-11e6-9962-42010af00004   5Gi        RWX           19m
pvc-engineering3   Bound     pvc-6fa8e73b-8c00-11e6-9962-42010af00004   15Gi       RWX           6s
Important

Because the default StorageClass is enabled on this system, the manually created Persistent Volume would need to have been created in the default StorageClass in order for it to be bound by the above claim instead of a newly provisioned dynamic volume.

To fix this, the cluster-admin or storage-admin user can either create another GCE disk or delete the first manual PV, and then use a PV object definition that assigns a StorageClass name (pv-manual-gce2.yaml):

Example 23.27. Manual PV Spec with default StorageClass name

apiVersion: v1
kind: PersistentVolume
metadata:
 name: pv-manual-gce2
spec:
 capacity:
   storage: 35Gi
 accessModes:
   - ReadWriteMany
 gcePersistentDisk:
   readOnly: false
   pdName: the-newly-created-gce-PD
   fsType: ext4
 storageClassName: generic 1
1
The name for previously created generic StorageClass.

Execute the object definition file:

# oc create -f pv-manual-gce2.yaml

List the PVs:

# oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                     REASON    AGE
pv-manual-gce                              35Gi       RWX           Retain          Available                                       4s 1
pv-manual-gce2                             35Gi       RWX           Retain          Bound       rh-eng/pvc-engineering3             4s 2
pvc-a9f70544-8bfd-11e6-9962-42010af00004   5Gi        RWX           Delete          Bound       rh-eng/pvc-engineering2             12m
pvc-ba4612ce-8b4d-11e6-9962-42010af00004   5Gi        RWO           Delete          Bound       mytest/gce-dyn-claim1               21h
pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           Delete          Bound       rh-eng/pvc-engineering              53m
1
The original manual PV, still unbound and Available. This is because it was not created in the default StorageClass.
2
The claim pvc-engineering3 is bound to the manually created PV pv-manual-gce2, because that PV was created with the default StorageClass name.
Important

Notice that all dynamically provisioned volumes by default have a RECLAIMPOLICY of Delete. Once the PVC dynamically bound to the PV is deleted, the GCE volume is deleted and all data is lost. However, the manually created PV has a default RECLAIMPOLICY of Retain.

23.13. Using Storage Classes for Existing Legacy Storage

23.13.1. Overview

In this example, a legacy data volume exists and a cluster-admin or storage-admin needs to make it available for consumption in a particular project. Using StorageClasses decreases the likelihood of other users and projects gaining access to this volume from a claim, because the claim must specify an exactly matching StorageClass name. This example also disables dynamic provisioning. This example assumes that a volume containing the legacy data already exists (in this example, an existing GCE disk).

23.13.1.1. Scenario 1: Link StorageClass to existing Persistent Volume with Legacy Data

As a cluster-admin or storage-admin, define and create the StorageClass for historical financial data.

Example 23.28. StorageClass finance-history Object Definitions

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: finance-history 1
provisioner: no-provisioning 2
parameters: 3
1
Name of the StorageClass.
2
This is a required field. However, because this StorageClass is not used for dynamic provisioning, any value can be put here as long as it is not the name of an actual provisioner plug-in.
3
Parameters can simply be left blank, since these are only used for the dynamic provisioner.

Save the definitions to a YAML file (finance-history-storageclass.yaml) and create the StorageClass.

# oc create -f finance-history-storageclass.yaml
storageclass "finance-history" created


# oc get storageclass
NAME              TYPE
finance-history   no-provisioning
Important

cluster-admin or storage-admin users are responsible for relaying the correct StorageClass name to the correct users, groups, and projects.

The StorageClass exists. A cluster-admin or storage-admin can create the Persistent Volume (PV) for use with the StorageClass. Create a manually provisioned disk using GCE (not dynamically provisioned) and a Persistent Volume that connects to the new GCE disk (gce-pv.yaml).

Example 23.29. Finance History PV Object

apiVersion: v1
kind: PersistentVolume
metadata:
 name: pv-finance-history
spec:
 capacity:
   storage: 35Gi
 accessModes:
   - ReadWriteMany
 gcePersistentDisk:
   readOnly: false
   pdName: the-existing-PD-volume-name-that-contains-the-valuable-data 1
   fsType: ext4
 storageClassName: finance-history 2
1
The name of the GCE disk that already exists and contains the legacy data.
2
The StorageClass name, which must match exactly.

As a cluster-admin or storage-admin, create and view the PV.

# oc create -f gce-pv.yaml
persistentvolume "pv-finance-history" created

# oc get pv
NAME                CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                        REASON    AGE
pv-finance-history   35Gi       RWX           Retain          Available                                          2d

Notice you have a pv-finance-history Available and ready for consumption.

As a user, create a Persistent Volume Claim (PVC) as a YAML file and specify the correct StorageClass name:

Example 23.30. Claim for finance-history Object Definition

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: pvc-finance-history
spec:
 accessModes:
  - ReadWriteMany
 resources:
   requests:
     storage: 20Gi
 storageClassName: finance-history 1
1
The StorageClass name, which must match exactly, or the claim will remain unbound until it is deleted or a StorageClass that matches the name is created.

Create and view the PVC and PV to see if it is bound.

# oc create -f pvc-finance-history.yaml
persistentvolumeclaim "pvc-finance-history" created

# oc get pvc
NAME                  STATUS    VOLUME               CAPACITY   ACCESSMODES   AGE
pvc-finance-history   Bound     pv-finance-history   35Gi       RWX           9m


# oc get pv  (cluster/storage-admin)
NAME                 CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                         REASON    AGE
pv-finance-history   35Gi       RWX           Retain          Bound       default/pvc-finance-history             5m
Important

You can use StorageClasses in the same cluster for both legacy data (no dynamic provisioning) and with dynamic provisioning.

23.14. Configuring Azure Blob Storage for Integrated Docker Registry

23.14.1. Overview

This topic reviews how to configure Microsoft Azure Blob Storage for the OpenShift Container Platform integrated Docker registry.

23.14.2. Before You Begin

  • Create a storage container using Microsoft Azure Portal, Microsoft Azure CLI, or Microsoft Azure Storage Explorer (a sketch of one Azure CLI approach follows this list). Keep a note of the storage account name, storage account key, and container name.
  • Deploy the integrated Docker registry if it is not deployed.
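
The following is a sketch of one way to create the storage account and container with the Azure CLI (az). The resource group, location, account, and container names are placeholders that match the configuration file used later in this topic, and the exact options can vary between CLI versions:

$ az group create --name registry-rg --location eastus
$ az storage account create --name azureblobacc --resource-group registry-rg --sku Standard_LRS
$ az storage account keys list --account-name azureblobacc --resource-group registry-rg
$ az storage container create --name azureblobname --account-name azureblobacc --account-key <storage-account-key>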

23.14.3. Overriding Registry Configuration

To create a new registry pod and replace the old pod automatically:

  1. Create a new registry configuration file called registryconfig.yaml and add the following information:

    version: 0.1
    log:
      level: debug
    http:
      addr: :5000
    storage:
      cache:
        blobdescriptor: inmemory
      delete:
        enabled: true
      azure: 1
        accountname: azureblobacc
        accountkey:  azureblobacckey
        container: azureblobname
        realm: core.windows.net 2
    auth:
      openshift:
        realm: openshift
    middleware:
      registry:
        - name: openshift
      repository:
        - name: openshift
          options:
            acceptschema2: false
            pullthrough: true
            enforcequota: false
            projectcachettl: 1m
            blobrepositorycachettl: 10m
      storage:
        - name: openshift
    1
    Replace the values for accountname, accountkey, and container with the storage account name, storage account key, and storage container name, respectively.
    2
    If using Azure regional cloud, set to the desired realm. For example, core.cloudapi.de for the Germany regional cloud.
  2. Create a new registry configuration:

    $ oc secrets new registry-config config.yaml=registryconfig.yaml
  3. Add the secret:

    $ oc volume dc/docker-registry --add --type=secret \
        --secret-name=registry-config -m /etc/docker/registry/
  4. Set the REGISTRY_CONFIGURATION_PATH environment variable:

    $ oc set env dc/docker-registry \
        REGISTRY_CONFIGURATION_PATH=/etc/docker/registry/config.yaml
  5. If you already created a registry configuration:

    1. Delete the secret:

      $ oc delete secret registry-config
    2. Create a new registry configuration:

      $ oc secrets new registry-config config.yaml=registryconfig.yaml
    3. Update the configuration by starting a new rollout:

      $ oc rollout latest docker-registry
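
After starting the rollout, one way to confirm that the registry redeployed with the new configuration is to watch the rollout status and verify the environment variable, for example:

$ oc rollout status dc/docker-registry
$ oc set env dc/docker-registry --list | grep REGISTRY_CONFIGURATION_PATH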