Chapter 28. Persistent Storage Examples

28.1. Overview

The following sections provide detailed, comprehensive instructions on setting up and configuring common storage use cases. These examples cover both the administration of persistent volumes and their security, and how to claim against the volumes as a user of the system.

28.2. Sharing an NFS mount across two persistent volume claims

28.2.1. Overview

The following use case describes how a cluster administrator can configure shared storage for use by two separate containers. This example highlights the use of NFS, but it can easily be adapted to other shared storage types, such as GlusterFS. In addition, this example shows how to configure pod security as it relates to shared storage.

Persistent Storage Using NFS provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using NFS as persistent storage. This topic shows an end-to-end example of using an existing NFS cluster as an OpenShift Container Platform persistent store, and assumes an existing NFS server and exports exist in your OpenShift Container Platform infrastructure.
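
Before you begin, you can optionally confirm that the export is visible from the OpenShift Container Platform hosts. This is a minimal check, assuming the example NFS server nfs.f22 and export path /opt/nfs used later in this topic:

# showmount -e nfs.f22

On the NFS server itself, exportfs -v lists the active exports and their options.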

Note

All oc commands are executed on the OpenShift Container Platform master host.

28.2.2. Creating the Persistent Volume

Before creating the PV object in OpenShift Container Platform, the persistent volume (PV) file is defined:

Example 28.1. Persistent Volume Object Definition Using NFS

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv 1
spec:
  capacity:
    storage: 1Gi 2
  accessModes:
    - ReadWriteMany 3
  persistentVolumeReclaimPolicy: Retain 4
  nfs: 5
    path: /opt/nfs 6
    server: nfs.f22 7
    readOnly: false
1
The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2
The amount of storage allocated to this volume.
3
accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
4
The volume reclaim policy Retain indicates that the volume will be preserved after the pods accessing it terminate.
5
This defines the volume type being used, in this case the NFS plug-in.
6
This is the NFS mount path.
7
This is the NFS server. This can also be specified by IP address.

Save the PV definition to a file, for example nfs-pv.yaml, and create the persistent volume:

# oc create -f nfs-pv.yaml
persistentvolume "nfs-pv" created

Verify that the persistent volume was created:

# oc get pv
NAME         LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
nfs-pv       <none>    1Gi        RWX           Available                       37s

28.2.3. Creating the Persistent Volume Claim

A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC. This is the use case we are highlighting in this example.

Example 28.2. PVC Object Definition

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc  1
spec:
  accessModes:
  - ReadWriteMany      2
  resources:
     requests:
       storage: 1Gi    3
1
The claim name is referenced by the pod under its volumes section.
2
As mentioned above for PVs, the accessModes do not enforce access rights, but rather act as labels to match a PV to a PVC.
3
This claim will look for PVs offering 1Gi or greater capacity.

Save the PVC definition to a file, for example nfs-pvc.yaml, and create the PVC:

# oc create -f nfs-pvc.yaml
persistentvolumeclaim "nfs-pvc" created

Verify that the PVC was created and bound to the expected PV:

# oc get pvc
NAME            LABELS    STATUS    VOLUME       CAPACITY   ACCESSMODES   AGE
nfs-pvc         <none>    Bound     nfs-pv       1Gi        RWX           24s
                                    1
1
The claim, nfs-pvc, was bound to the nfs-pv PV.

28.2.4. Ensuring NFS Volume Access

Access to the NFS server node is necessary. On this node, examine the NFS export mount:

[root@nfs nfs]# ls -lZ /opt/nfs/
total 8
-rw-r--r--. 1 root 100003  system_u:object_r:usr_t:s0     10 Oct 12 23:27 test2b 1 2
1
The owner has ID 0.
2
The group has ID 100003.

In order to access the NFS mount, the container must match the SELinux label, and either run with a UID of 0, or with 100003 in its supplemental groups range. Gain access to the volume by matching the NFS mount’s groups, which will be defined in the pod definition below.

By default, SELinux does not allow writing from a pod to a remote NFS server. To enable writing to NFS volumes with SELinux enforcing on each node, run:

# setsebool -P virt_use_nfs on
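
You can optionally confirm the current value of this boolean on a node with the standard SELinux tooling:

# getsebool virt_use_nfs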

28.2.5. Creating the Pod

A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the NFS volume for read-write access:

Example 28.3. Pod Object Definition

apiVersion: v1
kind: Pod
metadata:
  name: hello-openshift-nfs-pod 1
  labels:
    name: hello-openshift-nfs-pod
spec:
  containers:
    - name: hello-openshift-nfs-pod
      image: openshift/hello-openshift 2
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: nfsvol 3
          mountPath: /usr/share/nginx/html 4
  securityContext:
      supplementalGroups: [100003] 5
      privileged: false
  volumes:
    - name: nfsvol
      persistentVolumeClaim:
        claimName: nfs-pvc 6
1
The name of this pod as displayed by oc get pod.
2
The image run by this pod.
3
The name of the volume. This name must be the same in both the containers and volumes sections.
4
The mount path as seen in the container.
5
The group ID to be assigned to the container.
6
The PVC that was created in the previous step.

Save the pod definition to a file, for example nfs.yaml, and create the pod:

# oc create -f nfs.yaml
pod "hello-openshift-nfs-pod" created

Verify that the pod was created:

# oc get pods
NAME                          READY     STATUS    RESTARTS   AGE
hello-openshift-nfs-pod       1/1       Running   0          4s

More details are shown in the oc describe pod command:

[root@ose70 nfs]# oc describe pod hello-openshift-nfs-pod
Name:				hello-openshift-nfs-pod
Namespace:			default 1
Image(s):			fedora/S3
Node:				ose70.rh7/192.168.234.148 2
Start Time:			Mon, 21 Mar 2016 09:59:47 -0400
Labels:				name=hello-openshift-nfs-pod
Status:				Running
Reason:
Message:
IP:				10.1.0.4
Replication Controllers:	<none>
Containers:
  hello-openshift-nfs-pod:
    Container ID:	docker://a3292104d6c28d9cf49f440b2967a0fc5583540fc3b062db598557b93893bc6f
    Image:		fedora/S3
    Image ID:		docker://403d268c640894cbd76d84a1de3995d2549a93af51c8e16e89842e4c3ed6a00a
    QoS Tier:
      cpu:		BestEffort
      memory:		BestEffort
    State:		Running
      Started:		Mon, 21 Mar 2016 09:59:49 -0400
    Ready:		True
    Restart Count:	0
    Environment Variables:
Conditions:
  Type		Status
  Ready 	True
Volumes:
  nfsvol:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	nfs-pvc 3
    ReadOnly:	false
  default-token-a06zb:
    Type:	Secret (a secret that should populate this volume)
    SecretName:	default-token-a06zb
Events: 4
  FirstSeen	LastSeen	Count	From			SubobjectPath				                      Reason		Message
  ─────────	────────	─────	────			─────────────				                      ──────		───────
  4m		4m		1	{scheduler }							                                      Scheduled	Successfully assigned hello-openshift-nfs-pod to ose70.rh7
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	          Pulled		Container image "openshift3/ose-pod:v3.1.0.4" already present on machine
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	          Created		Created with docker id 866a37108041
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	          Started		Started with docker id 866a37108041
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{hello-openshift-nfs-pod}		Pulled		Container image "fedora/S3" already present on machine
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{hello-openshift-nfs-pod}		Created		Created with docker id a3292104d6c2
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{hello-openshift-nfs-pod}		Started		Started with docker id a3292104d6c2
1
The project (namespace) name.
2
The IP address of the OpenShift Container Platform node running the pod.
3
The PVC name used by the pod.
4
The list of events resulting in the pod being launched and the NFS volume being mounted. The container will not start correctly if the volume cannot mount.

There is more internal information, including the SCC used to authorize the pod, the pod’s user and group IDs, the SELinux label, and more, shown in the oc get pod <name> -o yaml command:

[root@ose70 nfs]# oc get pod hello-openshift-nfs-pod -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: restricted 1
  creationTimestamp: 2016-03-21T13:59:47Z
  labels:
    name: hello-openshift-nfs-pod
  name: hello-openshift-nfs-pod
  namespace: default 2
  resourceVersion: "2814411"
  selflink: /api/v1/namespaces/default/pods/hello-openshift-nfs-pod
  uid: 2c22d2ea-ef6d-11e5-adc7-000c2900f1e3
spec:
  containers:
  - image: fedora/S3
    imagePullPolicy: IfNotPresent
    name: hello-openshift-nfs-pod
    ports:
    - containerPort: 80
      name: web
      protocol: TCP
    resources: {}
    securityContext:
      privileged: false
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /usr/share/S3/html
      name: nfsvol
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-a06zb
      readOnly: true
  dnsPolicy: ClusterFirst
  host: ose70.rh7
  imagePullSecrets:
  - name: default-dockercfg-xvdew
  nodeName: ose70.rh7
  restartPolicy: Always
  securityContext:
    supplementalGroups:
    - 100003 3
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: nfsvol
    persistentVolumeClaim:
      claimName: nfs-pvc 4
  - name: default-token-a06zb
    secret:
      secretName: default-token-a06zb
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2016-03-21T13:59:49Z
    status: "True"
    type: Ready
  containerStatuses:
  - containerID: docker://a3292104d6c28d9cf49f440b2967a0fc5583540fc3b062db598557b93893bc6f
    image: fedora/S3
    imageID: docker://403d268c640894cbd76d84a1de3995d2549a93af51c8e16e89842e4c3ed6a00a
    lastState: {}
    name: hello-openshift-nfs-pod
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2016-03-21T13:59:49Z
  hostIP: 192.168.234.148
  phase: Running
  podIP: 10.1.0.4
  startTime: 2016-03-21T13:59:47Z
1
The SCC used by the pod.
2
The project (namespace) name.
3
The supplemental group ID for the pod (all containers).
4
The PVC name used by the pod.

28.2.6. Creating an Additional Pod to Reference the Same PVC

This pod definition, created in the same namespace, uses a different container. However, we can use the same backing storage by specifying the claim name in the volumes section below:

Example 28.4. Pod Object Definition

apiVersion: v1
kind: Pod
metadata:
  name: busybox-nfs-pod 1
  labels:
    name: busybox-nfs-pod
spec:
  containers:
  - name: busybox-nfs-pod
    image: busybox 2
    command: ["sleep", "60000"]
    volumeMounts:
    - name: nfsvol-2 3
      mountPath: /usr/share/busybox  4
      readOnly: false
  securityContext:
    supplementalGroups: [100003] 5
    privileged: false
  volumes:
  - name: nfsvol-2
    persistentVolumeClaim:
      claimName: nfs-pvc 6
1
The name of this pod as displayed by oc get pod.
2
The image run by this pod.
3
The name of the volume. This name must be the same in both the containers and volumes sections.
4
The mount path as seen in the container.
5
The group ID to be assigned to the container.
6
The PVC that was created earlier and is also being used by a different container.

Save the pod definition to a file, for example nfs-2.yaml, and create the pod:

# oc create -f nfs-2.yaml
pod "busybox-nfs-pod" created

Verify that the pod was created:

# oc get pods
NAME                READY     STATUS    RESTARTS   AGE
busybox-nfs-pod     1/1       Running   0          3s

More details are shown in the oc describe pod command:

[root@ose70 nfs]# oc describe pod busybox-nfs-pod
Name:				busybox-nfs-pod
Namespace:			default
Image(s):			busybox
Node:				ose70.rh7/192.168.234.148
Start Time:			Mon, 21 Mar 2016 10:19:46 -0400
Labels:				name=busybox-nfs-pod
Status:				Running
Reason:
Message:
IP:				10.1.0.5
Replication Controllers:	<none>
Containers:
  busybox-nfs-pod:
    Container ID:	docker://346d432e5a4824ebf5a47fceb4247e0568ecc64eadcc160e9bab481aecfb0594
    Image:		busybox
    Image ID:		docker://17583c7dd0dae6244203b8029733bdb7d17fccbb2b5d93e2b24cf48b8bfd06e2
    QoS Tier:
      cpu:		BestEffort
      memory:		BestEffort
    State:		Running
      Started:		Mon, 21 Mar 2016 10:19:48 -0400
    Ready:		True
    Restart Count:	0
    Environment Variables:
Conditions:
  Type		Status
  Ready 	True
Volumes:
  nfsvol-2:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	nfs-pvc
    ReadOnly:	false
  default-token-32d2z:
    Type:	Secret (a secret that should populate this volume)
    SecretName:	default-token-32d2z
Events:
  FirstSeen	LastSeen	Count	From			SubobjectPath				Reason		Message
  ─────────	────────	─────	────			─────────────				──────		───────
  4m		4m		1	{scheduler }							Scheduled	Successfully assigned busybox-nfs-pod to ose70.rh7
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	Pulled		Container image "openshift3/ose-pod:v3.1.0.4" already present on machine
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	Created		Created with docker id 249b7d7519b1
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	Started		Started with docker id 249b7d7519b1
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{busybox-nfs-pod}	Pulled		Container image "busybox" already present on machine
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{busybox-nfs-pod}	Created		Created with docker id 346d432e5a48
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{busybox-nfs-pod}	Started		Started with docker id 346d432e5a48

As you can see, both containers are using the same storage claim that is attached to the same NFS mount on the back end.
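
To see the sharing in action, you can write a file through one pod's mount and then list the NFS export on the server. This is a minimal sketch using the busybox pod created above; the file name shared.txt is arbitrary:

# oc exec busybox-nfs-pod -- sh -c 'echo "written by busybox" > /usr/share/busybox/shared.txt'

The new file then appears both under /usr/share/nginx/html in the hello-openshift-nfs-pod container and in the /opt/nfs export on the NFS server.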

28.3. Complete Example Using Ceph RBD

28.3.1. Overview

This topic provides an end-to-end example of using an existing Ceph cluster as an OpenShift Container Platform persistent store. It is assumed that a working Ceph cluster is already set up. If not, consult the Overview of Red Hat Ceph Storage.

Persistent Storage Using Ceph Rados Block Device provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using Ceph RBD as persistent storage.

Note

All oc …​ commands are executed on the OpenShift Container Platform master host.

28.3.2. Installing the ceph-common Package

The ceph-common library must be installed on all schedulable OpenShift Container Platform nodes:

Note

The OpenShift Container Platform all-in-one host is not often used to run pod workloads and, thus, is not included as a schedulable node.

# yum install -y ceph-common

28.3.3. Creating the Ceph Secret

The ceph auth get-key command is run on a Ceph MON node to display the key value for the client.admin user:

Example 28.5. Ceph Secret Definition

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ== 1
type: kubernetes.io/rbd 2
1
This base64 key is generated on one of the Ceph MON nodes using the ceph auth get-key client.admin | base64 command, then copying the output and pasting it as the secret key’s value.
2
This value is required for Ceph RBD to work with dynamic provisioning.
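
For reference, the base64-encoded key value can be generated directly on a Ceph MON node with the command described in the callout above:

# ceph auth get-key client.admin | base64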

Save the secret definition to a file, for example ceph-secret.yaml, then create the secret:

$ oc create -f ceph-secret.yaml
secret "ceph-secret" created

Verify that the secret was created:

# oc get secret ceph-secret
NAME          TYPE               DATA      AGE
ceph-secret   kubernetes.io/rbd  1         23d

28.3.4. Creating the Persistent Volume

Next, before creating the PV object in OpenShift Container Platform, define the persistent volume file:

Example 28.6. Persistent Volume Object Definition Using Ceph RBD

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv     1
spec:
  capacity:
    storage: 2Gi    2
  accessModes:
    - ReadWriteOnce 3
  rbd:              4
    monitors:       5
      - 192.168.122.133:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret 6
    fsType: ext4        7
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
1
The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2
The amount of storage allocated to this volume.
3
accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control. All block storage is defined to be single user (non-shared storage).
4
This defines the volume type being used. In this case, the rbd plug-in is defined.
5
This is an array of Ceph monitor IP addresses and ports.
6
This is the Ceph secret, defined above. It is used to create a secure connection from OpenShift Container Platform to the Ceph server.
7
This is the file system type mounted on the Ceph RBD block device.
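
The PV above refers to an RBD image named ceph-image in the rbd pool. If that image does not already exist, it can be created from a Ceph administration or MON node. The following is a minimal sketch, assuming a 2048 MB image to match the 2Gi PV capacity:

# rbd create ceph-image --size 2048 --pool rbd
# rbd info rbd/ceph-image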

Save the PV definition to a file, for example ceph-pv.yaml, and create the persistent volume:

# oc create -f ceph-pv.yaml
persistentvolume "ceph-pv" created

Verify that the persistent volume was created:

# oc get pv
NAME                     LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM     REASON    AGE
ceph-pv                  <none>    2147483648   RWO           Available                       2s

28.3.5. Creating the Persistent Volume Claim

A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.

Example 28.7. PVC Object Definition

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:     1
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi 2
1
As mentioned above for PVs, the accessModes do not enforce access rights, but rather act as labels to match a PV to a PVC.
2
This claim will look for PVs offering 2Gi or greater capacity.

Save the PVC definition to a file, for example ceph-claim.yaml, and create the PVC:

# oc create -f ceph-claim.yaml
persistentvolumeclaim "ceph-claim" created

Verify that the PVC was created and bound to the expected PV:

# oc get pvc
NAME         LABELS    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
ceph-claim   <none>    Bound     ceph-pv   1Gi        RWX           21s
                                 1
1
The claim was bound to the ceph-pv PV.

28.3.6. Creating the Pod

A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Ceph RBD volume for read-write access:

Example 28.8. Pod Object Definition

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1           1
spec:
  containers:
  - name: ceph-busybox
    image: busybox          2
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1       3
      mountPath: /usr/share/busybox 4
      readOnly: false
  volumes:
  - name: ceph-vol1         5
    persistentVolumeClaim:
      claimName: ceph-claim 6
1
The name of this pod as displayed by oc get pod.
2
The image run by this pod. In this case, we are telling busybox to sleep.
3 5
The name of the volume. This name must be the same in both the containers and volumes sections.
4
The mount path as seen in the container.
6
The PVC that is bound to the Ceph RBD cluster.

Save the pod definition to a file, for example ceph-pod1.yaml, and create the pod:

# oc create -f ceph-pod1.yaml
pod "ceph-pod1" created

Verify that the pod was created:

# oc get pod
NAME        READY     STATUS    RESTARTS   AGE
ceph-pod1   1/1       Running   0          2m
                      1
1
After a minute or so, the pod will be in the Running state.

28.3.7. Defining Group and Owner IDs (Optional)

When using block storage, such as Ceph RBD, the physical block storage is managed by the pod. The group ID defined in the pod becomes the group ID of both the Ceph RBD mount inside the container, and the group ID of the actual storage itself. Thus, it is usually unnecessary to define a group ID in the pod specification. However, if a group ID is desired, it can be defined using fsGroup, as shown in the following pod definition fragment:

Example 28.9. Group ID Pod Definition

...
spec:
  containers:
    - name:
    ...
  securityContext: 1
    fsGroup: 7777  2
...
1
securityContext must be defined at the pod level, not under a specific container.
2
All containers in the pod will have the same fsGroup ID.
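
Putting the fragment together with the earlier example, a complete pod definition that mounts the ceph-claim PVC with an fsGroup might look like the following sketch. The pod name ceph-pod2 and the group ID 7777 are arbitrary examples:

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod2
spec:
  containers:
  - name: ceph-busybox
    image: busybox
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1
      mountPath: /usr/share/busybox
      readOnly: false
  securityContext:
    fsGroup: 7777
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-claim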

28.3.8. Setting ceph-user-secret as Default for Projects

If you would like to make the persistent storage available to every project, you must modify the default project template. You can read more about modifying the default project template. Adding the following to your default project template gives every user who can create a project access to the Ceph cluster.

Default Project Example

...
apiVersion: v1
kind: Template
metadata:
  creationTimestamp: null
  name: project-request
objects:
- apiVersion: v1
  kind: Project
  metadata:
    annotations:
      openshift.io/description: ${PROJECT_DESCRIPTION}
      openshift.io/display-name: ${PROJECT_DISPLAYNAME}
      openshift.io/requester: ${PROJECT_REQUESTING_USER}
    creationTimestamp: null
    name: ${PROJECT_NAME}
  spec: {}
  status: {}
- apiVersion: v1
  kind: Secret
  metadata:
    name: ceph-user-secret
  data:
    key: yoursupersecretbase64keygoeshere 1
  type:
    kubernetes.io/rbd
...

1
Place your Ceph user key here in base64 format.

28.4. Using Ceph RBD for dynamic provisioning

28.4.1. Overview

This topic provides a complete example of using an existing Ceph cluster for OpenShift Container Platform persistent storage. It is assumed that a working Ceph cluster is already set up. If not, consult the Overview of Red Hat Ceph Storage.

Persistent Storage Using Ceph Rados Block Device provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and how to use Ceph Rados Block Device (RBD) as persistent storage.

Note
  • Run all oc commands on the OpenShift Container Platform master host.
  • The OpenShift Container Platform all-in-one host is not often used to run pod workloads and, thus, is not included as a schedulable node.

28.4.2. Creating a pool for dynamic volumes

  1. Install the latest ceph-common package:

    yum install -y ceph-common
    Note

    The ceph-common library must be installed on all schedulable OpenShift Container Platform nodes.

  2. From an administrator or MON node, create a new pool for dynamic volumes, for example:

    $ ceph osd pool create kube 1024
    $ ceph auth get-or-create client.kube mon 'allow r, allow command "osd blacklist"' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
    Note

    Using the default pool of RBD is an option, but not recommended.
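
You can optionally confirm that the pool and the client.kube user were created. Both checks use standard Ceph commands run from an administrator or MON node:

$ ceph osd lspools
$ ceph auth get client.kube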

28.4.3. Using an existing Ceph cluster for dynamic persistent storage

To use an existing Ceph cluster for dynamic persistent storage:

  1. Generate the client.admin base64-encoded key:

    $ ceph auth get client.admin

    Ceph secret definition example

    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret
      namespace: kube-system
    data:
      key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ== 1
    type: kubernetes.io/rbd 2

    1
    This base64 key is generated on one of the Ceph MON nodes using the ceph auth get-key client.admin | base64 command, then copying the output and pasting it as the secret key’s value.
    2
    This value is required for Ceph RBD to work with dynamic provisioning.
  2. Create the Ceph secret for the client.admin:

    $ oc create -f ceph-secret.yaml
    secret "ceph-secret" created
  3. Verify that the secret was created:

    $ oc get secret ceph-secret
    NAME          TYPE                DATA      AGE
    ceph-secret   kubernetes.io/rbd   1         5d
  4. Create the storage class:

    $ oc create -f ceph-storageclass.yaml
    storageclass "dynamic" created

    Ceph storage class example

    apiVersion: storage.k8s.io/v1beta1
    kind: StorageClass
    metadata:
      name: dynamic
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/rbd
    parameters:
      monitors: 192.168.1.11:6789,192.168.1.12:6789,192.168.1.13:6789 1
      adminId: admin 2
      adminSecretName: ceph-secret 3
      adminSecretNamespace: kube-system 4
      pool: kube  5
      userId: kube  6
      userSecretName: ceph-user-secret 7

    1
    A comma-delimited list of IP addresses of the Ceph monitors. This value is required.
    2
    The Ceph client ID that is capable of creating images in the pool. The default is admin.
    3
    The secret name for adminId. This value is required. The secret that you provide must be of type kubernetes.io/rbd.
    4
    The namespace for adminSecret. The default is default.
    5
    The Ceph RBD pool. The default is rbd, but using the default pool is not recommended.
    6
    The Ceph client ID used to map the Ceph RBD image. The default is the same as adminId.
    7
    The name of the Ceph secret for userId to map the Ceph RBD image. It must exist in the same namespace as the PVCs. Unless you set the Ceph secret as the default in new projects, you must provide this parameter value. A sample per-project secret is shown after this procedure.
  5. Verify that the storage class was created:

    $ oc get storageclasses
    NAME                TYPE
    dynamic (default)   kubernetes.io/rbd
  6. Create the PVC object definition:

    PVC object definition example

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: ceph-claim-dynamic
    spec:
      accessModes:  1
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi 2

    1
    The accessModes do not enforce access rights but instead act as labels to match a PV to a PVC.
    2
    This claim looks for PVs that offer 2Gi or greater capacity.
  7. Create the PVC:

    $ oc create -f ceph-pvc.yaml
    persistentvolumeclaim "ceph-claim-dynamic" created
  8. Verify that the PVC was created and bound to the expected PV:

    $ oc get pvc
    NAME        STATUS  VOLUME                                   CAPACITY ACCESSMODES  AGE
    ceph-claim  Bound   pvc-f548d663-3cac-11e7-9937-0024e8650c7a 2Gi      RWO          1m
  9. Create the pod object definition:

    Pod object definition example

    apiVersion: v1
    kind: Pod
    metadata:
      name: ceph-pod1 1
    spec:
      containers:
      - name: ceph-busybox
        image: busybox 2
        command: ["sleep", "60000"]
        volumeMounts:
        - name: ceph-vol1 3
          mountPath: /usr/share/busybox 4
          readOnly: false
      volumes:
      - name: ceph-vol1
        persistentVolumeClaim:
          claimName: ceph-claim-dynamic 5

    1
    The name of this pod as displayed by oc get pod.
    2
    The image run by this pod. In this case, busybox is set to sleep.
    3
    The name of the volume. This name must be the same in both the containers and volumes sections.
    4
    The mount path in the container.
    5
    The PVC that is bound to the Ceph RBD cluster.
  10. Create the pod:

    $ oc create -f ceph-pod1.yaml
    pod "ceph-pod1" created
  11. Verify that the pod was created:

    $ oc get pod
    NAME        READY     STATUS   RESTARTS   AGE
    ceph-pod1   1/1       Running  0          2m

After a minute or so, the pod status changes to Running.
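
As noted in the storage class parameters, a secret named ceph-user-secret must exist in the same project as each PVC unless you make it a project default, as described in the next section. A minimal per-project secret might look like the following sketch; replace the key with your own base64-encoded key for the client.kube user:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-user-secret
data:
  key: yoursupersecretbase64keygoeshere
type: kubernetes.io/rbd

Save the definition to a file, for example ceph-user-secret.yaml, and create it in the target project with oc create -f ceph-user-secret.yaml -n <project_name>.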

28.4.4. Setting ceph-user-secret as the default for projects

To make persistent storage available to every project, you must modify the default project template. Adding the following to your default project template gives every user who can create a project access to the Ceph cluster. See modifying the default project template for more information.

Default project example

...
apiVersion: v1
kind: Template
metadata:
  creationTimestamp: null
  name: project-request
objects:
- apiVersion: v1
  kind: Project
  metadata:
    annotations:
      openshift.io/description: ${PROJECT_DESCRIPTION}
      openshift.io/display-name: ${PROJECT_DISPLAYNAME}
      openshift.io/requester: ${PROJECT_REQUESTING_USER}
    creationTimestamp: null
    name: ${PROJECT_NAME}
  spec: {}
  status: {}
- apiVersion: v1
  kind: Secret
  metadata:
    name: ceph-user-secret
  data:
    key: QVFCbEV4OVpmaGJtQ0JBQW55d2Z0NHZtcS96cE42SW1JVUQvekE9PQ== 1
  type:
    kubernetes.io/rbd
...

1
Place your Ceph user key here in base64 format.
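
The key value itself can be generated on a Ceph MON node in the same way as the admin key, assuming the client.kube user created earlier:

# ceph auth get-key client.kube | base64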

28.5. Complete Example Using GlusterFS

28.5.1. Overview

This topic provides an end-to-end example of how to use an existing converged mode, independent mode, or standalone Red Hat Gluster Storage cluster as persistent storage for OpenShift Container Platform. It is assumed that a working Red Hat Gluster Storage cluster is already set up. For help installing converged mode or independent mode, see Persistent Storage Using Red Hat Gluster Storage. For standalone Red Hat Gluster Storage, consult the Red Hat Gluster Storage Administration Guide.

For an end-to-end example of how to dynamically provision GlusterFS volumes, see Complete Example Using GlusterFS for Dynamic Provisioning.

Note

All oc commands are executed on the OpenShift Container Platform master host.

28.5.2. Prerequisites

To access GlusterFS volumes, the mount.glusterfs command must be available on all schedulable nodes. For RPM-based systems, the glusterfs-fuse package must be installed:

# yum install glusterfs-fuse

This package comes installed on every RHEL system. However, it is recommended to update to the latest available version from Red Hat Gluster Storage if your servers use x86_64 architecture. To do this, the following RPM repository must be enabled:

# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms

If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:

# yum update glusterfs-fuse

By default, SELinux does not allow writing from a pod to a remote Red Hat Gluster Storage server. To enable writing to Red Hat Gluster Storage volumes with SELinux on, run the following on each node running GlusterFS:

$ sudo setsebool -P virt_sandbox_use_fusefs on 1
$ sudo setsebool -P virt_use_fusefs on
1
The -P option makes the boolean persistent between reboots.
Note

The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.

Note

If you use Atomic Host, the SELinux booleans are cleared when you upgrade Atomic Host. When you upgrade Atomic Host, you must set these boolean values again.
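
You can optionally confirm the current values of these booleans on a node:

# getsebool virt_sandbox_use_fusefs virt_use_fusefs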

28.5.3. Static Provisioning

  1. To enable static provisioning, first create a GlusterFS volume. See the Red Hat Gluster Storage Administration Guide for information on how to do this using the gluster command-line interface or the heketi project site for information on how to do this using heketi-cli. For this example, the volume will be named myVol1.
  2. Define the following Service and Endpoints in gluster-endpoints.yaml:

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: glusterfs-cluster 1
    spec:
      ports:
      - port: 1
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: glusterfs-cluster 2
    subsets:
      - addresses:
          - ip: 192.168.122.221 3
        ports:
          - port: 1 4
      - addresses:
          - ip: 192.168.122.222 5
        ports:
          - port: 1 6
      - addresses:
          - ip: 192.168.122.223 7
        ports:
          - port: 1 8
    1 2
    These names must match.
    3 5 7
    The ip values must be the actual IP addresses of a Red Hat Gluster Storage server, not hostnames.
    4 6 8
    The port number is ignored.
  3. From the OpenShift Container Platform master host, create the Service and Endpoints:

    $ oc create -f gluster-endpoints.yaml
    service "glusterfs-cluster" created
    endpoints "glusterfs-cluster" created
  4. Verify that the Service and Endpoints were created:

    $ oc get services
    NAME                       CLUSTER_IP       EXTERNAL_IP   PORT(S)    SELECTOR        AGE
    glusterfs-cluster          172.30.205.34    <none>        1/TCP      <none>          44s
    
    $ oc get endpoints
    NAME                ENDPOINTS                                               AGE
    docker-registry     10.1.0.3:5000                                           4h
    glusterfs-cluster   192.168.122.221:1,192.168.122.222:1,192.168.122.223:1   11s
    kubernetes          172.16.35.3:8443                                        4d
    Note

    Endpoints are unique per project. Each project accessing the GlusterFS volume needs its own Endpoints.

  5. In order to access the volume, the container must run with either a user ID (UID) or group ID (GID) that has access to the file system on the volume. This information can be discovered in the following manner:

    $ mkdir -p /mnt/glusterfs/myVol1
    
    $ mount -t glusterfs 192.168.122.221:/myVol1 /mnt/glusterfs/myVol1
    
    $ ls -lnZ /mnt/glusterfs/
    drwxrwx---. 592 590 system_u:object_r:fusefs_t:s0    myVol1 1 2
    1
    The UID is 592.
    2
    The GID is 590.
  6. Define the following PersistentVolume (PV) in gluster-pv.yaml:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: gluster-default-volume 1
      annotations:
        pv.beta.kubernetes.io/gid: "590" 2
    spec:
      capacity:
        storage: 2Gi 3
      accessModes: 4
        - ReadWriteMany
      glusterfs:
        endpoints: glusterfs-cluster 5
        path: myVol1 6
        readOnly: false
      persistentVolumeReclaimPolicy: Retain
    1
    The name of the volume.
    2
    The GID on the root of the GlusterFS volume.
    3
    The amount of storage allocated to this volume.
    4
    accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
    5
    The Endpoints resource previously created.
    6
    The GlusterFS volume that will be accessed.
  7. From the OpenShift Container Platform master host, create the PV:

    $ oc create -f gluster-pv.yaml
  8. Verify that the PV was created:

    $ oc get pv
    NAME                     LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM     REASON    AGE
    gluster-default-volume   <none>    2147483648   RWX           Available                       2s
  9. Create a PersistentVolumeClaim (PVC) that will bind to the new PV in gluster-claim.yaml:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gluster-claim  1
    spec:
      accessModes:
      - ReadWriteMany      2
      resources:
         requests:
           storage: 1Gi    3
    1
    The claim name is referenced by the pod under its volumes section.
    2
    Must match the accessModes of the PV.
    3
    This claim will look for PVs offering 1Gi or greater capacity.
  10. From the OpenShift Container Platform master host, create the PVC:

    $ oc create -f gluster-claim.yaml
  11. Verify that the PV and PVC are bound:

    $ oc get pv
    NAME         LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM          REASON    AGE
    gluster-pv   <none>    1Gi        RWX           Available   gluster-claim            37s
    
    $ oc get pvc
    NAME            LABELS    STATUS    VOLUME       CAPACITY   ACCESSMODES   AGE
    gluster-claim   <none>    Bound     gluster-pv   1Gi        RWX           24s
Note

PVCs are unique per project. Each project accessing the GlusterFS volume needs its own PVC. PVs are not bound to a single project, so PVCs across multiple projects may refer to the same PV.

28.5.4. Using the Storage

At this point, you have a GlusterFS volume bound to a PVC. You can now utilize this PVC in a pod.

  1. Create the pod object definition:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-openshift-pod
      labels:
        name: hello-openshift-pod
    spec:
      containers:
      - name: hello-openshift-pod
        image: openshift/hello-openshift
        ports:
        - name: web
          containerPort: 80
        volumeMounts:
        - name: gluster-vol1
          mountPath: /usr/share/nginx/html
          readOnly: false
      volumes:
      - name: gluster-vol1
        persistentVolumeClaim:
          claimName: gluster-claim 1
    1
    The name of the PVC created in the previous step.
  2. From the OpenShift Container Platform master host, create the pod:

    # oc create -f hello-openshift-pod.yaml
    pod "hello-openshift-pod" created
  3. View the pod. Give it a few minutes, as it might need to download the image if it does not already exist:

    # oc get pods -o wide
    NAME                               READY     STATUS    RESTARTS   AGE       IP               NODE
    hello-openshift-pod                          1/1       Running   0          9m        10.38.0.0        node1
  4. oc exec into the container and create an index.html file in the mountPath definition of the pod:

    $ oc exec -ti hello-openshift-pod /bin/sh
    $ cd /usr/share/nginx/html
    $ echo 'Hello OpenShift!!!' > index.html
    $ ls
    index.html
    $ exit
  5. Now curl the URL of the pod:

    # curl http://10.38.0.0
    Hello OpenShift!!!
  6. Delete the pod, recreate it, and wait for it to come up:

    # oc delete pod hello-openshift-pod
    pod "hello-openshift-pod" deleted
    # oc create -f hello-openshift-pod.yaml
    pod "hello-openshift-pod" created
    # oc get pods -o wide
    NAME                               READY     STATUS    RESTARTS   AGE       IP               NODE
    hello-openshift-pod                          1/1       Running   0          9m        10.37.0.0        node1
  7. Now curl the pod again and it should still have the same data as before. Note that its IP address may have changed:

    # curl http://10.37.0.0
    Hello OpenShift!!!
  8. Check that the index.html file was written to GlusterFS storage by doing the following on any of the nodes:

    $ mount | grep heketi
    /dev/mapper/VolGroup00-LogVol00 on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
    /dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_1e730a5462c352835055018e1874e578 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_1e730a5462c352835055018e1874e578 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
    /dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_d8c06e606ff4cc29ccb9d018c73ee292 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
    
    $ cd /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292/brick
    $ ls
    index.html
    $ cat index.html
    Hello OpenShift!!!

28.6. Complete Example Using GlusterFS for Dynamic Provisioning

28.6.1. Overview

This topic provides an end-to-end example of how to use an existing converged mode, independent mode, or standalone Red Hat Gluster Storage cluster as dynamic persistent storage for OpenShift Container Platform. It is assumed that a working Red Hat Gluster Storage cluster is already set up. For help installing converged mode or independent mode, see Persistent Storage Using Red Hat Gluster Storage. For standalone Red Hat Gluster Storage, consult the Red Hat Gluster Storage Administration Guide.

Note

All oc commands are executed on the OpenShift Container Platform master host.

28.6.2. Prerequisites

To access GlusterFS volumes, the mount.glusterfs command must be available on all schedulable nodes. For RPM-based systems, the glusterfs-fuse package must be installed:

# yum install glusterfs-fuse

This package comes installed on every RHEL system. However, it is recommended to update to the latest available version from Red Hat Gluster Storage if your servers use x86_64 architecture. To do this, the following RPM repository must be enabled:

# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms

If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:

# yum update glusterfs-fuse

By default, SELinux does not allow writing from a pod to a remote Red Hat Gluster Storage server. To enable writing to Red Hat Gluster Storage volumes with SELinux on, run the following on each node running GlusterFS:

$ sudo setsebool -P virt_sandbox_use_fusefs on 1
$ sudo setsebool -P virt_use_fusefs on
1
The -P option makes the boolean persistent between reboots.
Note

The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.

Note

If you use Atomic Host, the SELinux booleans are cleared when you upgrade Atomic Host. When you upgrade Atomic Host, you must set these boolean values again.

28.6.3. Dynamic Provisioning

  1. To enable dynamic provisioning, first create a StorageClass object definition. The definition below is based on the minimum requirements needed for this example to work with OpenShift Container Platform. See Dynamic Provisioning and Creating Storage Classes for additional parameters and specification definitions.

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: glusterfs
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://10.42.0.0:8080" 1
      restauthenabled: "false" 2
    1
    The heketi server URL.
    2
    Since authentication is not turned on in this example, set to false.
  2. From the OpenShift Container Platform master host, create the StorageClass:

    # oc create -f gluster-storage-class.yaml
    storageclass "glusterfs" created
  3. Create a PVC using the newly-created StorageClass. For example:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gluster1
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 30Gi
      storageClassName: glusterfs
  4. From the OpenShift Container Platform master host, create the PVC:

    # oc create -f glusterfs-dyn-pvc.yaml
    persistentvolumeclaim "gluster1" created
  5. View the PVC to see that the volume was dynamically created and bound to the PVC:

    # oc get pvc
    NAME       STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
    gluster1   Bound    pvc-78852230-d8e2-11e6-a3fa-0800279cf26f   30Gi       RWX           glusterfs      42s
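
If you want to inspect the dynamically provisioned volume itself, you can describe the PV listed in the VOLUME column. The name below is taken from the sample output above:

# oc describe pv pvc-78852230-d8e2-11e6-a3fa-0800279cf26f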

28.6.4. Using the Storage

At this point, you have a dynamically created GlusterFS volume bound to a PVC. You can now utilize this PVC in a pod.

  1. Create the pod object definition:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-openshift-pod
      labels:
        name: hello-openshift-pod
    spec:
      containers:
      - name: hello-openshift-pod
        image: openshift/hello-openshift
        ports:
        - name: web
          containerPort: 80
        volumeMounts:
        - name: gluster-vol1
          mountPath: /usr/share/nginx/html
          readOnly: false
      volumes:
      - name: gluster-vol1
        persistentVolumeClaim:
          claimName: gluster1 1
    1
    The name of the PVC created in the previous step.
  2. From the OpenShift Container Platform master host, create the pod:

    # oc create -f hello-openshift-pod.yaml
    pod "hello-openshift-pod" created
  3. View the pod. Give it a few minutes, as it might need to download the image if it does not already exist:

    # oc get pods -o wide
    NAME                               READY     STATUS    RESTARTS   AGE       IP               NODE
    hello-openshift-pod                          1/1       Running   0          9m        10.38.0.0        node1
  4. oc exec into the container and create an index.html file in the mountPath definition of the pod:

    $ oc exec -ti hello-openshift-pod /bin/sh
    $ cd /usr/share/nginx/html
    $ echo 'Hello OpenShift!!!' > index.html
    $ ls
    index.html
    $ exit
  5. Now curl the URL of the pod:

    # curl http://10.38.0.0
    Hello OpenShift!!!
  6. Delete the pod, recreate it, and wait for it to come up:

    # oc delete pod hello-openshift-pod
    pod "hello-openshift-pod" deleted
    # oc create -f hello-openshift-pod.yaml
    pod "hello-openshift-pod" created
    # oc get pods -o wide
    NAME                               READY     STATUS    RESTARTS   AGE       IP               NODE
    hello-openshift-pod                          1/1       Running   0          9m        10.37.0.0        node1
  7. Now curl the pod again and it should still have the same data as before. Note that its IP address may have changed:

    # curl http://10.37.0.0
    Hello OpenShift!!!
  8. Check that the index.html file was written to GlusterFS storage by doing the following on any of the nodes:

    $ mount | grep heketi
    /dev/mapper/VolGroup00-LogVol00 on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
    /dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_1e730a5462c352835055018e1874e578 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_1e730a5462c352835055018e1874e578 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
    /dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_d8c06e606ff4cc29ccb9d018c73ee292 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
    
    $ cd /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292/brick
    $ ls
    index.html
    $ cat index.html
    Hello OpenShift!!!

28.7. Mounting Volumes on Privileged Pods

28.7.1. Overview

Persistent volumes can be mounted to pods with the privileged security context constraint (SCC) attached.

Note

While this topic uses GlusterFS as a sample use-case for mounting volumes onto privileged pods, it can be adapted to use any supported storage plug-in.

28.7.2. Prerequisites

28.7.3. Creating the Persistent Volume

Creating the PersistentVolume makes the storage accessible to users, regardless of projects.

  1. As the admin, create the service, endpoint object, and persistent volume:

    $ oc create -f gluster-endpoints-service.yaml
    $ oc create -f gluster-endpoints.yaml
    $ oc create -f gluster-pv.yaml
  2. Verify that the objects were created:

    $ oc get svc
    NAME              CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR   AGE
    gluster-cluster   172.30.151.58   <none>        1/TCP     <none>     24s
    $ oc get ep
    NAME              ENDPOINTS                           AGE
    gluster-cluster   192.168.59.102:1,192.168.59.103:1   2m
    $ oc get pv
    NAME                     LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
    gluster-default-volume   <none>    2Gi        RWX           Available                       2d

28.7.4. Creating a Regular User

Adding a regular user to the privileged SCC (or to a group given access to the SCC) allows them to run privileged pods:

  1. As the admin, add a user to the SCC:

    $ oc adm policy add-scc-to-user privileged <username>
  2. Log in as the regular user:

    $ oc login -u <username> -p <password>
  3. Then, create a new project:

    $ oc new-project <project_name>

28.7.5. Creating the Persistent Volume Claim

  1. As a regular user, create the PersistentVolumeClaim to access the volume (a sample claim definition is shown after this procedure):

    $ oc create -f gluster-pvc.yaml -n <project_name>
  2. Define your pod to access the claim:

    Example 28.10. Pod Definition

    apiVersion: v1
    id: gluster-S3-pvc
    kind: Pod
    metadata:
      name: gluster-nginx-priv
    spec:
      containers:
        - name: gluster-nginx-priv
          image: fedora/nginx
          volumeMounts:
            - mountPath: /mnt/gluster 1
              name: gluster-volume-claim
          securityContext:
            privileged: true
      volumes:
        - name: gluster-volume-claim
          persistentVolumeClaim:
            claimName: gluster-claim 2
    1
    Volume mount within the pod.
    2
    The claimName value, gluster-claim, must match the name of the PersistentVolumeClaim created in the previous step.
  3. Upon pod creation, the mount directory is created and the volume is attached to that mount point.

    As a regular user, create a pod from the definition:

    $ oc create -f gluster-S3-pod.yaml
  4. Verify that the pod created successfully:

    $ oc get pods
    NAME                 READY     STATUS    RESTARTS   AGE
    gluster-S3-pod   1/1       Running   0          36m

    It can take several minutes for the pod to create.
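
Step 1 above references a gluster-pvc.yaml file that is not shown in this topic. A minimal claim definition, consistent with the gluster-default-volume PV created earlier and the gluster-claim name used in the pod definition, might look like this sketch:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi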

28.7.6. Verifying the Setup

28.7.6.1. Checking the Pod SCC

  1. Export the pod configuration:

    $ oc get -o yaml --export pod <pod_name>
  2. Examine the output. Check that openshift.io/scc has the value of privileged:

    Example 28.11. Export Snippet

    metadata:
      annotations:
        openshift.io/scc: privileged

28.7.6.2. Verifying the Mount

  1. Access the pod and check that the volume is mounted:

    $ oc rsh <pod_name>
    [root@gluster-S3-pvc /]# mount
  2. Examine the output for the Gluster volume:

    Example 28.12. Volume Mount

    192.168.59.102:gv0 on /mnt/gluster type fuse.gluster (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

28.8. Mount Propagation

28.8.1. Overview

Mount propagation allows for sharing volumes mounted by a container to other containers in the same pod, or even to other pods on the same node.

28.8.2. Values

Mount propagation of a volume is controlled by the mountPropagation field in Container.volumeMounts. Its values are:

  • None - This volume mount does not receive any subsequent mounts that are mounted to this volume or any of its subdirectories by the host. In similar fashion, no mounts created by the container are visible on the host. This is the default mode, and is equal to private mount propagation in Linux kernels.
  • HostToContainer - This volume mount receives all subsequent mounts that are mounted to this volume or any of its subdirectories. In other words, if the host mounts anything inside the volume mount, the container sees it mounted there. This mode is equal to rslave mount propagation in Linux kernels.
  • Bidirectional - This volume mount behaves the same as the HostToContainer mount. In addition, all volume mounts created by the container are propagated back to the host and to all containers of all pods that use the same volume. A typical use case for this mode is a Pod with a FlexVolume or CSI driver or a Pod that needs to mount something on the host using a hostPath volume. This mode is equal to rshared mount propagation in Linux kernels.
Important

Bidirectional mount propagation can be dangerous. It can damage the host operating system and therefore it is allowed only in privileged containers. Familiarity with Linux kernel behavior is strongly recommended. In addition, any volume mounts created by containers in pods must be destroyed, or unmounted, by the containers on termination.
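
For illustration, mount propagation is set per volume mount in the container specification. The following is a minimal sketch (the pod and volume names are hypothetical) that requests HostToContainer propagation on a hostPath volume:

apiVersion: v1
kind: Pod
metadata:
  name: mount-propagation-example
spec:
  containers:
  - name: example
    image: busybox
    command: ["sleep", "60000"]
    volumeMounts:
    - name: host-mnt
      mountPath: /host/mnt
      mountPropagation: HostToContainer
  volumes:
  - name: host-mnt
    hostPath:
      path: /mnt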

28.8.3. Configuration

Before mount propagation can work properly on some deployments, such as CoreOS, Red Hat Enterprise Linux/CentOS, or Ubuntu, mount sharing must be configured correctly in Docker.

Procedure

  1. Edit your Docker’s systemd service file. Set MountFlags as follows:

    MountFlags=shared

    Alternatively, remove MountFlags=slave, if present.

  2. Restart the Docker daemon:

    $ sudo systemctl daemon-reload
    $ sudo systemctl restart docker
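
To confirm that the root mount is shared after the restart, you can optionally check its propagation with findmnt:

$ findmnt -o TARGET,PROPAGATION /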

28.9. Switching an Integrated OpenShift Container Registry to GlusterFS

28.9.1. Overview

This topic reviews how to attach a GlusterFS volume to an integrated OpenShift Container Registry. This can be done with any of converged mode, independent mode, or standalone Red Hat Gluster Storage. It is assumed that the registry has already been started and a volume has been created.

28.9.2. Prerequisites

  • An existing registry deployed without configuring storage.
  • An existing GlusterFS volume
  • glusterfs-fuse installed on all schedulable nodes.
  • A user with the cluster-admin role binding.

    • For this guide, that user is admin.
Note

All oc commands are executed on the master node as the admin user.

28.9.3. Manually Provision the GlusterFS PersistentVolumeClaim

  1. To enable static provisioning, first create a GlusterFS volume. See the Red Hat Gluster Storage Administration Guide for information on how to do this using the gluster command-line interface or the heketi project site for information on how to do this using heketi-cli. For this example, the volume will be named myVol1.
  2. Define the following Service and Endpoints in gluster-endpoints.yaml:

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: glusterfs-cluster 1
    spec:
      ports:
      - port: 1
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: glusterfs-cluster 2
    subsets:
      - addresses:
          - ip: 192.168.122.221 3
        ports:
          - port: 1 4
      - addresses:
          - ip: 192.168.122.222 5
        ports:
          - port: 1 6
      - addresses:
          - ip: 192.168.122.223 7
        ports:
          - port: 1 8
    1 2
    These names must match.
    3 5 7
    The ip values must be the actual IP addresses of a Red Hat Gluster Storage server, not hostnames.
    4 6 8
    The port number is ignored.
  3. From the OpenShift Container Platform master host, create the Service and Endpoints:

    $ oc create -f gluster-endpoints.yaml
    service "glusterfs-cluster" created
    endpoints "glusterfs-cluster" created
  4. Verify that the Service and Endpoints were created:

    $ oc get services
    NAME                       CLUSTER_IP       EXTERNAL_IP   PORT(S)    SELECTOR        AGE
    glusterfs-cluster          172.30.205.34    <none>        1/TCP      <none>          44s
    
    $ oc get endpoints
    NAME                ENDPOINTS                                               AGE
    docker-registry     10.1.0.3:5000                                           4h
    glusterfs-cluster   192.168.122.221:1,192.168.122.222:1,192.168.122.223:1   11s
    kubernetes          172.16.35.3:8443                                        4d
    Note

    Endpoints are unique per project. Each project accessing the GlusterFS volume needs its own Endpoints.

  5. In order to access the volume, the container must run with either a user ID (UID) or group ID (GID) that has access to the file system on the volume. This information can be discovered in the following manner:

    $ mkdir -p /mnt/glusterfs/myVol1
    
    $ mount -t glusterfs 192.168.122.221:/myVol1 /mnt/glusterfs/myVol1
    
    $ ls -lnZ /mnt/glusterfs/
    drwxrwx---. 592 590 system_u:object_r:fusefs_t:s0    myVol1 1 2
    1
    The UID is 592.
    2
    The GID is 590.
  6. Define the following PersistentVolume (PV) in gluster-pv.yaml:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: gluster-default-volume 1
      annotations:
        pv.beta.kubernetes.io/gid: "590" 2
    spec:
      capacity:
        storage: 2Gi 3
      accessModes: 4
        - ReadWriteMany
      glusterfs:
        endpoints: glusterfs-cluster 5
        path: myVol1 6
        readOnly: false
      persistentVolumeReclaimPolicy: Retain
    1
    The name of the volume.
    2
    The GID on the root of the GlusterFS volume.
    3
    The amount of storage allocated to this volume.
    4
    accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
    5
    The Endpoints resource previously created.
    6
    The GlusterFS volume that will be accessed.
  7. From the OpenShift Container Platform master host, create the PV:

    $ oc create -f gluster-pv.yaml
  8. Verify that the PV was created:

    $ oc get pv
    NAME                     LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM     REASON    AGE
    gluster-default-volume   <none>    2147483648   RWX           Available                       2s
  9. Create a PersistentVolumeClaim (PVC) that will bind to the new PV in gluster-claim.yaml:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gluster-claim  1
    spec:
      accessModes:
      - ReadWriteMany      2
      resources:
         requests:
           storage: 1Gi    3
    1
    The claim name is referenced by the pod under its volumes section.
    2
    Must match the accessModes of the PV.
    3
    This claim will look for PVs offering 1Gi or greater capacity.
  10. From the OpenShift Container Platform master host, create the PVC:

    $ oc create -f gluster-claim.yaml
  11. Verify that the PV and PVC are bound:

    $ oc get pv
    NAME                     LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM           REASON    AGE
    gluster-default-volume   <none>    2Gi        RWX           Bound     gluster-claim             37s
    
    $ oc get pvc
    NAME            LABELS    STATUS    VOLUME                   CAPACITY   ACCESSMODES   AGE
    gluster-claim   <none>    Bound     gluster-default-volume   2Gi        RWX           24s
Note

PVCs are unique per project. Each project accessing the GlusterFS volume needs its own PVC. PVs are not bound to a single project, so PVCs across multiple projects may refer to the same PV.

28.9.4. Attach the PersistentVolumeClaim to the Registry

Before moving forward, ensure that the docker-registry service is running.

$ oc get svc
NAME              CLUSTER_IP       EXTERNAL_IP   PORT(S)                 SELECTOR                  AGE
docker-registry   172.30.167.194   <none>        5000/TCP                docker-registry=default   18m
Note

If either the docker-registry service or its associated pod is not running, refer back to the registry setup instructions for troubleshooting before continuing.

Then, attach the PVC:

$ oc set volume deploymentconfigs/docker-registry --add --name=registry-storage -t pvc \
     --claim-name=gluster-claim --overwrite
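
To confirm that the claim was attached, you can list the volumes defined on the deployment configuration. This is an optional check, not part of the original procedure:

$ oc set volume deploymentconfigs/docker-registry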

Setting up the Registry provides more information on using an OpenShift Container Registry.

28.10. Binding Persistent Volumes by Labels

28.10.1. Overview

This topic provides an end-to-end example for binding persistent volume claims (PVCs) to persistent volumes (PVs) by defining labels in the PV and matching selectors in the PVC. This feature is available for all storage options. It is assumed that an OpenShift Container Platform cluster contains persistent storage resources that are available for binding by PVCs.

A Note on Labels and Selectors

Labels are an OpenShift Container Platform feature that supports user-defined tags (key-value pairs) as part of an object’s specification. Their primary purpose is to enable the arbitrary grouping of objects by defining identical labels among them. These labels can then be targeted by selectors to match all objects with the specified label values. It is this functionality we take advantage of to enable our PVC to bind to our PV. For a more in-depth look at labels, see Pods and Services.
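
For example, once a PV carries a label such as storage-tier=gold (as in the specification below), an administrator can list only the matching volumes with a label selector. This is a quick illustration, not part of the original procedure:

# oc get pv -l storage-tier=gold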

Note

For this example, we will be using modified GlusterFS PV and PVC specifications. However, the implementation of selectors and labels is generic across all storage options. See the relevant storage option for your volume provider to learn more about its unique configuration.

28.10.1.1. Assumptions

It is assumed that you have:

  • An existing OpenShift Container Platform cluster with at least one master and one node
  • At least one supported storage volume
  • A user with cluster-admin privileges

28.10.2. Defining Specifications

Note

These specifications are tailored to GlusterFS. Consult the relevant storage option for your volume provider to learn more about its unique configuration.

28.10.2.1. Persistent Volume with Labels

Example 28.13. glusterfs-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-volume
  labels: 1
    storage-tier: gold
    aws-availability-zone: us-east-1
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster 2
    path: myVol1
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
1
Use labels to identify common attributes or characteristics shared among volumes. In this case, we defined the Gluster volume to have a custom attribute (key) named storage-tier with a value of gold assigned. A claim will be able to select a PV with storage-tier=gold to match this PV.
2
Endpoints define the Gluster trusted pool and are discussed below.

28.10.2.2. Persistent Volume Claim with Selectors

A claim with a selector stanza (see example below) attempts to match existing, unclaimed, and non-prebound PVs. When a PVC specifies a selector, a PV’s capacity is not considered during matching; however, accessModes are still part of the matching criteria.

It is important to note that a claim must match all of the key-value pairs included in its selector stanza. If no PV matches the claim, then the PVC will remain unbound (Pending). A PV can subsequently be created and the claim will automatically check for a label match.

Example 28.14. glusterfs-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
     requests:
       storage: 1Gi
  selector: 1
    matchLabels:
      storage-tier: gold
      aws-availability-zone: us-east-1
1
The selector stanza defines all labels necessary in a PV in order to match this claim.
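
If no existing PV carries every label in the selector, the claim remains Pending. One general way to inspect an unbound claim, not shown in the original example, is to describe it:

# oc describe pvc gluster-claim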

28.10.2.3. Volume Endpoints

To attach the PV to the Gluster volume, the endpoints should be configured before our other objects are created.

Example 28.15. glusterfs-ep.yaml

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.168.122.221
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.122.222
    ports:
      - port: 1

28.10.2.4. Deploy the PV, PVC, and Endpoints

For this example, run the oc commands as a cluster-admin privileged user. In a production environment, cluster clients might be expected to define and create the PVC.

# oc create -f glusterfs-ep.yaml
endpoints "glusterfs-cluster" created
# oc create -f glusterfs-pv.yaml
persistentvolume "gluster-volume" created
# oc create -f glusterfs-pvc.yaml
persistentvolumeclaim "gluster-claim" created

Lastly, confirm that the PV and PVC bound successfully.

# oc get pv,pvc
NAME              CAPACITY   ACCESSMODES      STATUS     CLAIM                     REASON    AGE
gluster-volume    2Gi        RWX              Bound      gfs-trial/gluster-claim             7s
NAME              STATUS     VOLUME           CAPACITY   ACCESSMODES               AGE
gluster-claim     Bound      gluster-volume   2Gi        RWX                       7s
Note

PVCs are local to a project, whereas PVs are a cluster-wide, global resource. Developers and non-administrator users may not have access to see all (or any) of the available PVs.

28.11. Using Storage Classes for Dynamic Provisioning

28.11.1. Overview

In these examples we walk through a few scenarios of various configurations of StorageClasses and Dynamic Provisioning using Google Cloud Platform Compute Engine (GCE). These examples assume some familiarity with Kubernetes, GCE, and Persistent Disks, and that OpenShift Container Platform is installed and properly configured to use GCE.

28.11.2. Scenario 1: Basic Dynamic Provisioning with Two Types of StorageClasses

StorageClasses can be used to differentiate and delineate storage levels and usages. In this case, the cluster-admin or storage-admin sets up two distinct classes of storage in GCE.

  • slow: Cheap, efficient, and optimized for sequential data operations (slower reading and writing)
  • fast: Optimized for higher rates of random IOPS and sustained throughput (faster reading and writing)

By creating these StorageClasses, the cluster-admin or storage-admin allows users to create claims requesting a particular level of service by StorageClass name.

Example 28.16. StorageClass Slow Object Definitions

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow 1
provisioner: kubernetes.io/gce-pd 2
parameters:
  type: pd-standard 3
  zone: us-east1-d  4
1
Name of the StorageClass.
2
The provisioner plug-in to be used. This is a required field for StorageClasses.
3
PD type. This example uses pd-standard, which has a slightly lower cost, rate of sustained IOPS, and throughput versus pd-ssd, which carries more sustained IOPS and throughput.
4
The zone is required.

Example 28.17. StorageClass Fast Object Definition

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zone: us-east1-d

As a cluster-admin or storage-admin, save both definitions as YAML files. For example, slow-gce.yaml and fast-gce.yaml. Then create the StorageClasses.

# oc create -f slow-gce.yaml
storageclass "slow" created

# oc create -f fast-gce.yaml
storageclass "fast" created

# oc get storageclass
NAME       TYPE
fast       kubernetes.io/gce-pd
slow       kubernetes.io/gce-pd
Important

cluster-admin or storage-admin users are responsible for relaying the correct StorageClass name to the correct users, groups, and projects.

As a regular user, create a new project:

# oc new-project rh-eng

Create the claim YAML definition and save it to a file (pvc-fast.yaml):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: pvc-engineering
spec:
 accessModes:
  - ReadWriteMany
 resources:
   requests:
     storage: 10Gi
 storageClassName: fast

Add the claim with the oc create command:

# oc create -f pvc-fast.yaml
persistentvolumeclaim "pvc-engineering" created

Check to see if your claim is bound:

# oc get pvc
NAME              STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
pvc-engineering   Bound     pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           2m
Important

Since this claim was created and bound in the rh-eng project, it can be shared by any user in the same project.

As a cluster-admin or storage-admin user, view the recent dynamically provisioned Persistent Volume (PV).

# oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                     REASON    AGE
pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           Delete          Bound     rh-eng/pvc-engineering              5m
Important

Notice that the RECLAIMPOLICY is Delete by default for all dynamically provisioned volumes. This means the volume only lasts as long as the claim exists in the system. If you delete the claim, the volume is also deleted and all data on the volume is lost.
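
If a dynamically provisioned volume must outlive its claim, one option, not shown in the original example, is to change the reclaim policy of the provisioned PV to Retain. The PV name below is taken from the output above:

# oc patch pv pvc-e9b4fef7-8bf7-11e6-9962-42010af00004 \
    -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'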

Finally, check the GCE console. The new disk has been created and is ready for use.

kubernetes-dynamic-pvc-e9b4fef7-8bf7-11e6-9962-42010af00004 	SSD persistent disk 	10 GB 	us-east1-d

Pods can now reference the persistent volume claim and start using the volume.
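
The following is a minimal pod sketch that mounts the claim; the pod name, image, command, and mount path are illustrative and not part of the original example:

# Illustrative pod; only the claimName comes from this example.
apiVersion: v1
kind: Pod
metadata:
  name: pvc-engineering-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /data
      name: fast-storage
  volumes:
  - name: fast-storage
    persistentVolumeClaim:
      claimName: pvc-engineering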

28.11.3. Scenario 2: How to enable Default StorageClass behavior for a Cluster

In this example, a cluster-admin or storage-admin enables a default StorageClass for all other users and projects that do not explicitly specify a StorageClass in their claim. This is useful for a cluster-admin or storage-admin to provide easy management of a storage volume without having to set up or communicate specialized StorageClasses across the cluster.

This example builds upon Section 28.11.2, “Scenario 1: Basic Dynamic Provisioning with Two Types of StorageClasses”. The cluster-admin or storage-admin will create another StorageClass for designation as the default StorageClass.

Example 28.18. Default StorageClass Object Definition

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: generic 1
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" 2
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-east1-d
1
Name of the StorageClass, which needs to be unique in the cluster.
2
Annotation that marks this StorageClass as the default class. In this version of the API, the value must be the quoted string "true". Without this annotation, OpenShift Container Platform does not treat this StorageClass as the default.

As a cluster-admin or storage-admin, save the definition to a YAML file (generic-gce.yaml), then create the StorageClass:

# oc create -f generic-gce.yaml
storageclass "generic" created

# oc get storageclass
NAME       TYPE
generic    kubernetes.io/gce-pd
fast       kubernetes.io/gce-pd
slow       kubernetes.io/gce-pd
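
If a StorageClass was created without the annotation, one alternative, not shown in the original example, is to add the annotation in place with oc patch. The class name generic matches the definition above:

# oc patch storageclass generic \
    -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'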

As a regular user, create a new claim definition without any StorageClass requirement and save it to a file (generic-pvc.yaml).

Example 28.19. Default Storage Claim Object Definition

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: pvc-engineering2
spec:
 accessModes:
  - ReadWriteMany
 resources:
   requests:
     storage: 5Gi

Execute it and check that the claim is bound:

# oc create -f generic-pvc.yaml
persistentvolumeclaim "pvc-engineering2" created
# oc get pvc
NAME               STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
pvc-engineering    Bound     pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           41m
pvc-engineering2   Bound     pvc-a9f70544-8bfd-11e6-9962-42010af00004   5Gi        RWX           7s  1
1
pvc-engineering2 is bound to a dynamically provisioned Volume by default.

As a cluster-admin or storage-admin, view the Persistent Volumes defined so far:

# oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                     REASON    AGE
pvc-a9f70544-8bfd-11e6-9962-42010af00004   5Gi        RWX           Delete          Bound     rh-eng/pvc-engineering2             5m 1
pvc-ba4612ce-8b4d-11e6-9962-42010af00004   5Gi        RWO           Delete          Bound     mytest/gce-dyn-claim1               21h
pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           Delete          Bound     rh-eng/pvc-engineering              46m 2
1
This PV was bound to our default dynamic volume from the default StorageClass.
2
This PV was bound to our first PVC from Section 28.11.2, “Scenario 1: Basic Dynamic Provisioning with Two Types of StorageClasses” with our fast StorageClass.

Create a manually provisioned disk using GCE (not dynamically provisioned). Then create a Persistent Volume that connects to the new GCE disk (pv-manual-gce.yaml).
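
One possible way to create such a disk is with the gcloud CLI; the size and zone below are illustrative, and the disk name is a placeholder that must match the pdName in the PV definition that follows (note that actual GCE disk names must be lowercase):

# Placeholder disk name mirroring the pdName used in the PV definition below.
$ gcloud compute disks create the-newly-created-gce-PD \
    --size=35GB --zone=us-east1-d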

Example 28.20. Manual PV Object Definition

apiVersion: v1
kind: PersistentVolume
metadata:
 name: pv-manual-gce
spec:
 capacity:
   storage: 35Gi
 accessModes:
   - ReadWriteMany
 gcePersistentDisk:
   readOnly: false
   pdName: the-newly-created-gce-PD
   fsType: ext4

Execute the object definition file:

# oc create -f pv-manual-gce.yaml

Now view the PVs again. Notice that the pv-manual-gce volume has a STATUS of Available.

# oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                     REASON    AGE
pv-manual-gce                              35Gi       RWX           Retain          Available                                       4s
pvc-a9f70544-8bfd-11e6-9962-42010af00004   5Gi        RWX           Delete          Bound       rh-eng/pvc-engineering2             12m
pvc-ba4612ce-8b4d-11e6-9962-42010af00004   5Gi        RWO           Delete          Bound       mytest/gce-dyn-claim1               21h
pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           Delete          Bound       rh-eng/pvc-engineering              53m

Now create another claim identical to the generic-pvc.yaml PVC definition but change the name and do not set a storage class name.

Example 28.21. Claim Object Definition

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: pvc-engineering3
spec:
 accessModes:
  - ReadWriteMany
 resources:
   requests:
     storage: 15Gi

Because the default StorageClass is enabled in this instance, the manually created PV does not satisfy the claim request, and the user receives a new dynamically provisioned Persistent Volume.

# oc get pvc
NAME               STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
pvc-engineering    Bound     pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           1h
pvc-engineering2   Bound     pvc-a9f70544-8bfd-11e6-9962-42010af00004   5Gi        RWX           19m
pvc-engineering3   Bound     pvc-6fa8e73b-8c00-11e6-9962-42010af00004   15Gi       RWX           6s
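
To confirm which StorageClass satisfied the new claim, you can describe it; this is a general check, not part of the original output, and the StorageClass field reports the class that provisioned the volume:

# oc describe pvc pvc-engineering3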
Important

Since the default StorageClass is enabled on this system, the manually created Persistent Volume would need to have been created in the default StorageClass in order to be bound by the above claim instead of a newly dynamically provisioned volume.

To fix this, the cluster-admin or storage-admin user either creates another GCE disk or deletes the first manual PV, and then uses a PV object definition that assigns a StorageClass name (pv-manual-gce2.yaml):

Example 28.22. Manual PV Spec with default StorageClass name

apiVersion: v1
kind: PersistentVolume
metadata:
 name: pv-manual-gce2
spec:
 capacity:
   storage: 35Gi
 accessModes:
   - ReadWriteMany
 gcePersistentDisk:
   readOnly: false
   pdName: the-newly-created-gce-PD
   fsType: ext4
 storageClassName: generic 1
1
The name of the previously created generic StorageClass.

Execute the object definition file:

# oc create -f pv-manual-gce2.yaml

List the PVs:

# oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                     REASON    AGE
pv-manual-gce                              35Gi       RWX           Retain          Available                                       4s 1
pv-manual-gce2                             35Gi       RWX           Retain          Bound       rh-eng/pvc-engineering3             4s 2
pvc-a9f70544-8bfd-11e6-9962-42010af00004   5Gi        RWX           Delete          Bound       rh-eng/pvc-engineering2             12m
pvc-ba4612ce-8b4d-11e6-9962-42010af00004   5Gi        RWO           Delete          Bound       mytest/gce-dyn-claim1               21h
pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           Delete          Bound       rh-eng/pvc-engineering              53m
1
The original manual PV, still unbound and Available. This is because it was not created in the default StorageClass.
2
The new claim, pvc-engineering3 (identical to the previous claims apart from its name), is Bound to the manually created PV pv-manual-gce2, which was created with the default StorageClass name.
Important

Notice that all dynamically provisioned volumes by default have a RECLAIMPOLICY of Delete. Once the PVC dynamically bound to the PV is deleted, the GCE volume is deleted and all data is lost. However, the manually created PV has a default RECLAIMPOLICY of Retain.

28.12. Using Storage Classes for Existing Legacy Storage

28.12.1. Overview

In this example, a legacy data volume exists and a cluster-admin or storage-admin needs to make it available for consumption in a particular project. Using StorageClasses decreases the likelihood of other users and projects gaining access to this volume from a claim, because the claim would have to have an exact matching value for the StorageClass name. This example also disables dynamic provisioning.

28.12.1.1. Scenario 1: Link StorageClass to existing Persistent Volume with Legacy Data

As a cluster-admin or storage-admin, define and create the StorageClass for historical financial data.

Example 28.23. StorageClass finance-history Object Definitions

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: finance-history 1
provisioner: no-provisioning 2
parameters: 3
1
Name of the StorageClass.
2
This is a required field, but because no dynamic provisioning is to occur, any value can be supplied here as long as it is not an actual provisioner plug-in type.
3
Parameters can simply be left blank, since these are only used for the dynamic provisioner.

Save the definitions to a YAML file (finance-history-storageclass.yaml) and create the StorageClass.

# oc create -f finance-history-storageclass.yaml
storageclass "finance-history" created


# oc get storageclass
NAME              TYPE
finance-history   no-provisioning
Important

cluster-admin or storage-admin users are responsible for relaying the correct StorageClass name to the correct users, groups, and projects.

Now that the StorageClass exists, a cluster-admin or storage-admin can create the Persistent Volume (PV) for use with the StorageClass. Create a manually provisioned disk using GCE (not dynamically provisioned) and a Persistent Volume that connects to the new GCE disk (gce-pv.yaml).

Example 28.24. Finance History PV Object

apiVersion: v1
kind: PersistentVolume
metadata:
 name: pv-finance-history
spec:
 capacity:
   storage: 35Gi
 accessModes:
   - ReadWriteMany
 gcePersistentDisk:
   readOnly: false
   pdName: the-existing-PD-volume-name-that-contains-the-valuable-data 1
   fsType: ext4
 storageClassName: finance-history 2
1
The name of the GCE disk that already exists and contains the legacy data.
2
The StorageClass name, which must match exactly.

As a cluster-admin or storage-admin, create and view the PV.

# oc create -f gce-pv.yaml
persistentvolume "pv-finance-history" created

# oc get pv
NAME                CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                        REASON    AGE
pv-finance-history   35Gi       RWX           Retain          Available                                          2d

Notice that pv-finance-history is Available and ready for consumption.

As a user, create a Persistent Volume Claim (PVC) as a YAML file and specify the correct StorageClass name:

Example 28.25. Claim for finance-history Object Definition

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: pvc-finance-history
spec:
 accessModes:
  - ReadWriteMany
 resources:
   requests:
     storage: 20Gi
 storageClassName: finance-history 1
1
The StorageClass name, which must match exactly; otherwise, the claim will remain unbound until it is deleted or a StorageClass that matches the name is created.

Create the PVC, then view the PVC and PV to confirm that they are bound.

# oc create -f pvc-finance-history.yaml
persistentvolumeclaim "pvc-finance-history" created

# oc get pvc
NAME                  STATUS    VOLUME               CAPACITY   ACCESSMODES   AGE
pvc-finance-history   Bound     pv-finance-history   35Gi       RWX           9m


# oc get pv  (cluster/storage-admin)
NAME                 CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                         REASON    AGE
pv-finance-history   35Gi       RWX           Retain          Bound       default/pvc-finance-history             5m
Important

You can use StorageClasses in the same cluster for both legacy data (no dynamic provisioning) and with dynamic provisioning.

28.13. Configuring Azure Blob Storage for Integrated Container Image Registry

28.13.1. Overview

This topic reviews how to configure Microsoft Azure Blob Storage for the OpenShift Container Platform integrated container image registry.

28.13.2. Before You Begin

  • Create a storage container using Microsoft Azure Portal, Microsoft Azure CLI, or Microsoft Azure Storage Explorer (see the CLI sketch after this list). Keep a note of the storage account name, storage account key, and container name.
  • Deploy the integrated container image registry if it is not already deployed.
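
One possible way to perform the container step above is with the Azure CLI; the resource group name is hypothetical, and the account and container names simply mirror the placeholders used in the registry configuration below:

# The resource group is hypothetical; an existing storage account is assumed.
$ az storage account keys list \
    --resource-group myResourceGroup --account-name azureblobacc
$ az storage container create \
    --name azureblobname --account-name azureblobacc --account-key <key>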

28.13.3. Overriding Registry Configuration

To create a new registry pod and replace the old pod automatically:

  1. Create a new registry configuration file called registryconfig.yaml and add the following information:

    version: 0.1
    log:
      level: debug
    http:
      addr: :5000
    storage:
      cache:
        blobdescriptor: inmemory
      delete:
        enabled: true
      azure: 1
        accountname: azureblobacc
        accountkey:  azureblobacckey
        container: azureblobname
        realm: core.windows.net 2
    auth:
      openshift:
        realm: openshift
    middleware:
      registry:
        - name: openshift
      repository:
        - name: openshift
          options:
            acceptschema2: false
            pullthrough: true
            enforcequota: false
            projectcachettl: 1m
            blobrepositorycachettl: 10m
      storage:
        - name: openshift
    1
    Replace the values for accountname, accountkey, and container with the storage account name, storage account key, and storage container name respectively.
    2
    If using an Azure regional cloud, set this to the desired realm. For example, core.cloudapi.de for the Germany regional cloud.
  2. Create a new registry configuration:

    $ oc create secret generic registry-config --from-file=config.yaml=registryconfig.yaml
  3. Add the secret:

    $ oc set volume dc/docker-registry --add --type=secret \
        --secret-name=registry-config -m /etc/docker/registry/
  4. Set the REGISTRY_CONFIGURATION_PATH environment variable:

    $ oc set env dc/docker-registry \
        REGISTRY_CONFIGURATION_PATH=/etc/docker/registry/config.yaml
  5. If you already created a registry configuration:

    1. Delete the secret:

      $ oc delete secret registry-config
    2. Create a new registry configuration:

      $ oc create secret generic registry-config --from-file=config.yaml=registryconfig.yaml
    3. Update the configuration by starting a new rollout:

      $ oc rollout latest docker-registry
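
After the rollout, you can optionally verify that the new registry pod started with the updated configuration; these checks are general-purpose and not part of the original procedure:

$ oc rollout status dc/docker-registry
$ oc logs dc/docker-registry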