Chapter 16. Persistent Storage Examples
16.1. Overview
The following sections provide detailed, comprehensive instructions on setting up and configuring common storage use cases. These examples cover both the administration of persistent volumes and their security, and how to claim against the volumes as a user of the system.
16.2. Sharing an NFS Persistent Volume (PV) Across Two Pods
16.2.1. Overview
The following use case describes how a cluster administrator wanting to leverage shared storage for use by two separate containers would configure the solution. This example highlights the use of NFS, but can easily be adapted to other shared storage types, such as GlusterFS. In addition, this example will show configuration of pod security as it relates to shared storage.
Persistent Storage Using NFS provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using NFS as persistent storage. This topic shows an end-to-end example of using an existing NFS cluster as an OpenShift Enterprise persistent store, and assumes that an NFS server and exports already exist in your OpenShift Enterprise infrastructure.
All oc commands are executed on the OpenShift Enterprise master host.
16.2.2. Creating the Persistent Volume
Before creating the PV object in OpenShift Enterprise, the persistent volume (PV) file is defined:
Example 16.1. Persistent Volume Object Definition Using NFS
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv 1
spec:
  capacity:
    storage: 1Gi 2
  accessModes:
    - ReadWriteMany 3
  persistentVolumeReclaimPolicy: Retain 4
  nfs: 5
    path: /opt/nfs 6
    server: nfs.f22 7
    readOnly: false
1 - The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2 - The amount of storage allocated to this volume.
3 - accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
4 - A volume reclaim policy of Retain indicates that the volume is preserved after the pods accessing it terminate.
5 - This defines the volume type being used, in this case the NFS plug-in.
6 - This is the NFS mount path.
7 - This is the NFS server. This can also be specified by IP address.
Save the PV definition to a file, for example nfs-pv.yaml, and create the persistent volume:
# oc create -f nfs-pv.yaml
persistentvolume "nfs-pv" created
Verify that the persistent volume was created:
# oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
nfs-pv    <none>    1Gi        RWX           Available                       37s
16.2.3. Creating the Persistent Volume Claim
A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC. This is the use case we are highlighting in this example.
Example 16.2. PVC Object Definition
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Save the PVC definition to a file, for example nfs-pvc.yaml, and create the PVC:
# oc create -f nfs-pvc.yaml
persistentvolumeclaim "nfs-pvc" created
Verify that the PVC was created and bound to the expected PV:
# oc get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
nfs-pvc   <none>    Bound     nfs-pv    1Gi        RWX           24s 1

1 - The claim, nfs-pvc, was bound to the nfs-pv PV.
16.2.4. Ensuring NFS Volume Access
Access to a node on the NFS server is required. On this node, examine the NFS export mount:
[root@nfs nfs]# ls -lZ /opt/nfs/
total 8
-rw-r--r--. 1 root 100003  system_u:object_r:usr_t:s0     10 Oct 12 23:27 test2b
In order to access the NFS mount, the container must match the SELinux label and either run with a UID of 0 or with 100003 in its supplemental groups. This example gains access to the volume by matching the NFS mount's group, which is defined in the pod definition below.
By default, SELinux does not allow writing from a pod to a remote NFS server. To enable writing to NFS volumes with SELinux enforcing on each node, run:
# setsebool -P virt_sandbox_use_nfs on
# setsebool -P virt_use_nfs on
The virt_sandbox_use_nfs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.
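As a quick sanity check before creating pods (not part of the original procedure; the output shown is illustrative), you can confirm that the package is present and the booleans are enabled:

# rpm -q docker-selinux
# getsebool virt_sandbox_use_nfs virt_use_nfs
virt_sandbox_use_nfs --> on
virt_use_nfs --> on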
16.2.5. Creating the Pod
A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the NFS volume for read-write access:
Example 16.3. Pod Object Definition
apiVersion: v1
kind: Pod
metadata:
  name: nginx-nfs-pod 1
  labels:
    name: nginx-nfs-pod
spec:
  containers:
    - name: nginx-nfs-pod
      image: fedora/nginx 2
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: nfsvol 3
          mountPath: /usr/share/nginx/html 4
      securityContext:
        supplementalGroups: [100003] 5
        privileged: false
  volumes:
    - name: nfsvol
      persistentVolumeClaim:
        claimName: nfs-pvc 6
1 - The name of this pod as displayed by oc get pod.
2 - The image run by this pod.
3 - The name of the volume. This name must be the same in both the containers and volumes sections.
4 - The mount path as seen in the container.
5 - The group ID to be assigned to the container.
6 - The PVC that was created in the previous step.
Save the pod definition to a file, for example nfs.yaml, and create the pod:
# oc create -f nfs.yaml
pod "nginx-nfs-pod" created
Verify that the pod was created:
# oc get pods
NAME            READY     STATUS    RESTARTS   AGE
nginx-nfs-pod   1/1       Running   0          4s
More details are shown in the oc describe pod command:
[root@ose70 nfs]# oc describe pod nginx-nfs-pod Name: nginx-nfs-pod Namespace: default 1 Image(s): fedora/nginx Node: ose70.rh7/192.168.234.148 2 Start Time: Mon, 21 Mar 2016 09:59:47 -0400 Labels: name=nginx-nfs-pod Status: Running Reason: Message: IP: 10.1.0.4 Replication Controllers: <none> Containers: nginx-nfs-pod: Container ID: docker://a3292104d6c28d9cf49f440b2967a0fc5583540fc3b062db598557b93893bc6f Image: fedora/nginx Image ID: docker://403d268c640894cbd76d84a1de3995d2549a93af51c8e16e89842e4c3ed6a00a QoS Tier: cpu: BestEffort memory: BestEffort State: Running Started: Mon, 21 Mar 2016 09:59:49 -0400 Ready: True Restart Count: 0 Environment Variables: Conditions: Type Status Ready True Volumes: nfsvol: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: nfs-pvc 3 ReadOnly: false default-token-a06zb: Type: Secret (a secret that should populate this volume) SecretName: default-token-a06zb Events: 4 FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ─────── 4m 4m 1 {scheduler } Scheduled Successfully assigned nginx-nfs-pod to ose70.rh7 4m 4m 1 {kubelet ose70.rh7} implicitly required container POD Pulled Container image "openshift3/ose-pod:v3.1.0.4" already present on machine 4m 4m 1 {kubelet ose70.rh7} implicitly required container POD Created Created with docker id 866a37108041 4m 4m 1 {kubelet ose70.rh7} implicitly required container POD Started Started with docker id 866a37108041 4m 4m 1 {kubelet ose70.rh7} spec.containers{nginx-nfs-pod} Pulled Container image "fedora/nginx" already present on machine 4m 4m 1 {kubelet ose70.rh7} spec.containers{nginx-nfs-pod} Created Created with docker id a3292104d6c2 4m 4m 1 {kubelet ose70.rh7} spec.containers{nginx-nfs-pod} Started Started with docker id a3292104d6c2
There is more internal information, including the SCC used to authorize the pod, the pod's user and group IDs, the SELinux label, and more, shown in the oc get pod <name> -o yaml command:
[root@ose70 nfs]# oc get pod nginx-nfs-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: openshift.io/scc: restricted 1 creationTimestamp: 2016-03-21T13:59:47Z labels: name: nginx-nfs-pod name: nginx-nfs-pod namespace: default 2 resourceVersion: "2814411" selflink: /api/v1/namespaces/default/pods/nginx-nfs-pod uid: 2c22d2ea-ef6d-11e5-adc7-000c2900f1e3 spec: containers: - image: fedora/nginx imagePullPolicy: IfNotPresent name: nginx-nfs-pod ports: - containerPort: 80 name: web protocol: TCP resources: {} securityContext: privileged: false terminationMessagePath: /dev/termination-log volumeMounts: - mountPath: /usr/share/nginx/html name: nfsvol - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-a06zb readOnly: true dnsPolicy: ClusterFirst host: ose70.rh7 imagePullSecrets: - name: default-dockercfg-xvdew nodeName: ose70.rh7 restartPolicy: Always securityContext: supplementalGroups: - 100003 3 serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 volumes: - name: nfsvol persistentVolumeClaim: claimName: nfs-pvc 4 - name: default-token-a06zb secret: secretName: default-token-a06zb status: conditions: - lastProbeTime: null lastTransitionTime: 2016-03-21T13:59:49Z status: "True" type: Ready containerStatuses: - containerID: docker://a3292104d6c28d9cf49f440b2967a0fc5583540fc3b062db598557b93893bc6f image: fedora/nginx imageID: docker://403d268c640894cbd76d84a1de3995d2549a93af51c8e16e89842e4c3ed6a00a lastState: {} name: nginx-nfs-pod ready: true restartCount: 0 state: running: startedAt: 2016-03-21T13:59:49Z hostIP: 192.168.234.148 phase: Running podIP: 10.1.0.4 startTime: 2016-03-21T13:59:47Z
16.2.6. Creating an Additional Pod to Reference the Same PVC
This pod definition, created in the same namespace, uses a different container. However, we can use the same backing storage by specifying the claim name in the volumes section below:
Example 16.4. Pod Object Definition
apiVersion: v1
kind: Pod
metadata:
  name: busybox-nfs-pod 1
  labels:
    name: busybox-nfs-pod
spec:
  containers:
    - name: busybox-nfs-pod
      image: busybox 2
      command: ["sleep", "60000"]
      volumeMounts:
        - name: nfsvol-2 3
          mountPath: /usr/share/busybox 4
          readOnly: false
      securityContext:
        supplementalGroups: [100003] 5
        privileged: false
  volumes:
    - name: nfsvol-2
      persistentVolumeClaim:
        claimName: nfs-pvc 6
1 - The name of this pod as displayed by oc get pod.
2 - The image run by this pod.
3 - The name of the volume. This name must be the same in both the containers and volumes sections.
4 - The mount path as seen in the container.
5 - The group ID to be assigned to the container.
6 - The PVC that was created earlier and is also being used by a different container.
Save the pod definition to a file, for example nfs-2.yaml, and create the pod:
# oc create -f nfs-2.yaml
pod "busybox-nfs-pod" created
Verify that the pod was created:
# oc get pods
NAME              READY     STATUS    RESTARTS   AGE
busybox-nfs-pod   1/1       Running   0          3s
More details are shown in the oc describe pod command:
[root@ose70 nfs]# oc describe pod busybox-nfs-pod Name: busybox-nfs-pod Namespace: default Image(s): busybox Node: ose70.rh7/192.168.234.148 Start Time: Mon, 21 Mar 2016 10:19:46 -0400 Labels: name=busybox-nfs-pod Status: Running Reason: Message: IP: 10.1.0.5 Replication Controllers: <none> Containers: busybox-nfs-pod: Container ID: docker://346d432e5a4824ebf5a47fceb4247e0568ecc64eadcc160e9bab481aecfb0594 Image: busybox Image ID: docker://17583c7dd0dae6244203b8029733bdb7d17fccbb2b5d93e2b24cf48b8bfd06e2 QoS Tier: cpu: BestEffort memory: BestEffort State: Running Started: Mon, 21 Mar 2016 10:19:48 -0400 Ready: True Restart Count: 0 Environment Variables: Conditions: Type Status Ready True Volumes: nfsvol-2: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: nfs-pvc ReadOnly: false default-token-32d2z: Type: Secret (a secret that should populate this volume) SecretName: default-token-32d2z Events: FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ─────── 4m 4m 1 {scheduler } Scheduled Successfully assigned busybox-nfs-pod to ose70.rh7 4m 4m 1 {kubelet ose70.rh7} implicitly required container POD Pulled Container image "openshift3/ose-pod:v3.1.0.4" already present on machine 4m 4m 1 {kubelet ose70.rh7} implicitly required container POD Created Created with docker id 249b7d7519b1 4m 4m 1 {kubelet ose70.rh7} implicitly required container POD Started Started with docker id 249b7d7519b1 4m 4m 1 {kubelet ose70.rh7} spec.containers{busybox-nfs-pod} Pulled Container image "busybox" already present on machine 4m 4m 1 {kubelet ose70.rh7} spec.containers{busybox-nfs-pod} Created Created with docker id 346d432e5a48 4m 4m 1 {kubelet ose70.rh7} spec.containers{busybox-nfs-pod} Started Started with docker id 346d432e5a48
As you can see, both containers are using the same storage claim that is attached to the same NFS mount on the back end.
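One quick way to confirm the sharing (an optional check, not part of the original example; the oc exec syntax can vary slightly between releases) is to list the mount path in both pods and see the same file, test2b, from the NFS export:

# oc exec nginx-nfs-pod -- ls /usr/share/nginx/html
test2b
# oc exec busybox-nfs-pod -- ls /usr/share/busybox
test2b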
16.3. Complete Example Using Ceph RBD
16.3.1. Overview
This topic provides an end-to-end example of using an existing Ceph cluster as an OpenShift Enterprise persistent store. It is assumed that a working Ceph cluster is already set up. If not, consult the Overview of Red Hat Ceph Storage.
Persistent Storage Using Ceph Rados Block Device provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using Ceph RBD as persistent storage.
All oc commands are executed on the OpenShift Enterprise master host.
16.3.2. Installing the ceph-common Package
The ceph-common library must be installed on all schedulable OpenShift Enterprise nodes:

# yum install -y ceph-common

The OpenShift Enterprise all-in-one host is not often used to run pod workloads and, thus, is not included as a schedulable node.
16.3.3. Creating the Ceph Secret
The ceph auth get-key command is run on a Ceph MON node to display the key value for the client.admin user:
Example 16.5. Ceph Secret Definition
apiVersion: v1
kind: Secret
metadata:
name: ceph-secret
data:
key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ== 1
1 - This base64 key is generated on one of the Ceph MON nodes using the ceph auth get-key client.admin | base64 command, then copying the output and pasting it as the secret key's value.
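For example, on a Ceph MON node (the output below is illustrative; your key will differ):

# ceph auth get-key client.admin | base64
QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ==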
Save the secret definition to a file, for example ceph-secret.yaml, then create the secret:
$ oc create -f ceph-secret.yaml
secret "ceph-secret" created
Verify that the secret was created:
# oc get secret ceph-secret
NAME          TYPE      DATA      AGE
ceph-secret   Opaque    1         23d
16.3.4. Creating the Persistent Volume
Next, before creating the PV object in OpenShift Enterprise, define the persistent volume file:
Example 16.6. Persistent Volume Object Definition Using Ceph RBD
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv 1
spec:
  capacity:
    storage: 2Gi 2
  accessModes:
    - ReadWriteOnce 3
  rbd: 4
    monitors: 5
      - 192.168.122.133:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret 6
    fsType: ext4 7
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
1 - The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2 - The amount of storage allocated to this volume.
3 - accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control. All block storage is defined to be single user (non-shared storage).
4 - This defines the volume type being used. In this case, the rbd plug-in is defined.
5 - This is an array of Ceph monitor IP addresses and ports.
6 - This is the Ceph secret, defined above. It is used to create a secure connection from OpenShift Enterprise to the Ceph server.
7 - This is the file system type mounted on the Ceph RBD block device.
Save the PV definition to a file, for example ceph-pv.yaml, and create the persistent volume:
# oc create -f ceph-pv.yaml
persistentvolume "ceph-pv" created
Verify that the persistent volume was created:
# oc get pv
NAME      LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM     REASON    AGE
ceph-pv   <none>    2147483648   RWO           Available                       2s
16.3.5. Creating the Persistent Volume Claim
A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.
Example 16.7. PVC Object Definition
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Save the PVC definition to a file, for example ceph-claim.yaml, and create the PVC:
# oc create -f ceph-claim.yaml
persistentvolumeclaim "ceph-claim" created
Verify the PVC was created and bound to the expected PV:
# oc get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
ceph-claim   <none>    Bound     ceph-pv   1Gi        RWX           21s 1

1 - The claim was bound to the ceph-pv PV.
16.3.6. Creating the Pod
A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Ceph RBD volume for read-write access:
Example 16.8. Pod Object Definition
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1 1
spec:
  containers:
    - name: ceph-busybox
      image: busybox 2
      command: ["sleep", "60000"]
      volumeMounts:
        - name: ceph-vol1 3
          mountPath: /usr/share/busybox 4
          readOnly: false
  volumes:
    - name: ceph-vol1 5
      persistentVolumeClaim:
        claimName: ceph-claim 6
1 - The name of this pod as displayed by oc get pod.
2 - The image run by this pod. In this case, we are telling busybox to sleep.
3 5 - The name of the volume. This name must be the same in both the containers and volumes sections.
4 - The mount path as seen in the container.
6 - The PVC that is bound to the Ceph RBD cluster.
Save the pod definition to a file, for example ceph-pod1.yaml, and create the pod:
# oc create -f ceph-pod1.yaml
pod "ceph-pod1" created

Verify that the pod was created:

# oc get pod
NAME        READY     STATUS    RESTARTS   AGE
ceph-pod1   1/1       Running   0          2m 1

1 - After a minute or so, the pod will be in the Running state.
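Optionally (this check is not part of the original example; the exec syntax can vary slightly between releases), you can confirm that the mount inside the container is backed by an RBD device, for example /dev/rbd0:

# oc exec ceph-pod1 -- df /usr/share/busybox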
16.3.7. Defining Group and Owner IDs (Optional)
When using block storage, such as Ceph RBD, the physical block storage is managed by the pod. The group ID defined in the pod becomes the group ID of both the Ceph RBD mount inside the container, and the group ID of the actual storage itself. Thus, it is usually unnecessary to define a group ID in the pod specification. However, if a group ID is desired, it can be defined using fsGroup, as shown in the following pod definition fragment:
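A minimal illustrative fragment (the fsGroup value 7777 is an arbitrary example; any suitable group ID can be used):

Example 16.9. Pod Definition Fragment Using fsGroup

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  containers:
    - name: ceph-busybox
      image: busybox
  securityContext:
    fsGroup: 7777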
16.4. Complete Example Using GlusterFS
16.4.1. Overview
This topic provides an end-to-end example of how to use an existing Gluster cluster as an OpenShift Enterprise persistent store. It is assumed that a working Gluster cluster is already set up. If not, consult the Red Hat Gluster Storage Administration Guide.
Persistent Storage Using GlusterFS provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using GlusterFS as persistent storage.
All oc commands are executed on the OpenShift Enterprise master host.
16.4.2. Installing the glusterfs-fuse Package
The glusterfs-fuse library must be installed on all schedulable OpenShift Enterprise nodes:
# yum install -y glusterfs-fuse
The OpenShift Enterprise all-in-one host is often not used to run pod workloads and, thus, is not included as a schedulable node.
16.4.3. Creating the Gluster Endpoints
The named endpoints define each node in the Gluster-trusted storage pool:
Example 16.10. GlusterFS Endpoint Definition
apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-endpoints
subsets:
  - addresses:
      - ip: 192.168.122.21
    ports:
      - port: 1
        protocol: TCP
  - addresses:
      - ip: 192.168.122.22
    ports:
      - port: 1
        protocol: TCP
Save the endpoints definition to a file, for example gluster-endpoints.yaml, then create the endpoints object:
# oc create -f gluster-endpoints.yaml endpoints "gluster-endpoints" created
Verify that the endpoints were created:
# oc get endpoints gluster-endpoints NAME ENDPOINTS AGE gluster-endpoints 192.168.122.21:1,192.168.122.22:1 1m
16.4.4. Creating the Persistent Volume
Next, before creating the PV object in OpenShift Enterprise, define the persistent volume file:
Example 16.11. Persistent Volume Object Definition Using GlusterFS
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv 1
spec:
  capacity:
    storage: 1Gi 2
  accessModes:
    - ReadWriteMany 3
  glusterfs: 4
    endpoints: gluster-endpoints 5
    path: /HadoopVol 6
    readOnly: false
  persistentVolumeReclaimPolicy: Retain 7
1 - The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2 - The amount of storage allocated to this volume.
3 - accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
4 - This defines the volume type being used. In this case, the glusterfs plug-in is defined.
5 - This references the endpoints named above.
6 - This is the Gluster volume name, preceded by /.
7 - A volume reclaim policy of Retain indicates that the volume will be preserved after the pods accessing it terminate.
Save the PV definition to a file, for example gluster-pv.yaml, and create the persistent volume:
# oc create -f gluster-pv.yaml
persistentvolume "gluster-pv" created
Verify that the persistent volume was created:
# oc get pv
NAME         LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
gluster-pv   <none>    1Gi        RWX           Available                       37s
16.4.5. Creating the Persistent Volume Claim
A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.
Example 16.12. PVC Object Definition
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Save the PVC definition to a file, for example gluster-claim.yaml, and create the PVC:
# oc create -f gluster-claim.yaml
persistentvolumeclaim "gluster-claim" created
Verify the PVC was created and bound to the expected PV:
# oc get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
gluster-claim   <none>    Bound     gluster-pv   1Gi        RWX           24s 1

1 - The claim was bound to the gluster-pv PV.
16.4.6. Defining GlusterFS Volume Access
Access to a node in the Gluster-trusted storage pool is required. On this node, examine the glusterfs-fuse mount:
# ls -lZ /mnt/glusterfs/
drwxrwx---. yarn hadoop system_u:object_r:fusefs_t:s0    HadoopVol

# id yarn
uid=592(yarn) gid=590(hadoop) groups=590(hadoop)
In order to access the HadoopVol volume, the container must match the SELinux label, and either run with a UID of 592, or with 590 in its supplemental groups. It is recommended to gain access to the volume by matching the Gluster mount’s groups, which is defined in the pod definition below.
By default, SELinux does not allow writing from a pod to a remote Gluster server. To enable writing to GlusterFS volumes with SELinux enforcing on each node, run:
# setsebool -P virt_sandbox_use_fusefs on
The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.
16.4.7. Creating the Pod
A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Gluster volume for read-write access:
Example 16.13. Pod Object Definition
apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod1
  labels:
    name: gluster-pod1 1
spec:
  containers:
    - name: gluster-pod1
      image: busybox 2
      command: ["sleep", "60000"]
      volumeMounts:
        - name: gluster-vol1 3
          mountPath: /usr/share/busybox 4
          readOnly: false
      securityContext:
        supplementalGroups: [590] 5
        privileged: false
  volumes:
    - name: gluster-vol1 6
      persistentVolumeClaim:
        claimName: gluster-claim 7
1 - The name of this pod as displayed by oc get pod.
2 - The image run by this pod. In this case, we are telling busybox to sleep.
3 6 - The name of the volume. This name must be the same in both the containers and volumes sections.
4 - The mount path as seen in the container.
5 - The group ID to be assigned to the container.
7 - The PVC that is bound to the Gluster cluster.
Save the pod definition to a file, for example gluster-pod1.yaml, and create the pod:
# oc create -f gluster-pod1.yaml
pod "gluster-pod1" created
Verify the pod was created:
# oc get pod
NAME READY STATUS RESTARTS AGE
gluster-pod1   1/1       Running   0          31s 1

1 - After a minute or so, the pod will be in the Running state.
More details are shown in the oc describe pod command:
# oc describe pod gluster-pod1 Name: gluster-pod1 Namespace: default 1 Image(s): busybox Node: rhel7.2-dev/192.168.122.177 Start Time: Tue, 22 Mar 2016 10:55:57 -0700 Labels: name=gluster-pod1 Status: Running Reason: Message: IP: 10.1.0.2 2 Replication Controllers: <none> Containers: gluster-pod1: Container ID: docker://acc0c80c28a5cd64b6e3f2848052ef30a21ee850d27ef5fe959d11da4e5a3f4f Image: busybox Image ID: docker://964092b7f3e54185d3f425880be0b022bfc9a706701390e0ceab527c84dea3e3 QoS Tier: cpu: BestEffort memory: BestEffort State: Running Started: Tue, 22 Mar 2016 10:56:00 -0700 Ready: True Restart Count: 0 Environment Variables: Conditions: Type Status Ready True Volumes: gluster-vol1: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: gluster-claim 3 ReadOnly: false default-token-rbi9o: Type: Secret (a secret that should populate this volume) SecretName: default-token-rbi9o Events: 4 FirstSeen LastSeen Count From SubobjectPath Reason Message ───────── ──────── ───── ──── ───────────── ────── ─────── 2m 2m 1 {scheduler } Scheduled Successfully assigned gluster-pod1 to rhel7.2-dev 2m 2m 1 {kubelet rhel7.2-dev} implicitly required container POD Pulled Container image "openshift3/ose-pod:v3.1.1.6" already present on machine 2m 2m 1 {kubelet rhel7.2-dev} implicitly required container POD Created Created with docker id d5c66b4f3aaa 2m 2m 1 {kubelet rhel7.2-dev} implicitly required container POD Started Started with docker id d5c66b4f3aaa 2m 2m 1 {kubelet rhel7.2-dev} spec.containers{gluster-pod1} Pulled Container image "busybox" already present on machine 2m 2m 1 {kubelet rhel7.2-dev} spec.containers{gluster-pod1} Created Created with docker id acc0c80c28a5 2m 2m 1 {kubelet rhel7.2-dev} spec.containers{gluster-pod1} Started Started with docker id acc0c80c28a5
There is more internal information, including the SCC used to authorize the pod, the pod's user and group IDs, the SELinux label, and more, shown in the oc get pod <name> -o yaml command:
# oc get pod gluster-pod1 -o yaml apiVersion: v1 kind: Pod metadata: annotations: openshift.io/scc: restricted 1 creationTimestamp: 2016-03-22T17:55:57Z labels: name: gluster-pod1 name: gluster-pod1 namespace: default 2 resourceVersion: "511908" selflink: /api/v1/namespaces/default/pods/gluster-pod1 uid: 545068a3-f057-11e5-a8e5-5254008f071b spec: containers: - command: - sleep - "60000" image: busybox imagePullPolicy: IfNotPresent name: gluster-pod1 resources: {} securityContext: privileged: false runAsUser: 1000000000 3 seLinuxOptions: level: s0:c1,c0 4 terminationMessagePath: /dev/termination-log volumeMounts: - mountPath: /usr/share/busybox name: gluster-vol1 - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-rbi9o readOnly: true dnsPolicy: ClusterFirst host: rhel7.2-dev imagePullSecrets: - name: default-dockercfg-2g6go nodeName: rhel7.2-dev restartPolicy: Always securityContext: seLinuxOptions: level: s0:c1,c0 5 supplementalGroups: - 590 6 serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 volumes: - name: gluster-vol1 persistentVolumeClaim: claimName: gluster-claim 7 - name: default-token-rbi9o secret: secretName: default-token-rbi9o status: conditions: - lastProbeTime: null lastTransitionTime: 2016-03-22T17:56:00Z status: "True" type: Ready containerStatuses: - containerID: docker://acc0c80c28a5cd64b6e3f2848052ef30a21ee850d27ef5fe959d11da4e5a3f4f image: busybox imageID: docker://964092b7f3e54185d3f425880be0b022bfc9a706701390e0ceab527c84dea3e3 lastState: {} name: gluster-pod1 ready: true restartCount: 0 state: running: startedAt: 2016-03-22T17:56:00Z hostIP: 192.168.122.177 phase: Running podIP: 10.1.0.2 startTime: 2016-03-22T17:55:57Z
1 - The SCC used by the pod.
2 - The project (namespace) name.
3 - The UID of the busybox container.
4 5 - The SELinux label for the container, and the default SELinux label for the entire pod, which happen to be the same here.
6 - The supplemental group ID for the pod (all containers).
7 - The PVC name used by the pod.
16.5. Backing Docker Registry with GlusterFS Storage
16.5.1. Overview
This topic reviews how to attach a GlusterFS persistent volume to the Docker Registry.
It is assumed that the Docker registry service has already been started and the Gluster volume has been created.
16.5.2. Prerequisites
- The docker-registry was deployed without configuring storage.
- A Gluster volume exists and glusterfs-fuse is installed on schedulable nodes.
- Definitions are written for the GlusterFS endpoints and service, persistent volume (PV), and persistent volume claim (PVC). For this guide, these are:
  - gluster-endpoints-service.yaml
  - gluster-endpoints.yaml
  - gluster-pv.yaml
  - gluster-pvc.yaml
- A user with the cluster-admin role binding. For this guide, that user is admin.
All oc commands are executed on the master node as the admin user.
16.5.3. Create the Gluster Persistent Volume
First, make the Gluster volume available to the registry.
$ oc create -f gluster-endpoints-service.yaml
$ oc create -f gluster-endpoints.yaml
$ oc create -f gluster-pv.yaml
$ oc create -f gluster-pvc.yaml
Check to make sure the PV and PVC were created and bound successfully. The expected output should resemble the following. Note that the PVC status is Bound, indicating that it has bound to the PV.
$ oc get pv
NAME         LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
gluster-pv   <none>    1Gi        RWX           Available                       37s

$ oc get pvc
NAME            LABELS    STATUS    VOLUME       CAPACITY   ACCESSMODES   AGE
gluster-claim   <none>    Bound     gluster-pv   1Gi        RWX           24s
If either the PVC or PV failed to create or the PVC failed to bind, refer back to the GlusterFS Persistent Storage guide. Do not proceed until they initialize and the PVC status is Bound.
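If the claim stays Pending, inspecting it often shows the reason in its events (an optional troubleshooting step; the claim name is the one used in this guide):

$ oc describe pvc gluster-claim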
16.5.4. Attach the PVC to the Docker Registry
Before moving forward, ensure that the docker-registry service is running.
$ oc get svc
NAME              CLUSTER_IP       EXTERNAL_IP   PORT(S)    SELECTOR                  AGE
docker-registry   172.30.167.194   <none>        5000/TCP   docker-registry=default   18m
If either the docker-registry service or its associated pod is not running, refer back to the docker-registry setup instructions for troubleshooting before continuing.
Then, attach the PVC:
$ oc volume deploymentconfigs/docker-registry --add --name=v1 -t pvc \
     --claim-name=gluster-claim --overwrite
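You can then list the volumes on the deployment configuration to confirm that the claim is attached (an optional check; the exact output varies by release):

$ oc volume deploymentconfigs/docker-registry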
Deploying a Docker Registry provides more information on using the Docker registry.
16.5.5. Known Issues
16.5.5.1. Pod Cannot Resolve the Volume Host
In non-production cases where the dnsmasq server is located on the same node as the OpenShift Enterprise master service, pods might not be able to resolve the host machines when mounting the volume, causing errors in the docker-registry-1-deploy pod. This can happen when dnsmasq.service fails to start because of a collision with OpenShift DNS on port 53. To run the DNS server on the master host, some configuration needs to be changed.
In /etc/dnsmasq.conf, add:
# Reverse DNS record for master
host-record=master.example.com,<master-IP>
# Wildcard DNS for OpenShift Applications - Points to Router
address=/apps.example.com/<master-IP>
# Forward .local queries to SkyDNS
server=/local/127.0.0.1#8053
# Forward reverse queries for service network to SkyDNS.
# This is for default OpenShift SDN - change as needed.
server=/17.30.172.in-addr.arpa/127.0.0.1#8053
With these settings, dnsmasq will pull from the /etc/hosts file on the master node.
Add the appropriate host names and IPs for all necessary hosts.
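For example, /etc/hosts entries on the master might look like the following (the host names and IP addresses are placeholders):

192.168.1.100   master.example.com
192.168.1.101   node1.example.com
192.168.1.102   node2.example.com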
In master-config.yaml, change bindAddress to:

dnsConfig:
  bindAddress: 127.0.0.1:8053
When pods are created, they receive a copy of /etc/resolv.conf, which typically contains only the master DNS server so they can resolve external DNS requests. To enable internal DNS resolution, insert the dnsmasq server at the top of the server list. This way, dnsmasq will attempt to resolve requests internally first.
In /etc/resolv.conf on all schedulable nodes:

nameserver 192.168.1.100 1
nameserver 192.168.1.1 2

1 - The dnsmasq server inserted at the top of the list.
2 - The pre-existing name server.
Once the configurations are changed, restart the OpenShift Enterprise master and dnsmasq services.
$ systemctl restart atomic-openshift-master
$ systemctl restart dnsmasq
16.6. Mounting Volumes on Privileged Pods
16.6.1. Overview
Persistent volumes can be mounted to pods with the privileged security context constraint (SCC) attached.
While this topic uses GlusterFS as a sample use-case for mounting volumes onto privileged pods, it can be adapted to use any supported storage plug-in.
16.6.2. Prerequisites
- An existing Gluster volume.
- glusterfs-fuse installed on all hosts.
Definitions for GlusterFS:
- Endpoints and services: gluster-endpoints-service.yaml and gluster-endpoints.yaml
- Persistent volumes: gluster-pv.yaml
- Persistent volume claims: gluster-pvc.yaml
- Privileged pods: gluster-nginx-pod.yaml
- A user with the cluster-admin role binding. For this guide, that user is called admin.
16.6.3. Creating the Persistent Volume
Creating the PersistentVolume makes the storage accessible to users, regardless of projects.
As the admin, create the service, endpoint object, and persistent volume:
$ oc create -f gluster-endpoints-service.yaml
$ oc create -f gluster-endpoints.yaml
$ oc create -f gluster-pv.yaml
Verify that the objects were created:
$ oc get svc
NAME              CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR   AGE
gluster-cluster   172.30.151.58   <none>        1/TCP     <none>     24s
$ oc get ep
NAME              ENDPOINTS                            AGE
gluster-cluster   192.168.59.102:1,192.168.59.103:1    2m
$ oc get pv
NAME                     LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
gluster-default-volume   <none>    2Gi        RWX           Available                       2d
16.6.4. Creating a Regular User
Adding a regular user to the privileged SCC (or to a group given access to the SCC) allows them to run privileged pods:
- As the admin, add a user to the SCC:
$ oadm policy add-scc-to-user privileged <username>
- Log in as the regular user:
$ oc login -u <username> -p <password>
- Then, create a new project:
$ oc new-project <project_name>
16.6.5. Creating the Persistent Volume Claim
As a regular user, create the PersistentVolumeClaim to access the volume:
$ oc create -f gluster-pvc.yaml -n <project_name>
Define your pod to access the claim:
Example 16.14. Pod Definition
apiVersion: v1
id: gluster-nginx-pvc
kind: Pod
metadata:
  name: gluster-nginx-priv
spec:
  containers:
    - name: gluster-nginx-priv
      image: fedora/nginx
      volumeMounts:
        - mountPath: /mnt/gluster
          name: gluster-volume-claim
      securityContext:
        privileged: true
  volumes:
    - name: gluster-volume-claim
      persistentVolumeClaim:
        claimName: gluster-claim
Upon pod creation, the mount directory is created and the volume is attached to that mount point.
As the regular user, create the pod from the definition:
$ oc create -f gluster-nginx-pod.yaml
Verify that the pod was created successfully:
$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
gluster-nginx-pod    1/1       Running   0          36m
It can take several minutes for the pod to be created.
16.6.6. Verifying the Setup
16.6.6.1. Checking the Pod SCC
Export the pod configuration:
$ oc export pod <pod_name>
Examine the output. Check that openshift.io/scc has the value of privileged:
Example 16.15. Export Snippet
metadata:
  annotations:
    openshift.io/scc: privileged
16.6.6.2. Verifying the Mount
Access the pod and check that the volume is mounted:
$ oc rsh <pod_name>
[root@gluster-nginx-pvc /]# mount
Examine the output for the Gluster volume:
Example 16.16. Volume Mount
192.168.59.102:gv0 on /mnt/gluster type fuse.gluster (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
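As a final optional check (not part of the original guide, and assuming the volume permissions allow it), confirm that the privileged pod can write to the mounted volume; the output shown is illustrative:

[root@gluster-nginx-pvc /]# touch /mnt/gluster/test-file
[root@gluster-nginx-pvc /]# ls /mnt/gluster/
test-file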