Chapter 3. Creating Persistent Volumes
Labels are an OpenShift Container Platform feature that supports user-defined tags (key-value pairs) as part of an object's specification. Their primary purpose is to enable the arbitrary grouping of objects by defining identical labels among them. These labels can then be targeted by selectors to match all objects with the specified label values. It is this functionality we will take advantage of to enable our PVC to bind to our PV.
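As a minimal sketch of this mechanism (the object names and the storage-tier: gold label are illustrative; the full procedures follow later in this chapter), a persistent volume carries a label and a persistent volume claim selects it:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
  labels:
    storage-tier: gold
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: example_volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      storage-tier: gold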
3.1. File Storage
3.1.1. Static Provisioning of Volumes
The sample glusterfs endpoint file (sample-gluster-endpoints.yaml) and the sample glusterfs service file (sample-gluster-service.yaml) are available in the /usr/share/heketi/templates/ directory.
Note
Copy the sample glusterfs endpoint file and the sample glusterfs service file to a location of your choice and then edit the copied files. For example:
# cp /usr/share/heketi/templates/sample-gluster-endpoints.yaml /<path>/gluster-endpoints.yaml
- To specify the endpoints you want to create, update the copied sample-gluster-endpoints.yaml file with the endpoints to be created based on the environment. Each Red Hat Gluster Storage trusted storage pool requires its own endpoint with the IP addresses of the nodes in the trusted storage pool.

# cat sample-gluster-endpoints.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.168.10.100
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.10.101
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.10.102
    ports:
      - port: 1

name: is the name of the endpoint
ip: is the IP address of the Red Hat Gluster Storage nodes
- Execute the following command to create the endpoints:
# oc create -f <name_of_endpoint_file>
For example:
# oc create -f sample-gluster-endpoints.yaml
endpoints "glusterfs-cluster" created
- To verify that the endpoints are created, execute the following command:
# oc get endpoints
For example:
# oc get endpoints
NAME                       ENDPOINTS                                                      AGE
storage-project-router     192.168.121.233:80,192.168.121.233:443,192.168.121.233:1936   2d
glusterfs-cluster          192.168.121.168:1,192.168.121.172:1,192.168.121.233:1         3s
heketi                     10.1.1.3:8080                                                  2m
heketi-storage-endpoints   192.168.121.168:1,192.168.121.172:1,192.168.121.233:1         3m
- Execute the following command to create a gluster service:
# oc create -f <name_of_service_file>
For example:
# cat sample-gluster-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 1

# oc create -f sample-gluster-service.yaml
service "glusterfs-cluster" created
- To verify that the service is created, execute the following command:
# oc get service
For example:
# oc get service
NAME                       CLUSTER-IP      EXTERNAL-IP   PORT(S)                   AGE
storage-project-router     172.30.94.109   <none>        80/TCP,443/TCP,1936/TCP   2d
glusterfs-cluster          172.30.212.6    <none>        1/TCP                     5s
heketi                     172.30.175.7    <none>        8080/TCP                  2m
heketi-storage-endpoints   172.30.18.24    <none>        1/TCP                     3m
Note
The endpoints and the services must be created for each project that requires persistent storage.
- Create a 100G persistent volume with Replica 3 from GlusterFS and output a persistent volume specification describing this volume to the file pv001.json:
$ heketi-cli volume create --size=100 --persistent-volume-file=pv001.json
cat pv001.json
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-f8c612ee",
    "creationTimestamp": null
  },
  "spec": {
    "capacity": {
      "storage": "100Gi"
    },
    "glusterfs": {
      "endpoints": "TYPE ENDPOINT HERE",
      "path": "vol_f8c612eea57556197511f6b8c54b6070"
    },
    "accessModes": [
      "ReadWriteMany"
    ],
    "persistentVolumeReclaimPolicy": "Retain"
  },
  "status": {}
}
Important
You must manually add the Labels information to the .json file.

Following is the example YAML file for reference:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-storage-project-glusterfs1
  labels:
    storage-tier: gold
spec:
  capacity:
    storage: 12Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  glusterfs:
    endpoints: TYPE END POINTS NAME HERE
    path: vol_e6b77204ff54c779c042f570a71b1407

name: The name of the volume.
storage: The amount of storage allocated to this volume.
glusterfs: The volume type being used, in this case the glusterfs plug-in.
endpoints: The endpoints name that defines the trusted storage pool created.
path: The Red Hat Gluster Storage volume that will be accessed from the trusted storage pool.
accessModes: accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
labels: Use labels to identify common attributes or characteristics shared among volumes. In this case, we have defined the gluster volume to have a custom attribute (key) named storage-tier with a value of gold assigned. A claim will be able to select a PV with storage-tier=gold to match this PV.

Note
- heketi-cli also accepts the endpoint name on the command line (--persistent-volume-endpoint="TYPE ENDPOINT HERE"). This can then be piped to oc create -f - to create the persistent volume immediately (see the sketch after this note).
- If there are multiple Red Hat Gluster Storage trusted storage pools in your environment, you can check on which trusted storage pool the volume is created using the heketi-cli volume list command. This command lists the cluster name. You can then update the endpoint information in the pv001.json file accordingly.
- When creating a Heketi volume with only two nodes and the replica count set to the default value of three (replica 3), Heketi reports a "No space" error because there is no space to create a replica set of three disks on three different nodes.
- If all the heketi-cli write operations (for example, volume create, cluster create) fail and the read operations (for example, topology info, volume info) succeed, then the gluster volume is likely operating in read-only mode.
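A hedged sketch of the pipeline described in the first bullet above follows. It assumes heketi-cli supports a --persistent-volume flag that prints the generated specification to stdout rather than to a file; verify this against your heketi-cli version before relying on it. The endpoint name glusterfs-cluster comes from the earlier endpoint example:

# heketi-cli volume create --size=100 --persistent-volume --persistent-volume-endpoint=glusterfs-cluster | oc create -f -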
- Edit the pv001.json file and enter the name of the endpoint in the endpoints section:
cat pv001.json
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-f8c612ee",
    "creationTimestamp": null,
    "labels": {
      "storage-tier": "gold"
    }
  },
  "spec": {
    "capacity": {
      "storage": "12Gi"
    },
    "glusterfs": {
      "endpoints": "glusterfs-cluster",
      "path": "vol_f8c612eea57556197511f6b8c54b6070"
    },
    "accessModes": [
      "ReadWriteMany"
    ],
    "persistentVolumeReclaimPolicy": "Retain"
  },
  "status": {}
}
- Create a persistent volume by executing the following command:
# oc create -f pv001.json
For example:
# oc create -f pv001.json
persistentvolume "glusterfs-4fc22ff9" created
- To verify that the persistent volume is created, execute the following command:
# oc get pv
For example:
# oc get pv
NAME                 CAPACITY   ACCESSMODES   STATUS      CLAIM   REASON   AGE
glusterfs-4fc22ff9   100Gi      RWX           Available                    4s
- Create a persistent volume claim file. For example:
# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  selector:
    matchLabels:
      storage-tier: gold
- Bind the persistent volume to the persistent volume claim by executing the following command:
# oc create -f pvc.yaml
For example:
# oc create -f pvc.yaml
persistentvolumeclaim "glusterfs-claim" created
- To verify that the persistent volume and the persistent volume claim are bound, execute the following commands:
# oc get pv
# oc get pvc
For example:
# oc get pv
NAME                 CAPACITY   ACCESSMODES   STATUS   CLAIM                             REASON   AGE
glusterfs-4fc22ff9   100Gi      RWX           Bound    storage-project/glusterfs-claim            1m
# oc get pvc
NAME              STATUS   VOLUME               CAPACITY   ACCESSMODES   AGE
glusterfs-claim   Bound    glusterfs-4fc22ff9   100Gi      RWX           11s
- The claim can now be used in the application. For example:
# cat app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      name: busybox
      volumeMounts:
        - mountPath: /usr/share/busybox
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: glusterfs-claim

# oc create -f app.yaml
pod "busybox" created
For more information about using the glusterfs claim in the application, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.10/html-single/configuring_clusters/#install-config-storage-examples-gluster-example.
- To verify that the pod is created, execute the following command:
# oc get pods -n <storage_project_name>
For example:
# oc get pods -n storage-project
NAME                               READY     STATUS    RESTARTS   AGE
block-test-router-1-deploy         0/1       Running   0          4h
busybox                            1/1       Running   0          43s
glusterblock-provisioner-1-bjpz4   1/1       Running   0          4h
glusterfs-7l5xf                    1/1       Running   0          4h
glusterfs-hhxtk                    1/1       Running   3          4h
glusterfs-m4rbc                    1/1       Running   0          4h
heketi-1-3h9nb                     1/1       Running   0          4h
- To verify that the persistent volume is mounted inside the container, execute the following command:
# oc rsh busybox
/ $ df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mapper/docker-253:0-1310998-81732b5fd87c197f627a24bcd2777f12eec4ee937cc2660656908b2fa6359129
                        100.0G     34.1M     99.9G   0% /
tmpfs                     1.5G         0      1.5G   0% /dev
tmpfs                     1.5G         0      1.5G   0% /sys/fs/cgroup
192.168.121.168:vol_4fc22ff934e531dec3830cfbcad1eeae
                         99.9G     66.1M     99.9G   0% /usr/share/busybox
tmpfs                     1.5G         0      1.5G   0% /run/secrets
/dev/mapper/vg_vagrant-lv_root
                         37.7G      3.8G     32.0G  11% /dev/termination-log
tmpfs                     1.5G     12.0K      1.5G   0% /var/run/secrets/kubernetes.io/serviceaccount
3.1.2. Dynamic Provisioning of Volumes
3.1.2.1. Configuring Dynamic Provisioning of Volumes
3.1.2.1.1. Creating Secret for Heketi Authentication
Note
If the admin-key value (the secret to access heketi to get the volume details) was not set during the deployment of Red Hat Openshift Container Storage, then the following steps can be omitted.
- Create an encoded value for the password by executing the following command:
# echo -n "<key>" | base64
where "key" is the value for "admin-key" that was created while deploying Red Hat Openshift Container Storage.

For example:
# echo -n "mypassword" | base64
bXlwYXNzd29yZA==
- Create a secret file. A sample secret file is provided below:
# cat glusterfs-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: bXlwYXNzd29yZA==
type: kubernetes.io/glusterfs
- Register the secret on Openshift by executing the following command:
# oc create -f glusterfs-secret.yaml
secret "heketi-secret" created
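As an optional sanity check (not part of the original procedure), the key stored in the registered secret can be read back and decoded to confirm that it matches the admin-key value:

# oc get secret heketi-secret -n default -o jsonpath='{.data.key}' | base64 -d
mypassword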
3.1.2.1.2. Registering a Storage Class
- To create a storage class execute the following command:
# cat > glusterfs-storageclass.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-container
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Retain
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  volumetype: "replicate:3"
  clusterid: "630372ccdc720a92c681fb928f27b53f,796e6db1981f369ea0340913eeea4c9a"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumeoptions: "client.ssl on, server.ssl on"
  volumenameprefix: "test-vol"
allowVolumeExpansion: true
where,

resturl: Gluster REST service/Heketi service URL which provisions gluster volumes on demand. The general format must be IPaddress:Port and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in the OpenShift/Kubernetes setup, this can have a format similar to http://heketi-storage-project.cloudapps.mystorage.com where the FQDN is a resolvable Heketi service URL.

restuser: Gluster REST service/Heketi user who has access to create volumes in the trusted storage pool.

volumetype: It specifies the volume type that is being used.

Note
Distributed-Three-way replication is the only supported volume type.

clusterid: It is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of comma-separated cluster IDs. This is an optional parameter.

Note
To get the cluster ID, execute the following command:
# heketi-cli cluster list
secretNamespace + secretName: Identification of the Secret instance that contains the user password that is used when communicating with the Gluster REST service. These parameters are optional. An empty password will be used when both secretNamespace and secretName are omitted.

Note
When the persistent volumes are dynamically provisioned, the Gluster plugin automatically creates an endpoint and a headless service with the name gluster-dynamic-<claimname>. This dynamic endpoint and service will be deleted automatically when the persistent volume claim is deleted.

volumeoptions: This is an optional parameter. It allows you to create glusterfs volumes with encryption enabled by setting the parameter to "client.ssl on, server.ssl on". For more information on enabling encryption, see Chapter 8, Enabling Encryption.

Note
Do not add this parameter in the storageclass if encryption is not enabled.

volumenameprefix: This is an optional parameter. It depicts the name of the volume created by heketi. For more information, see Section 3.1.2.1.5, "(Optional) Providing a Custom Volume Name Prefix for Persistent Volumes".

Note
The value for this parameter cannot contain `_` in the storageclass.

allowVolumeExpansion: To increase the PV claim value, ensure to set the allowVolumeExpansion parameter in the storageclass file to true. For more information, see Section 3.1.2.1.7, "Expanding Persistent Volume Claim".
- To register the storage class with OpenShift, execute the following command:
# oc create -f glusterfs-storageclass.yaml
storageclass "gluster-container" created
- To get the details of the storage class, execute the following command:
# oc describe storageclass gluster-container
Name: gluster-container
IsDefaultClass: No
Annotations: <none>
Provisioner: kubernetes.io/glusterfs
Parameters: resturl=http://heketi-storage-project.cloudapps.mystorage.com,restuser=admin,secretName=heketi-secret,secretNamespace=default
No events.
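To simply list the registered storage classes (an additional check, not part of the original steps; the output columns vary slightly between OpenShift versions):

# oc get storageclass
NAME                PROVISIONER               AGE
gluster-container   kubernetes.io/glusterfs   1m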
3.1.2.1.3. Creating a Persistent Volume Claim
- Create a Persistent Volume Claim file. A sample persistent volume claim is provided below:
# cat glusterfs-pvc-claim1.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-container
spec:
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

persistentVolumeReclaimPolicy: This is an optional parameter. When this parameter is set to "Retain", the underlying persistent volume is retained even after the corresponding persistent volume claim is deleted.

Note
When a PV is deleted, the underlying heketi and gluster volumes are not deleted if "persistentVolumeReclaimPolicy:" is set to "Retain". To delete the volume, you must use the heketi CLI and then delete the PV.
- Register the claim by executing the following command:
# oc create -f glusterfs-pvc-claim1.yaml
persistentvolumeclaim "claim1" created
- To get the details of the claim, execute the following command:
# oc describe pvc <claim_name>
For example:
# oc describe pvc claim1
Name: claim1
Namespace: default
StorageClass: gluster-container
Status: Bound
Volume: pvc-54b88668-9da6-11e6-965e-54ee7551fd0c
Labels: <none>
Capacity: 4Gi
Access Modes: RWO
No events.
3.1.2.1.4. Verifying Claim Creation
- To get the details of the persistent volume claim and persistent volume, execute the following command:
# oc get pv,pvc
NAME                                          CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                    REASON    AGE
pv/pvc-962aa6d1-bddb-11e6-be23-5254009fc65b   4Gi        RWO           Delete          Bound     storage-project/claim1             3m

NAME         STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
pvc/claim1   Bound     pvc-962aa6d1-bddb-11e6-be23-5254009fc65b   4Gi        RWO           4m
- To validate if the endpoint and the services are created as part of claim creation, execute the following command:
# oc get endpoints,service
NAME                          ENDPOINTS                                             AGE
ep/storage-project-router     192.168.68.3:443,192.168.68.3:1936,192.168.68.3:80   28d
ep/gluster-dynamic-claim1     192.168.68.2:1,192.168.68.3:1,192.168.68.4:1         5m
ep/heketi                     10.130.0.21:8080                                      21d
ep/heketi-storage-endpoints   192.168.68.2:1,192.168.68.3:1,192.168.68.4:1         25d

NAME                           CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
svc/storage-project-router     172.30.166.64    <none>        80/TCP,443/TCP,1936/TCP   28d
svc/gluster-dynamic-claim1     172.30.52.17     <none>        1/TCP                     5m
svc/heketi                     172.30.129.113   <none>        8080/TCP                  21d
svc/heketi-storage-endpoints   172.30.133.212   <none>        1/TCP                     25d
3.1.2.1.5. (Optional) Providing a Custom Volume Name Prefix for Persistent Volumes
You can provide a custom volume name prefix for dynamically provisioned persistent volumes. The provisioned volumes can then be searched and filtered based on:
- Any string that was provided as the field value of "volumenameprefix" in the storageclass file.
- Persistent volume claim name.
- Project / Namespace name.
To set the custom volume name prefix, ensure that the parameter volumenameprefix has been added to the storage class file. For more information, see Section 3.1.2.1.2, "Registering a Storage Class".
To verify that the custom volume name prefix is set, execute the following command:
# oc describe pv <pv_name>
For example:
# oc describe pv pvc-f92e3065-25e8-11e8-8f17-005056a55501
Name: pvc-f92e3065-25e8-11e8-8f17-005056a55501
Labels: <none>
Annotations: Description=Gluster-Internal: Dynamically provisioned PV
gluster.kubernetes.io/heketi-volume-id=027c76b24b1a3ce3f94d162f843529c8
gluster.org/type=file
kubernetes.io/createdby=heketi-dynamic-provisioner
pv.beta.kubernetes.io/gid=2000
pv.kubernetes.io/bound-by-controller=yes
pv.kubernetes.io/provisioned-by=kubernetes.io/glusterfs
volume.beta.kubernetes.io/mount-options=auto_unmount
StorageClass: gluster-container-prefix
Status: Bound
Claim: glusterfs/claim1
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 1Gi
Message:
Source:
Type: Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
EndpointsName: glusterfs-dynamic-claim1
Path: test-vol_glusterfs_claim1_f9352e4c-25e8-11e8-b460-005056a55501
ReadOnly: false
Events: <none>

Path will have the custom volume name prefix ("test-vol" in this case) attached to the namespace and the claim name.
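As a sketch of how the prefix helps with filtering (the column expression and the test-vol prefix are illustrative, matching the example above), you can list the gluster paths of all persistent volumes and grep for the prefix:

# oc get pv -o custom-columns=NAME:.metadata.name,PATH:.spec.glusterfs.path | grep test-vol
pvc-f92e3065-25e8-11e8-8f17-005056a55501   test-vol_glusterfs_claim1_f9352e4c-25e8-11e8-b460-005056a55501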
3.1.2.1.6. Using the Claim in a Pod
- Use the claim in the application. For example:
# cat app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      name: busybox
      volumeMounts:
        - mountPath: /usr/share/busybox
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: claim1

# oc create -f app.yaml
pod "busybox" created
For more information about using the glusterfs claim in the application, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.10/html-single/configuring_clusters/#install-config-storage-examples-gluster-example.
- To verify that the pod is created, execute the following command:
# oc get pods -n storage-project
NAME                                READY     STATUS    RESTARTS   AGE
storage-project-router-1-at7tf      1/1       Running   0          13d
busybox                             1/1       Running   0          8s
glusterfs-dc-192.168.68.2-1-hu28h   1/1       Running   0          7d
glusterfs-dc-192.168.68.3-1-ytnlg   1/1       Running   0          7d
glusterfs-dc-192.168.68.4-1-juqcq   1/1       Running   0          13d
heketi-1-9r47c                      1/1       Running   0          13d
- To verify that the persistent volume is mounted inside the container, execute the following command:
# oc rsh busybox
/ $ df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mapper/docker-253:0-666733-38050a1d2cdb41dc00d60f25a7a295f6e89d4c529302fb2b93d8faa5a3205fb9
                         10.0G     33.8M      9.9G   0% /
tmpfs                    23.5G         0     23.5G   0% /dev
tmpfs                    23.5G         0     23.5G   0% /sys/fs/cgroup
/dev/mapper/rhgs-root    17.5G      3.6G     13.8G  21% /run/secrets
/dev/mapper/rhgs-root    17.5G      3.6G     13.8G  21% /dev/termination-log
/dev/mapper/rhgs-root    17.5G      3.6G     13.8G  21% /etc/resolv.conf
/dev/mapper/rhgs-root    17.5G      3.6G     13.8G  21% /etc/hostname
/dev/mapper/rhgs-root    17.5G      3.6G     13.8G  21% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
192.168.68.2:vol_5b05cf2e5404afe614f8afa698792bae
                          4.0G     32.6M      4.0G   1% /usr/share/busybox
tmpfs                    23.5G     16.0K     23.5G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                    23.5G         0     23.5G   0% /proc/kcore
tmpfs                    23.5G         0     23.5G   0% /proc/timer_stats
3.1.2.1.7. Expanding Persistent Volume Claim
To increase the PV claim value, ensure to set the allowVolumeExpansion parameter in the storageclass file to true. For more information, see Section 3.1.2.1.2, "Registering a Storage Class".
- If the feature gate ExpandPersistentVolumes and the admission config PersistentVolumeClaimResize are not enabled, then edit the master configuration file located at /etc/origin/master/master-config.yaml on the master to enable them. For example:

To enable the feature gate ExpandPersistentVolumes:

apiServerArguments:
  runtime-config:
  - apis/settings.k8s.io/v1alpha1=true
  storage-backend:
  - etcd3
  storage-media-type:
  - application/vnd.kubernetes.protobuf
  feature-gates:
  - ExpandPersistentVolumes=true
controllerArguments:
  feature-gates:
  - ExpandPersistentVolumes=true

To enable the admission config PersistentVolumeClaimResize, add the following under admissionConfig in the master-config file:

admissionConfig:
  pluginConfig:
    PersistentVolumeClaimResize:
      configuration:
        apiVersion: v1
        disable: false
        kind: DefaultAdmissionConfig

- Restart the OpenShift master by running the following commands:
# /usr/local/bin/master-restart api
# /usr/local/bin/master-restart controllers
- To check the existing persistent volume size, execute the following command on the app pod:
# oc rsh busybox
# df -h
For example:
# oc rsh busybox
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mapper/docker-253:0-100702042-0fa327369e7708b67f0c632d83721cd9a5b39fd3a7b3218f3ff3c83ef4320ce7
                         10.0G     34.2M      9.9G   0% /
tmpfs                    15.6G         0     15.6G   0% /dev
tmpfs                    15.6G         0     15.6G   0% /sys/fs/cgroup
/dev/mapper/rhel_dhcp47--150-root
                         50.0G      7.4G     42.6G  15% /dev/termination-log
/dev/mapper/rhel_dhcp47--150-root
                         50.0G      7.4G     42.6G  15% /run/secrets
/dev/mapper/rhel_dhcp47--150-root
                         50.0G      7.4G     42.6G  15% /etc/resolv.conf
/dev/mapper/rhel_dhcp47--150-root
                         50.0G      7.4G     42.6G  15% /etc/hostname
/dev/mapper/rhel_dhcp47--150-root
                         50.0G      7.4G     42.6G  15% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
10.70.46.177:test-vol_glusterfs_claim10_d3e15a8b-26b3-11e8-acdf-005056a55501
                          2.0G     32.6M      2.0G   2% /usr/share/busybox
tmpfs                    15.6G     16.0K     15.6G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                    15.6G         0     15.6G   0% /proc/kcore
tmpfs                    15.6G         0     15.6G   0% /proc/timer_list
tmpfs                    15.6G         0     15.6G   0% /proc/timer_stats
tmpfs                    15.6G         0     15.6G   0% /proc/sched_debug
tmpfs                    15.6G         0     15.6G   0% /proc/scsi
tmpfs                    15.6G         0     15.6G   0% /sys/firmware

In this example the persistent volume size is 2Gi.
- To edit the persistent volume claim value, execute the following command and edit the following storage parameter:
resources:
  requests:
    storage: <storage_value>

# oc edit pvc <claim_name>
For example, to expand the storage value to 20Gi:

# oc edit pvc claim3
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-class: gluster-container2
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
  creationTimestamp: 2018-02-14T07:42:00Z
  name: claim3
  namespace: storage-project
  resourceVersion: "283924"
  selfLink: /api/v1/namespaces/storage-project/persistentvolumeclaims/claim3
  uid: 8a9bb0df-115a-11e8-8cb3-005056a5a340
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  volumeName: pvc-8a9bb0df-115a-11e8-8cb3-005056a5a340
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  phase: Bound
- To verify, execute the following command on the app pod:
# oc rsh busybox
/ # df -h
For example:
# oc rsh busybox
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mapper/docker-253:0-100702042-0fa327369e7708b67f0c632d83721cd9a5b39fd3a7b3218f3ff3c83ef4320ce7
                         10.0G     34.2M      9.9G   0% /
tmpfs                    15.6G         0     15.6G   0% /dev
tmpfs                    15.6G         0     15.6G   0% /sys/fs/cgroup
/dev/mapper/rhel_dhcp47--150-root
                         50.0G      7.4G     42.6G  15% /dev/termination-log
/dev/mapper/rhel_dhcp47--150-root
                         50.0G      7.4G     42.6G  15% /run/secrets
/dev/mapper/rhel_dhcp47--150-root
                         50.0G      7.4G     42.6G  15% /etc/resolv.conf
/dev/mapper/rhel_dhcp47--150-root
                         50.0G      7.4G     42.6G  15% /etc/hostname
/dev/mapper/rhel_dhcp47--150-root
                         50.0G      7.4G     42.6G  15% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
10.70.46.177:test-vol_glusterfs_claim10_d3e15a8b-26b3-11e8-acdf-005056a55501
                         20.0G     65.3M     19.9G   1% /usr/share/busybox
tmpfs                    15.6G     16.0K     15.6G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                    15.6G         0     15.6G   0% /proc/kcore
tmpfs                    15.6G         0     15.6G   0% /proc/timer_list
tmpfs                    15.6G         0     15.6G   0% /proc/timer_stats
tmpfs                    15.6G         0     15.6G   0% /proc/sched_debug
tmpfs                    15.6G         0     15.6G   0% /proc/scsi
tmpfs                    15.6G         0     15.6G   0% /sys/firmware

It is observed that the size is changed from 2Gi (earlier) to 20Gi.
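As an alternative to editing the claim interactively, the same storage change can be applied non-interactively with a patch. This is a sketch using the claim name claim3 from the example above; the same feature-gate prerequisites apply:

# oc patch pvc claim3 -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
persistentvolumeclaim "claim3" patched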
3.1.2.1.8. Deleting a Persistent Volume Claim
- To delete a claim, execute the following command:
# oc delete pvc <claim-name>
For example:
# oc delete pvc claim1
persistentvolumeclaim "claim1" deleted
- To verify if the claim is deleted, execute the following command:
# oc get pvc <claim-name>
For example:
# oc get pvc claim1
No resources found.
When the user deletes a persistent volume claim that is bound to a persistent volume created by dynamic provisioning, apart from deleting the persistent volume claim, Kubernetes will also delete the persistent volume, endpoints, service, and the actual volume. Execute the following commands to verify this:
- To verify if the persistent volume is deleted, execute the following command:
# oc get pv <pv-name>
For example:
# oc get pv pvc-962aa6d1-bddb-11e6-be23-5254009fc65b
No resources found.
- To verify if the endpoints are deleted, execute the following command:
# oc get endpoints <endpointname>
For example:
# oc get endpoints gluster-dynamic-claim1
No resources found.
- To verify if the service is deleted, execute the following command:
# oc get service <servicename>
For example:
# oc get service gluster-dynamic-claim1
No resources found.
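Note that when the reclaim policy is set to Retain (see the note in Section 3.1.2.1.3), the underlying heketi and gluster volumes are not removed automatically. The following is a sketch of the manual cleanup sequence; the volume ID and PV name must first be looked up and are placeholders here:

# heketi-cli volume list
# heketi-cli volume delete <volume_id>
# oc delete pv <pv_name>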
3.1.3. Volume Security
To create a statically provisioned volume with a GID, execute the following command:
$ heketi-cli volume create --size=100 --persistent-volume-file=pv001.json --gid=590
Two new parameters, gidMin and gidMax, are introduced with the dynamic provisioner. These values allow the administrator to configure the GID range for the volume in the storage class. To set up the GID values and provide volume security for dynamically provisioned volumes, execute the following commands:
- Create a storage class file with the GID values. For example:
# cat glusterfs-storageclass.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-container
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "2000"
  gidMax: "4000"
Note
If the gidMin and gidMax values are not provided, then dynamically provisioned volumes will have a GID between 2000 and 2147483647.
- Create a persistent volume claim. For more information, see Section 3.1.2.1.3, "Creating a Persistent Volume Claim".
- Use the claim in the pod. Ensure that this pod is non-privileged. For more information, see Section 3.1.2.1.6, "Using the Claim in a Pod".
- To verify if the GID is within the range specified, execute the following command:
# oc rsh busybox
$ id
For example:
$ id
uid=1000060000 gid=0(root) groups=0(root),2001
where 2001 in the above output is the allocated GID for the persistent volume, which is within the range specified in the storage class. You can write to this volume with the allocated GID.

Note
When the persistent volume claim is deleted, the GID of the persistent volume is released from the pool.
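As a quick check that the supplemental GID actually grants write access to the mount (the file name is illustrative; the mount path matches the busybox example above):

/ $ touch /usr/share/busybox/testfile && echo "write OK"
write OK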
