Chapter 9. Creating Persistent Volumes

OpenShift Container Platform clusters can be provisioned with persistent storage using GlusterFS.
Persistent volumes (PVs) and persistent volume claims (PVCs) can share volumes across a single project. While the GlusterFS-specific information contained in a PV definition could also be defined directly in a pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts.
Binding PVs by Labels and Selectors

Labels are an OpenShift Container Platform feature that supports user-defined tags (key-value pairs) as part of an object’s specification. Their primary purpose is to enable the arbitrary grouping of objects by defining identical labels among them. These labels can then be targeted by selectors to match all objects with specified label values. It is this functionality we will take advantage of to enable our PVC to bind to our PV.

You can use labels to identify common attributes or characteristics shared among volumes. For example, you can define the gluster volume to have a custom attribute (key) named storage-tier with a value of gold assigned. A claim will be able to select a PV with storage-tier=gold to match this PV.
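A minimal sketch of this pattern is shown below (the object names and the gluster volume path are illustrative placeholders): the PV carries a storage-tier: gold label, and the PVC uses a selector with matchLabels so that it binds only to PVs carrying that label.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv-example          # illustrative name
  labels:
    storage-tier: gold
spec:
  capacity:
    storage: 12Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  glusterfs:
    endpoints: glusterfs-cluster
    path: vol_example                # replace with your gluster volume name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim-example        # illustrative name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 12Gi
  selector:
    matchLabels:
      storage-tier: gold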
More details on provisioning volumes for file-based storage are provided in Section 9.1, “File Storage”. Similarly, further details on provisioning volumes for block-based storage are provided in Section 9.2, “Block Storage”.

9.1. File Storage

File storage, also called file-level or file-based storage, stores data in a hierarchical structure. The data is saved in files and folders, and presented to both the system storing it and the system retrieving it in the same format. You can provision volumes either statically or dynamically for file-based storage.

9.1.1. Static Provisioning of Volumes

To enable persistent volume support in OpenShift and Kubernetes, a few endpoints and a service must be created:

The sample glusterfs endpoint file (sample-gluster-endpoints.yaml) and the sample glusterfs service file (sample-gluster-service.yaml) are available in the /usr/share/heketi/templates/ directory.

Note

Ensure that you copy the sample glusterfs endpoint file and the sample glusterfs service file to a location of your choice, and then edit the copied files. For example:
# cp /usr/share/heketi/templates/sample-gluster-endpoints.yaml /<path>/gluster-endpoints.yaml
  1. To specify the endpoints you want to create, update the copied sample-gluster-endpoints.yaml file with the endpoints to be created based on the environment. Each Red Hat Gluster Storage trusted storage pool requires its own endpoint with the IP of the nodes in the trusted storage pool.
    # cat sample-gluster-endpoints.yaml
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: glusterfs-cluster
    subsets:
      - addresses:
          - ip: 192.168.10.100
        ports:
          - port: 1
      - addresses:
          - ip: 192.168.10.101
        ports:
          - port: 1
      - addresses:
          - ip: 192.168.10.102
        ports:
          - port: 1
    name: The name of the endpoint.
    ip: The IP address of the Red Hat Gluster Storage nodes.
  2. Execute the following command to create the endpoints:
    # oc create -f <name_of_endpoint_file>
    For example:
    # oc create -f sample-gluster-endpoints.yaml
    endpoints "glusterfs-cluster" created
  3. To verify that the endpoints are created, execute the following command:
    # oc get endpoints
    For example:
    # oc get endpoints
    NAME                       ENDPOINTS                                                     AGE
    storage-project-router     192.168.121.233:80,192.168.121.233:443,192.168.121.233:1936   2d
    glusterfs-cluster          192.168.121.168:1,192.168.121.172:1,192.168.121.233:1         3s
    heketi                     10.1.1.3:8080                                                 2m
    heketi-storage-endpoints   192.168.121.168:1,192.168.121.172:1,192.168.121.233:1         3m
  4. Execute the following command to create a gluster service:
    # oc create -f <name_of_service_file>
    For example:
    # oc create -f sample-gluster-service.yaml
    service "glusterfs-cluster" created
    # cat sample-gluster-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: glusterfs-cluster
    spec:
      ports:
        - port: 1
  5. To verify that the service is created, execute the following command:
    # oc get service
    For example:
    # oc get service
    NAME                       CLUSTER-IP      EXTERNAL-IP   PORT(S)                   AGE
    storage-project-router     172.30.94.109   <none>        80/TCP,443/TCP,1936/TCP   2d
    glusterfs-cluster          172.30.212.6    <none>        1/TCP                     5s
    heketi                     172.30.175.7    <none>        8080/TCP                  2m
    heketi-storage-endpoints   172.30.18.24    <none>        1/TCP                     3m

    Note

    The endpoints and the services must be created for each project that requires persistent storage.
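    For example, a sketch of creating the same endpoints and service in another project by passing the project name to oc create (the project name below is a placeholder):
    # oc create -f sample-gluster-endpoints.yaml -n <project_name>
    # oc create -f sample-gluster-service.yaml -n <project_name>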
  6. Create a 100G persistent volume with Replica 3 from GlusterFS and output a persistent volume specification describing this volume to the file pv001.json:
    $ heketi-cli volume create --size=100 --persistent-volume-file=pv001.json
    cat pv001.json 
    {
      "kind": "PersistentVolume",
      "apiVersion": "v1",
      "metadata": {
        "name": "glusterfs-f8c612ee",
        "creationTimestamp": null
      },
      "spec": {
        "capacity": {
          "storage": "100Gi"
        },
        "glusterfs": {
          "endpoints": "TYPE ENDPOINT HERE",
          "path": "vol_f8c612eea57556197511f6b8c54b6070"
        },
        "accessModes": [
          "ReadWriteMany"
        ],
        "persistentVolumeReclaimPolicy": "Retain"
      },
      "status": {}
    

    Important

    You must manually add the Labels information to the .json file.
    Following is the example YAML file for reference:
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-storage-project-glusterfs1
      labels:
        storage-tier: gold
    spec:
      capacity:
        storage: 12Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      glusterfs:
        endpoints: TYPE ENDPOINTS NAME HERE
        path: vol_e6b77204ff54c779c042f570a71b1407
    name: The name of the volume.
    storage: The amount of storage allocated to this volume.
    glusterfs: The volume type being used, in this case the glusterfs plug-in.
    endpoints: The endpoints name that defines the trusted storage pool created.
    path: The Red Hat Gluster Storage volume that will be accessed from the trusted storage pool.
    accessModes: accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
    labels: Use labels to identify common attributes or characteristics shared among volumes. In this case, the gluster volume is defined with a custom attribute (key) named storage-tier and a value of gold. A claim will be able to select a PV with storage-tier=gold to match this PV.

    Note

    • heketi-cli also accepts the endpoint name on the command line (--persistent-volume-endpoint="TYPE ENDPOINT HERE"). The output can then be piped to oc create -f - to create the persistent volume immediately (see the sketch after this note).
    • If there are multiple Red Hat Gluster Storage trusted storage pools in your environment, you can check on which trusted storage pool the volume is created using the heketi-cli volume list command. This command lists the cluster name. You can then update the endpoint information in the pv001.json file accordingly.
    • When creating a Heketi volume with only two nodes and the replica count set to the default value of three (replica 3), Heketi displays a "No space" error because there is no space to create a replica set of three disks on three different nodes.
    • If all heketi-cli write operations (for example, volume create, cluster create) fail and the read operations (for example, topology info, volume info) succeed, it is likely that the gluster volume is operating in read-only mode.
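    For example, a minimal sketch that creates the gluster volume and the persistent volume in one step, assuming the endpoint created earlier in this section is named glusterfs-cluster and that your heketi-cli build supports writing the specification to stdout with the --persistent-volume flag:
    # heketi-cli volume create --size=100 --persistent-volume \
        --persistent-volume-endpoint=glusterfs-cluster | oc create -f -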
  7. Edit the pv001.json file and enter the name of the endpoint in the endpoints section:
    cat pv001.json
    {
      "kind": "PersistentVolume",
      "apiVersion": "v1",
      "metadata": {
        "name": "glusterfs-f8c612ee",
        "creationTimestamp": null,
        "labels": {
          "storage-tier": "gold"
        }
      },
      "spec": {
        "capacity": {
          "storage": "12Gi"
        },
        "glusterfs": {
          "endpoints": "glusterfs-cluster",
          "path": "vol_f8c612eea57556197511f6b8c54b6070"
        },
        "accessModes": [
          "ReadWriteMany"
        ],
        "persistentVolumeReclaimPolicy": "Retain"
      },
      "status": {}
    }
  8. Create a persistent volume by executing the following command:
    # oc create -f pv001.json
    For example:
    # oc create -f pv001.json
    persistentvolume "glusterfs-4fc22ff9" created
  9. To verify that the persistent volume is created, execute the following command:
    # oc get pv
    For example:
    # oc get pv
    
    NAME                 CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
    glusterfs-4fc22ff9   100Gi      RWX           Available                       4s
  10. Create a persistent volume claim file. For example:
    # cat pvc.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: glusterfs-claim
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 100Gi
      selector:
        matchLabels:
          storage-tier: gold
  11. Bind the persistent volume to the persistent volume claim by executing the following command:
    # oc create -f pvc.yaml
    For example:
    # oc create -f pvc.yaml
    persistentvolumeclaim"glusterfs-claim" created
  12. To verify that the persistent volume and the persistent volume claim are bound, execute the following commands:
    # oc get pv
    # oc get pvc
    For example:
    # oc get pv
    
    NAME                 CAPACITY   ACCESSMODES   STATUS    CLAIM                  REASON    AGE
    glusterfs-4fc22ff9   100Gi      RWX           Bound     storage-project/glusterfs-claim             1m
    # oc get pvc
    
    NAME              STATUS    VOLUME               CAPACITY   ACCESSMODES   AGE
    glusterfs-claim   Bound     glusterfs-4fc22ff9   100Gi      RWX           11s
  13. The claim can now be used in the application:
    For example:
    # cat app.yaml
    
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
    spec:
      containers:
        - image: busybox
          command:
            - sleep
            - "3600"
          name: busybox
          volumeMounts:
            - mountPath: /usr/share/busybox
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: glusterfs-claim
    # oc create -f app.yaml
    pod "busybox" created
  14. To verify that the pod is created, execute the following command:
    # oc get pods
  15. To verify that the persistent volume is mounted inside the container, execute the following command:
    # oc rsh busybox
    / $ df -h
    Filesystem                Size      Used Available Use% Mounted on
    /dev/mapper/docker-253:0-1310998-81732b5fd87c197f627a24bcd2777f12eec4ee937cc2660656908b2fa6359129
                          100.0G     34.1M     99.9G   0% /
    tmpfs                     1.5G         0      1.5G   0% /dev
    tmpfs                     1.5G         0      1.5G   0% /sys/fs/cgroup
    192.168.121.168:vol_4fc22ff934e531dec3830cfbcad1eeae
                           99.9G     66.1M     99.9G   0% /usr/share/busybox
    tmpfs                     1.5G         0      1.5G   0% /run/secrets
    /dev/mapper/vg_vagrant-lv_root
                           37.7G      3.8G     32.0G  11% /dev/termination-log
    tmpfs                     1.5G     12.0K      1.5G   0% /var/run/secrets/kubernetes.io/serviceaccount

Note

If you encounter a permission denied error on the mount point, then refer to section Gluster Volume Security at: https://access.redhat.com/documentation/en/openshift-container-platform/3.6/single/installation-and-configuration/#gluster-volume-security.

9.1.2. Dynamic Provisioning of Volumes

Dynamic provisioning enables provisioning of a Red Hat Gluster Storage volume to a running application container without having to pre-create the volume. The volume is created dynamically as the claim request comes in, and a volume of exactly the same size is provisioned to the application containers.

Note

Dynamically provisioned volumes are supported from Container-Native Storage 3.4. If you have any statically provisioned volumes and require more information about managing them, refer to Section 9.1.1, “Static Provisioning of Volumes”.

9.1.2.1. Configuring Dynamic Provisioning of Volumes

To configure dynamic provisioning of volumes, the administrator must define StorageClass objects that describe named "classes" of storage offered in a cluster. After creating a storage class, a secret for Heketi authentication must be created before proceeding with the creation of a persistent volume claim.
9.1.2.1.1. Registering a Storage Class
When configuring a StorageClass object for persistent volume provisioning, the administrator must describe the type of provisioner to use and the parameters that will be used by the provisioner when it provisions a PersistentVolume belonging to the class.
  1. To create a storage class execute the following command:
    # cat glusterfs-storageclass.yaml
    
    
    apiVersion: storage.k8s.io/v1beta1
    kind: StorageClass
    metadata:
      name: gluster-container
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
      restuser: "admin"
      volumetype: "replicate:3"
      clusterid: "630372ccdc720a92c681fb928f27b53f,796e6db1981f369ea0340913eeea4c9a"
      secretNamespace: "default"
      secretName: "heketi-secret"
      volumeoptions: "client.ssl on, server.ssl on"
      volumenameprefix: "test-vol"
    allowVolumeExpansion: "true"
    
    where,
    resturl: Gluster REST service/Heketi service URL that provisions gluster volumes on demand. The general format must be IPaddress:Port and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in the OpenShift/Kubernetes setup, this can have a format similar to http://heketi-storage-project.cloudapps.mystorage.com, where the FQDN is a resolvable Heketi service URL. (A sketch using the IPaddress:Port form appears after these parameter descriptions.)
    restuser: Gluster REST service/Heketi user who has access to create volumes in the trusted storage pool.
    volumetype: It specifies the volume type that is being used.

    Note

    Distributed-Three-way replication is the only supported volume type.
    clusterid: The ID of the cluster that will be used by Heketi when provisioning the volume. It can also be a list of comma-separated cluster IDs. This is an optional parameter.

    Note

    To get the cluster ID, execute the following command:
    # heketi-cli cluster list
    secretNamespace + secretName: Identification of the Secret instance that contains the user password to use when communicating with the Gluster REST service. These parameters are optional. An empty password is used when both secretNamespace and secretName are omitted.

    Note

    When persistent volumes are dynamically provisioned, the Gluster plugin automatically creates an endpoint and a headless service with the name gluster-dynamic-<claimname>. This dynamic endpoint and service are deleted automatically when the persistent volume claim is deleted.
    volumeoptions: This parameter allows you to create glusterfs volumes with encryption enabled by setting the parameter to "client.ssl on, server.ssl on". For more information on enabling encryption, see Chapter 17, Enabling Encryption.
    volumenameprefix: This is an optional parameter. It adds a custom prefix to the name of the volume created by heketi. For more information, see Section 9.1.2.1.5, “(Optional) Providing a Custom Volume Name Prefix for Persistent Volumes”.

    Note

    The value for this parameter cannot contain `_` in the storageclass.
    allowVolumeExpansion: To increase the PV claim value, ensure that the allowVolumeExpansion parameter in the storageclass file is set to true. For more information, see Section 9.1.2.1.7, “Expanding Persistent Volume Claim”.
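    For reference, the following is a minimal sketch of a storage class that uses the IPaddress:Port form of resturl instead of a routable route; the address is a placeholder for your Heketi service, and port 8080 is the Heketi port shown elsewhere in this chapter:
    apiVersion: storage.k8s.io/v1beta1
    kind: StorageClass
    metadata:
      name: gluster-container
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://<heketi_service_ip>:8080"
      restuser: "admin"
      secretNamespace: "default"
      secretName: "heketi-secret"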
  2. To register the storage class with OpenShift, execute the following command:
    # oc create -f glusterfs-storageclass.yaml
    storageclass "gluster-container" created
  3. To get the details of the storage class, execute the following command:
    # oc describe storageclass gluster-container
                                   
    Name: gluster-container
    IsDefaultClass: No
    Annotations: <none>
    Provisioner: kubernetes.io/glusterfs
    Parameters: resturl=http://heketi-storage-project.cloudapps.mystorage.com,restuser=admin,secretName=heketi-secret,secretNamespace=default
    No events.
9.1.2.1.2. Creating Secret for Heketi Authentication
To create a secret for Heketi authentication, execute the following commands:

Note

If the admin-key value (secret to access heketi to get the volume details) was not set during the deployment of Container-Native Storage, then the following steps can be omitted.
  1. Create an encoded value for the password by executing the following command:
    # echo -n "<key>" | base64
    where "key" is the value for "admin-key" that was created while deploying Container-Native Storage.
    For example:
    # echo -n "mypassword" | base64
    bXlwYXNzd29yZA==
  2. Create a secret file. A sample secret file is provided below:
    # cat glusterfs-secret.yaml
                                   
    apiVersion: v1
    kind: Secret
    metadata:
      name: heketi-secret
      namespace: default
    data:
      # base64 encoded password. E.g.: echo -n "mypassword" | base64
      key: bXlwYXNzd29yZA==
    type: kubernetes.io/glusterfs
  3. Register the secret in OpenShift by executing the following command:
    # oc create -f glusterfs-secret.yaml
    secret "heketi-secret" created
9.1.2.1.3. Creating a Persistent Volume Claim
To create a persistent volume claim execute the following commands:
  1. Create a Persistent Volume Claim file. A sample persistent volume claim is provided below:
    # cat glusterfs-pvc-claim1.yaml
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: claim1
      annotations:
        volume.beta.kubernetes.io/storage-class: gluster-container
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 4Gi
  2. Register the claim by executing the following command:
    # oc create -f glusterfs-pvc-claim1.yaml
    persistentvolumeclaim "claim1" created
  3. To get the details of the claim, execute the following command:
    # oc describe pvc <claim_name>
    For example:
    # oc describe pvc claim1
                                   
    Name: claim1
    Namespace: default
    StorageClass: gluster-container
    Status: Bound
    Volume: pvc-54b88668-9da6-11e6-965e-54ee7551fd0c
    Labels: <none>
    Capacity: 4Gi
    Access Modes: RWO
    No events.
9.1.2.1.4. Verifying Claim Creation
To verify if the claim is created, execute the following commands:
  1. To get the details of the persistent volume claim and persistent volume, execute the following command:
    # oc get pv,pvc
    
    NAME                                          CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS     CLAIM                    REASON    AGE
    pv/pvc-962aa6d1-bddb-11e6-be23-5254009fc65b   4Gi        RWO           Delete          Bound      storage-project/claim1             3m
    
    NAME         STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
    pvc/claim1   Bound     pvc-962aa6d1-bddb-11e6-be23-5254009fc65b   4Gi        RWO           4m
  2. To validate if the endpoint and the services are created as part of claim creation, execute the following command:
    # oc get endpoints,service
    
    NAME                          ENDPOINTS                                            AGE
    ep/storage-project-router     192.168.68.3:443,192.168.68.3:1936,192.168.68.3:80   28d
    ep/gluster-dynamic-claim1     192.168.68.2:1,192.168.68.3:1,192.168.68.4:1         5m
    ep/heketi                     10.130.0.21:8080                                     21d
    ep/heketi-storage-endpoints   192.168.68.2:1,192.168.68.3:1,192.168.68.4:1         25d
    
    NAME                           CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
    svc/storage-project-router     172.30.166.64    <none>        80/TCP,443/TCP,1936/TCP   28d
    svc/gluster-dynamic-claim1     172.30.52.17     <none>        1/TCP                     5m
    svc/heketi                     172.30.129.113   <none>        8080/TCP                  21d
    svc/heketi-storage-endpoints   172.30.133.212   <none>        1/TCP                     25d
9.1.2.1.5. (Optional) Providing a Custom Volume Name Prefix for Persistent Volumes
You can provide a custom volume name prefix to the persistent volume that is created. By providing a custom volume name prefix, users can now easily search/filter the volumes based on:
  • Any string that was provided as the field value of "volumenameprefix" in the storageclass file.
  • Persistent volume claim name.
  • Project / Namespace name.
To set the name, ensure that you have added the volumenameprefix parameter to the storage class file. For more information, refer to Section 9.1.2.1.1, “Registering a Storage Class”.

Note

The value for this parameter cannot contain `_` in the storageclass.
To verify if the custom volume name prefix is set, execute the following command:
# oc describe pv <pv_name>
For example:
    # oc describe pv pvc-f92e3065-25e8-11e8-8f17-005056a55501
    Name:            pvc-f92e3065-25e8-11e8-8f17-005056a55501
    Labels:          <none>
    Annotations:     Description=Gluster-Internal: Dynamically provisioned PV
                     gluster.kubernetes.io/heketi-volume-id=027c76b24b1a3ce3f94d162f843529c8
                     gluster.org/type=file
                     kubernetes.io/createdby=heketi-dynamic-provisioner
                     pv.beta.kubernetes.io/gid=2000
                     pv.kubernetes.io/bound-by-controller=yes
                     pv.kubernetes.io/provisioned-by=kubernetes.io/glusterfs
                     volume.beta.kubernetes.io/mount-options=auto_unmount
    StorageClass:    gluster-container-prefix
    Status:          Bound
    Claim:           glusterfs/claim1
    Reclaim Policy:  Delete
    Access Modes:    RWO
    Capacity:        1Gi
    Message:        
    Source:
        Type:           Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
        EndpointsName:  glusterfs-dynamic-claim1
        Path:           test-vol_glusterfs_claim1_f9352e4c-25e8-11e8-b460-005056a55501
        ReadOnly:       false
    Events:             <none>
The value of Path is the custom volume name prefix ("test-vol" in this case) followed by the namespace and the claim name.
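To list only the gluster volumes that carry the custom prefix, one simple sketch is to filter the output of the heketi-cli volume list command mentioned earlier (assuming the test-vol prefix used in this example):
# heketi-cli volume list | grep test-vol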
9.1.2.1.6. Using the Claim in a Pod
Execute the following steps to use the claim in a pod.
  1. To use the claim in the application, create a pod that references it. For example:
    # cat app.yaml
    
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
    spec:
      containers:
        - image: busybox
          command:
            - sleep
            - "3600"
          name: busybox
          volumeMounts:
            - mountPath: /usr/share/busybox
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: claim1
    # oc create -f app.yaml
    pod "busybox" created
  2. To verify that the pod is created, execute the following command:
    # oc get pods
    
    NAME                                READY     STATUS         RESTARTS   AGE
    storage-project-router-1-at7tf      1/1       Running        0          13d
    busybox                             1/1       Running        0          8s
    glusterfs-dc-192.168.68.2-1-hu28h   1/1       Running        0          7d
    glusterfs-dc-192.168.68.3-1-ytnlg   1/1       Running        0          7d
    glusterfs-dc-192.168.68.4-1-juqcq   1/1       Running        0          13d
    heketi-1-9r47c                      1/1       Running        0          13d
  3. To verify that the persistent volume is mounted inside the container, execute the following command:
    # oc rsh busybox
    / $ df -h
    Filesystem                Size      Used Available Use% Mounted on
    /dev/mapper/docker-253:0-666733-38050a1d2cdb41dc00d60f25a7a295f6e89d4c529302fb2b93d8faa5a3205fb9
                             10.0G     33.8M      9.9G   0% /
    tmpfs                    23.5G         0     23.5G   0% /dev
    tmpfs                    23.5G         0     23.5G   0% /sys/fs/cgroup
    /dev/mapper/rhgs-root
                             17.5G      3.6G     13.8G  21% /run/secrets
    /dev/mapper/rhgs-root
                             17.5G      3.6G     13.8G  21% /dev/termination-log
    /dev/mapper/rhgs-root
                             17.5G      3.6G     13.8G  21% /etc/resolv.conf
    /dev/mapper/rhgs-root
                             17.5G      3.6G     13.8G  21% /etc/hostname
    /dev/mapper/rhgs-root
                             17.5G      3.6G     13.8G  21% /etc/hosts
    shm                      64.0M         0     64.0M   0% /dev/shm
    192.168.68.2:vol_5b05cf2e5404afe614f8afa698792bae
                              4.0G     32.6M      4.0G   1% /usr/share/busybox
    tmpfs                    23.5G     16.0K     23.5G   0% /var/run/secrets/kubernetes.io/serviceaccount
    tmpfs                    23.5G         0     23.5G   0% /proc/kcore
    tmpfs                    23.5G         0     23.5G   0% /proc/timer_stats
9.1.2.1.7. Expanding Persistent Volume Claim
To increase the PV claim value, ensure that the allowVolumeExpansion parameter in the storageclass file is set to true. For more information, refer to Section 9.1.2.1.1, “Registering a Storage Class”.
To expand the persistent volume claim value, execute the following commands:
  1. If the ExpandPersistentVolumes feature gate and the PersistentVolumeClaimResize admission plug-in are not enabled, edit the master configuration file located at /etc/origin/master/master-config.yaml on the master to enable them. For example:
    To enable the ExpandPersistentVolumes feature gate:
        apiServerArguments:
              runtime-config:
              - apis/settings.k8s.io/v1alpha1=true
              storage-backend:
              - etcd3
              storage-media-type:
              - application/vnd.kubernetes.protobuf
              feature-gates:
              - ExpandPersistentVolumes=true
        controllerArguments:
              feature-gates:
              - ExpandPersistentVolumes=true
    
    To enable the PersistentVolumeClaimResize admission plug-in, add the following under admissionConfig in the master-config file:
    admissionConfig:
      pluginConfig:
        PersistentVolumeClaimResize:
          configuration:
            apiVersion: v1
            disable: false
            kind: DefaultAdmissionConfig
    
    Then restart the OpenShift master for the changes to take effect.
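    The exact restart command depends on your OpenShift version and installation method. For example, on a systemd-based master split into API and controller services (an assumption, not taken from this guide), it might be:
    # systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers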
  2. To check the existing persistent volume size, execute the following command on the app pod:
    # oc rsh busybox
    # df -h
    For example:
    # oc rsh busybox 
    / # df -h
    Filesystem                Size      Used Available Use% Mounted on
    /dev/mapper/docker-253:0-100702042-0fa327369e7708b67f0c632d83721cd9a5b39fd3a7b3218f3ff3c83ef4320ce7
                             10.0G     34.2M      9.9G   0% /
    tmpfs                    15.6G         0     15.6G   0% /dev
    tmpfs                    15.6G         0     15.6G   0% /sys/fs/cgroup
    /dev/mapper/rhel_dhcp47--150-root
                             50.0G      7.4G     42.6G  15% /dev/termination-log
    /dev/mapper/rhel_dhcp47--150-root
                             50.0G      7.4G     42.6G  15% /run/secrets
    /dev/mapper/rhel_dhcp47--150-root
                             50.0G      7.4G     42.6G  15% /etc/resolv.conf
    /dev/mapper/rhel_dhcp47--150-root
                             50.0G      7.4G     42.6G  15% /etc/hostname
    /dev/mapper/rhel_dhcp47--150-root
                             50.0G      7.4G     42.6G  15% /etc/hosts
    shm                      64.0M         0     64.0M   0% /dev/shm
    10.70.46.177:test-vol_glusterfs_claim10_d3e15a8b-26b3-11e8-acdf-005056a55501
                              2.0G     32.6M      2.0G   2% /usr/share/busybox
    tmpfs                    15.6G     16.0K     15.6G   0% /var/run/secrets/kubernetes.io/serviceaccount
    tmpfs                    15.6G         0     15.6G   0% /proc/kcore
    tmpfs                    15.6G         0     15.6G   0% /proc/timer_list
    tmpfs                    15.6G         0     15.6G   0% /proc/timer_stats
    tmpfs                    15.6G         0     15.6G   0% /proc/sched_debug
    tmpfs                    15.6G         0     15.6G   0% /proc/scsi
    tmpfs                    15.6G         0     15.6G   0% /sys/firmware
    
    In this example, the persistent volume size is 2Gi.
  3. To edit the persistent volume claim value, execute the following command and edit the following storage parameter:
    resources:
        requests:
          storage: <storage_value>
    
    # oc edit pvc <claim_name>
    For example, to expand the storage value to 20Gi:
    # oc edit pvc claim3
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      annotations:
        pv.kubernetes.io/bind-completed: "yes"
        pv.kubernetes.io/bound-by-controller: "yes"
        volume.beta.kubernetes.io/storage-class: gluster-container2
        volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
      creationTimestamp: 2018-02-14T07:42:00Z
      name: claim3
      namespace: storage-project
      resourceVersion: "283924"
      selfLink: /api/v1/namespaces/storage-project/persistentvolumeclaims/claim3
      uid: 8a9bb0df-115a-11e8-8cb3-005056a5a340
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      volumeName: pvc-8a9bb0df-115a-11e8-8cb3-005056a5a340
    status:
      accessModes:
      - ReadWriteOnce
      capacity:
        storage: 2Gi
      phase: Bound
  4. To verify, execute the following command on the app pod:
    # oc rsh busybox
    / # df -h
    For example:
    # oc rsh busybox 
    # df -h
    Filesystem                Size      Used Available Use% Mounted on
    /dev/mapper/docker-253:0-100702042-0fa327369e7708b67f0c632d83721cd9a5b39fd3a7b3218f3ff3c83ef4320ce7
                             10.0G     34.2M      9.9G   0% /
    tmpfs                    15.6G         0     15.6G   0% /dev
    tmpfs                    15.6G         0     15.6G   0% /sys/fs/cgroup
    /dev/mapper/rhel_dhcp47--150-root
                             50.0G      7.4G     42.6G  15% /dev/termination-log
    /dev/mapper/rhel_dhcp47--150-root
                             50.0G      7.4G     42.6G  15% /run/secrets
    /dev/mapper/rhel_dhcp47--150-root
                             50.0G      7.4G     42.6G  15% /etc/resolv.conf
    /dev/mapper/rhel_dhcp47--150-root
                             50.0G      7.4G     42.6G  15% /etc/hostname
    /dev/mapper/rhel_dhcp47--150-root
                             50.0G      7.4G     42.6G  15% /etc/hosts
    shm                      64.0M         0     64.0M   0% /dev/shm
    10.70.46.177:test-vol_glusterfs_claim10_d3e15a8b-26b3-11e8-acdf-005056a55501
                             20.0G     65.3M     19.9G   1% /usr/share/busybox
    tmpfs                    15.6G     16.0K     15.6G   0% /var/run/secrets/kubernetes.io/serviceaccount
    tmpfs                    15.6G         0     15.6G   0% /proc/kcore
    tmpfs                    15.6G         0     15.6G   0% /proc/timer_list
    tmpfs                    15.6G         0     15.6G   0% /proc/timer_stats
    tmpfs                    15.6G         0     15.6G   0% /proc/sched_debug
    tmpfs                    15.6G         0     15.6G   0% /proc/scsi
    tmpfs                    15.6G         0     15.6G   0% /sys/firmware
    
    Observe that the size has changed from 2Gi to 20Gi.
9.1.2.1.8. Deleting a Persistent Volume Claim
  1. To delete a claim, execute the following command:
    # oc delete pvc <claim-name>
    For example:
    # oc delete pvc claim1
    persistentvolumeclaim "claim1" deleted
  2. To verify if the claim is deleted, execute the following command:
    # oc get pvc <claim-name>
    For example:
    # oc get pvc claim1
    No resources found.
    When the user deletes a persistent volume claim that is bound to a persistent volume created by dynamic provisioning, Kubernetes also deletes the persistent volume, endpoints, service, and the actual volume in addition to the claim. To verify this, execute the following commands:
    • To verify if the persistent volume is deleted, execute the following command:
      # oc get pv <pv-name>
      For example:
      # oc get pv pvc-962aa6d1-bddb-11e6-be23-5254009fc65b 
      No resources found.
    • To verify if the endpoints are deleted, execute the following command:
      # oc get endpoints <endpointname>
      For example:
      # oc get endpoints gluster-dynamic-claim1
      No resources found.
    • To verify if the service is deleted, execute the following command:
      # oc get service <servicename>
      For example:
      # oc get service gluster-dynamic-claim1 
      No resources found.

9.1.3. Volume Security

Volumes come with a UID/GID of 0 (root). For an application pod to write to the volume, it should also have a UID/GID of 0 (root). With the volume security feature, the administrator can now create a volume with a unique GID, and the application pod can write to the volume using this unique GID.
Volume security for statically provisioned volumes

To create a statically provisioned volume with a GID, execute the following command:

$ heketi-cli volume create --size=100 --persistent-volume-file=pv001.json --gid=590
In the above command, a 100G persistent volume with a GID of 590 is created and the output of the persistent volume specification describing this volume is added to the pv001.json file.
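For an application pod that does not run as root to write to such a volume, one option is to request membership in that group through the pod security context. The following is a minimal sketch (not taken from this guide) that reuses the pod example from earlier in this chapter and assumes the GID 590 from the command above; see the Gluster Volume Security documentation referenced earlier for the authoritative procedure:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  securityContext:
    supplementalGroups: [590]      # matches the GID assigned to the volume above
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      name: busybox
      volumeMounts:
        - mountPath: /usr/share/busybox
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: glusterfs-claim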
Volume security for dynamically provisioned volumes

Two new parameters, gidMin and gidMax, are introduced with the dynamic provisioner. These values allow the administrator to configure the GID range for the volume in the storage class. To set up the GID values and provide volume security for dynamically provisioned volumes, execute the following commands:

  1. Create a storage class file with the GID values. For example:
    # cat glusterfs-storageclass.yaml
    
    apiVersion: storage.k8s.io/v1beta1
    kind: StorageClass
    metadata:
      name: gluster-container
    provisioner: kubernetes.io/glusterfs
    parameters:
      resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
      restuser: "admin"
      secretNamespace: "default"
      secretName: "heketi-secret"
      gidMin: "2000"
      gidMax: "4000"

    Note

    If the gidMin and gidMax values are not provided, then dynamically provisioned volumes will have a GID between 2000 and 2147483647.
  2. Create a persistent volume claim. For more information, see Section 9.1.2.1.3, “Creating a Persistent Volume Claim”.
  3. Use the claim in the pod. Ensure that this pod is non-privileged. For more information, see Section 9.1.2.1.6, “Using the Claim in a Pod”.
  4. To verify if the GID is within the range specified, execute the following command:
    # oc rsh busybox
    $ id
    For example:
    $ id
    uid=1000060000 gid=0(root) groups=0(root),2001
    where 2001 in the above output is the allocated GID for the persistent volume, which is within the range specified in the storage class. You can write to this volume with the allocated GID (a quick write check is sketched after the following note).

    Note

    When the persistent volume claim is deleted, the GID of the persistent volume is released from the pool.
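    As a quick write check, a sketch assuming the claim is mounted at /usr/share/busybox as in the pod examples earlier in this chapter:
    / $ touch /usr/share/busybox/testfile
    / $ ls -l /usr/share/busybox/testfile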

9.1.4. Enabling Volume Metrics

To enable volume metrics for the glusterfs plugin and collect statistics for GlusterFS PVs, set up Prometheus, a monitoring toolkit. You can use Prometheus to visualize metrics and alerts for OpenShift Container Platform system resources.
The following PV metrics can be viewed in Prometheus:
  • kubelet_volume_stats_available_bytes: Number of available bytes in the volume.
  • kubelet_volume_stats_capacity_bytes: Capacity in bytes of the volume.
  • kubelet_volume_stats_inodes: Maximum number of inodes in the volume.
  • kubelet_volume_stats_inodes_free: Number of free inodes in the volume.
  • kubelet_volume_stats_inodes_used: Number of used inodes in the volume.
  • kubelet_volume_stats_used_bytes: Number of used bytes in the volume.
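For example, a sketch of a PromQL expression that combines two of these metrics to report the percentage of used capacity per volume (this assumes your Prometheus instance scrapes the kubelet metrics listed above; it is not taken from this guide):
100 * kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes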
Viewing Metrics

To view any metrics:

  1. Add the metric name in Prometheus, and click Execute.
  2. In the Graph tab, the value of the metric for the volume is displayed as a graph.
    For example, to check the available bytes, add the kubelet_volume_stats_available_bytes metric to the search bar in Prometheus. On clicking Execute, the available bytes value is depicted as a graph. You can hover the mouse over the line to get more details.
    Figure: Viewing Metrics