Chapter 5. Creating Persistent Volumes

OpenShift Container Platform clusters can be provisioned with persistent storage using GlusterFS.
Persistent volumes (PVs) and persistent volume claims (PVCs) can share volumes across a single project. While the GlusterFS-specific information contained in a PV definition could also be defined directly in a pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts.
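For reference only, the inline approach described above looks like the following minimal sketch; the Gluster volume name is a placeholder, and this is not the recommended pattern:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: glustervol
      mountPath: /usr/share/busybox
  volumes:
  - name: glustervol
    glusterfs:                       # GlusterFS details embedded directly in the pod
      endpoints: glusterfs-cluster   # the endpoints object must still exist in the project
      path: vol_example              # placeholder Red Hat Gluster Storage volume name
      readOnly: false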
Binding PVs by Labels and Selectors

Labels are an OpenShift Container Platform feature that supports user-defined tags (key-value pairs) as part of an object’s specification. Their primary purpose is to enable the arbitrary grouping of objects by defining identical labels among them. These labels can then be targeted by selectors to match all objects with the specified label values. It is this functionality that allows a PVC to bind to a matching PV.

You can use labels to identify common attributes or characteristics shared among volumes. For example, you can define the gluster volume to have a custom attribute (key) named storage-tier with a value of gold assigned. A claim can then select a PV with storage-tier=gold to match this PV, as illustrated in the sketch below.
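The following minimal snippets sketch how the label on a PV and the selector on a PVC line up; the names and sizes are placeholders, and the full procedure is described in Section 5.1, “Static Provisioning of Volumes”:

# Persistent volume carrying the label
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example                  # placeholder name
  labels:
    storage-tier: gold
spec:
  capacity:
    storage: 12Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  glusterfs:
    endpoints: glusterfs-cluster
    path: vol_example               # placeholder Gluster volume name
---
# Claim that matches only PVs labelled storage-tier=gold
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-example                 # placeholder name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 12Gi
  selector:
    matchLabels:
      storage-tier: gold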
You can provision volumes either statically or dynamically. With static provisioning, the administrator manually creates the persistent volume, which a persistent volume claim then binds to. More details about static provisioning of volumes are provided in Section 5.1, “Static Provisioning of Volumes”.
Dynamic provisioning of volumes was introduced in the Container Native Storage 3.4 release. With dynamic provisioning, no administrator intervention is required to create a persistent volume; the volume is created dynamically and provisioned to the application containers. More details about dynamic provisioning of volumes are provided in Section 5.2, “Dynamic Provisioning of Volumes”.
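As an illustration only, dynamic provisioning is driven by a StorageClass that points to the heketi REST service; the URL, user, and secret names below are placeholders, and the exact parameters for your environment are covered in Section 5.2, “Dynamic Provisioning of Volumes”:

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-dyn                      # placeholder StorageClass name
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage-project.cloudapps.example.com"   # placeholder heketi route
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"            # placeholder secret containing the heketi admin key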

5.1. Static Provisioning of Volumes

To enable persistent volume support in OpenShift and Kubernetes, a few endpoints and a service must be created:

The sample glusterfs endpoint file (sample-gluster-endpoints.yaml) and the sample glusterfs service file (sample-gluster-service.yaml) are available in the /usr/share/heketi/templates/ directory.
  1. To specify the endpoints you want to create, update the sample-gluster-endpoints.yaml file with the endpoints required for your environment. Each Red Hat Gluster Storage trusted storage pool requires its own endpoint, listing the IP addresses of the nodes in that trusted storage pool.
    # cat sample-gluster-endpoints.yaml
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: glusterfs-cluster
    subsets:
    - addresses:
      - ip: 192.168.10.100
      ports:
      - port: 1
    - addresses:
      - ip: 192.168.10.101
      ports:
      - port: 1
    - addresses:
      - ip: 192.168.10.102
      ports:
      - port: 1
    
    name: The name of the endpoint.
    ip: The IP address of a Red Hat Gluster Storage node in the trusted storage pool.
  2. Execute the following command to create the endpoints:
    # oc create -f <name_of_endpoint_file>
    For example:
    # oc create -f sample-gluster-endpoints.yaml
    endpoints "glusterfs-cluster" created
  3. To verify that the endpoints are created, execute the following command:
    # oc get endpoints
    For example:
    # oc get endpoints
    NAME                       ENDPOINTS                                                     AGE
    storage-project-router     192.168.121.233:80,192.168.121.233:443,192.168.121.233:1936   2d
    glusterfs-cluster          192.168.121.168:1,192.168.121.172:1,192.168.121.233:1         3s
    heketi                     10.1.1.3:8080                                                 2m
    heketi-storage-endpoints   192.168.121.168:1,192.168.121.172:1,192.168.121.233:1         3m
  4. Execute the following command to create a gluster service:
    # oc create -f <name_of_service_file>
    For example:
    # oc create -f sample-gluster-service.yaml
    service "glusterfs-cluster" created
    # cat sample-gluster-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: glusterfs-cluster
    spec:
      ports:
      - port: 1
    
  5. To verify that the service is created, execute the following command:
    # oc get service
    For example:
    # oc get service
    NAME                       CLUSTER-IP      EXTERNAL-IP   PORT(S)                   AGE
    storage-project-router     172.30.94.109   <none>        80/TCP,443/TCP,1936/TCP   2d
    glusterfs-cluster          172.30.212.6    <none>        1/TCP                     5s
    heketi                     172.30.175.7    <none>        8080/TCP                  2m
    heketi-storage-endpoints   172.30.18.24    <none>        1/TCP                     3m

    Note

    The endpoints and the services must be created for each project that requires persistent storage.
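    For example, to create the same endpoints and service in another project, either switch to that project with oc project <project_name> or pass the project explicitly:

    # oc create -f sample-gluster-endpoints.yaml -n <project_name>
    # oc create -f sample-gluster-service.yaml -n <project_name>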
  6. Create a 100G persistent volume with Replica 3 from GlusterFS and output a persistent volume specification describing this volume to the file pv001.json:
    $ heketi-cli volume create --size=100 --persistent-volume-file=pv001.json
    $ cat pv001.json
    {
      "kind": "PersistentVolume",
      "apiVersion": "v1",
      "metadata": {
        "name": "glusterfs-f8c612ee",
        "creationTimestamp": null
      },
      "spec": {
        "capacity": {
          "storage": "100Gi"
        },
        "glusterfs": {
          "endpoints": "TYPE ENDPOINT HERE",
          "path": "vol_f8c612eea57556197511f6b8c54b6070"
        },
        "accessModes": [
          "ReadWriteMany"
        ],
        "persistentVolumeReclaimPolicy": "Retain"
      },
      "status": {}
    

    Important

    You must manually add the Labels information to the .json file.
    The following example YAML file is provided for reference:
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-storage-project-glusterfs1
      labels:
        storage-tier: gold
    spec:
      capacity:
        storage: 12Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      glusterfs:
        endpoints: TYPE ENDPOINTS NAME HERE
        path: vol_e6b77204ff54c779c042f570a71b1407
    name: The name of the volume.
    storage: The amount of storage allocated to this volume.
    glusterfs: The volume type being used, in this case the glusterfs plug-in.
    endpoints: The name of the endpoints that defines the trusted storage pool created.
    path: The Red Hat Gluster Storage volume that will be accessed from the trusted storage pool.
    accessModes: accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
    labels: Use labels to identify common attributes or characteristics shared among volumes. In this case, the gluster volume is defined with a custom attribute (key) named storage-tier and a value of gold. A claim can select a PV with storage-tier=gold to match this PV.

    Note

    • heketi-cli also accepts the endpoint name on the command line (--persistent-volume-endpoint="TYPE ENDPOINT HERE"). This can then be piped to oc create -f - to create the persistent volume immediately; see the example after this note.
    • Creation of more than 100 volumes per 3 nodes per cluster is not supported.
    • If there are multiple Red Hat Gluster Storage trusted storage pools in your environment, you can check on which trusted storage pool the volume is created using the heketi-cli volume list command. This command lists the cluster name. You can then update the endpoint information in the pv001.json file accordingly.
    • When creating a Heketi volume with only two nodes with the replica count set to the default value of three (replica 3), an error "No space" is displayed by Heketi as there is no space to create a replica set of three disks on three different nodes.
    • If all heketi-cli write operations (for example, volume create, cluster create) fail and the read operations (for example, topology info, volume info) succeed, it is likely that the gluster volume is operating in read-only mode.
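    A sketch of such a pipeline, assuming the glusterfs-cluster endpoints created earlier in this procedure, might look like the following:

    # heketi-cli volume create --size=100 --persistent-volume --persistent-volume-endpoint=glusterfs-cluster | oc create -f -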
  7. Edit the pv001.json file and enter the name of the endpoint in the endpoints section:
    # cat pv001.json
    {
      "kind": "PersistentVolume",
      "apiVersion": "v1",
      "metadata": {
        "name": "glusterfs-f8c612ee",
        "creationTimestamp": null,
        "labels": {
          "storage-tier": "gold"
        }
      },
      "spec": {
        "capacity": {
          "storage": "12Gi"
        },
        "glusterfs": {
          "endpoints": "glusterfs-cluster",
          "path": "vol_f8c612eea57556197511f6b8c54b6070"
        },
        "accessModes": [
          "ReadWriteMany"
        ],
        "persistentVolumeReclaimPolicy": "Retain"
      },
      "status": {}
    }
  8. Create a persistent volume by executing the following command:
    # oc create -f pv001.json
    For example:
    # oc create -f pv001.json
    persistentvolume "glusterfs-4fc22ff9" created
  9. To verify that the persistent volume is created, execute the following command:
    # oc get pv
    For example:
    # oc get pv
    
    NAME                 CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
    glusterfs-4fc22ff9   100Gi      RWX           Available                       4s
  10. Create a persistent volume claim file. For example:
    # cat pvc.json
    {
      "apiVersion": "v1",
      "kind": "PersistentVolumeClaim",
      "metadata": {
        "name": "glusterfs-claim"
      },
      "spec": {
        "accessModes": [
          "ReadWriteMany"
        ],
        "resources": {
          "requests": {
            "storage": "100Gi"
          }
        },
        "selector": {
          "matchLabels": {
            "storage-tier": "gold"
          }
        }
      }
    }
  11. Bind the persistent volume to the persistent volume claim by executing the following command:
    # oc create -f pvc.json
    For example:
    # oc create -f pvc.json
    persistentvolumeclaim "glusterfs-claim" created
  12. To verify that the persistent volume and the persistent volume claim are bound, execute the following commands:
    # oc get pv
    # oc get pvc
    For example:
    # oc get pv
    
    NAME                 CAPACITY   ACCESSMODES   STATUS    CLAIM                  REASON    AGE
    glusterfs-4fc22ff9   100Gi      RWX           Bound     storage-project/glusterfs-claim             1m
    # oc get pvc
    
    NAME              STATUS    VOLUME               CAPACITY   ACCESSMODES   AGE
    glusterfs-claim   Bound     glusterfs-4fc22ff9   100Gi      RWX           11s
  13. The claim can now be used in the application:
    For example:
    # cat app.yml
    
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
    spec:
      containers:
        - image: busybox
          command:
            - sleep
            - "3600"
          name: busybox
          volumeMounts:
            - mountPath: /usr/share/busybox
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: glusterfs-claim
    # oc create -f app.yml
    pod "busybox" created
  14. To verify that the pod is created, execute the following command:
    # oc get pods
  15. To verify that the persistent volume is mounted inside the container, execute the following command:
    # oc rsh busybox
    / $ df -h
    Filesystem                Size      Used Available Use% Mounted on
    /dev/mapper/docker-253:0-1310998-81732b5fd87c197f627a24bcd2777f12eec4ee937cc2660656908b2fa6359129
                          100.0G     34.1M     99.9G   0% /
    tmpfs                     1.5G         0      1.5G   0% /dev
    tmpfs                     1.5G         0      1.5G   0% /sys/fs/cgroup
    192.168.121.168:vol_4fc22ff934e531dec3830cfbcad1eeae
                           99.9G     66.1M     99.9G   0% /usr/share/busybox
    tmpfs                     1.5G         0      1.5G   0% /run/secrets
    /dev/mapper/vg_vagrant-lv_root
                           37.7G      3.8G     32.0G  11% /dev/termination-log
    tmpfs                     1.5G     12.0K      1.5G   0% /var/run/secrets/kubernetes.io/serviceaccount
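    As an optional additional check, you can confirm that the mounted volume is writable from inside the container; the file name used here is arbitrary:
    / $ touch /usr/share/busybox/testfile
    / $ ls /usr/share/busybox/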

Note

If you encounter a permission denied error on the mount point, refer to the Gluster Volume Security section at: https://access.redhat.com/documentation/en/openshift-container-platform/3.4/single/installation-and-configuration/#gluster-volume-security.