3.2. Block Storage
Block storage allows the creation of high-performance, individual storage units. Unlike the traditional file storage that GlusterFS provides, each storage volume/block device is treated as an independent disk drive, so each one can host its own file system.
gluster-block is a distributed management framework for block devices. It aims to make Gluster-backed block storage creation and maintenance as simple as possible. gluster-block can provision block devices and export them as iSCSI LUNs across multiple nodes, and uses the iSCSI protocol to transfer data as SCSI blocks and commands.
Note
Static provisioning of volumes is not supported for Block storage. Dynamic provisioning of volumes is the only method supported.
The recommended Red Hat Enterprise Linux (RHEL) version for block storage is RHEL-7.5.4.
Block volume expansion is not supported in Container-Native Storage 3.6.
3.2.1. Dynamic Provisioning of Volumes for Block Storage
Dynamic provisioning enables you to provision a Red Hat Gluster Storage volume to a running application container without pre-creating the volume. The volume is created dynamically when the claim request comes in, and a volume of exactly the requested size is provisioned to the application containers.
3.2.1.1. Configuring Dynamic Provisioning of Volumes
To configure dynamic provisioning of volumes, the administrator must define StorageClass objects that describe named "classes" of storage offered in a cluster. After creating a StorageClass, a secret for heketi authentication must be created before proceeding with the creation of the persistent volume claim.
3.2.1.1.1. Configuring Multipathing on all Initiators
To ensure that the iSCSI initiator can communicate with the iSCSI targets and achieve high availability using multipathing, execute the following steps on all OpenShift nodes (iSCSI initiators) where the application pods are hosted:
- To install the initiator-related packages on all the nodes where the initiator has to be configured, execute the following command:
# yum install iscsi-initiator-utils device-mapper-multipath
- To enable multipath, execute the following command:
# mpathconf --enable
- Create and add the following content to the multipath.conf file:
Note
In case of upgrades, make sure that the changes to multipath.conf and the reloading of multipathd are done only after all the server nodes are upgraded.
# cat >> /etc/multipath.conf <<EOF
# LIO iSCSI
devices {
        device {
                vendor "LIO-ORG"
                user_friendly_names "yes" # names like mpatha
                path_grouping_policy "failover" # one path per group
                hardware_handler "1 alua"
                path_selector "round-robin 0"
                failback immediate
                path_checker "tur"
                prio "alua"
                no_path_retry 120
        }
}
EOF
- Execute the following commands to start the multipath daemon and [re]load the multipath configuration:
# systemctl start multipathd
# systemctl reload multipathd
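Optionally, you can confirm that multipathing is enabled and that the daemon is running before moving on. These are generic checks, not specific to gluster-block, and their output varies by environment:
# mpathconf
# systemctl status multipathd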
3.2.1.1.2. Creating Secret for Heketi Authentication
To create a secret for Heketi authentication, execute the following commands:
Note
If the admin-key value (the secret used to access heketi to get the volume details) was not set during the deployment of Red Hat Openshift Container Storage, then the following steps can be omitted.
- Create an encoded value for the password by executing the following command:
# echo -n "<key>" | base64
where "key" is the value for admin-key that was created while deploying CNS.
For example:
# echo -n "mypassword" | base64
bXlwYXNzd29yZA==
- Create a secret file. A sample secret file is provided below:
# cat glusterfs-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: bXlwYXNzd29yZA==
type: gluster.org/glusterblock
- Register the secret on Openshift by executing the following command:
# oc create -f glusterfs-secret.yaml
secret "heketi-secret" created
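Optionally, confirm that the secret is registered in the expected namespace (default in this example):
# oc get secret heketi-secret -n default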
3.2.1.1.3. Registering a Storage Class
When configuring a StorageClass object for persistent volume provisioning, the administrator must describe the type of provisioner to use and the parameters that will be used by the provisioner when it provisions a PersistentVolume belonging to the class.
- Create a storage class. A sample storage class file is presented below:
# cat > glusterfs-block-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-block
provisioner: gluster.org/glusterblock
reclaimPolicy: Retain
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  restsecretnamespace: "default"
  restsecretname: "heketi-secret"
  hacount: "3"
  clusterids: "630372ccdc720a92c681fb928f27b53f,796e6db1981f369ea0340913eeea4c9a"
  chapauthenabled: "true"
  volumenameprefix: "test-vol"
where,
resturl: Gluster REST service/Heketi service URL which provisions gluster volumes on demand. The general format must be IPaddress:Port, and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in the OpenShift/Kubernetes setup, it can have a format similar to http://heketi-storage-project.cloudapps.mystorage.com, where the FQDN is a resolvable Heketi service URL.
restuser: Gluster REST service/Heketi user who has access to create volumes in the trusted storage pool.
restsecretnamespace + restsecretname: Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. These parameters are optional. An empty password is used when both restsecretnamespace and restsecretname are omitted.
hacount: The number of paths to the block target server. hacount provides high availability through the multipathing capability of iSCSI. If there is a path failure, the I/Os are not interrupted and are served through the other available paths.
clusterids: The ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of comma-separated cluster IDs. This is an optional parameter.
Note
To get the cluster ID, execute the following command:
# heketi-cli cluster list
chapauthenabled: If you want to provision a block volume with CHAP authentication enabled, this value has to be set to true. This is an optional parameter.
volumenameprefix: This is an optional parameter. It depicts the name of the volume created by heketi. For more information see Section 3.2.1.1.6, “(Optional) Providing a Custom Volume Name Prefix for Persistent Volumes”.
Note
The value for this parameter cannot contain `_` in the storageclass.
- To register the storage class with OpenShift, execute the following command:
# oc create -f glusterfs-block-storageclass.yaml
storageclass "gluster-block" created
- To get the details of the storage class, execute the following command:
# oc describe storageclass gluster-block
Name:            gluster-block
IsDefaultClass:  No
Annotations:     <none>
Provisioner:     gluster.org/glusterblock
Parameters:      chapauthenabled=true,hacount=3,opmode=heketi,restsecretname=heketi-secret,restsecretnamespace=default,resturl=http://heketi-storage-project.cloudapps.mystorage.com,restuser=admin
Events:          <none>
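If several storage classes are registered in the cluster, you can also list them to confirm that gluster-block appears and to check which class, if any, is marked as the default:
# oc get storageclass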
3.2.1.1.4. Creating a Persistent Volume Claim
To create a persistent volume claim, execute the following commands:
- Create a Persistent Volume Claim file. A sample persistent volume claim is provided below:
# cat glusterfs-block-pvc-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim1
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-block
spec:
  persistentVolumeReclaimPolicy: Retain
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
persistentVolumeReclaimPolicy: This is an optional parameter. When this parameter is set to "Retain", the underlying persistent volume is retained even after the corresponding persistent volume claim is deleted.
Note
When a PVC is deleted, the underlying heketi and gluster volumes are not deleted if "persistentVolumeReclaimPolicy" is set to "Retain". To delete the volume, you must use the heketi CLI and then delete the PV.
- Register the claim by executing the following command:
# oc create -f glusterfs-block-pvc-claim.yaml
persistentvolumeclaim "claim1" created
- To get the details of the claim, execute the following command:
# oc describe pvc <claim_name>
For example:
# oc describe pvc claim1
Name:           claim1
Namespace:      block-test
StorageClass:   gluster-block
Status:         Bound
Volume:         pvc-ee30ff43-7ddc-11e7-89da-5254002ec671
Labels:         <none>
Annotations:    control-plane.alpha.kubernetes.io/leader={"holderIdentity":"8d7fecb4-7dba-11e7-a347-0a580a830002","leaseDurationSeconds":15,"acquireTime":"2017-08-10T15:02:30Z","renewTime":"2017-08-10T15:02:58Z","lea...
                pv.kubernetes.io/bind-completed=yes
                pv.kubernetes.io/bound-by-controller=yes
                volume.beta.kubernetes.io/storage-class=gluster-block
                volume.beta.kubernetes.io/storage-provisioner=gluster.org/glusterblock
Capacity:       5Gi
Access Modes:   RWO
Events:
  FirstSeen  LastSeen  Count  From                                                          SubObjectPath  Type    Reason                 Message
  ---------  --------  -----  ----                                                          -------------  ----    ------                 -------
  1m         1m        1      gluster.org/glusterblock 8d7fecb4-7dba-11e7-a347-0a580a830002                 Normal  Provisioning           External provisioner is provisioning volume for claim "block-test/claim1"
  1m         1m        18     persistentvolume-controller                                                   Normal  ExternalProvisioning   cannot find provisioner "gluster.org/glusterblock", expecting that a volume for the claim is provisioned either manually or via external software
  1m         1m        1      gluster.org/glusterblock 8d7fecb4-7dba-11e7-a347-0a580a830002                 Normal  ProvisioningSucceeded  Successfully provisioned volume pvc-ee30ff43-7ddc-11e7-89da-5254002ec671
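If you only need the name of the persistent volume bound to the claim, for example to pass it to oc describe pv later, the following minimal query (using the sample claim name claim1) prints it directly:
# oc get pvc claim1 -o jsonpath='{.spec.volumeName}'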
3.2.1.1.5. Verifying Claim Creation
To verify if the claim is created, execute the following commands:
- To get the details of the persistent volume claim and persistent volume, execute the following command:
# oc get pv,pvc
NAME                                          CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM               STORAGECLASS    REASON    AGE
pv/pvc-ee30ff43-7ddc-11e7-89da-5254002ec671   5Gi        RWO           Delete          Bound     block-test/claim1   gluster-block             3m

NAME         STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS    AGE
pvc/claim1   Bound     pvc-ee30ff43-7ddc-11e7-89da-5254002ec671   5Gi        RWO           gluster-block   4m
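You can also confirm that a corresponding block volume exists on the heketi side. This check assumes the admin user and the admin-key value from the earlier examples; the volume IDs in the output are environment specific:
# heketi-cli blockvolume list --user=admin --secret=adminkey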
3.2.1.1.6. (Optional) Providing a Custom Volume Name Prefix for Persistent Volumes
You can provide a custom volume name prefix to the persistent volume that is created. By providing a custom volume name prefix, users can now easily search/filter the volumes based on:
- Any string that was provided as the field value of "volumenameprefix" in the storageclass file.
- Persistent volume claim name.
- Project / Namespace name.
To set the name, ensure that you have added the parameter volumenameprefix to the storage class file. For more information, refer to Section 3.2.1.1.3, “Registering a Storage Class”.
Note
The value for this parameter cannot contain `_` in the storageclass.
To verify if the custom volume name prefix is set, execute the following command:
# oc describe pv <pv_name>
For example:
# oc describe pv pvc-4e97bd84-25f4-11e8-8f17-005056a55501
Name: pvc-4e97bd84-25f4-11e8-8f17-005056a55501
Labels: <none>
Annotations: AccessKey=glusterblk-67d422eb-7b78-4059-9c21-a58e0eabe049-secret
AccessKeyNs=glusterfs
Blockstring=url:http://172.31.251.137:8080,user:admin,secret:heketi-secret,secretnamespace:glusterfs
Description=Gluster-external: Dynamically provisioned PV
gluster.org/type=block
gluster.org/volume-id=cd37c089372040eba20904fb60b8c33e
glusterBlkProvIdentity=gluster.org/glusterblock
glusterBlockShare=test-vol_glusterfs_bclaim1_4eab5a22-25f4-11e8-954d-0a580a830003
kubernetes.io/createdby=heketi
pv.kubernetes.io/provisioned-by=gluster.org/glusterblock
v2.0.0=v2.0.0
StorageClass: gluster-block-prefix
Status: Bound
Claim: glusterfs/bclaim1
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 5Gi
Message:
Source:
Type: ISCSI (an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod)
TargetPortal: 10.70.46.177
IQN: iqn.2016-12.org.gluster-block:67d422eb-7b78-4059-9c21-a58e0eabe049
Lun: 0
ISCSIInterface default
FSType: xfs
ReadOnly: false
Portals: [10.70.46.142 10.70.46.4]
DiscoveryCHAPAuth: false
SessionCHAPAuth: true
SecretRef: {glusterblk-67d422eb-7b78-4059-9c21-a58e0eabe049-secret }
InitiatorName: <none>
Events: <none>
The value of glusterBlockShare has the custom volume name prefix ("test-vol" in this case) prepended to the namespace and the claim name.
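Because the prefix is embedded in the glusterBlockShare annotation, you can filter dynamically provisioned persistent volumes by prefix. A minimal example, assuming the "test-vol" prefix used above:
# oc describe pv | grep glusterBlockShare | grep test-vol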
3.2.1.1.7. Using the Claim in a Pod
Execute the following steps to use the claim in a pod.
- To use the claim in the application, for example:
# cat app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      name: busybox
      volumeMounts:
        - mountPath: /usr/share/busybox
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: claim1

# oc create -f app.yaml
pod "busybox" created
For more information about using the glusterfs claim in an application, see https://access.redhat.com/documentation/en-us/openshift_container_platform/3.11/html-single/configuring_clusters/#install-config-storage-examples-gluster-example.
- To verify that the pod is created, execute the following command:
# oc get pods -n storage-project
NAME                               READY     STATUS    RESTARTS   AGE
block-test-router-1-deploy         0/1       Running   0          4h
busybox                            1/1       Running   0          43s
glusterblock-provisioner-1-bjpz4   1/1       Running   0          4h
glusterfs-7l5xf                    1/1       Running   0          4h
glusterfs-hhxtk                    1/1       Running   3          4h
glusterfs-m4rbc                    1/1       Running   0          4h
heketi-1-3h9nb                     1/1       Running   0          4h
- To verify that the persistent volume is mounted inside the container, execute the following command:
# oc rsh busybox
/ # df -h
Filesystem                                                                                        Size      Used  Available  Use%  Mounted on
/dev/mapper/docker-253:1-11438-39febd9d64f3a3594fc11da83d6cbaf5caf32e758eb9e2d7bdd798752130de7e  10.0G     33.9M       9.9G    0%  /
tmpfs                                                                                              3.8G         0       3.8G    0%  /dev
tmpfs                                                                                              3.8G         0       3.8G    0%  /sys/fs/cgroup
/dev/mapper/VolGroup00-LogVol00                                                                    7.7G      2.8G       4.5G   39%  /dev/termination-log
/dev/mapper/VolGroup00-LogVol00                                                                    7.7G      2.8G       4.5G   39%  /run/secrets
/dev/mapper/VolGroup00-LogVol00                                                                    7.7G      2.8G       4.5G   39%  /etc/resolv.conf
/dev/mapper/VolGroup00-LogVol00                                                                    7.7G      2.8G       4.5G   39%  /etc/hostname
/dev/mapper/VolGroup00-LogVol00                                                                    7.7G      2.8G       4.5G   39%  /etc/hosts
shm                                                                                               64.0M         0      64.0M    0%  /dev/shm
/dev/mpatha                                                                                        5.0G     32.2M       5.0G    1%  /usr/share/busybox
tmpfs                                                                                              3.8G     16.0K       3.8G    0%  /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                                                                                              3.8G         0       3.8G    0%  /proc/kcore
tmpfs                                                                                              3.8G         0       3.8G    0%  /proc/timer_list
tmpfs                                                                                              3.8G         0       3.8G    0%  /proc/timer_stats
tmpfs                                                                                              3.8G         0       3.8G    0%  /proc/sched_debug
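You can additionally confirm, on the OpenShift node that hosts the pod, that /dev/mpatha is a multipath device with paths to the LIO-ORG targets. The path count and IP addresses in the output depend on your hacount value and node addresses:
# multipath -ll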
3.2.1.1.8. Deleting a Persistent Volume Claim
Note
If the "persistentVolumeReclaimPolicy" parameter was set to "Retain" when registering the storageclass, the underlying PV and the corresponding volume remains even when a PVC is deleted.
- To delete a claim, execute the following command:
# oc delete pvc <claim-name>
For example:
# oc delete pvc claim1
persistentvolumeclaim "claim1" deleted
- To verify if the claim is deleted, execute the following command:
# oc get pvc <claim-name>
For example:
# oc get pvc claim1
No resources found.
When the user deletes a persistent volume claim that is bound to a persistent volume created by dynamic provisioning, apart from deleting the persistent volume claim, Kubernetes also deletes the persistent volume, endpoints, service, and the actual volume. Execute the following commands to verify this:
- To verify if the persistent volume is deleted, execute the following command:
# oc get pv <pv-name>
For example:
# oc get pv pvc-962aa6d1-bddb-11e6-be23-5254009fc65b
No resources found.
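If the reclaim policy was set to "Retain", the heketi block volume and the persistent volume survive the claim deletion and must be removed manually. A minimal cleanup sketch, assuming the admin credentials from the earlier examples; the block volume ID and PV name are placeholders to be taken from the list output:
# heketi-cli blockvolume list --user=admin --secret=adminkey
# heketi-cli blockvolume delete <blockvolume-id> --user=admin --secret=adminkey
# oc delete pv <pv-name>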
Next step: If you are installing Red Hat Openshift Container Storage 3.11, and you want to use block storage as the backend storage for logging and metrics, proceed to Chapter 7, Gluster Block Storage as Backend for Logging and Metrics.
3.2.2. Replacing a Block on Block Storage
If you want to replace a block hosted on a node that is out of resources or is faulty, the block can be replaced on a new node.
To do so, execute the following commands:
- Execute the following command to fetch the zone and cluster information from heketi:
# heketi-cli topology info --user=<user> --secret=<user key>
--user - heketi user
--secret - Secret key for a specified user
- After obtaining the cluster ID and zone ID, add a new node to heketi by executing the following command:
Note
Before adding the node, ensure the node is labeled as a glusterfs storage host by adding the label "glusterfs=storage-host", using the following command:
# oc label node <NODENAME> glusterfs=storage-host
# heketi-cli node add --zone=<zoneid> --cluster=<clusterid> --management-host-name=<new hostname> --storage-host-name=<new node ip> --user=<user> --secret=<user key>
--cluster - The cluster in which the node should reside
--management-host-name - Management hostname. This is the new node that has to be added.
--storage-host-name - Storage hostname.
--zone - The zone in which the node should reside
--user - heketi user.
--secret - Secret key for a specified user
For example:
# heketi-cli node add --zone=1 --cluster=607204cb27346a221f39887a97cf3f90 --management-host-name=dhcp43-241.lab.eng.blr.redhat.com --storage-host-name=10.70.43.241 --user=admin --secret=adminkey
Node information:
Id: 2639c473a2805f6e19d45997bb18cb9c
State: online
Cluster Id: 607204cb27346a221f39887a97cf3f90
Zone: 1
Management Hostname dhcp43-241.lab.eng.blr.redhat.com
Storage Hostname 10.70.43.241
- Execute the following command to add the device:
# heketi-cli device add --name=<device name> --node=<node id> --user=<user> --secret=<user key>
--name - Name of the device to add
--node - Newly added node ID
For example:
# heketi-cli device add --name=/dev/vdc --node=2639c473a2805f6e19d45997bb18cb9c --user=admin --secret=adminkey
Device added successfully
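To confirm that the new device is registered against the new node, you can re-run the node info command with the node ID returned earlier; the ID shown here is from the example and will differ in your environment:
# heketi-cli node info 2639c473a2805f6e19d45997bb18cb9c --user=admin --secret=adminkey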
- After the new node and its associated devices are added to heketi, the faulty or unwanted node can be removed from heketi. To remove any node from heketi, follow this workflow:
- node disable (Disallow usage of a node by placing it offline)
- node remove (Removes a node and all its associated devices from Heketi)
- device delete (Deletes a device from Heketi node)
- node delete (Deletes a node from Heketi management)
- Execute the following command to fetch the node list from heketi:
# heketi-cli node list --user=<user> --secret=<user key>
For example:
# heketi-cli node list --user=admin --secret=adminkey
Id:05746c562d6738cb5d7de149be1dac04 Cluster:607204cb27346a221f39887a97cf3f90
Id:ab37fc5aabbd714eb8b09c9a868163df Cluster:607204cb27346a221f39887a97cf3f90
Id:c513da1f9bda528a9fd6da7cb546a1ee Cluster:607204cb27346a221f39887a97cf3f90
Id:e6ab1fe377a420b8b67321d9e60c1ad1 Cluster:607204cb27346a221f39887a97cf3f90
- Execute the following command to fetch the node info of the node, that has to be deleted from heketi:
# heketi-cli node info <nodeid> --user=<user> --secret=<user key>
For example:
# heketi-cli node info c513da1f9bda528a9fd6da7cb546a1ee --user=admin --secret=adminkey
Node Id: c513da1f9bda528a9fd6da7cb546a1ee
State: online
Cluster Id: 607204cb27346a221f39887a97cf3f90
Zone: 1
Management Hostname: dhcp43-171.lab.eng.blr.redhat.com
Storage Hostname: 10.70.43.171
Devices:
Id:3a1e0717e6352a8830ab43978347a103   Name:/dev/vdc   State:online   Size (GiB):499   Used (GiB):100   Free (GiB):399   Bricks:1
Id:89a57ace1c3184826e1317fef785e6b7   Name:/dev/vdd   State:online   Size (GiB):499   Used (GiB):10    Free (GiB):489   Bricks:5
- Execute the following command to disable the node from heketi. This makes the node go offline:
# heketi-cli node disable <node-id> --user=<user> --secret=<user key>
For example:
# heketi-cli node disable ab37fc5aabbd714eb8b09c9a868163df --user=admin --secret=adminkey
Node ab37fc5aabbd714eb8b09c9a868163df is now offline
- Execute the following command to remove a node and all its associated devices from Heketi:
# heketi-cli node remove <node-id> --user=<user> --secret=<user key>
For example:
# heketi-cli node remove ab37fc5aabbd714eb8b09c9a868163df --user=admin --secret=adminkey
Node ab37fc5aabbd714eb8b09c9a868163df is now removed
- Execute the following command to delete the devices from heketi node:
# heketi-cli device delete <device-id> --user=<user> --secret=<user key>
For example:
# heketi-cli device delete 0fca78c3a94faabfbe5a5a9eef01b99c --user=admin --secret=adminkey
Device 0fca78c3a94faabfbe5a5a9eef01b99c deleted
- Execute the following command to delete a node from Heketi management:
# heketi-cli node delete <nodeid> --user=<user> --secret=<user key>
For example:
# heketi-cli node delete ab37fc5aabbd714eb8b09c9a868163df --user=admin --secret=adminkey
Node ab37fc5aabbd714eb8b09c9a868163df deleted
- Execute the following commands on any one of the gluster pods to replace the faulty node with the new node:
- Execute the following command to get the list of block volumes hosted under the block-hosting volume:
# gluster-block list <block-hosting-volume> --json-pretty
- Execute the following command to find out which block volumes are hosted on the old node, using the info command:
# gluster-block info <block-hosting-volume>/<block-volume> --json-pretty
- Execute the following command to replace the faulty node with the new node:
# gluster-block replace <volname/blockname> <old-node> <new-node> [force]
For example:{ "NAME":"block", "CREATE SUCCESS":"192.168.124.73", "DELETE SUCCESS":"192.168.124.63", "REPLACE PORTAL SUCCESS ON":[ "192.168.124.79" ], "RESULT":"SUCCESS" } Note: If the old node is down and does not come up again then you can force replace: gluster-block replace sample/block 192.168.124.63 192.168.124.73 force --json-pretty { "NAME":"block", "CREATE SUCCESS":"192.168.124.73", "DELETE FAILED (ignored)":"192.168.124.63", "REPLACE PORTAL SUCCESS ON":[ "192.168.124.79" ], "RESULT":"SUCCESS" }
Note
The steps that follow are to be executed only if the block that is to be replaced is still in use.
- Log out of the old portal by executing the following command on the initiator:
# iscsiadm -m node -T <targetname> -p <old node> -u
For example:
# iscsiadm -m node -T iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a -p 192.168.124.63 -u
Logging out of session [sid: 8, target: iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a, portal: 192.168.124.63,3260]
Logout of [sid: 8, target: iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a, portal: 192.168.124.63,3260] successful.
- To re-discover the new node, execute the following command:
# iscsiadm -m discovery -t st -p <new node>
For example:
# iscsiadm -m discovery -t st -p 192.168.124.73
192.168.124.79:3260,1 iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a
192.168.124.73:3260,2 iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a
- Log in to the new portal by executing the following command:
# iscsiadm -m node -T <targetname> -p <new node ip> -l
For example:
# iscsiadm -m node -T iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a -p 192.168.124.73 -l
- To verify that the replacement was successful and that the block device is now served through the new node, execute the following command on the initiator:
# ll /dev/disk/by-path/ip-* | grep <targetname> | grep <new node ip>
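A minimal sketch of this check with the sample IQN and new-node IP used above (the values are illustrative); a successful replacement is indicated by at least one by-path entry of the form ip-<new node ip>:3260-iscsi-<targetname>-lun-<lun>:
# ll /dev/disk/by-path/ip-* | grep iqn.2016-12.org.gluster-block:d6d18f43-8a74-4b2c-a5b7-df1fa3f5bc9a | grep 192.168.124.73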
