Chapter 5. Persistent Storage
Container storage is not persistent by default. For example, if a new container build occurs, the data from the previous build is lost because the storage is ephemeral. If a container terminates, all changes to its local filesystem are lost. OpenShift offers many different types of persistent storage. Persistent storage ensures that data that should persist between builds and container migrations is available. The different storage options can be found at https://docs.openshift.com/container-platform/3.5/architecture/additional_concepts/storage.html#types-of-persistent-volumes. When choosing a persistent storage backend, ensure that the backend supports the scaling, speed, and redundancy that the project requires. This reference architecture focuses on cloud provider specific storage, Container-Native Storage (CNS), and Container-Ready Storage (CRS).
5.1. Persistent Volumes
Container storage is defined by the concept of persistent volumes (pv), which are OpenShift objects that allow storage to be defined and then used by pods for data persistence. Persistent volumes are requested by using a persistent volume claim (pvc). When successfully fulfilled by the system, a claim mounts the persistent storage to a specific directory within one or more pods. This directory is referred to as the mountPath and is facilitated using a concept known as a bind-mount.
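As a minimal sketch of how these objects fit together (the pod, claim, and image names below are illustrative and not part of this reference environment), a pod consumes a claim by referencing it in its volumes list and mounting it at a mountPath:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/rhel7
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /var/lib/data    # the mountPath inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-claim    # the PVC must already exist in the project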
5.2. Storage Classes
The StorageClass resource object describes and classifies different types of storage that can be requested, as well as provides a means for passing parameters to the backend for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators (cluster-admin) or Storage Administrators (storage-admin) define and create the StorageClass objects that users can use without needing any intimate knowledge about the underlying storage volume sources. Because of this, the name of the storage class defined in the StorageClass object should be useful in understanding the type of storage it maps to (e.g., HDD vs SSD or st1 vs gp2).
5.3. Cloud Provider Specific Storage
Cloud provider specific storage is storage that is provided by AWS. This type of storage is presented as an EBS volume and can be mounted by one pod at a time. The EBS volume must exist in the same availability zone as the pod that requires the storage, because EBS volumes cannot be mounted by an EC2 instance outside of the availability zone in which they were created. For AWS there are four EBS volume types (io1, gp2, sc1, and st1) that can be utilized for cloud provider specific storage; see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html. Cloud provider storage can be created manually and assigned as a persistent volume, or a persistent volume can be created dynamically using a StorageClass object. Note that EBS storage can only use the access mode of ReadWriteOnce (RWO).
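For the manual path, a persistent volume definition can point directly at an existing EBS volume using the awsElasticBlockStore volume source; below is a minimal sketch, assuming a gp2 volume was already created in the correct AZ (the volume ID is a placeholder, not a real volume):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-ebs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce                       # EBS only supports RWO
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0     # placeholder for an existing EBS volume ID
    fsType: ext4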
5.3.1. Creating a Storage Class
When requesting cloud provider specific storage, the name, zone, and type are configurable items. Ensure that the zone configuration parameter specified when creating the StorageClass object is an AZ that currently hosts application node instances. This ensures that PVCs are created in the AZ in which containers are running. The cluster-admin or storage-admin can perform the following commands, which allow for dynamically provisioned gp2 storage on demand in the us-east-1a AZ, using "standard" as the name of the StorageClass object.
$ vi standard-storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-east-1a
The cluster-admin or storage-admin can then create the StorageClass object using the yaml file.
$ oc create -f standard-storage-class.yaml
Multiple StorageClass objects can be defined depending on the storage needs of the pods within OpenShift.
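For example, a second class backed by throughput-optimized st1 volumes could be defined next to the gp2-backed "standard" class; the sketch below is illustrative (the class name is an assumption, not part of the deployed environment):
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: st1            # throughput-optimized HDD volume type
  zone: us-east-1a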
5.3.2. Creating and Using a Persistent Volume Claim
The example below shows a dynamically provisioned volume being requested from the StorageClass named standard.
$ vi db-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db
  annotations:
    volume.beta.kubernetes.io/storage-class: standard
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
$ oc create -f db-claim.yaml
persistentvolumeclaim "db" created
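After the claim is submitted, it can be checked with the OpenShift client; once the EBS-backed volume has been provisioned the claim should report a STATUS of Bound (a minimal check, output omitted):
$ oc get pvc db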
5.4. Container-Native Storage Overview
Container-Native Storage (CNS) provides dynamically provisioned storage for containers on OpenShift across cloud providers, virtual and bare-metal deployments. CNS relies on EBS volumes mounted on the OpenShift nodes and uses software-defined storage provided by Red Hat Gluster Storage. CNS runs Red Hat Gluster Storage containerized, allowing OpenShift storage pods to spread across the cluster and across Availability Zones. CNS enables the requesting and mounting of Gluster storage across one or many containers with access modes of either ReadWriteMany (RWX), ReadOnlyMany (ROX) or ReadWriteOnce (RWO). CNS can also be used to host the OpenShift registry.
5.4.1. Prerequisites for Container-Native Storage
Deployment of Container-Native Storage (CNS) on OpenShift Container Platform (OCP) requires at least three OpenShift nodes with at least one unused block storage device attached on each of the nodes. Dedicating three OpenShift nodes to CNS allows for the configuration of one StorageClass object to be used for applications. If two types of CNS storage are required, a minimum of six CNS nodes must be deployed and configured, because only a single CNS container per OpenShift node is supported.
If the CNS instances will serve dual roles, such as hosting application pods and glusterfs pods, ensure the instances have enough resources to support both operations. CNS hardware requirements state that there must be 32GB of RAM per EC2 instance. There is a current limit of 300 volumes or PVs per 3-node CNS cluster. The CNS EC2 instance type may need to be increased to support 300 volumes.
If there is a need to use the CNS instances for application or infrastructure pods, the label role=app can be applied to the nodes. For nodes which carry both the app and the storage label, the EC2 instance type of m4.2xlarge is a conscious choice that provides a balance between the memory requirements in the adoption phase and immediate EC2 instance cost. In the adoption phase it is expected that the platform will run fewer than 300 PVs, and the remaining memory on the 32GB instance is enough to serve the application pods. Over time the number of apps and PVs will grow, and the EC2 instance type choice must be re-evaluated, or more app-only nodes need to be added.
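For example, to mark a CNS node as also eligible for application pods, the role=app label mentioned above can be applied with the OpenShift client (the node name below is illustrative):
# oc label node ip-10-20-4-163.ec2.internal role=app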
5.4.2. Deployment of CNS Infrastructure
A python script named add-cns-storage.py is provided in the openshift-ansible-contrib git repository which will deploy three nodes, add the nodes to the OpenShift environment with specific OpenShift labels, and attach an EBS volume to each node as an available block device to be used for CNS. Do the following from the workstation performing the deployment of the OpenShift Reference Architecture.
On deployments of the Reference Architecture environment post OpenShift 3.5, --use-cloudformation-facts is available to auto-populate values. An example of these values can be viewed in the Post Provisioning Results section of this Reference Architecture. If the deployment occurred before version 3.5, the values must be filled in manually. To view the possible configuration triggers run add-cns-storage.py -h. The add-cns-storage.py script requires AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be exported as environment variables.
If the Reference Architecture deployment version is >= 3.5, use the deployment option --use-cloudformation-facts to auto-populate some values based on the existing CloudFormation stack.
$ cd /home/<user>/git/openshift-ansible-contrib/reference-architecture/aws-ansible/
$ ./add-cns-storage.py --gluster-stack=cns --public-hosted-zone=sysdeseng.com \
  --rhsm-user=username --rhsm-password=password \
  --rhsm-pool="Red Hat OpenShift Container Platform, Premium, 2-Core" \
  --keypair=OSE-key --existing-stack=openshift-infra --region=us-east-1 \
  --use-cloudformation-facts
If the Reference Architecture deployment version is < 3.5, fill in the values to represent the existing CloudFormation stack.
$ cd /home/<user>/git/openshift-ansible-contrib/reference-architecture/aws-ansible/
$ ./add-cns-storage.py --gluster-stack=cns --public-hosted-zone=sysdeseng.com \
  --rhsm-user=username --rhsm-password=password \
  --rhsm-pool="Red Hat OpenShift Container Platform, Premium, 2-Core" \
  --keypair=OSE-key --existing-stack=openshift-infra --region=us-east-1 \
  --private-subnet-id1=subnet-ad2b23f6 --private-subnet-id2=subnet-7cd61a34 \
  --private-subnet-id3=subnet-77e89a4b --node-sg=sg-0c7e0f73 \
  --iam-role=backup-NodeInstanceProfile-AX9H0AOAINY3
The script above is optional. Instances can be deployed without using this script as long as the new instances are added to the OpenShift cluster using the OpenShift add node playbooks or the add-node.py script.
5.4.3. Firewall and Security Group Prerequisites
The following ports must be open on the CNS nodes and on the node security group in AWS. Ensure the ports defined in the table below are open. This configuration is automatically applied on nodes deployed using the add-cns-storage.py script.

Table 5.1. AWS Nodes Security Group Details - Inbound
Inbound | From |
---|---|
22 / TCP | bastion_sg |
2222 / TCP | ose_node_sg |
4789 / UDP | ose_node_sg |
10250 / TCP | ose_master_sg |
10250 / TCP | ose_node_sg |
24007 / TCP | ose_node_sg |
24008 / TCP | ose_node_sg |
49152-49664 / TCP | ose_node_sg |
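If CNS nodes are prepared manually instead of with the add-cns-storage.py script, the same ports can be opened on each node's host firewall; the sketch below assumes firewalld and simply mirrors the table above (adjust to whichever firewall tooling is in use on the nodes):
# firewall-cmd --add-port=2222/tcp --add-port=4789/udp --add-port=10250/tcp \
  --add-port=24007/tcp --add-port=24008/tcp --add-port=49152-49664/tcp --permanent
# firewall-cmd --reload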
5.5. CNS Installation Overview
The process for creating a CNS deployment on OpenShift starts with creating an OpenShift project that will host the glusterfs pods and the CNS service/pod/route resources. The Red Hat utility cns-deploy will automate the creation of these resources. After the creation of the CNS components, a StorageClass can be defined for creating Persistent Volume Claims (PVCs) against the Container-Native Storage service. CNS uses services from heketi to create a gluster Trusted Storage Pool.
The Container-Native Storage service consists of Red Hat Gluster Storage single-container pods running on OpenShift nodes, managed by a heketi service. A single heketi service can manage multiple CNS Trusted Storage Pools. This is implemented using a DaemonSet, a specific way to deploy containers that ensures nodes participating in that DaemonSet always run exactly one instance of the glusterfs image as a pod. DaemonSets are required by CNS because the glusterfs pods must use the host's networking resources. The default configuration ensures that no more than one glusterfs pod can run on an OpenShift node.
5.5.1. Creating CNS Project
These activities should be done on the master due to the requirement of setting the node selector. The account performing the CNS activities must be a cluster-admin.
The project name used for this example will be storage, but the project name can be whatever value an administrator chooses.
If the CNS nodes will only be used for CNS, then a node-selector should be supplied.
# oadm new-project storage --node-selector='role=storage'
If the CNS nodes will serve the role of being used for both CNS and application pods, then a node-selector does not need to be supplied.
# oadm new-project storage
An oadm policy must be set to enable the deployment of privileged containers, as Red Hat Gluster Storage containers can only run in privileged mode.
# oc project storage
# oadm policy add-scc-to-user privileged -z default
5.5.2. Gluster Deployment Prerequisites
Perform the following steps from the CLI on a local or deployment workstation and ensure that the oc client has been installed and configured. An entitlement for Red Hat Gluster Storage is required to install the Gluster services.
$ subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
$ subscription-manager repos --enable=rhel-7-server-rpms
$ yum install -y cns-deploy heketi-client
5.5.3. Deploying Container-Native Storage
The Container-Native Storage glusterfs and heketi pods, services, and heketi route are created using the cns-deploy tool which was installed during the prerequisite step.
A heketi topology file is used to create the Trusted Storage Pool. The topology describes the OpenShift nodes that will host Red Hat Gluster Storage services and their attached storage devices. A sample topology file, topology-sample.json, is installed with the heketi-client package in the /usr/share/heketi/ directory. The table below shows the different instances running in different AZs to distinguish failure domains, defined as a zone in heketi. This information is used to make intelligent decisions about how to structure a volume, that is, to create Gluster volume layouts that have no more than one brick or storage device from a single failure domain. This information is also used when healing degraded volumes in the event of the loss of a device or an entire node.
Table 5.2. CNS Topology file
Instance | AZ | Zone | Device |
---|---|---|---|
ip-10-20-4-163.ec2.internal | us-east-1a | 1 | /dev/xvdd |
ip-10-20-5-247.ec2.internal | us-east-1c | 2 | /dev/xvdd |
ip-10-20-6-191.ec2.internal | us-east-1d | 3 | /dev/xvdd |
These activities should be done on the workstation where cns-deploy and heketi-client were installed. Ensure that the OpenShift client has the cluster-admin privilege before proceeding.
Below is an example of a 3-node topology.json file in the us-east-1 region with /dev/xvdd as the EBS volume or device used for CNS. Edit the values of node.hostnames.manage, node.hostnames.storage, and devices in the topology.json file based on the OpenShift nodes that were deployed in the previous step.
# vi gluster-topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "ip-10-20-4-163.ec2.internal"
              ],
              "storage": [
                "10.20.4.163"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/xvdd"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "ip-10-20-5-247.ec2.internal"
              ],
              "storage": [
                "10.20.5.247"
              ]
            },
            "zone": 2
          },
          "devices": [
            "/dev/xvdd"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "ip-10-20-6-191.ec2.internal"
              ],
              "storage": [
                "10.20.6.191"
              ]
            },
            "zone": 3
          },
          "devices": [
            "/dev/xvdd"
          ]
        }
      ]
    }
  ]
}
Ensure that the storage project is the current project.
# oc project storage
Already on project "storage" on server "https://openshift-master.sysdeseng.com"
To launch the deployment of CNS, the script cns-deploy will be used. It is advised to specify an admin-key and user-key for security reasons when launching the topology. Both admin-key and user-key are user-defined values; they do not exist before this step. The heketi admin key (password) will later be used to create a heketi-secret in OpenShift. Be sure to note these values as they will be needed in future operations. The cns-deploy script will prompt the user before proceeding.
# cns-deploy -n storage -g gluster-topology.json --admin-key 'myS3cr3tpassw0rd' --user-key 'mys3rs3cr3tpassw0rd'
Welcome to the deployment tool for GlusterFS on Kubernetes and OpenShift.
Before getting started, this script has some requirements of the execution
environment and of the container platform that you should verify.
The client machine that will run this script must have:
* Administrative access to an existing Kubernetes or OpenShift cluster
* Access to a python interpreter 'python'
* Access to the heketi client 'heketi-cli'
Each of the nodes that will host GlusterFS must also have appropriate firewall
rules for the required GlusterFS ports:
* 2222 - sshd (if running GlusterFS in a pod)
* 24007 - GlusterFS Daemon
* 24008 - GlusterFS Management
* 49152 to 49251 - Each brick for every volume on the host requires its own
port. For every new brick, one new port will be used starting at 49152. We
recommend a default range of 49152-49251 on each host, though you can adjust
this to fit your needs.
In addition, for an OpenShift deployment you must:
* Have 'cluster_admin' role on the administrative account doing the deployment
* Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
* Have a router deployed that is configured to allow apps to access services
running in the cluster
Do you wish to proceed with deployment?
[Y]es, [N]o? [Default: Y]: y
Using OpenShift CLI.
NAME STATUS AGE
storage Active 9m
Using namespace "storage".
template "deploy-heketi" created
serviceaccount "heketi-service-account" created
template "heketi" created
template "glusterfs" created
node "ip-10-20-4-163.ec2.internal" labeled
node "ip-10-20-5-247.ec2.internal" labeled
node "ip-10-20-6-191.ec2.internal" labeled
daemonset "glusterfs" created
Waiting for GlusterFS pods to start ... OK
service "deploy-heketi" created
route "deploy-heketi" created
deploymentconfig "deploy-heketi" created
Waiting for deploy-heketi pod to start ... OK
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 17 100 17 0 0 3 0 0:00:05 0:00:05 --:--:-- 4
Creating cluster ... ID: 372cf750e0b256fbc8565bb7e4afb434
Creating node ip-10-20-4-163.ec2.internal ... ID: 9683e22a0f98f8c40ed5c3508b2b4a38
Adding device /dev/xvdd ... OK
Creating node ip-10-20-5-247.ec2.internal ... ID: b9bb8fc7be62de3152b9164a7cb3a231
Adding device /dev/xvdd ... OK
Creating node ip-10-20-6-191.ec2.internal ... ID: 790bff20ac0115584b5cd8225565b868
Adding device /dev/xvdd ... OK
Saving heketi-storage.json
secret "heketi-storage-secret" created
endpoints "heketi-storage-endpoints" created
service "heketi-storage-endpoints" created
job "heketi-storage-copy-job" created
deploymentconfig "deploy-heketi" deleted
route "deploy-heketi" deleted
service "deploy-heketi" deleted
job "heketi-storage-copy-job" deleted
secret "heketi-storage-secret" deleted
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
Waiting for heketi pod to start ... OK
Waiting for heketi pod to start ... OK
heketi is now running.
Ready to create and provide GlusterFS volumes.
After a successful deploy, validate that there are now 3 glusterfs pods and 1 heketi pod in the storage project.
# oc get pods
NAME READY STATUS RESTARTS AGE
glusterfs-7gr8y 1/1 Running 0 4m
glusterfs-fhy3e 1/1 Running 0 4m
glusterfs-ouay0 1/1 Running 0 4m
heketi-1-twis5 1/1 Running 0 1m
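Because the glusterfs pods are managed by the DaemonSet described earlier, the DaemonSet itself can also be inspected to confirm that the desired and current pod counts match (a minimal check, output omitted):
# oc get daemonset glusterfs -n storage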
5.5.4. Exploring Heketi
A route is created for the heketi service that was deployed during the run of the cns-deploy script. The heketi route URL is used by the heketi-client. The same route URL will be used to create StorageClass objects.
The first step is to find the endpoint for the heketi service and then set the environment variables for the route of the heketi server, the heketi cli user, and the heketi cli key.
# oc get routes heketi
NAME      HOST/PORT                           PATH      SERVICES   PORT      TERMINATION
heketi    heketi-storage.apps.sysdeseng.com             heketi     <all>
# export HEKETI_CLI_SERVER=http://heketi-storage.apps.sysdeseng.com
# export HEKETI_CLI_USER=admin
# export HEKETI_CLI_KEY=myS3cr3tpassw0rd
To validate that heketi loaded the topology and has the cluster created, execute the following commands:
# heketi-cli topology info
... omitted ...
# heketi-cli cluster list
Clusters: 372cf750e0b256fbc8565bb7e4afb434
Use the output of the cluster list to view the nodes and volumes within the cluster.
# heketi-cli cluster info 372cf750e0b256fbc8565bb7e4afb434
5.5.5. Store the Heketi Secret
OpenShift allows for the use of secrets so that items do not need to be stored in clear text. The admin password for heketi, specified during installation with cns-deploy, should be stored in base64 encoding. OpenShift can refer to this secret instead of specifying the password in clear text.
To generate the base64-encoded equivalent of the admin password supplied to the cns-deploy command perform the following.
# echo -n myS3cr3tpassw0rd | base64
bXlTM2NyM3RwYXNzdzByZA==
On the master or workstation with the OpenShift client installed and a user with cluster-admin privileges use the base64 password string in the following YAML to define the secret in OpenShift’s default project or namespace.
# vi heketi-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  key: bXlTM2NyM3RwYXNzdzByZA==
type: kubernetes.io/glusterfs
Create the secret by using the following command.
# oc create -f heketi-secret.yaml
secret "heketi-secret" created
5.5.6. Creating a Storage Class
The StorageClass object created using the CNS components is a more robust solution than using cloud provider specific storage because the storage is not dependent on AZs. The cluster-admin or storage-admin can perform the following, which allows for dynamically provisioned CNS storage on demand. The key benefit of this storage is that the persistent storage can be configured with access modes of ReadWriteOnce (RWO), ReadOnlyMany (ROX), or ReadWriteMany (RWX), adding much more flexibility than cloud provider specific storage.
If multiple types of CNS storage are desired, additional StorageClass objects can be created to realize multiple tiers of storage, defining different types of storage behind a single heketi instance. This involves deploying more glusterfs pods on additional storage nodes (one gluster pod per OpenShift node) with different types and qualities of EBS volumes attached to achieve the desired properties of a tier (e.g. io1 for “fast” storage, magnetic for “slow” storage). For the examples below we will assume that only one type of storage is required.
Perform the following steps from CLI on a workstation or master node where the OpenShift client has been configured.
# oc project storage
# oc get routes heketi
NAME      HOST/PORT                           PATH      SERVICES   PORT      TERMINATION
heketi    heketi-storage.apps.sysdeseng.com             heketi     <all>
# export HEKETI_CLI_SERVER=http://heketi-storage.apps.sysdeseng.com
# export HEKETI_CLI_USER=admin
# export HEKETI_CLI_KEY=myS3cr3tpassw0rd
Record the cluster id of the glusterfs pods in heketi.
# heketi-cli cluster list
Clusters: eb08054c3d42c88f0924fc6a57811610
The StorageClass object requires both the cluster id and the heketi route to be defined to be created successfully. Use the information from the output of heketi-cli cluster list and oc get routes heketi to fill in the resturl and clusterid. For OpenShift 3.4, the clusterid value is not supported in the StorageClass object. If a value is provided, the StorageClass object will fail to create on OpenShift 3.4. The failure occurs because OpenShift 3.4 can only have a single TSP or CNS cluster.
OpenShift 3.4
# vi glusterfs-storageclass-st1.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-cns-slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: http://heketi-storage.apps.sysdeseng.com
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
The StorageClass object can now be created using this yaml file.
# oc create -f glusterfs-storageclass-st1.yaml
OpenShift 3.5
# vi glusterfs-storageclass-st1.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-cns-slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: http://heketi-storage.apps.sysdeseng.com
  clusterid: eb08054c3d42c88f0924fc6a57811610
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
The StorageClass object can now be created using this yaml file.
# oc create -f glusterfs-storageclass-st1.yaml
To validate the StorageClass object was created, perform the following.
# oc get storageclass gluster-cns-slow
NAME               TYPE
gluster-cns-slow   kubernetes.io/glusterfs
# oc describe storageclass gluster-cns-slow
Name:           gluster-cns-slow
IsDefaultClass: No
Annotations:    <none>
Provisioner:    kubernetes.io/glusterfs
Parameters:     clusterid=e73da525319cbf784ed27df3e8715ea8,restauthenabled=true,resturl=http://heketi-storage.apps.sysdeseng.com,restuser=admin,secretName=heketi-secret,secretNamespace=default
No events.
5.5.7. Creating a Persistent Volume Claim
The StorageClass object created in the previous section allows for storage to be dynamically provisioned using the CNS resources. The example below shows a dynamically provisioned volume being requested from the gluster-cns-slow StorageClass object. A sample persistent volume claim is provided below:
$ oc new-project test
$ oc get storageclass
NAME               TYPE
gluster-cns-slow   kubernetes.io/glusterfs
$ vi db-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-cns-slow
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
$ oc create -f db-claim.yaml
persistentvolumeclaim "db" created
5.6. Additional CNS Storage Deployments (Optional)
An OpenShift administrator may wish to offer multiple storage tiers to developers and users of the OpenShift Container Platform. Typically these tiers refer to certain performance characteristics, e.g. a storage tier called “fast” might be backed by SSDs whereas a storage tier called “slow” is backed by magnetic drives or HDDs. With CNS an administrator can realize this by deploying additional storage nodes running glusterfs pods. The additional nodes allow for the creation of additional Storage Classes. A developer then consumes a storage tier by selecting the appropriate StorageClass object by name.
Creating additional CNS storage deployments is not possible when using OpenShift 3.4. Only one CNS cluster and subsequent StorageClass object can be created.
5.6.1. Deployment of a second Gluster Storage Pool
To deploy an additional glusterfs pool, OpenShift requires additional nodes to be available that are not yet running glusterfs pods. This requires that another three OpenShift nodes are available in the environment, created using either the add-cns-storage.py script or by manually deploying three instances and installing and configuring those nodes for OpenShift.
If running the add-cns-storage.py script a second time, provide a unique value for the --gluster-stack configuration parameter. If the value of --gluster-stack is the same for the old environment and the new one, the existing CNS deployment will be replaced.
Once the new nodes are available, the next step is to get glusterfs pods up and running on the additional nodes. This is achieved by extending the membership of the DaemonSet defined in the first CNS deployment. The storagenode=glusterfs label must be applied to the nodes to allow for the scheduling of the glusterfs pods.
First identify the three nodes that will be added to the CNS cluster and then apply the label.
# oc get nodes
NAME                          STATUS    AGE
...omitted...
ip-10-20-4-189.ec2.internal   Ready     5m
ip-10-20-5-204.ec2.internal   Ready     5m
ip-10-20-6-39.ec2.internal    Ready     5m
...omitted...
# oc label node ip-10-20-4-189.ec2.internal storagenode=glusterfs
# oc label node ip-10-20-5-204.ec2.internal storagenode=glusterfs
# oc label node ip-10-20-6-39.ec2.internal storagenode=glusterfs
Once the label has been applied, the glusterfs pods will scale from 3 pods to 6. The glusterfs pods will be running on both the newly labeled nodes and the existing nodes.
# oc get pods
NAME              READY     STATUS    RESTARTS   AGE
glusterfs-2lcnb   1/1       Running   0          26m
glusterfs-356cf   1/1       Running   0          26m
glusterfs-fh4gm   1/1       Running   0          26m
glusterfs-hg4tk   1/1       Running   0          2m
glusterfs-v759z   1/1       Running   0          1m
glusterfs-x038d   1/1       Running   0          2m
heketi-1-cqjzm    1/1       Running   0          22m
Wait until all of the glusterfs pods are in the READY 1/1 state before continuing. The new pods are not yet configured as a CNS cluster. The new glusterfs pods will become a new CNS cluster after the topology.json file is updated to define the new nodes they reside on and heketi-cli is executed with this new topology.json file as input.
5.6.2. Modifying the Topology File
Modify the topology.json file of the first CNS cluster to include a second entry in the “clusters” list containing the additional nodes. The initial nodes have been omitted from the output below but are still required.
# vi gluster-topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          ... nodes from initial cns-deploy call ...
        }
      ]
    },
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "ip-10-20-4-189.ec2.internal"
              ],
              "storage": [
                "10.20.4.189"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/xvdd"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "ip-10-20-5-204.ec2.internal"
              ],
              "storage": [
                "10.20.5.204"
              ]
            },
            "zone": 2
          },
          "devices": [
            "/dev/xvdd"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "ip-10-20-6-39.ec2.internal"
              ],
              "storage": [
                "10.20.6.39"
              ]
            },
            "zone": 3
          },
          "devices": [
            "/dev/xvdd"
          ]
        }
      ]
    }
  ]
}
Using heketi-cli, load the modified topology.json file via heketi to trigger the creation of a second cluster using the steps below. The first step is to export the values of the heketi server, user, and key. The HEKETI_CLI_KEY value should be the same as that created for the first cluster (set using --admin-key for cns-deploy).
# export HEKETI_CLI_SERVER=http://heketi-storage.apps.sysdeseng.com
# export HEKETI_CLI_USER=admin
# export HEKETI_CLI_KEY=myS3cr3tpassw0rd
With these environment variables exported, the next step is to load the topology.json file.
# heketi-cli topology load --json=gluster-topology.json
Found node ip-10-20-4-163.ec2.internal on cluster 372cf750e0b256fbc8565bb7e4afb434
Found device /dev/xvdd
Found node ip-10-20-5-247.ec2.internal on cluster 372cf750e0b256fbc8565bb7e4afb434
Found device /dev/xvdd
Found node ip-10-20-6-191.ec2.internal on cluster 372cf750e0b256fbc8565bb7e4afb434
Found device /dev/xvdd
Creating cluster ... ID: 269bb26142a15ee10fa8b1cdeb0a37b7
Creating node ip-10-20-4-189.ec2.internal ... ID: 0bd56937ef5e5689e003f68a7fde7c69
Adding device /dev/xvdd ... OK
Creating node ip-10-20-5-204.ec2.internal ... ID: f79524a4de9b799524c87a4feb41545a
Adding device /dev/xvdd ... OK
Creating node ip-10-20-6-39.ec2.internal ... ID: 4ee40aee71b60b0627fb57be3dd2c66e
Adding device /dev/xvdd ... OK
Observe the second cluster being created and verify that there is a new clusterid created in the console output. Verify you now have a second clusterid and that the correct EC2 nodes are in the new cluster.
# heketi-cli cluster list
# heketi-cli topology info
5.6.3. Creating an Additional Storage Class
Create a second StorageClass object via a YAML file similar to the first one, with the same heketi route and heketi secret but using the new clusterid and a unique StorageClass object name.
# vi glusterfs-storageclass-gp2.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-cns-fast
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: http://heketi-storage.apps.sysdeseng.com
  clusterid: 269bb26142a15ee10fa8b1cdeb0a37b7
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
Using the OpenShift client, create the StorageClass object.
# oc create -f glusterfs-storageclass-gp2.yaml
The second StorageClass object will now be available for storage requests by specifying gluster-cns-fast when creating the PVC.
# vi claim2.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-cns-fast
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
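As with the first storage class, the claim can then be created and verified with the OpenShift client; the claim should report a STATUS of Bound once provisioning completes (a minimal sketch, output omitted):
# oc create -f claim2.yaml
# oc get pvc db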
5.7. Container-Ready Storage Overview
Container-Ready Storage (CRS), like CNS, uses Red Hat Gluster Storage to provide dynamically provisioned storage. Unlike CNS, where OpenShift deploys glusterfs and heketi specific pods to be used for OpenShift storage, CRS requires an administrator to install packages and enable the storage services on EC2 instances. Like CNS, CRS enables the requesting and mounting of Red Hat Gluster Storage across one or many containers (access modes RWX, ROX and RWO). CRS allows the Red Hat Gluster Storage to be used outside of OpenShift. CRS can also be used to host the OpenShift registry, as can CNS.
5.7.1. Prerequisites for Container-Ready Storage
Deployment of Container-Ready Storage (CRS) requires at least 3 AWS instances with at least one unused block storage device or EBS volume on each node. The instances should have at least 4 CPUs, 32GB RAM, and an unused volume 100GB or larger per node. An entitlement for Red Hat Gluster Storage is also required to install the Gluster services.
5.7.2. Deployment of CRS Infrastructure
A python script named add-crs-storage.py is provided in the openshift-ansible-contrib git repository which will deploy three AWS instances, register the instances, and install the prerequisites for CRS for Gluster on each instance. Perform the following from the workstation where the deployment of the OpenShift Reference Architecture was initiated.
On deployments of the Reference Architecture environment post OpenShift 3.5, --use-cloudformation-facts is available to auto-populate values. An example of these values can be viewed in the Post Provisioning Results section of this Reference Architecture. If the deployment occurred before 3.5, the values must be filled in manually. To view the possible configuration triggers run add-crs-storage.py -h. The add-crs-storage.py script requires AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be exported as environment variables.
If the Reference Architecture deployment is >= OpenShift 3.5
$ cd /home/<user>/git/openshift-ansible-contrib/reference-architecture/aws-ansible/
$ ./add-crs-storage.py --rhsm-user=username --rhsm-password=password --region=us-east-1 \
  --gluster-stack=crs --public-hosted-zone=sysdeseng.com \
  --rhsm-pool="Red Hat Gluster Storage , Standard" --keypair=OSE-key \
  --existing-stack=openshift-infra --use-cloudformation-facts
If the Reference Architecture deployment was performed before 3.5.
$ cd /home/<user>/git/openshift-ansible-contrib/reference-architecture/aws-ansible/
$ ./add-crs-storage.py --rhsm-user=username --rhsm-password=PASSWORD --public-hosted-zone=sysdeseng.com \
  --rhsm-pool="Red Hat Gluster Storage , Standard" --keypair=OSE-key --existing-stack=openshift-infra \
  --private-subnet-id1=subnet-ad2b23f6 --private-subnet-id2=subnet-7cd61a34 --region=us-east-1 \
  --private-subnet-id3=subnet-77e89a4b --node-sg=sg-0c7e0f73 --bastion-sg=sg-1a2b5a23 --gluster-stack=crs
Using the add-crs-storage.py script is optional. Nodes can be deployed without using this script as long as the 3 new instances have 4 CPUs, 32GB RAM, and an unused storage device or EBS volume.
5.7.3. CRS Subscription Prerequisites
CRS requires the instances to use the Red Hat Gluster Storage entitlement, which allows access to the rh-gluster-3-for-rhel-7-server-rpms repository containing the RPMs required for a successful installation.
If the add-crs-storage.py script was not used, perform the following on the 3 CRS instances to enable the required repository. Ensure the pool that is specified matches a pool available to the RHSM credentials provided (example pool ID shown below).
# subscription-manager register
# subscription-manager attach --pool=8a85f98156981319015699f0183a253c
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rh-gluster-3-for-rhel-7-server-rpms
5.7.4. Firewall and Security Group Prerequisites
The following ports must be opened on the CRS nodes and in the AWS gluster-crs-sg security group. Ensure the ports defined in the table below are opened. Iptables or firewalld can be used depending on the preference of the administrator. These steps are done as part of the automated provisioning of the instances with the add-crs-storage.py script.

Table 5.3. AWS Nodes Security Group Details - Inbound
Inbound | From |
---|---|
22 / TCP | bastion_sg |
22 / TCP | gluster-crs-sg |
2222 / TCP | gluster-crs-sg |
8080 / TCP | gluster-crs-sg |
8080 / TCP | ose-node-sg |
24007 / TCP | gluster-crs-sg |
24007 / TCP | ose-node-sg |
24008 / TCP | gluster-crs-sg |
24008 / TCP | ose-node-sg |
49152-49664 / TCP | gluster-crs-sg |
49152-49664 / TCP | ose-node-sg |
The add-crs-storage.py script uses iptables and creates the rules shown in the table above on each of the CRS nodes. The following commands can be run on the 3 new instances if the instances were built without using the add-crs-storage.py script.
# yum -y install firewalld
# systemctl enable firewalld
# systemctl disable iptables
# systemctl stop iptables
# systemctl start firewalld
# firewall-cmd --add-port=24007/tcp --add-port=24008/tcp --add-port=2222/tcp \
  --add-port=8080/tcp --add-port=49152-49251/tcp --permanent
# firewall-cmd --reload
5.7.5. CRS Package Prerequisites
The redhat-storage-server package and its dependencies install all of the RPMs required for a successful Red Hat Gluster Storage installation. If the add-crs-storage.py script was not used, perform the following on each of the three CRS instances.
# yum install -y redhat-storage-server
After successful installation, enable and start the glusterd.service.
# systemctl enable glusterd
# systemctl start glusterd
5.7.6. Installing and Configuring Heketi
Heketi is used to manage the Gluster Trusted Storage Pool (TSP). Heketi performs tasks such as adding volumes, removing volumes, and creating the initial TSP. Heketi can be installed on one of the CRS instances or even within OpenShift if desired. Regardless of whether the add-crs-storage.py script was used, the following must be performed on the CRS instance chosen to run Heketi. For the steps below the first CRS Gluster instance will be used.
# yum install -y heketi heketi-client
Create the heketi private key on the instance designated to run heketi.
# ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
# chown heketi:heketi /etc/heketi/heketi_key.pub
# chown heketi:heketi /etc/heketi/heketi_key
Copy the contents of /etc/heketi/heketi_key.pub into a clipboard, log in to each CRS node, and paste the contents of the clipboard as a new line into the /home/ec2-user/.ssh/authorized_keys file. This must be done on all 3 instances, including the CRS node where the heketi services are running (a command-line alternative to manually pasting the key is shown after the sudoers check below). Also, on each of the 3 instances, requiretty must be disabled or removed in /etc/sudoers to allow for management of those hosts using sudo. Ensure that the line below either does not exist in sudoers or that it is commented out.
# visudo
... omitted ...
#Defaults requiretty
... omitted ...
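As an alternative to manually pasting the public key, it can be appended over an existing SSH session from the heketi node; the sketch below is a convenience, not a required step (the hostname is one of the CRS nodes in this example, and it assumes the deployer can already SSH to that node as ec2-user):
# cat /etc/heketi/heketi_key.pub | ssh ec2-user@ip-10-20-5-104.ec2.internal 'cat >> ~/.ssh/authorized_keys'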
On the node where Heketi was installed, edit the /etc/heketi/heketi.json file to set up the SSH executor and the admin and user keys. The heketi admin key (password) will be used to create a heketi-secret in OpenShift. This secret will then be used during the creation of the StorageClass object.
# vi /etc/heketi/heketi.json
... omitted ...
"_use_auth": "Enable JWT authorization. Please enable for deployment",
"use_auth": true,
"_jwt": "Private keys for access",
"jwt": {
"_admin": "Admin has access to all APIs",
"admin": {
"key": "myS3cr3tpassw0rd"
},
"_user": "User only has access to /volumes endpoint",
"user": {
"key": "mys3rs3cr3tpassw0rd"
}
},
"glusterfs": {
"_executor_comment": [
"Execute plugin. Possible choices: mock, ssh",
"mock: This setting is used for testing and development.",
" It will not send commands to any node.",
"ssh: This setting will notify Heketi to ssh to the nodes.",
" It will need the values in sshexec to be configured.",
"kubernetes: Communicate with GlusterFS containers over",
" Kubernetes exec api."
],
"executor": "ssh",
"_sshexec_comment": "SSH username and private key file information",
"sshexec": {
"keyfile": "/etc/heketi/heketi_key",
"user": "ec2-user",
"sudo": true,
"port": "22",
"fstab": "/etc/fstab"
},
... omitted ...
Restart and enable the heketi service to use the configured /etc/heketi/heketi.json file.
# systemctl restart heketi
# systemctl enable heketi
The heketi service should now be running. Heketi provides an endpoint to perform a health check. This validation can be done from either an OpenShift master or from any of the CRS instances.
# curl http://ip-10-20-4-40.ec2.internal:8080/hello
Hello from Heketi
5.7.7. Loading Topology File
The topology.json file is used to tell heketi about the environment and which nodes and storage devices it will manage. A sample file is located at /usr/share/heketi/topology-sample.json, and an example is shown below for 3 CRS nodes in 3 zones. Both CRS and CNS use the same format for the topology.json file.
# vi topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "ip-10-20-4-40.ec2.internal"
              ],
              "storage": [
                "10.20.4.40"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/xvdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "ip-10-20-5-104.ec2.internal"
              ],
              "storage": [
                "10.20.5.104"
              ]
            },
            "zone": 2
          },
          "devices": [
            "/dev/xvdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "ip-10-20-6-79.ec2.internal"
              ],
              "storage": [
                "10.20.6.79"
              ]
            },
            "zone": 3
          },
          "devices": [
            "/dev/xvdb"
          ]
        }
      ]
    }
  ]
}
The HEKETI_CLI_SERVER, HEKETI_CLI_USER, and HEKETI_CLI_KEY environment variables are required for heketi-cli commands to be run. The HEKETI_CLI_SERVER is the AWS instance name where the heketi services are running. The HEKETI_CLI_KEY is the admin key value configured in the /etc/heketi/heketi.json file.
# export HEKETI_CLI_SERVER=http://ip-10-20-4-40.ec2.internal:8080
# export HEKETI_CLI_USER=admin
# export HEKETI_CLI_KEY=myS3cr3tpassw0rd
Using heketi-cli, run the following command to load the topology of your environment.
# heketi-cli topology load --json=topology.json
Found node ip-10-20-4-40.ec2.internal on cluster c21779dd2a6fb2d665f3a5b025252849
Adding device /dev/xvdb ... OK
Creating node ip-10-20-5-104.ec2.internal ... ID: 53f9e1af44cd5471dd40f3349b00b1ed
Adding device /dev/xvdb ... OK
Creating node ip-10-20-6-79.ec2.internal ... ID: 328dfe7fab00a989909f6f46303f561c
Adding device /dev/xvdb ... OK
5.7.8. Validating Gluster Installation (Optional)
From the instance where the heketi client is installed and the heketi environment variables have been exported, create a Gluster volume to verify heketi.
# heketi-cli volume create --size=50
Name: vol_4950679f18b9fad6f118b2b20b0c727e
Size: 50
Volume Id: 4950679f18b9fad6f118b2b20b0c727e
Cluster Id: b708294a5a2b9fed2430af9640e7cae7
Mount: 10.20.4.119:vol_4950679f18b9fad6f118b2b20b0c727e
Mount Options: backup-volfile-servers=10.20.5.14,10.20.6.132
Durability Type: replicate
Distributed+Replica: 3
The command gluster volume info can provide further information on the newly created Gluster volume.
# gluster volume info
Volume Name: vol_f2eec68b2dea1e6c6725d1ca3f9847a4
Type: Replicate
Volume ID: f001d6e9-fee4-4e28-9908-359bbd28b8f5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.20.5.104:/var/lib/heketi/mounts/vg_9de785372f550942e33d0f3abd8cd9ab/brick_03cfb63f8293238affe791032ec779c2/brick
Brick2: 10.20.4.40:/var/lib/heketi/mounts/vg_ac91b30f6491c571d91022d24185690f/brick_2b60bc032bee1be7341a2f1b5441a37f/brick
Brick3: 10.20.6.79:/var/lib/heketi/mounts/vg_34faf7faaf6fc469298a1c15a0b2fd2f/brick_d7181eb86f3ca37e4116378181e28855/brick
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
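The 50GB volume created above exists only to validate the installation; it can optionally be removed with heketi-cli once the check is complete. A hedged example using the Volume Id reported by the create command:
# heketi-cli volume delete 4950679f18b9fad6f118b2b20b0c727e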
5.8. CRS for OpenShift
5.8.1. Store the heketi secret
OpenShift allows for the use of secrets so that items do not need to be stored in clear text. The admin password for heketi, specified during configuration of the heketi.json file, should be stored in base64 encoding. OpenShift can refer to this secret instead of specifying the password in clear text.
To generate the base64-encoded equivalent of the admin password configured in the heketi.json file, perform the following.
# echo -n myS3cr3tpassw0rd | base64
bXlTM2NyM3RwYXNzdzByZA==
On the master or workstation with the OpenShift client installed and cluster-admin privileges, use the base64 password string in the following YAML to define the secret in OpenShift’s default namespace.
# vi heketi-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  key: bXlTM2NyM3RwYXNzdzByZA==
type: kubernetes.io/glusterfs
Create the secret by using the following command.
# oc create -f heketi-secret.yaml
secret "heketi-secret" created
5.8.2. Creating a Storage Class
CRS storage has all of the same benefits that CNS storage has with regard to OpenShift storage. The cluster-admin or storage-admin can perform the following, which allows for dynamically provisioned CRS storage on demand. The key benefit of this storage is that the persistent storage can be created with access modes ReadWriteOnce (RWO), ReadOnlyMany (ROX), or ReadWriteMany (RWX), adding more flexibility than cloud provider specific storage.
A StorageClass object requires certain parameters to be defined to successfully create the resource. Use the values of the exported environment variables from the previous steps to define the resturl, restuser, secretNamespace, and secretName.
# vi storage-crs.json
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: crs-slow-st1
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://ip-10-20-4-40.ec2.internal:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
Once the StorageClass definition file has been created, use the oc create command to create the object in OpenShift.
# oc create -f storage-crs.json
To validate the StorageClass was created, perform the following.
# oc get storageclass
NAME           TYPE
crs-slow-st1   kubernetes.io/glusterfs
# oc describe storageclass crs-slow-st1
Name:           crs-slow-st1
IsDefaultClass: No
Annotations:    storageclass.beta.kubernetes.io/is-default-class=true
Provisioner:    kubernetes.io/glusterfs
Parameters:     restauthenabled=true,resturl=http://ip-10-20-4-40.ec2.internal:8080,restuser=admin,secretName=heketi-secret,secretNamespace=default
No events.
5.8.3. Creating a Persistent Volume Claim
The StorageClass object created in the previous section allows for storage to be dynamically provisioned using the CRS resources. The example below shows a dynamically provisioned volume being requested from the crs-slow-st1 StorageClass object. A sample persistent volume claim is provided below:
$ oc new-project test
$ vi db-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db
  annotations:
    volume.beta.kubernetes.io/storage-class: crs-slow-st1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
$ oc create -f db-claim.yaml
persistentvolumeclaim "db" created
5.9. RWO Persistent Storage Example (Optional)
For ReadWriteOnce storage, any of the StorageClass objects created in the sections above can be used. The persistent volume claim is made at the time of application deployment and provisioned based on the rules in the StorageClass object. The example below deploys MySQL using an OpenShift standard template and one of the StorageClass objects defined above.
Create an OpenShift project for the MySQL deployment.
# oc new-project rwo
The mysql-persistent template will be used for deploying MySQL. The first step is to check that the template is available for use.
# oc get templates -n openshift | grep "MySQL database service, with persistent storage"
mysql-persistent MySQL database service, with persistent storage.
Export the default mysql-persistent template content into a yaml file. The OpenShift client can provide a view of the available parameters for this template.
# oc export template/mysql-persistent -n openshift -o yaml > mysql-persistent.yaml
# oc process -f mysql-persistent.yaml --parameters
View the contents of the yaml file and add the lines below to identify the StorageClass object from which the MySQL PVC will be created. If these lines are not added, the default StorageClass object will be used. Any of the StorageClass objects created in this reference architecture can be used.
# vi mysql-persistent.yaml
…. omitted ….
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ${DATABASE_SERVICE_NAME}
    annotations:
      volume.beta.kubernetes.io/storage-class: gluster-cns-fast
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: ${VOLUME_CAPACITY}
…. omitted ….
Create a deployment manifest from the mysql-persistent.yaml template file and view the contents. Make sure to modify ‘storage: ${VOLUME_CAPACITY}’ to be the desired size for the database (1Gi is the default value).
# oc process -f mysql-persistent.yaml -o yaml > cns-mysql-persistent.yaml
# vi cns-mysql-persistent.yaml
…. omitted ….
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      volume.beta.kubernetes.io/storage-class: gluster-cns-fast
    labels:
      template: mysql-persistent-template
    name: mysql
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
…. omitted ….
Using the deployment manifest, create the objects for the MySQL application.
# oc create -f cns-mysql-persistent.yaml
secret "mysql" created
service "mysql" created
persistentvolumeclaim "mysql" created
deploymentconfig "mysql" created
Validate that the application is using a persistent volume claim.
# oc describe dc mysql
…. omitted ….
Volumes:
mysql-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql
ReadOnly: false
…. omitted ….
# oc get pvc mysql
NAME STATUS VOLUME CAPACITY ACCESSMODES
mysql Bound pvc-fc297b76-1976-11e7-88db-067ee6f6ca67 1Gi RWO
Validate that the MySQL pod has a PV mounted at the /var/lib/mysql/data directory.
# oc volumes dc mysql
deploymentconfigs/mysql
pvc/mysql (allocated 1GiB) as mysql-data
mounted at /var/lib/mysql/data
The option also exists to connect to the running pod to view the storage that is currently in use.
# oc rsh mysql-1-4tb9g
sh-4.2$ df -h /var/lib/mysql/data
Filesystem Size Used Avail Use% Mounted on
10.20.4.40:vol_e9b42baeaaab2b20d816b65cc3095558 1019M 223M 797M 22% /var/lib/mysql/data
5.10. RWX Persistent Storage (Optional)
One of the benefits of using Red Hat Gluster Storage is the ability to use the access mode ReadWriteMany (RWX) for container storage. This example uses a PHP application which requires a persistent volume mount point. The application will be scaled to show the benefits of RWX persistent storage.
Create a test project for the demo application.
# oc new-project rwx
Create the application using the following github link:
# oc new-app openshift/php:7.0~https://github.com/christianh814/openshift-php-upload-demo --name=demo
--> Found image d3b9896 (2 weeks old) in image stream "openshift/php" under tag "7.0" for "openshift/php:7.0"
Apache 2.4 with PHP 7.0
-----------------------
Platform for building and running PHP 7.0 applications
Tags: builder, php, php70, rh-php70
* A source build using source code from https://github.com/christianh814/openshift-php-upload-demo will be created
* The resulting image will be pushed to image stream "demo:latest"
* Use 'start-build' to trigger a new build
* This image will be deployed in deployment config "demo"
* Port 8080/tcp will be load balanced by service "demo"
* Other containers can access this service through the hostname "demo"
--> Creating resources ...
imagestream "demo" created
buildconfig "demo" created
deploymentconfig "demo" created
service "demo" created
--> Success
Build scheduled, use 'oc logs -f bc/demo' to track its progress.
Run 'oc status' to view your app.
Validate that the build is complete and the pods are running.
# oc get pods
NAME READY STATUS RESTARTS AGE
demo-1-build 0/1 Completed 0 20s
demo-1-sch77 1/1 Running 0 7s
The next step is to retrieve the name of the OpenShift svc, which will be used to create a route.
# oc get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo 172.30.211.203 <none> 8080/TCP 1m
Expose the service as a public route by using the oc expose command.
# oc expose svc/demo
route "demo" exposed
OpenShift will create a route based on the application name, project, and wildcard zone. This will be the URL that can be accessed by a browser.
# oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
demo demo-manual.apps.sysdeseng.com demo 8080-tcp None
Using a web browser validate the application (example http://demo-manual.apps.sysdeseng.com/) using the route defined in the previous step.

Upload a file using the web UI.

Connect to the demo-1-sch77 pod and verify the file exists.
# oc get pods
NAME           READY     STATUS    RESTARTS   AGE
demo-1-sch77   1/1       Running   0          5m
# oc rsh demo-1-sch77
sh-4.2$ cd uploaded
sh-4.2$ pwd
/opt/app-root/src/uploaded
sh-4.2$ ls -lh
total 16K
-rw-r--r--. 1 1000080000 root 16K Apr 26 21:32 cns-deploy-4.0.0-15.el7rhgs.x86_64.rpm.gz
Scale up the number of demo-1 pods from 1 to 2.
# oc scale dc/demo --replicas=2
# oc get pods
NAME READY STATUS RESTARTS AGE
demo-1-build 0/1 Completed 0 7m
demo-1-sch77 1/1 Running 0 7m
demo-1-sdz28 0/1 Running 0 3s
Login to the newly created pod and view the uploaded directory.
# oc rsh demo-1-sdz28
sh-4.2$ cd uploaded
sh-4.2$ pwd
/opt/app-root/src/uploaded
sh-4.2$ ls -lh
total 0
The uploaded file is not available to this newly created second pod because the storage is local to the pod demo-1-sch77. In the next steps, the storage for the pods will be changed from local or ephemeral storage to a RWX persistent volume claim for the mount point /opt/app-root/src/uploaded.
First, add a persistent volume claim to the project. The existing OCP StorageClass object created for a CNS cluster (gluster-cns-slow) will be used to create a PVC with the access mode of RWX.
A CRS StorageClass object can be used in the steps below as well.
The first step is to create the app-claim.yaml file.
# vi app-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-cns-slow
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
Using the app-claim.yaml file, use the OpenShift client to create the PVC.
# oc create -f app-claim.yaml
persistentvolumeclaim "app" created
Verify the PVC was created.
# oc get pvc app
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
app Bound pvc-418330b7-2ac9-11e7-946e-067f85bdafe9 10Gi RWX 46s
Now that the PVC exists, tie the claim to the deployment configuration using the existing mount path /opt/app-root/src/uploaded for the demo pods.
# oc volume dc/demo --add --name=persistent-volume --type=persistentVolumeClaim --claim-name=app --mount-path=/opt/app-root/src/uploaded
A new deployment is created using the PVC, and there are two new demo pods.
# oc get pods
NAME READY STATUS RESTARTS AGE
demo-1-build 0/1 Completed 0 16m
demo-2-9cv88 1/1 Running 0 8s
demo-2-m1mwt 1/1 Running 0 13s
Now there is a persistent volume allocated using the gluster-cns-slow storage class and mounted at /opt/app-root/src/uploaded on the demo-2 pods.
# oc volumes dc demo
deploymentconfigs/demo
pvc/app (allocated 10GiB) as persistent-volume
mounted at /opt/app-root/src/uploaded
Using the route for the demo-2 deployment, upload a new file (example http://demo-manual.apps.sysdeseng.com/).

Now login to both pods and validate that both pods can read the newly uploaded file.
On the first pod perform the following.
# oc rsh demo-2-9cv88
sh-4.2$ df -h
Filesystem Size Used Avail Use% Mounted on
….omitted….
10.20.4.115:vol_624ec880d10630989a8bdf90ae183366 10G 39M 10G 1% /opt/app-root/src/uploaded
sh-4.2$ cd /opt/app-root/src/uploaded
sh-4.2$ ls -lh
total 5.6M
-rw-r--r--. 1 1000080000 2002 5.6M Apr 26 21:51 heketi-client-4.0.0-7.el7rhgs.x86_64.rpm.gz
On the second pod perform the following.
# oc rsh demo-2-m1mwt
sh-4.2$ df -h
Filesystem Size Used Avail Use% Mounted on
….omitted….
10.20.4.115:vol_624ec880d10630989a8bdf90ae183366 10G 39M 10G 1% /opt/app-root/src/uploaded
sh-4.2$ cd /opt/app-root/src/uploaded
sh-4.2$ ls -lh
total 5.6M
-rw-r--r--. 1 1000080000 2002 5.6M Apr 26 21:51 heketi-client-4.0.0-7.el7rhgs.x86_64.rpm.gz
Scale up the number of demo-2 pods from two to three.
# oc scale dc/demo --replicas=3
Verify the third pod has a STATUS of Running.
# oc get pods
NAME READY STATUS RESTARTS AGE
demo-1-build 0/1 Completed 0 43m
demo-2-9cv88 1/1 Running 0 26m
demo-2-kcc16 1/1 Running 0 5s
demo-2-m1mwt 1/1 Running 0 27m
Login to the third pod and validate the uploaded file exists.
# oc rsh demo-2-kcc16
sh-4.2$ cd uploaded
sh-4.2$ ls -lh
total 5.6M
-rw-r--r--. 1 1000080000 2002 5.6M Apr 26 21:51 heketi-client-4.0.0-7.el7rhgs.x86_64.rpm.gz
Because of the use of a CNS RWX persistent volume for the mount point /opt/app-root/src/uploaded, the file that was uploaded using the Web UI for the demo application is now available to be read or downloaded by all demo-2 pods no matter how they are scaled up or down.
5.11. Deleting a PVC (Optional)
There may come a point at which a PVC is no longer necessary for a project. The following can be done to remove the PVC.
# oc delete pvc db
persistentvolumeclaim "db" deleted
# oc get pvc db
No resources found.
Error from server: persistentvolumeclaims "db" not found