Chapter 8. Post Deployment and Validation
This section provides post-deployment and validation information for the Red Hat OpenShift Container Platform deployment.
Validate the Deployment
Once the deployment is complete, the Red Hat OpenShift Container Platform will resemble the deployment illustrated in the image below. The deployment has three master nodes running the etcd cluster, three infrastructure nodes, and three Container-native storage nodes. The registry for container applications runs on two of the infrastructure nodes; moving the registry to persistent storage backed by Gluster Container-native storage is described later in this document. One of the infrastructure nodes runs HAProxy to provide load balancing of external traffic among the master nodes. Routing services run on two of the infrastructure nodes as well; the routers provide routing between the external network and the container network. Three nodes run Container-native storage to provide containers with persistent storage.

Figure 31 Red Hat OpenShift Container Platform Deployment
Verify the Red Hat OpenShift Container Platform etcd cluster is running
To verify that the Red Hat OpenShift Container Platform deployment was successful, check the etcd cluster by logging into the first master node and running the following command:
etcdctl -C https://ocp-master1.hpecloud.test:2379,https://ocp-master2.hpecloud.test:2379,https://ocp-master3.hpecloud.test:2379 \
--ca-file=/etc/origin/master/master.etcd-ca.crt \
--cert-file=/etc/origin/master/master.etcd-client.crt \
--key-file=/etc/origin/master/master.etcd-client.key cluster-health
2017-06-15 14:44:06.566247 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2017-06-15 14:44:06.566839 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
member 70f2422ba978ec65 is healthy: got healthy result from https://10.19.20.173:2379
member b6d844383be04fb2 is healthy: got healthy result from https://10.19.20.175:2379
member fe30d0c37c03d494 is healthy: got healthy result from https://10.19.20.174:2379
cluster is healthy
Check the members of the etcd cluster by executing the etcdctl member list command as shown below:
etcdctl -C https://ocp-master1.hpecloud.test:2379,https://ocp-master2.hpecloud.test:2379,https://ocp-master3.hpecloud.test:2379 \
--ca-file=/etc/origin/master/master.etcd-ca.crt \
--cert-file=/etc/origin/master/master.etcd-client.crt \
--key-file=/etc/origin/master/master.etcd-client.key member list
2017-06-15 14:48:58.657439 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
70f2422ba978ec65: name=ocp-master1.hpecloud.test peerURLs=https://10.19.20.173:2380 clientURLs=https://10.19.20.173:2379 isLeader=false
b6d844383be04fb2: name=ocp-master3.hpecloud.test peerURLs=https://10.19.20.175:2380 clientURLs=https://10.19.20.175:2379 isLeader=false
fe30d0c37c03d494: name=ocp-master2.hpecloud.test peerURLs=https://10.19.20.174:2380 clientURLs=https://10.19.20.174:2379 isLeader=true
List nodes
Executing the oc get nodes command will display the OpenShift Container Platform nodes and their respective status.
oc get nodes
NAME                        STATUS                     AGE
ocp-cns1.hpecloud.test      Ready                      1h
ocp-cns2.hpecloud.test      Ready                      1h
ocp-cns3.hpecloud.test      Ready                      1h
ocp-infra1.hpecloud.test    Ready                      1h
ocp-infra2.hpecloud.test    Ready                      1h
ocp-master1.hpecloud.test   Ready,SchedulingDisabled   1h
ocp-master2.hpecloud.test   Ready,SchedulingDisabled   1h
ocp-master3.hpecloud.test   Ready,SchedulingDisabled   1h
List projects
Using the oc get projects command will display all projects that the user has permission to access.
oc get projects
NAME               DISPLAY NAME   STATUS
default                           Active
kube-system                       Active
logging                           Active
management-infra                  Active
openshift                         Active
openshift-infra                   Active
Using project "default" on server "https://openshift-master.hpecloud.test:8443".
List pods
The oc get pods command will list the running pods.
oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-1-sx2sh    1/1       Running   0          1h
docker-registry-1-tbz80    1/1       Running   0          1h
registry-console-1-sxm4d   1/1       Running   0          1h
router-1-ntql2             1/1       Running   0          1h
router-1-s6xqt             1/1       Running   0          1h
List services
The oc get services command will list the available services, their cluster IP addresses, and ports.
oc get services
NAME               CLUSTER-IP      EXTERNAL-IP   PORT(S)                   AGE
docker-registry    172.30.80.1     <none>        5000/TCP                  1h
kubernetes         172.30.0.1      <none>        443/TCP,53/UDP,53/TCP     1h
registry-console   172.30.105.3    <none>        9000/TCP                  1h
router             172.30.206.66   <none>        80/TCP,443/TCP,1936/TCP   1h
List endpoints
The oc get endpoints command will display the endpoint IP addresses and ports backing each service.
oc get endpoints
NAME               ENDPOINTS                                                           AGE
docker-registry    10.128.2.4:5000,10.128.2.5:5000                                     1h
kubernetes         10.19.20.173:8443,10.19.20.174:8443,10.19.20.175:8443 + 6 more...   1h
registry-console   10.128.2.3:9090                                                     1h
router             10.19.20.170:443,10.19.20.171:443,10.19.20.170:1936 + 3 more...     1h
Verify the Red Hat OpenShift Container Platform User Interface
Log into the Red Hat OpenShift Container Platform user interface using the public URL defined in the openshift_master_cluster_public_hostname variable, as shown below.
openshift_master_cluster_public_hostname: openshift-master.hpecloud.test

Figure 32 Red Hat OpenShift Container Platform User Interface
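Before logging in through a browser, the availability of the console endpoint can also be checked from the command line. The following is a minimal sketch, assuming the default master API port of 8443 and the /healthz endpoint exposed by the OpenShift master; adjust the hostname and port to match your environment.
# Check that the master endpoint is responding (expects "ok").
# -k skips certificate verification, which may be needed with self-signed certificates.
curl -k https://openshift-master.hpecloud.test:8443/healthz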
Verify Container-native storage
Container-native storage is a hyper-converged solution that allows containerized applications and Gluster-based storage to reside on the same nodes. Container-native storage provides containers with persistent storage. Secrets are defined to enhance security for Container-native storage; these secrets are applied to the configuration as variables in the cnsdeploy.yaml Ansible task. The secrets are stored in an encrypted format in Ansible Vault and are initially defined in the passwords.yaml file. An admin secret and a user secret are defined. Set the environment variables by exporting the heketi user and the heketi secret as HEKETI_CLI_USER and HEKETI_CLI_KEY for use with the heketi-cli command, for example export HEKETI_CLI_USER=admin and export HEKETI_CLI_KEY=n0tP@ssw0rd . Alternatively, the user and secret can be passed on the command line using the --secret and --user options as shown below.
heketi-cli topology info -s http://heketi-rhgs-cluster.paas.hpecloud.test --user admin --secret n0tP@ssw0rd
Use the oc get route heketi command to get the heketi server name.
oc get route heketi
NAME      HOST/PORT                                PATH      SERVICES   PORT      TERMINATION   WILDCARD
heketi    heketi-rhgs-cluster.paas.hpecloud.test             heketi     <all>                   None
Export the heketi server name as HEKETI_CLI_SERVER by executing export HEKETI_CLI_SERVER=http://heketi-rhgs-cluster.paas.hpecloud.test . The complete set of environment variable exports used in this example is summarized below.
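For convenience, the three variables can be exported together. The values below are the example user, secret, and heketi route used throughout this document; substitute the values from your own deployment.
# Example only: replace the user, secret, and route with the values for your environment.
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=n0tP@ssw0rd
export HEKETI_CLI_SERVER=http://heketi-rhgs-cluster.paas.hpecloud.test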
Using the heketi client, create a 100 GB test volume named test_volume on the cluster:
heketi-cli volume create --size=100 --name test_volume
Name: test_volume
Size: 100
Volume Id: b39a544984eb9848a1727564ea66cb6f
Cluster Id: 05f9e98b350c2341c1dffb746d0549e9
Mount: 10.19.20.176:test_volume
Mount Options: backup-volfile-servers=10.19.20.177,10.19.20.178
Durability Type: replicate
Distributed+Replica: 3
In our example the heketi server name is heketi-rhgs-cluster.paas.hpecloud.test; this is the value exported to the HEKETI_CLI_SERVER variable. Use the heketi client, heketi-cli , to view the Gluster topology as shown below. Notice the 100 GB test_volume created previously. To delete the test volume, use heketi-cli volume delete <volume ID> as shown after the topology output.
heketi-cli topology info
Cluster Id: 05f9e98b350c2341c1dffb746d0549e9
Volumes:
Name: heketidbstorage
Size: 2
Id: 51b91b87d77bff02cb5670cbab621d6d
Cluster Id: 05f9e98b350c2341c1dffb746d0549e9
Mount: 10.19.20.176:heketidbstorage
Mount Options: backup-volfile-servers=10.19.20.177,10.19.20.178
Durability Type: replicate
Replica: 3
Snapshot: Disabled
Bricks:
Id: 7f243adb17ab1dbf64f032bff7c47d81
Path: /var/lib/heketi/mounts/vg_5e25001fb558bff388f3304897440f33/brick_7f243adb17ab1dbf64f032bff7c47d81/brick
Size (GiB): 2
Node: 0fd823a6c17500ffcfdf797f9eaa9d4a
Device: 5e25001fb558bff388f3304897440f33
-
Id: 8219d3b014d751bbb58fcfae57eb1cf0
Path: /var/lib/heketi/mounts/vg_159c4b0cca850b7549b3957e93014ffe/brick_8219d3b014d751bbb58fcfae57eb1cf0/brick
Size (GiB): 2
Node: 8b1e2abf4076ef4dd93299f2be042b89
Device: 159c4b0cca850b7549b3957e93014ffe
-
Id: 843b35abc2ab505ed7591a2873cdd29d
Path: /var/lib/heketi/mounts/vg_01423a48fb095139ce7b229a73eac064/brick_843b35abc2ab505ed7591a2873cdd29d/brick
Size (GiB): 2
Node: a5923b6b35fef3131bd077a33f658239
Device: 01423a48fb095139ce7b229a73eac064
Name: test_volume
Size: 100
Id: b39a544984eb9848a1727564ea66cb6f
Cluster Id: 05f9e98b350c2341c1dffb746d0549e9
Mount: 10.19.20.176:test_volume
Mount Options: backup-volfile-servers=10.19.20.177,10.19.20.178
Durability Type: replicate
Replica: 3
Snapshot: Disabled
Bricks:
Id: 45cc45e08b6215ee206093489fa12216
Path: /var/lib/heketi/mounts/vg_01423a48fb095139ce7b229a73eac064/brick_45cc45e08b6215ee206093489fa12216/brick
Size (GiB): 100
Node: a5923b6b35fef3131bd077a33f658239
Device: 01423a48fb095139ce7b229a73eac064
-
Id: 47722ac826e6a263c842fe6a87f30d84
Path: /var/lib/heketi/mounts/vg_159c4b0cca850b7549b3957e93014ffe/brick_47722ac826e6a263c842fe6a87f30d84/brick
Size (GiB): 100
Node: 8b1e2abf4076ef4dd93299f2be042b89
Device: 159c4b0cca850b7549b3957e93014ffe
-
Id: a257cdad1725fcfa22de954e59e70b64
Path: /var/lib/heketi/mounts/vg_5e25001fb558bff388f3304897440f33/brick_a257cdad1725fcfa22de954e59e70b64/brick
Size (GiB): 100
Node: 0fd823a6c17500ffcfdf797f9eaa9d4a
Device: 5e25001fb558bff388f3304897440f33
Nodes:
Node Id: 0fd823a6c17500ffcfdf797f9eaa9d4a
State: online
Cluster Id: 05f9e98b350c2341c1dffb746d0549e9
Zone: 1
Management Hostname: ocp-cns3.hpecloud.test
Storage Hostname: 10.19.20.178
Devices:
Id:5e25001fb558bff388f3304897440f33 Name:/dev/sdb State:online Size (GiB):11177 Used (GiB):102 Free (GiB):11075
Bricks:
Id:7f243adb17ab1dbf64f032bff7c47d81 Size (GiB):2 Path: /var/lib/heketi/mounts/vg_5e25001fb558bff388f3304897440f33/brick_7f243adb17ab1dbf64f032bff7c47d81/brick
Id:a257cdad1725fcfa22de954e59e70b64 Size (GiB):100 Path: /var/lib/heketi/mounts/vg_5e25001fb558bff388f3304897440f33/brick_a257cdad1725fcfa22de954e59e70b64/brick
Node Id: 8b1e2abf4076ef4dd93299f2be042b89
State: online
Cluster Id: 05f9e98b350c2341c1dffb746d0549e9
Zone: 1
Management Hostname: ocp-cns2.hpecloud.test
Storage Hostname: 10.19.20.177
Devices:
Id:159c4b0cca850b7549b3957e93014ffe Name:/dev/sdb State:online Size (GiB):11177 Used (GiB):102 Free (GiB):11075
Bricks:
Id:47722ac826e6a263c842fe6a87f30d84 Size (GiB):100 Path: /var/lib/heketi/mounts/vg_159c4b0cca850b7549b3957e93014ffe/brick_47722ac826e6a263c842fe6a87f30d84/brick
Id:8219d3b014d751bbb58fcfae57eb1cf0 Size (GiB):2 Path: /var/lib/heketi/mounts/vg_159c4b0cca850b7549b3957e93014ffe/brick_8219d3b014d751bbb58fcfae57eb1cf0/brick
Node Id: a5923b6b35fef3131bd077a33f658239
State: online
Cluster Id: 05f9e98b350c2341c1dffb746d0549e9
Zone: 1
Management Hostname: ocp-cns1.hpecloud.test
Storage Hostname: 10.19.20.176
Devices:
Id:01423a48fb095139ce7b229a73eac064 Name:/dev/sdb State:online Size (GiB):11177 Used (GiB):102 Free (GiB):11075
Bricks:
Id:45cc45e08b6215ee206093489fa12216 Size (GiB):100 Path: /var/lib/heketi/mounts/vg_01423a48fb095139ce7b229a73eac064/brick_45cc45e08b6215ee206093489fa12216/brick
Id:843b35abc2ab505ed7591a2873cdd29d Size (GiB):2 Path: /var/lib/heketi/mounts/vg_01423a48fb095139ce7b229a73eac064/brick_843b35abc2ab505ed7591a2873cdd29d/brick
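When the test volume is no longer needed it can be removed with the volume ID reported by heketi. The command below uses the volume ID from this example; substitute the ID shown in your own output.
# Remove the 100 GB test volume created above (ID taken from the example output).
heketi-cli volume delete b39a544984eb9848a1727564ea66cb6f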
Backing the Docker Registry with Gluster Container-native storage
In a production environment the local Docker registry should be backed with persistent storage. This section describes how to create a Persistent Volume Claim (PVC) and Persistent Volume (PV) for the Docker registry.
In this example, a user account named gluster was created and assigned the cluster-admin role. Log in under this account to perform the oc commands described in this section.
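How the gluster account is created depends on the identity provider configured for the cluster (for example, htpasswd) and is not covered here. Assuming the account already exists, a cluster administrator can grant it the cluster-admin role with a command along the following lines.
# Grant the cluster-admin role to the existing user "gluster" (run as a cluster administrator).
oc adm policy add-cluster-role-to-user cluster-admin gluster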
The following files are required to create the Gluster service, endpoints, persistent volume, and persistent volume claim. In this example the files were created on the first master node, ocp-master1.hpecloud.test, and the oc commands were executed from that node.
Gluster Service File
The contents of the glusterservice.yaml file are shown below. This file will create an OpenShift service named glusterfs-cluster.
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 1
Gluster Endpoint File
The contents of the glusterendpoints.yaml file are shown below. This file defines the Gluster endpoints; the IP addresses in this file are the IP addresses of the Gluster nodes.
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 10.19.20.176
  ports:
  - port: 1
- addresses:
  - ip: 10.19.20.177
  ports:
  - port: 1
- addresses:
  - ip: 10.19.20.178
  ports:
  - port: 1
Gluster PV File
The contents of the glusterpv.yaml file are shown below. This file describes the persistent volume that will be used for the Docker registry persistent storage. The path specified must match the name of the volume that will be used for the Docker registry on the Gluster Container-native storage.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-default-volume
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: regVol1
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
Gluster PVC File
The contents of the glusterpvc.yaml file are shown below. This file describes the persistent volume claim that will be used by the Docker registry pods to mount the persistent volume and provide the Docker registry with persistent storage.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Creating a volume for the Docker registry
Use the heketi-cli volume create command to create a volume on the Container-native storage for the Docker registry persistent storage. The name used when creating the volume must match the path specified in the glusterpv.yaml file.
heketi-cli volume create --size=20 --name=regVol1
The code snippet below shows the commands, along with their respective outputs, used to create the glusterfs service, the Gluster endpoints, the persistent volume, and the persistent volume claim that will provide persistent storage for the Docker registry.
[root@ocp-master1 ~]# oc whoami
gluster
[root@ocp-master1 ~]# oc create -f glusterservice.yaml
service "glusterfs-cluster" created
[root@ocp-master1 ~]# oc get services
NAME                CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
docker-registry     172.30.35.174    <none>        5000/TCP                  8h
glusterfs-cluster   172.30.81.99     <none>        1/TCP                     5s
kubernetes          172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     8h
registry-console    172.30.173.160   <none>        9000/TCP                  8h
router              172.30.139.18    <none>        80/TCP,443/TCP,1936/TCP   8h
[root@ocp-master1 ~]# oc create -f glusterendpoints.yaml
endpoints "glusterfs-cluster" created
[root@ocp-master1 ~]# oc get endpoints
NAME                ENDPOINTS                                                           AGE
docker-registry     10.128.2.3:5000,10.131.0.3:5000                                     8h
glusterfs-cluster   10.19.20.176:1,10.19.20.177:1,10.19.20.178:1                        27s
kubernetes          10.19.20.173:8443,10.19.20.174:8443,10.19.20.175:8443 + 6 more...   8h
registry-console    10.130.0.2:9090                                                     8h
router              10.19.20.170:443,10.19.20.171:443,10.19.20.170:1936 + 3 more...     8h
[root@ocp-master1 ~]# oc create -f glusterpv.yaml
persistentvolume "gluster-default-volume" created
[root@ocp-master1 ~]# oc get pv
NAME                     CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                    REASON    AGE
gluster-default-volume   20Gi       RWX           Retain          Available                                      16s
registry-volume          10Gi       RWX           Retain          Bound       default/registry-claim             8h
[root@ocp-master1 ~]# oc create -f glusterpvc.yaml
persistentvolumeclaim "gluster-claim" created
[root@ocp-master1 ~]# oc get pvc
NAME             STATUS    VOLUME                   CAPACITY   ACCESSMODES   AGE
gluster-claim    Bound     gluster-default-volume   20Gi       RWX           25s
registry-claim   Bound     registry-volume          10Gi       RWX           8h
Attach the Docker registry to the persistent storage using the oc volume command shown below.
oc volume deploymentconfigs/docker-registry --add --name=v1 -t pvc --claim-name=gluster-claim --overwrite
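After the deployment configuration is updated, the registry pods are redeployed with the new volume. A quick way to confirm the change is to inspect the deployment configuration and the registry pods; the commands below are a sketch, assuming the default docker-registry deployment configuration and the gluster-claim name used in this example.
# Confirm the deployment configuration now references the Gluster-backed claim.
oc describe dc docker-registry | grep ClaimName
# List the registry pods that are rolled out with the new volume attached.
oc get pods -l deploymentconfig=docker-registry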
Review the OpenShift documentation at https://docs.openshift.com/container-platform/3.5/install_config/storage_examples/gluster_backed_registry.html for detailed information on backing the Docker registry with Gluster File Storage.
CNS Storage Class
This section describes how to create a storage class for provisioning storage from the Gluster-based Container-native storage. To create an OpenShift storage class you will need the cluster ID and the REST API URL of the CNS cluster. The REST API URL is the same as the HEKETI_CLI_SERVER variable exported earlier. Use the heketi-cli cluster list command to display the cluster ID and the oc get route command to display the heketi server name.
[root@ocp-master1 ~]# heketi-cli cluster list
Clusters:
5f64ac6c1313d2abba1143bb808da586
[root@ocp-master1 ~]# oc get route
NAME      HOST/PORT                                PATH      SERVICES   PORT      TERMINATION   WILDCARD
heketi    heketi-rhgs-cluster.paas.hpecloud.test             heketi     <all>                   None
[root@ocp-master1 ~]# echo $HEKETI_CLI_SERVER
http://heketi-rhgs-cluster.paas.hpecloud.test
The storage class object requires the heketi secret that was created during provisioning of the Container-native storage. In this example, the heketi secret is "n0tP@ssw0rd", and it will be stored as an OpenShift secret. The OpenShift secret is created using the oc create command along with a file that defines the secret. First, encode the secret value with the echo -n <secret> | base64 command (base64 is an encoding, not encryption), then create a yaml file that is passed to the oc create command to create the secret in OpenShift as shown below.
[root@ocp-master1 ~]# echo -n n0tP@ssw0rd | base64
bjB0UEBzc3cwcmQ=
[root@ocp-master1 ~]# vi heketi-secret.yaml
[root@ocp-master1 ~]# cat heketi-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  key: bjB0UEBzc3cwcmQ=
type: kubernetes.io/glusterfs
[root@ocp-master1 ~]# oc create -f heketi-secret.yaml
secret "heketi-secret" created
[root@ocp-master1 ~]# oc describe secret heketi-secret
Name:         heketi-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>
Type:         kubernetes.io/glusterfs
Data
====
key:    11 bytes
Next, create a yaml file that will be used to create the StorageClass object. This file contains the storage class name, the CNS cluster ID, the heketi REST URL, the heketi user name, and a reference to the heketi secret.
[root@ocp-master1 ~]# vi cns-storageclass-st1.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-cns-bronze
provisioner: kubernetes.io/glusterfs
parameters:
  clusterid: 5f64ac6c1313d2abba1143bb808da586
  resturl: http://heketi-rhgs-cluster.paas.hpecloud.test
  restauthenabled: "true"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "default"
The file will be passed with the oc create command to create a new storage class object named gluster-cns-bronze.
[root@ocp-master1 ~]# oc create -f cns-storageclass-st1.yaml
storageclass "gluster-cns-bronze" created
[root@ocp-master1 ~]# oc describe storageclass gluster-cns-bronze
Name:            gluster-cns-bronze
IsDefaultClass:  No
Annotations:     <none>
Provisioner:     kubernetes.io/glusterfs
Parameters:      clusterid=5f64ac6c1313d2abba1143bb808da586,restauthenabled=true,resturl=http://heketi-rhgs-cluster.paas.hpecloud.test,restuser=admin,secretName=heketi-secret,secretNamespace=default
No events.
Now that the storage class object gluster-cns-bronze has been created, create a persistent volume claim against it. This claim will dynamically allocate storage from the gluster-cns-bronze storage class.
Create a yaml file that defines the persistent volume claim against the gluster-cns-bronze storage class object as shown below.
[root@ocp-master1 ~]# vi cns-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-cns-bronze
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Create the persistent volume claim using the oc create command, passing the cns-claim.yaml file.
[root@ocp-master1 ~]# oc create -f cns-claim.yaml
persistentvolumeclaim "db" created
Review the provisioning of the persistent storage using the oc describe persistentvolumeclaim command and the heketi-cli volume info command.
[root@ocp-master1 ~]# oc describe persistentvolumeclaim db
Name:          db
Namespace:     rhgs-cluster
StorageClass:  gluster-cns-bronze
Status:        Bound
Volume:        pvc-7605ae4b-714b-11e7-a319-1402ec825eec
Labels:        <none>
Capacity:      10Gi
Access Modes:  RWO
No events.
[root@ocp-master1 ~]# heketi-cli volume info 93b150f2f7a5ddcb1ed5462cedeb43a9
Name: vol_93b150f2f7a5ddcb1ed5462cedeb43a9
Size: 10
Volume Id: 93b150f2f7a5ddcb1ed5462cedeb43a9
Cluster Id: 5f64ac6c1313d2abba1143bb808da586
Mount: 10.19.20.176:vol_93b150f2f7a5ddcb1ed5462cedeb43a9
Mount Options: backup-volfile-servers=10.19.20.177,10.19.20.178
Durability Type: replicate
Distributed+Replica: 3
