Chapter 6. Operational Management

With OpenShift successfully deployed, this chapter demonstrates how to confirm proper functionality of the Red Hat OpenShift Container Platform.

6.1. Running Diagnostics

Perform the following steps from the first master node.

To run diagnostics, SSH into the first master node. Direct access to the first master node is provided by the configuration in the local ~/.ssh/config file.

ssh master-0

Connectivity to the master-0 host as the root user should now be established. Run the diagnostics that are included as part of the installation.

sudo oadm diagnostics
[Note] Determining if client configuration exists for client/cluster diagnostics
Info:  Successfully read a client config file at '/root/.kube/config'
Info:  Using context for cluster-admin access: 'default/devs-ocp3-example-com:8443/system:admin'
[Note] Performing systemd discovery

[Note] Running diagnostic: ConfigContexts[default/devs-ocp3-example-com:8443/system:admin]
       Description: Validate client config context is complete and has connectivity

Info:  The current client config context is 'default/devs-ocp3-example-com:8443/system:admin':
       The server URL is 'https://devs.ocp3.example.com:8443'
       The user authentication is 'system:admin/devs-ocp3-example-com:8443'
       The current project is 'default'
       Successfully requested project list; has access to project(s):
         [logging management-infra markllama openshift openshift-infra default]

[Note] Running diagnostic: DiagnosticPod
       Description: Create a pod to run diagnostics from the application standpoint


[Note] Running diagnostic: ClusterRegistry
       Description: Check that there is a working Docker registry

[Note] Running diagnostic: ClusterRoleBindings
       Description: Check that the default ClusterRoleBindings are present and contain the expected subjects

Info:  clusterrolebinding/cluster-readers has more subjects than expected.

       Use the oadm policy reconcile-cluster-role-bindings command to update the role binding to remove extra subjects.

Info:  clusterrolebinding/cluster-readers has extra subject {ServiceAccount management-infra management-admin    }.

[Note] Running diagnostic: ClusterRoles
       Description: Check that the default ClusterRoles are present and contain the expected permissions

[Note] Running diagnostic: ClusterRouterName
       Description: Check there is a working router

[Note] Skipping diagnostic: MasterNode
       Description: Check if master is also running node (for Open vSwitch)
       Because: Network plugin does not require master to also run node:

[Note] Running diagnostic: NodeDefinitions
       Description: Check node records on master

[Note] Running diagnostic: AnalyzeLogs
       Description: Check for recent problems in systemd service logs

Info:  Checking journalctl logs for 'docker' service

[Note] Running diagnostic: MasterConfigCheck
       Description: Check the master config file

Info:  Found a master config file: /etc/origin/master/master-config.yaml

... output abbreviated ...

[Note] Running diagnostic: UnitStatus
       Description: Check status for related systemd units

[Note] Summary of diagnostics execution (version v3.2.1.15):
[Note] Warnings seen: 3
Note

The warnings do not cause issues in the environment.

Based on the results of the diagnostics, actions can be taken to alleviate any issues.
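
For example, if the extra cluster-readers subject reported above is not needed, the role binding can be reconciled with the command the diagnostic suggests; a hedged example (confirm the subject is unused before removing it):

sudo oadm policy reconcile-cluster-role-bindings --confirm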

6.2. Checking the Health of ETCD

Perform the following steps from the first master node.

This section focuses on the ETCD cluster. It describes the different commands to ensure the cluster is healthy. The internal DNS names of the nodes running ETCD must be used.

Issue the etcdctl command to confirm that the cluster is healthy.

sudo etcdctl \
  --ca-file /etc/etcd/ca.crt \
  --cert-file=/etc/origin/master/master.etcd-client.crt \
  --key-file=/etc/origin/master/master.etcd-client.key \
  --endpoints https://master-0.ocp3.example.com:2379 \
  --endpoints https://master-1.ocp3.example.com:2379 \
  --endpoints https://master-2.ocp3.example.com:2379 \
  cluster-health
member 9bd1d7731aa447e is healthy: got healthy result from https://172.18.10.4:2379
member 2663a31f4ce5756b is healthy: got healthy result from https://172.18.10.5:2379
member 3e8001b17125a44e is healthy: got healthy result from https://172.18.10.6:2379
cluster is healthy

6.3. Docker Storage Setup

The docker-storage-setup role tells the Docker service to use /dev/vdb and to create the volume group docker-vg on it. The extra Docker storage options ensure that a container can grow no larger than 3G. Docker storage setup is performed on all master, infrastructure, and application nodes.

# vi /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdb
VG=docker-vg
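
The 3G container size limit mentioned above is applied through extra Docker storage options rather than in this file; they typically end up in the generated /etc/sysconfig/docker-storage. A minimal sketch, assuming the devicemapper driver (the thin pool name shown is illustrative and depends on the volume group created by docker-storage-setup):

# /etc/sysconfig/docker-storage (generated file; values are illustrative)
DOCKER_STORAGE_OPTIONS="--storage-opt dm.thinpooldev=/dev/mapper/docker--vg-docker--pool --storage-opt dm.basesize=3G"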

6.4. Yum Repositories

The specific repositories required for a successful OpenShift installation are defined in the section Register for Software Updates. All systems except for the bastion host must have the same subscriptions. To verify that the enabled repositories match those defined earlier, perform the following.

sudo yum repolist
Loaded plugins: search-disabled-repos, subscription-manager
repo id                                        repo name                  status
rhel-7-server-extras-rpms/x86_64               Red Hat Enterprise Linux 7    536
rhel-7-server-openstack-10-rpms/7Server/x86_64 Red Hat OpenStack Platform  1,179
rhel-7-server-optional-rpms/7Server/x86_64     Red Hat Enterprise Linux 7 11,098
rhel-7-server-ose-3.2-rpms/x86_64              Red Hat OpenShift Enterpri    847
rhel-7-server-rpms/7Server/x86_64              Red Hat Enterprise Linux 7 14,619
repolist: 28,279
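
The enabled repositories can also be cross-checked against the subscriptions attached during registration; for example:

sudo subscription-manager repos --list-enabled | grep "Repo ID"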

6.5. Console Access

This section covers logging in to the OpenShift Container Platform management console via the GUI and the CLI. After logging in via one of these methods, applications can be deployed and managed.

6.5.1. Log into GUI console and deploy an application

Perform the following steps from the local workstation.

To log in to the GUI console, access the CNAME of the load balancer. Open a browser and navigate to https://devs.ocp3.example.com:8443/console

To deploy an application, click on the New Project button. Provide a Name and click Create. Next, deploy the jenkins-ephemeral instant app by clicking the corresponding box. Accept the defaults and click Create. The next screen provides instructions, along with a URL, for accessing the application. Click Continue to Overview to bring up the management page for the application. Click the link provided and access the application to confirm functionality.
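
The same deployment can be performed from the CLI once the oc client is configured (see the next section); a hedged equivalent, assuming the jenkins-ephemeral template is available in the openshift namespace and using a hypothetical project name:

$ oc new-project my-jenkins
$ oc new-app jenkins-ephemeral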

6.5.2. Log into CLI and Deploy an Application

Perform the following steps from a local workstation.

Install the oc client, which can be downloaded by visiting the public URL of the OpenShift deployment. For example, browse to https://devs.ocp3.example.com:8443/console/command-line and click Latest Release. When directed to https://access.redhat.com, log in with valid Red Hat customer credentials and download the client relevant to the current workstation. Follow the instructions on the production documentation site for getting started.
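
For example, on a Linux workstation the downloaded archive can be extracted and the binary placed on the PATH; a sketch, where the archive name is a placeholder for the downloaded release:

$ tar -xzf <downloaded-oc-archive>.tar.gz
$ sudo mv oc /usr/local/bin/
$ oc version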

Log in with a user that exists in the LDAP database. This example uses the openshift user that was configured as the LDAP BIND_DN user.

oc login --username openshift
Authentication required for https://devs.ocp3.example.com:8443 (openshift)
Username: openshift
Password:
Login successful.

You don't have any projects. You can try to create a new project, by running

    $ oc new-project <projectname>

After access has been granted, create a new project and deploy an application.

$ oc new-project test-app

$ oc new-app https://github.com/openshift/cakephp-ex.git --name=php
--> Found image 2997627 (7 days old) in image stream "php" in project "openshift" under tag "5.6" for "php"

    Apache 2.4 with PHP 5.6
    -----------------------
    Platform for building and running PHP 5.6 applications

    Tags: builder, php, php56, rh-php56

    * The source repository appears to match: php
    * A source build using source code from https://github.com/openshift/cakephp-ex.git is created
      * The resulting image is pushed to image stream "php:latest"
    * This image is deployed in deployment config "php"
    * Port 8080/tcp is load balanced by service "php"
      * Other containers access this service through the hostname "php"

--> Creating resources with label app=php ...
    imagestream "php" created
    buildconfig "php" created
    deploymentconfig "php" created
    service "php" created
--> Success
    Build scheduled, use 'oc logs -f bc/php' to track its progress.
    Run 'oc status' to view apps.


$ oc expose service php
route "php" exposed

Display the status of the application.

$ oc status
In project test-app on server https://openshift-master.sysdeseng.com:8443

http://test-app.apps.sysdeseng.com to pod port 8080-tcp (svc/php)
  dc/php deploys istag/php:latest <- bc/php builds https://github.com/openshift/cakephp-ex.git with openshift/php:5.6
    deployment #1 deployed about a minute ago - 1 pod

1 warning identified, use 'oc status -v' to see details.

Access the application by browsing to the URL provided by oc status. The CakePHP application should now be visible.

6.6. Explore the Environment

6.6.1. List Nodes and Set Permissions

The following command should fail:

# oc get nodes --show-labels
Error from server: User "user@redhat.com" cannot list all nodes in the cluster

The command fails because the user does not have the correct permissions. Get the username and configure the permissions.

$ oc whoami
openshift

Once the username has been established, log back into a master node and enable the appropriate permissions for the user. Perform the following step from master-0.

# oadm policy add-cluster-role-to-user cluster-admin openshift

Attempt to list the nodes again and show the labels.

# oc get nodes --show-labels
NAME                                           STATUS    AGE       LABELS
app-node-0.control.ocp3.example.com     Ready     5h        failure-domain.beta.kubernetes.io/region=RegionOne,kubernetes.io/hostname=app-node-0.control.ocp3.example.com,region=primary,zone=default
app-node-1.control.ocp3.example.com     Ready     5h        failure-domain.beta.kubernetes.io/region=RegionOne,kubernetes.io/hostname=app-node-1.control.ocp3.example.com,region=primary,zone=default
app-node-2.control.ocp3.example.com     Ready     5h        failure-domain.beta.kubernetes.io/region=RegionOne,kubernetes.io/hostname=app-node-2.control.ocp3.example.com,region=primary,zone=default
infra-node-0.control.ocp3.example.com   Ready     5h        failure-domain.beta.kubernetes.io/region=RegionOne,kubernetes.io/hostname=infra-node-0.control.ocp3.example.com,region=infra,zone=default
infra-node-1.control.ocp3.example.com   Ready     5h        failure-domain.beta.kubernetes.io/region=RegionOne,kubernetes.io/hostname=infra-node-1.control.ocp3.example.com,region=infra,zone=default

6.6.2. List Router and Registry

List the router and registry by changing to the default project.

Note

Perform the following steps from a workstation.

# oc project default
# oc get all
# oc status
In project default on server https://devs.ocp3.example.com:8443

svc/docker-registry - 172.30.110.31:5000
  dc/docker-registry deploys docker.io/openshift3/ocp-docker-registry:v3.2.1.7
    deployment #2 deployed 41 hours ago - 2 pods
    deployment #1 deployed 41 hours ago

svc/kubernetes - 172.30.0.1 ports 443, 53->8053, 53->8053

svc/router - 172.30.235.155 ports 80, 443, 1936
  dc/router deploys docker.io/openshift3/ocp-haproxy-router:v3.2.1.7
    deployment #1 deployed 41 hours ago - 2 pods

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

Observe the output of oc get all and oc status. Notice that the registry and router information is clearly listed.
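
To see which infrastructure nodes are hosting the router and registry pods, the pods in the default project can be listed with wide output; for example:

# oc get pods -n default -o wide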

6.6.3. Explore the Docker Registry

The OpenShift Ansible playbooks configure two infrastructure nodes that run two registry pods. To understand the configuration and mapping process of the registry pods, use the oc describe command, which details how the registries are configured.

Note

Perform the following steps from a workstation.

$ oc describe svc/docker-registry
Name:           docker-registry
Namespace:      default
Labels:         docker-registry=default
Selector:       docker-registry=default
Type:           ClusterIP
IP:         172.30.110.31
Port:           5000-tcp    5000/TCP
Endpoints:      172.16.4.2:5000
Session Affinity:   ClientIP
No events.
Note

Perform the following steps from the infrastructure node.

Once the endpoints are known, go to one of the infrastructure nodes running a registry and gather some information about it. Capture the container ID in the leftmost column of the output.

# docker ps | grep ocp-docker-registry
073d869f0d5f        openshift3/ocp-docker-registry:v3.2.1.9   "/bin/sh -c 'DOCKER_R"   6 hours ago         Up 6 hours                              k8s_registry.90479e7d_docker-registry-2-jueep_default_d5882b1f-5595-11e6-a247-0eaf3ad438f1_ffc47696
sudo docker exec -it a637d95aa4c7 cat /config.yml
version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    layerinfo: inmemory
  filesystem:
    rootdirectory: /registry
  delete:
    enabled: true
auth:
  openshift:
    realm: openshift
middleware:
  repository:
    - name: openshift
      options:
        pullthrough: true
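
The registry's health endpoint, which also backs its liveness probe, can be queried directly from the node using the endpoint address reported by oc describe; a hedged example (use https instead if the registry has been secured with TLS):

# curl -v http://172.16.4.2:5000/healthz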

6.6.4. Explore Docker Storage

This section explores the Docker storage on an infrastructure node.

Note

The example below can be performed on any node, but for this example an infrastructure node is used.

sudo docker info
Containers: 4
 Running: 4
 Paused: 0
 Stopped: 0
Images: 4
Server Version: 1.10.3
Storage Driver: devicemapper
 Pool Name: docker-253:1-75502733-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 1.118 GB
 Data Space Total: 107.4 GB
 Data Space Available: 39.96 GB
 Metadata Space Used: 1.884 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.146 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 WARNING: Usage of loopback devices is strongly discouraged for production use. Either use --storage-opt dm.thinpooldev or use --storage-opt dm.no_warn_on_loop_devices=true to suppress this warning.
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.107-RHEL7 (2016-06-09)
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
 Volume: local
 Network: host bridge null
 Authorization: rhel-push-plugin
Kernel Version: 3.10.0-327.10.1.el7.x86_64
Operating System: Employee SKU
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 2
CPUs: 2
Total Memory: 3.702 GiB
Name: infra-node-0.control.ocp3.example.com
ID: AVUO:RUKL:Y7NZ:QJKC:KIMX:5YXG:SJUY:GGH2:CL3P:3BTO:6A74:4KYD
WARNING: bridge-nf-call-ip6tables is disabled
Registries: registry.access.redhat.com (secure), docker.io (secure)
$ fdisk -l

Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0000b3fd

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    83884629    41941291   83  Linux

Disk /dev/vdb: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/docker-253:1-75502733-pool: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/docker-253:1-75502733-0f03fdafb3f541f0ba80fa40b28355cd78ae2ef9f5cab3c03410345dc97835f0: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/docker-253:1-75502733-e1b70ed2deb6cd2ff78e37dd16bfe356504943e16982c10d9b8173d677b5c747: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/docker-253:1-75502733-c680ec6ec5d72045fc31b941e4323cf6c17b8a14105b5b7e142298de9923d399: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/docker-253:1-75502733-80f4398cfd272820d625e9c26e6d24e57d7a93c84d92eec04ebd36d26b258533: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
$ cat /etc/sysconfig/docker-storage-setup
DEVS=/dev/xvdb
VG=docker-vol
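
If docker-storage-setup created the volume group as configured, the underlying LVM objects can be inspected with the standard LVM tools; a brief example (output varies, and the docker info output above indicates this particular node is still using loopback storage):

$ sudo vgs
$ sudo lvs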

6.6.5. Explore Security Groups

As mentioned earlier in the document, several security groups have been created. The purpose of this section is to encourage exploration of the security groups that were created.

Note

Perform the following steps from the OpenStack web console.

Select the Compute tab and the Access and Security sub-tab in the upper left of the web console. Click through each group using the Manage Rules button in the right column and review both the Inbound and Outbound rules that were created as part of the infrastructure provisioning. For example, notice how the Bastion security group only allows SSH traffic inbound. That can be further restricted to a specific network or host if required. Next, take a look at the Master security group and explore all the Inbound and Outbound TCP and UDP rules and the networks from which traffic is allowed.
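
The same rules can be reviewed from the CLI with the openstack client; a hedged example, where the security group name is an assumption and should be replaced with a name listed for the deployment:

$ openstack security group list
$ openstack security group rule list bastion-sg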

6.7. Testing Failure

In this section, reactions to failure are explored. After a successful installation and completion of some of the smoke tests noted above, failure testing is executed.

6.7.1. Generate a Master Outage

When a master instance fails, the service should remain available.

Stop one of the master instances:

Stop a Master instance

nova stop master-0.control.ocp3.example.com
Request to stop server master-0.control.ocp3.example.com has been accepted.
nova list --field name,status,power_state | grep master
| 4565505c-e48b-43e7-8c77-da6c1fc3d7d8 | master-0.control.ocp3.example.com     | SHUTOFF | Shutdown     |
| 12692288-013b-4891-a8a0-71e6967c656d | master-1.control.ocp3.example.com     | ACTIVE | Running     |
| 3cc0c6f0-59d8-4833-b294-a3a47c37d268 | master-2.control.ocp3.example.com     | ACTIVE | Running     |

Ensure the console can still be accessed by opening a browser and accessing devs.ocp3.example.com. At this point, the cluster is in a degraded state because only two of the three master nodes are running, but complete functionality remains.
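
The master API health can also be verified through the load balancer from the command line; for example, the following should return ok while the remaining masters are active:

$ curl -k https://devs.ocp3.example.com:8443/healthz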

6.7.2. Observe the Behavior of ETCD with a Failed Master Node

One master instance is down. The master instances contain the etcd daemons.

Note

Run these commands on one of the active master servers.

Check etcd cluster health

# etcdctl -C https://master-0.control.ocp3.example.com:2379,https://master-1.control.ocp3.example.com:2379,https://master-2.control.ocp3.example.com:2379 \
  --ca-file /etc/etcd/ca.crt \
  --cert-file=/etc/origin/master/master.etcd-client.crt \
  --key-file=/etc/origin/master/master.etcd-client.key \
  cluster-health
failed to check the health of member 82c895b7b0de4330 on https://10.30.1.251:2379: Get https://10.30.1.251:2379/health: dial tcp 10.30.1.251:2379: i/o timeout
member 82c895b7b0de4330 is unreachable: [https://10.30.1.251:2379] are all unreachable
member c8e7ac98bb93fe8c is healthy: got healthy result from https://10.30.3.74:2379
member f7bbfc4285f239ba is healthy: got healthy result from https://10.30.2.157:2379
cluster is healthy

Notice how one member of the ETCD cluster is now unreachable.
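
The cluster membership can also be listed with the same connection options to confirm which member IDs map to which peer URLs; for example:

# etcdctl -C https://master-0.control.ocp3.example.com:2379,https://master-1.control.ocp3.example.com:2379,https://master-2.control.ocp3.example.com:2379 \
  --ca-file /etc/etcd/ca.crt \
  --cert-file=/etc/origin/master/master.etcd-client.crt \
  --key-file=/etc/origin/master/master.etcd-client.key \
  member list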

Restart master-0.

Note

Run this command from a workstation.

Restart master instance

nova start master-0.control.ocp3.example.com
Request to start server master-0.control.example.com has been accepted.
nova list --field name,status,power_state | grep master
| 4565505c-e48b-43e7-8c77-da6c1fc3d7d8 | master-0.control.ocp3.example.com     | ACTIVE | Running     |
| 12692288-013b-4891-a8a0-71e6967c656d | master-1.control.ocp3.example.com     | ACTIVE | Running     |
| 3cc0c6f0-59d8-4833-b294-a3a47c37d268 | master-2.control.ocp3.example.com     | ACTIVE | Running     |

Verify etcd cluster health

# etcdctl -C https://master-0.control.ocp3.example.com:2379,https://master-1.control.ocp3.example.com:2379,https://master-2.control.ocp3.example.com:2379 \
  --ca-file /etc/etcd/ca.crt \
  --cert-file=/etc/origin/master/master.etcd-client.crt \
  --key-file=/etc/origin/master/master.etcd-client.key \
  cluster-health
member 82c895b7b0de4330 is healthy: got healthy result from https://10.30.1.251:2379
member c8e7ac98bb93fe8c is healthy: got healthy result from https://10.30.3.74:2379
member f7bbfc4285f239ba is healthy: got healthy result from https://10.30.2.157:2379
cluster is healthy

6.8. Dynamic Provisioned Storage

Persistent volumes (pv) are OpenShift objects that define storage which pods can then claim for data persistence. Persistent volumes are mounted by using a persistent volume claim (pvc). The claim mounts the persistent storage at a specific directory within a pod, referred to as the mountPath.

6.8.1. Creating a Storage Class

The StorageClass resource object describes and classifies storage that can be requested, and provides a means for passing parameters for dynamically provisioned storage on demand. A StorageClass object can serve as a management mechanism for controlling various levels of access to storage that are defined and created by either a cluster or storage administrator.

With regard to RHOSP, the storage type that allows for dynamic provisioning is OpenStack cinder, using the provisioner plug-in named kubernetes.io/cinder.

The following is an example of storage-class.yaml that is required for dynamic provisioning using OpenStack cinder.

Storage-Class YAML file without cinder Volume Type (default)

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: <name>
provisioner: kubernetes.io/cinder
parameters:
  availability: nova

In the storage-class.yaml file example, the cluster or storage administrator must provide a name for the StorageClass. One parameter option not shown in the above example is type, which refers to the volume type created in cinder. By default, the value for cinder is empty, so it is not included in the above example.

However, if the cinder volumes created by RHOSP contain a volume type, a storage-class.yaml file with the additional type parameter is required as shown below:

Storage-Class YAML file with cinder Volume Type (custom)

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: <name>
provisioner: kubernetes.io/cinder
parameters:
  type: <volumetypename>
  availability: nova
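
Create the StorageClass from the chosen YAML file and confirm that it exists; for example:

$ oc create -f storage-class.yaml
$ oc get storageclass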

Note

StorageClass is only defined and maintained on a per project basis.

6.8.2. Creating a Persistent Volumes Claim

When creating a new application, a pod is created using non-persistent storage labeled as EmptyDir. In order to provide persistent storage to the application just created, a persistent volume claim must be created. The following example shows the creation of a MySQL application that initially uses non-persistent storage, but is then assigned a persistent volume using a persistent storage volume claim.

The following command creates the application. In this example, the application created is a MySQL application.

$ oc new-app --docker-image registry.access.redhat.com/openshift3/mysql-55-rhel7 --name=db -e 'MYSQL_USER=myuser' -e 'MYSQL_PASSWORD=d0nth@x' -e 'MYSQL_DATABASE=persistent'

The following shows that the application pod is currently running.

$ oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
db-1-6kn6c                 1/1       Running   0          5m

With no persistent storage in place, the Volumes section of the pod description shows that the volume is in fact non-persistent storage: an EmptyDir, a temporary directory that shares the pod's lifetime.

$ oc describe pod db-1-6kn6c | grep -i volumes -A3
Volumes:
  db-volume-1:
    Type:   EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:

Verify that the StorageClass has been created.

$ oc get storageclass
NAME      TYPE
gold      kubernetes.io/cinder

Create a persistent storage claim YAML file. In this example, the claim is for the application labeled db, uses the storage class labeled gold, and requests 10Gi of storage.

$ cat db-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
 name: db
 annotations:
   volume.beta.kubernetes.io/storage-class: gold
spec:
 accessModes:
  - ReadWriteMany
 resources:
   requests:
     storage: 10Gi

Create the persistent volume claim from the YAML file.

$ oc create -f db-claim.yaml
persistentvolumeclaim "db" created

Verify the persistent volume claim has been created and is bound.

$ oc get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
db        Bound     pvc-cd8f0e34-02b6-11e7-b807-fa163e5c3cb8   10Gi       RWX           10s

Provide the persistent volume to the MySQL application pod labeled 'db'.

$ oc volume dc/db --add --overwrite --name=db-volume-1 --type=persistentVolumeClaim --claim-name=db
deploymentconfig "db" updated

Describe the db pod to ensure it is using the persistent volume claim.

$ oc describe dc/db | grep -i volumes -A3
  Volumes:
   db-volume-1:
    Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  db
    ReadOnly:   false
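
To confirm the claim is mounted inside the redeployed application pod, open a remote shell into it; a hedged example, where the pod name must be taken from oc get pods and the mount path depends on the image:

$ oc get pods
$ oc rsh <db-pod-name> df -h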