Chapter 5. Persistent Storage
Container storage is not persistent by default. For example, if a new container build occurs, the data is lost because the storage is non-persistent, and if a container terminates, all of the changes to its local filesystem are lost. OpenShift Container Platform offers many different types of persistent storage to avoid those situations. Persistent storage ensures that data that should persist between builds and container migrations is available.
For more information about the available storage options in OpenShift Container Platform, see Types of Persistent Volumes.
When choosing a persistent storage backend, ensure that the backend supports the scaling, speed, and redundancy that the project requires. This reference architecture focuses on cloud provider specific storage.
This reference architecture is emerging and components like Container-Native Storage (CNS) and Container-Ready Storage (CRS) will be described in future revisions.
5.1. Persistent Volumes
Container storage is defined by the concept of persistent volumes (pv), which are OpenShift Container Platform objects that allow storage to be defined and then used by pods for data persistence. Persistent volumes are requested by using a persistent volume claim (pvc) object. When this claim is successfully fulfilled by the system, the persistent storage is also mounted to a specific directory within a pod or multiple pods. This directory is referred to as the mountPath and is facilitated using a concept known as bind-mount.
For more information about persistent volumes and their lifecycle, see Lifecycle of a Volume and Claim.
Persistent volumes can be preprovisioned by the OpenShift Container Platform administrator by creating them manually in the underlying infrastructure and in OpenShift Container Platform, or the administrator can configure OpenShift Container Platform to automatically create the proper persistent volumes when users request them, using the dynamic provisioning and storage classes capabilities of OpenShift Container Platform.
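As an illustrative sketch of the preprovisioned approach, a data disk that already exists in Microsoft Azure can be registered manually as a persistent volume with a definition similar to the following (the disk name and URI are hypothetical placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  azureDisk:
    # Hypothetical values; the VHD must already exist in Microsoft Azure
    diskName: example-disk.vhd
    diskURI: https://examplestorageaccount.blob.core.windows.net/vhds/example-disk.vhd
    fsType: ext4
    readOnly: false
```

A pvc requesting 10Gi with the ReadWriteOnce access mode could then bind to this volume.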
5.2. Storage Classes
The StorageClass resource object describes and classifies different types of storage that can be requested, and provides a means for passing parameters to the backend for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators (cluster-admin) or Storage Administrators (storage-admin) define and create the StorageClass objects that users can use without needing any intimate knowledge about the underlying storage volume sources. Because of this, the name of the storage class defined in the StorageClass object should be useful in understanding the type of storage it maps to (i.e., HDD vs SSD, or Premium_LRS vs Standard_LRS).
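For example, a pair of storage classes named after the storage tiers they map to might look like the following sketch (the class names and storage account values here are hypothetical):

```yaml
# Premium (SSD-backed) tier; "sapremium" is a hypothetical storage account
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: premium
provisioner: kubernetes.io/azure-disk
parameters:
  storageAccount: sapremium
---
# Standard (HDD-backed) tier; "sastandard" is a hypothetical storage account
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/azure-disk
parameters:
  storageAccount: sastandard
```

Users requesting storage then only need to pick a tier by name, without knowing which storage account backs it.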
5.3. Cloud Provider Specific Storage
Cloud provider specific storage is storage that is provided by Microsoft Azure. This type of storage is presented as a data disk VHD and can be mounted by one pod at a time. To be able to use VHDs as persistent storage for pods, OpenShift Container Platform must be configured with Microsoft Azure settings like the resourceGroup or subscriptionID in the /etc/azure/azure.conf file on masters and nodes, as well as in the OpenShift Container Platform master and node configuration files. The required settings are automatically configured as part of the installation process using the code provided in the openshift-ansible-contrib git repository.
For more information about the required settings, see Configuring Azure.
Cloud provider storage can be created manually and assigned as a persistent volume, or a persistent volume can be created dynamically using a StorageClass object. Note that VHD storage can only use the access mode of Read-Write-Once (RWO).
The VHDs used in Microsoft Azure are .vhd files stored as page blobs in a standard or premium storage account in Microsoft Azure, where standard delivers cost-effective storage and premium delivers high-performance, low-latency storage.
5.3.1. Creating a Storage Class
When requesting cloud provider specific storage in Microsoft Azure for OpenShift Container Platform, there are two options to define a storage class:

- Create a storage account in the same resource group in Microsoft Azure where the OpenShift Container Platform cluster has been deployed, where all the VHDs will be created.
- Provide a skuName and location to OpenShift Container Platform, where all storage accounts associated with the resource group are searched to find one that matches.
In this reference architecture the first option has been chosen, as it is simpler and avoids searching for matching storage accounts, since they are provided before being used.
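As a sketch of the second option, a StorageClass can instead specify a skuName and location so that a matching storage account is located automatically; the class name below is hypothetical:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard-westus
provisioner: kubernetes.io/azure-disk
parameters:
  # Instead of naming a storage account, match on SKU and region
  skuName: Standard_LRS
  location: westus
```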
Besides the /etc/azure/azure.conf configuration file, it is required to create a storage account per storage class created in OpenShift Container Platform in order to be able to use dynamic provisioning of volumes for the pod storage.
This reference architecture automatically creates two different storage accounts for pod storage that are used in different storage classes to demonstrate the process.
There are two Microsoft Azure storage accounts created as part of the installation process using the ARM template:

- sapv<resourcegroup> - For the generic storage class (using premium storage)
- sapvlm<resourcegroup> - To store metrics and logging volumes (using premium storage)
To create more storage accounts, the azure-cli can be used as follows:
$ azure storage account create --sku-name <sku> --kind "Storage" -g <resourcegroup> -l <region> <storage account name>
This example shows how to create a sapv3sysdeseng storage account using standard storage in the westus region:
$ azure storage account create --sku-name "LRS" --kind "Storage" -g sysdeseng sapv3sysdeseng -l "westus"
info: Executing command storage account create
+ Checking availability of the storage account name
+ Creating storage account
info: storage account create command OK
Once the storage account has been created, a StorageClass OpenShift Container Platform object can be created to map to it as follows:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mystorageclass
provisioner: kubernetes.io/azure-disk
parameters:
  storageAccount: sapv3sysdeseng
The cluster-admin or storage-admin can then create the StorageClass object using the YAML file.
$ oc create -f my-storage-class.yaml
Multiple StorageClass objects can be defined depending on the storage needs of the pods within OpenShift Container Platform.
5.3.2. Creating and using a Persistent Volume Claim
The example below shows a dynamically provisioned volume being requested from the StorageClass named mystorageclass.
$ vi db-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db
  annotations:
    volume.beta.kubernetes.io/storage-class: mystorageclass
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

$ oc create -f db-claim.yaml
persistentvolumeclaim "db" created

$ oc get pvc db
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
db        Bound     pvc-be63668e-451e-11e7-b30b-000d3a36dea3   10Gi       RWO           1m
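Once bound, the claim can be consumed by referencing it from a pod. A minimal sketch is shown below; the pod name, image, and mountPath are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db-pod
spec:
  containers:
    - name: db
      image: registry.example.com/db-image:latest  # hypothetical image
      volumeMounts:
        # The mountPath is where the persistent storage appears inside the container
        - mountPath: /var/lib/db
          name: db-storage
  volumes:
    - name: db-storage
      persistentVolumeClaim:
        claimName: db
```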
A user with the cluster-admin role can also view more information about the persistent volume:
$ oc describe pv pvc-be63668e-451e-11e7-b30b-000d3a36dea3
Name: pvc-be63668e-451e-11e7-b30b-000d3a36dea3
Labels: <none>
StorageClass: mystorageclass
Status: Bound
Claim: testdev/db
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 10Gi
Message:
Source:
Type: AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
DiskName: kubernetes-dynamic-pvc-be63668e-451e-11e7-b30b-000d3a36dea3.vhd
DiskURI: https://sapv3sysdeseng.blob.core.windows.net/vhds/kubernetes-dynamic-pvc-be63668e-451e-11e7-b30b-000d3a36dea3.vhd
FSType: ext4
CachingMode: None
ReadOnly: false
No events.
5.3.3. Deleting a PVC (Optional)
There may come a point at which a pvc is no longer necessary for a project. The following can be done to remove the pvc:
$ oc delete pvc db
persistentvolumeclaim "db" deleted

$ oc get pvc db
No resources found.
Error from server: persistentvolumeclaims "db" not found
Microsoft Azure does not support the Recycle reclaim policy, so when a pvc is deleted, all the data will be erased.