Chapter 2. Deploy OpenShift Data Foundation using local storage devices
Use this section to deploy OpenShift Data Foundation on IBM Power infrastructure where OpenShift Container Platform is already installed.
Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway.
Perform the following steps to deploy OpenShift Data Foundation:
2.1. Installing Local Storage Operator
Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Data Foundation clusters on local storage devices.
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Type local storage in the Filter by keyword… box to find the Local Storage Operator from the list of operators and click on it.
- Set the following options on the Install Operator page:
  - Update channel as stable.
  - Installation Mode as A specific namespace on the cluster.
  - Installed Namespace as Operator recommended namespace openshift-local-storage.
  - Approval Strategy as Automatic.
- Click Install.
Verification steps
- Verify that the Local Storage Operator shows a green tick indicating successful installation.
2.2. Installing Red Hat OpenShift Data Foundation Operator
You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub.
For information about the hardware and software requirements, see Planning your deployment.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions.
- You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster.
- When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case):
$ oc annotate namespace openshift-storage openshift.io/node-selector=
- Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide.
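For example, a minimal sketch of tainting a node so that only OpenShift Data Foundation workloads are scheduled there; the taint key shown is an assumption based on common OpenShift Data Foundation usage, so confirm it against the Managing and Allocating Storage Resources guide before applying it:

$ oc adm taint node <node_name> node.ocs.openshift.io/storage="true":NoSchedule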
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator.
- Click Install.
Set the following options on the Install Operator page:
- Update Channel as stable-4.12.
- Installation Mode as A specific namespace on the cluster.
- Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation.
- Select Approval Strategy as Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- Ensure that the Enable option is selected for the Console plugin.
- Click Install.
Verification steps
- Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation.
After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect.
- In the Web Console, navigate to Storage and verify if Data Foundation is available.
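You can also cross-check the installation from the command line. A minimal sketch, assuming the operator was installed into the recommended openshift-storage namespace; the ClusterServiceVersion for OpenShift Data Foundation should report the phase Succeeded:

$ oc get csv -n openshift-storage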
2.3. Enabling cluster-wide encryption with KMS using the Token authentication method
You can enable the key value backend path and policy in the vault for token authentication.
Prerequisites
- Administrator access to the vault.
- A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
- Carefully select a unique path name as the backend path that follows the naming convention, since you cannot change it later.
Procedure
Enable the Key/Value (KV) backend path in the vault.
For vault KV secret engine API, version 1:
$ vault secrets enable -path=odf kv
For vault KV secret engine API, version 2:
$ vault secrets enable -path=odf kv-v2
Create a policy that restricts the users to performing a write or delete operation on the secret:

echo '
path "odf/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
path "sys/mounts" {
  capabilities = ["read"]
}' | vault policy write odf -

Create a token that matches the above policy:
$ vault token create -policy=odf -format json
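If you only need the token value, for example to paste into the storage system wizard later, a minimal sketch assuming jq is available on the workstation:

$ vault token create -policy=odf -format=json | jq -r '.auth.client_token'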
2.4. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method
You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS).
Prerequisites
- Administrator access to Vault.
- A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
- The OpenShift Data Foundation operator must be installed from the Operator Hub.
- Carefully select a unique path name as the backend path that follows the naming convention. You cannot change this path name later.
Procedure
Create a service account:
$ oc -n openshift-storage create serviceaccount <serviceaccount_name>
where <serviceaccount_name> specifies the name of the service account.
For example:
$ oc -n openshift-storage create serviceaccount odf-vault-auth
Create clusterrolebindings and clusterroles:

$ oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:<serviceaccount_name>
For example:
$ oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth
Create a secret for the serviceaccount token and CA certificate.

$ cat <<EOF | oc create -f -
apiVersion: v1
kind: Secret
metadata:
  name: odf-vault-auth-token
  namespace: openshift-storage
  annotations:
    kubernetes.io/service-account.name: <serviceaccount_name>
type: kubernetes.io/service-account-token
data: {}
EOF

where <serviceaccount_name> is the service account created in the earlier step.
Get the token and the CA certificate from the secret.
$ SA_JWT_TOKEN=$(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath="{.data['token']}" | base64 --decode; echo)
$ SA_CA_CRT=$(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)

Retrieve the OCP cluster endpoint.
$ OCP_HOST=$(oc config view --minify --flatten -o jsonpath="{.clusters[0].cluster.server}")

Fetch the service account issuer:
$ oc proxy &
$ proxy_pid=$!
$ issuer="$(curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)"
$ kill $proxy_pid
Use the information collected in the previous step to set up the Kubernetes authentication method in Vault:
$ vault auth enable kubernetes
$ vault write auth/kubernetes/config \
  token_reviewer_jwt="$SA_JWT_TOKEN" \
  kubernetes_host="$OCP_HOST" \
  kubernetes_ca_cert="$SA_CA_CRT" \
  issuer="$issuer"

Important
To configure the Kubernetes authentication method in Vault when the issuer is empty:

$ vault write auth/kubernetes/config \
  token_reviewer_jwt="$SA_JWT_TOKEN" \
  kubernetes_host="$OCP_HOST" \
  kubernetes_ca_cert="$SA_CA_CRT"

Enable the Key/Value (KV) backend path in Vault.
For Vault KV secret engine API, version 1:
$ vault secrets enable -path=odf kv
For Vault KV secret engine API, version 2:
$ vault secrets enable -path=odf kv-v2
Create a policy that restricts the users to performing a write or delete operation on the secret:

echo '
path "odf/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
path "sys/mounts" {
  capabilities = ["read"]
}' | vault policy write odf -

Generate the roles:
$ vault write auth/kubernetes/role/odf-rook-ceph-op \
  bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa \
  bound_service_account_namespaces=openshift-storage \
  policies=odf \
  ttl=1440h

The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system.

$ vault write auth/kubernetes/role/odf-rook-ceph-osd \
  bound_service_account_names=rook-ceph-osd \
  bound_service_account_namespaces=openshift-storage \
  policies=odf \
  ttl=1440h
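Optionally, read the roles back to confirm they were written as expected; this is a read-only check:

$ vault read auth/kubernetes/role/odf-rook-ceph-op
$ vault read auth/kubernetes/role/odf-rook-ceph-osd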
2.5. Finding available storage devices
Use this procedure to identify the device names for each of the three or more worker nodes that you have labeled with the OpenShift Data Foundation label cluster.ocs.openshift.io/openshift-storage='' before creating PVs for IBM Power.
Procedure
List and verify the name of the worker nodes with the OpenShift Data Foundation label.
$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage=
Example output:
NAME       STATUS   ROLES    AGE     VERSION
worker-0   Ready    worker   2d11h   v1.23.3+e419edf
worker-1   Ready    worker   2d11h   v1.23.3+e419edf
worker-2   Ready    worker   2d11h   v1.23.3+e419edf
Log in to each worker node that is used for OpenShift Data Foundation resources and find the name of the additional disk that you have attached while deploying OpenShift Container Platform.
$ oc debug node/<node name>
Example output:
$ oc debug node/worker-0
Starting pod/worker-0-debug ...
To use host binaries, run `chroot /host`
Pod IP: 192.168.0.63
If you don't see a command prompt, try pressing enter.
sh-4.4#
sh-4.4# chroot /host
sh-4.4# lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop1      7:1    0   500G  0 loop
sda        8:0    0   500G  0 disk
sdb        8:16   0   120G  0 disk
|-sdb1     8:17   0     4M  0 part
|-sdb3     8:19   0   384M  0 part
`-sdb4     8:20   0 119.6G  0 part
sdc        8:32   0   500G  0 disk
sdd        8:48   0   120G  0 disk
|-sdd1     8:49   0     4M  0 part
|-sdd3     8:51   0   384M  0 part
`-sdd4     8:52   0 119.6G  0 part
sde        8:64   0   500G  0 disk
sdf        8:80   0   120G  0 disk
|-sdf1     8:81   0     4M  0 part
|-sdf3     8:83   0   384M  0 part
`-sdf4     8:84   0 119.6G  0 part
sdg        8:96   0   500G  0 disk
sdh        8:112  0   120G  0 disk
|-sdh1     8:113  0     4M  0 part
|-sdh3     8:115  0   384M  0 part
`-sdh4     8:116  0 119.6G  0 part
sdi        8:128  0   500G  0 disk
sdj        8:144  0   120G  0 disk
|-sdj1     8:145  0     4M  0 part
|-sdj3     8:147  0   384M  0 part
`-sdj4     8:148  0 119.6G  0 part
sdk        8:160  0   500G  0 disk
sdl        8:176  0   120G  0 disk
|-sdl1     8:177  0     4M  0 part
|-sdl3     8:179  0   384M  0 part
`-sdl4     8:180  0 119.6G  0 part /sysroot
sdm        8:192  0   500G  0 disk
sdn        8:208  0   120G  0 disk
|-sdn1     8:209  0     4M  0 part
|-sdn3     8:211  0   384M  0 part /boot
`-sdn4     8:212  0 119.6G  0 part
sdo        8:224  0   500G  0 disk
sdp        8:240  0   120G  0 disk
|-sdp1     8:241  0     4M  0 part
|-sdp3     8:243  0   384M  0 part
`-sdp4     8:244  0 119.6G  0 part
In this example, for worker-0, the available local devices of 500G are sda, sdc, sde, sdg, sdi, sdk, sdm, and sdo.
- Repeat the above step for all the other worker nodes that have the storage devices to be used by OpenShift Data Foundation. See this Knowledge Base article for more details.
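If you prefer not to open an interactive debug shell on every node, a minimal non-interactive sketch that lists only whole disks on a node; substitute the node name, and note that the flags shown are standard lsblk options:

$ oc debug node/<node name> -- chroot /host lsblk -d -o NAME,SIZE,TYPE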
2.6. Creating OpenShift Data Foundation cluster on IBM Power
Use this procedure to create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator.
Prerequisites
- Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met.
- You must have a minimum of three worker nodes with the same storage type and size attached to each node (for example, 200 GB SSD) to use local storage devices on IBM Power.
Verify that your OpenShift Container Platform worker nodes are labeled for OpenShift Data Foundation:

$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
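If a worker node is missing the label, a minimal sketch of applying it so that the node appears in the output above; substitute your node name:

$ oc label node <node_name> cluster.ocs.openshift.io/openshift-storage=''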
To identify storage devices on each node, refer to Finding available storage devices.
Procedure
- Log into the OpenShift Web Console.
- In the openshift-local-storage namespace, click Operators → Installed Operators to view the installed operators.
- Click the Local Storage installed operator.
- On the Operator Details page, click the Local Volume link.
- Click Create Local Volume.
- Click on YAML view for configuring Local Volume.
Define a LocalVolume custom resource for block PVs using the following YAML.

apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: localblock
  namespace: openshift-local-storage
spec:
  logLevel: Normal
  managementState: Managed
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
              - worker-0
              - worker-1
              - worker-2
  storageClassDevices:
    - devicePaths:
        - /dev/sda
      storageClassName: localblock
      volumeMode: Block

The above definition selects the sda local device from the worker-0, worker-1, and worker-2 nodes. The localblock storage class is created and persistent volumes are provisioned from sda.

Important
Specify appropriate values of nodeSelector as per your environment. The device name should be the same on all the worker nodes. You can also specify more than one device path under devicePaths.
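For example, a sketch of the storageClassDevices stanza with two device paths, assuming both /dev/sda and /dev/sdc exist and are unused on every selected node:

  storageClassDevices:
    - devicePaths:
        - /dev/sda
        - /dev/sdc
      storageClassName: localblock
      volumeMode: Block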
- Click Create.
Confirm whether diskmaker-manager pods and Persistent Volumes are created.
For Pods
- Click Workloads → Pods from the left pane of the OpenShift Web Console.
- Select openshift-local-storage from the Project drop-down list.
- Check if there are diskmaker-manager pods for each of the worker nodes that you used while creating the LocalVolume CR.
For Persistent Volumes
- Click Storage → PersistentVolumes from the left pane of the OpenShift Web Console.
Check the Persistent Volumes with the name local-pv-*. The number of Persistent Volumes is equivalent to the product of the number of worker nodes and the number of storage devices provisioned while creating the LocalVolume CR.

Important
The flexible scaling feature is enabled only when the storage cluster that you created with three or more nodes is spread across fewer than the minimum requirement of three availability zones.
This feature is available only in new deployments of Red Hat OpenShift Data Foundation versions 4.7 and later. Storage clusters upgraded from a previous version to version 4.7 or later do not support flexible scaling. For more information, see Flexible scaling of OpenShift Data Foundation cluster in the New features section of Release Notes.
- The flexible scaling feature is enabled at the time of deployment and cannot be enabled or disabled later on.
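As an alternative to the console checks above, a minimal CLI sketch for confirming the diskmaker-manager pods and the local-pv-* Persistent Volumes:

$ oc -n openshift-local-storage get pods | grep diskmaker-manager
$ oc get pv | grep local-pv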
In the OpenShift Web Console, click Operators → Installed Operators to view all the installed operators.
Ensure that the Project selected is openshift-storage.
- Click on the OpenShift Data Foundation operator and then click Create StorageSystem.
In the Backing storage page, perform the following:
- Select Full Deployment for the Deployment type option.
- Select the Use an existing StorageClass option.
- Select the required Storage Class that you used while installing LocalVolume. By default, it is set to none.
- Click Next.
In the Capacity and nodes page, configure the following:
- Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up.
- The Selected nodes list shows the nodes based on the storage class.
- Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation.
- Click Next.
Optional: In the Security and network page, configure the following based on your requirement:
- To enable encryption, select Enable data encryption for block and file storage.
Choose one or both of the following Encryption levels:
Cluster-wide encryption
Encrypts the entire cluster (block and file).
StorageClass encryption
Creates encrypted persistent volume (block only) using encryption enabled storage class.
Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.
- From the Key Management Service Provider drop-down list, select either Vault or Thales CipherTrust Manager (using KMIP). If you selected Vault, go to the next step. If you selected Thales CipherTrust Manager (using KMIP), go to step iii.
Select an Authentication Method.
- Using Token authentication method
- Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Token.
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Vault Enterprise Namespace.
- Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate and Client Private Key.
- Click Save and skip to step d.
- Using Kubernetes authentication method
- Enter a unique Vault Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name.
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Authentication Path if applicable.
- Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate and Client Private Key.
- Click Save and skip to step d.
- Enter a unique Connection Name for the Key Management service within the project.
In the Address and Port sections, enter the endpoints. For example:
- Address: 123.34.3.2
- Port: 5696
- Upload the Client Certificate, CA certificate, and Client Private Key.
- If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above.
- The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
- Select a Network.
- Select Default (SDN) network as Multus is not yet supported on OpenShift Data Foundation on IBM Power.
- Click Next.
In the Review and create page:
- Review the configuration details. To modify any configuration settings, click Back to go back to the previous configuration page.
- Click Create StorageSystem.
Verification steps
To verify the final Status of the installed storage cluster:
- In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources.
- Verify that the Status of StorageCluster is Ready and has a green tick mark next to it.
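The same check can be made from the command line; a minimal sketch, assuming the default resource names, where the StorageCluster should report the Ready phase:

$ oc -n openshift-storage get storagecluster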
To verify if flexible scaling is enabled on your storage cluster, perform the following steps:
- In the OpenShift Web Console, navigate to Installed Operators → OpenShift Data Foundation → Storage System → ocs-storagecluster-storagesystem → Resources → ocs-storagecluster.
In the YAML tab, search for the keys flexibleScaling in the spec section and failureDomain in the status section. If flexibleScaling is true and failureDomain is set to host, the flexible scaling feature is enabled.

spec:
  flexibleScaling: true
[…]
status:
  failureDomain: host
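Equivalently, from the command line, a sketch assuming the default StorageCluster name ocs-storagecluster:

$ oc -n openshift-storage get storagecluster ocs-storagecluster -o yaml | grep -E 'flexibleScaling|failureDomain'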
- To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment.
Additional resources
- To expand the capacity of the initial cluster, see the Scaling Storage guide.