Chapter 6. Installing OpenShift DR Cluster Operator on Managed clusters
Procedure
- On each managed cluster, navigate to OperatorHub and filter for OpenShift DR Cluster Operator.
- Follow the on-screen instructions to install the operator into the project openshift-dr-system.

  Note: The OpenShift DR Cluster Operator must be installed on both the Primary managed cluster and the Secondary managed cluster.

- Configure SSL access between the S3 endpoints so that metadata can be stored on the alternate cluster in an MCG object bucket using a secure transport protocol, and in the Hub cluster for verifying access to the object buckets.
  Note: If all of your OpenShift clusters are deployed using a signed and trusted set of certificates for your environment, then this section can be skipped.
- Extract the ingress certificate for the Primary managed cluster and save the output to primary.crt.

  $ oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath="{['data']['ca-bundle\.crt']}" > primary.crt
- Extract the ingress certificate for the Secondary managed cluster and save the output to secondary.crt.

  $ oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath="{['data']['ca-bundle\.crt']}" > secondary.crt
- Create a new ConfigMap with file name cm-clusters-crt.yaml to hold the certificate bundle of the remote cluster, on the Primary managed cluster, the Secondary managed cluster, and the Hub cluster.

  Note: There could be more or fewer than three certificates for each cluster than shown in this example file.

  apiVersion: v1
  data:
    ca-bundle.crt: |
      -----BEGIN CERTIFICATE-----
      <copy contents of cert1 from primary.crt here>
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      <copy contents of cert2 from primary.crt here>
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      <copy contents of cert3 from primary.crt here>
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      <copy contents of cert1 from secondary.crt here>
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      <copy contents of cert2 from secondary.crt here>
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      <copy contents of cert3 from secondary.crt here>
      -----END CERTIFICATE-----
  kind: ConfigMap
  metadata:
    name: user-ca-bundle
    namespace: openshift-config
- Run the following command on the Primary managed cluster, the Secondary managed cluster, and the Hub cluster to create the ConfigMap.
$ oc create -f cm-clusters-crt.yaml
Example output:
configmap/user-ca-bundle created
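Assembling the ca-bundle.crt block by hand is error-prone when each bundle contains several certificates. The following is a minimal sketch, not part of OpenShift DR, that concatenates the extracted PEM bundles into the ConfigMap YAML shown above; the function name build_ca_bundle_configmap and the inline sample bundle are illustrative.

```python
# Illustrative helper (not part of OpenShift DR): build the
# user-ca-bundle ConfigMap YAML from the PEM bundles extracted into
# primary.crt and secondary.crt, instead of pasting each block by hand.

def build_ca_bundle_configmap(pem_bundles):
    """Return ConfigMap YAML whose ca-bundle.crt holds all given PEM text."""
    pem = "\n".join(b.strip() for b in pem_bundles)
    # Indent the PEM text four spaces so it sits under the '|' block scalar.
    indented = "\n".join("    " + line for line in pem.splitlines())
    return (
        "apiVersion: v1\n"
        "data:\n"
        "  ca-bundle.crt: |\n"
        f"{indented}\n"
        "kind: ConfigMap\n"
        "metadata:\n"
        "  name: user-ca-bundle\n"
        "  namespace: openshift-config\n"
    )

# Usage with the real files would be:
#   yaml_text = build_ca_bundle_configmap(
#       [open("primary.crt").read(), open("secondary.crt").read()])
sample = "-----BEGIN CERTIFICATE-----\nMIIB<sample>\n-----END CERTIFICATE-----"
yaml_text = build_ca_bundle_configmap([sample, sample])
```

The resulting yaml_text can be saved as cm-clusters-crt.yaml and created with oc create -f as shown above.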
  Important: For the Hub cluster to verify access to the object buckets using the DRPolicy resource, the same ConfigMap, cm-clusters-crt.yaml, must be created on the Hub cluster.

- Modify the default Proxy cluster resource.
- Copy and save the following content into the new YAML file proxy-ca.yaml.

  apiVersion: config.openshift.io/v1
  kind: Proxy
  metadata:
    name: cluster
  spec:
    trustedCA:
      name: user-ca-bundle
Apply this new file to the default proxy resource on the Primary managed cluster, Secondary managed cluster, and the Hub cluster.
$ oc apply -f proxy-ca.yaml
Example output:
proxy.config.openshift.io/cluster configured
Retrieve Multicloud Object Gateway (MCG) keys and external S3 endpoint.
- Check if MCG is installed on the Primary managed cluster and the Secondary managed cluster, and if Phase is Ready.

  $ oc get noobaa -n openshift-storage
Example output:
  NAME     MGMT-ENDPOINTS                   S3-ENDPOINTS                    IMAGE                                                                                                  PHASE   AGE
  noobaa   ["https://10.70.56.161:30145"]   ["https://10.70.56.84:31721"]   quay.io/rhceph-dev/mcg-core@sha256:c4b8857ee9832e6efc5a8597a08b81730b774b2c12a31a436e0c3fadff48e73d   Ready   27h
- Copy the following YAML file to filename odrbucket.yaml.

  apiVersion: objectbucket.io/v1alpha1
  kind: ObjectBucketClaim
  metadata:
    name: odrbucket
    namespace: openshift-dr-system
  spec:
    generateBucketName: "odrbucket"
    storageClassName: openshift-storage.noobaa.io
- Create an MCG bucket odrbucket on both the Primary managed cluster and the Secondary managed cluster.

  $ oc create -f odrbucket.yaml
Example output:
objectbucketclaim.objectbucket.io/odrbucket created
- Extract the odrbucket OBC access key for each managed cluster as its base64-encoded value by using the following command.

  $ oc get secret odrbucket -n openshift-dr-system -o jsonpath='{.data.AWS_ACCESS_KEY_ID}{"\n"}'
Example output:
cFpIYTZWN1NhemJjbEUyWlpwN1E=
- Extract the odrbucket OBC secret key for each managed cluster as its base64-encoded value by using the following command.

  $ oc get secret odrbucket -n openshift-dr-system -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}{"\n"}'
Example output:
V1hUSnMzZUoxMHRRTXdGMU9jQXRmUlAyMmd5bGwwYjNvMHprZVhtNw==
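As a quick sanity check, the extracted values can be decoded to confirm they are valid base64 with the expected key shape. This sketch uses the example values shown above; real values will differ.

```python
# Sanity-check the example OBC credentials shown above: the Secret
# stores them base64 encoded, so they must decode cleanly. The sample
# access key decodes to a 20-character string and the sample secret
# key to a 40-character string.
import base64

access_key_b64 = "cFpIYTZWN1NhemJjbEUyWlpwN1E="
secret_key_b64 = "V1hUSnMzZUoxMHRRTXdGMU9jQXRmUlAyMmd5bGwwYjNvMHprZVhtNw=="

access_key = base64.b64decode(access_key_b64).decode("ascii")
secret_key = base64.b64decode(secret_key_b64).decode("ascii")
print(len(access_key), len(secret_key))  # 20 40
```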
Create S3 Secrets for Managed clusters.
Now that the necessary MCG information has been extracted, new Secrets must be created on the Primary managed cluster and the Secondary managed cluster. These new Secrets store the MCG access and secret keys for both managed clusters.
NoteOpenShift DR requires one or more S3 stores to store relevant cluster data of a workload from the managed clusters and to orchestrate a recovery of the workload during failover or relocate actions. These instructions are applicable for creating the necessary object bucket(s) using Multicloud Gateway (MCG). MCG should already be installed as a result of installing OpenShift Data Foundation.
- Copy the following S3 secret YAML format for the Primary managed cluster to filename odr-s3secret-primary.yaml.

  apiVersion: v1
  data:
    AWS_ACCESS_KEY_ID: <primary cluster base-64 encoded access key>
    AWS_SECRET_ACCESS_KEY: <primary cluster base-64 encoded secret access key>
  kind: Secret
  metadata:
    name: odr-s3secret-primary
    namespace: openshift-dr-system
Replace <primary cluster base-64 encoded access key> and <primary cluster base-64 encoded secret access key> with actual values retrieved in an earlier step.
Create this secret on the Primary managed cluster and the Secondary managed cluster.
$ oc create -f odr-s3secret-primary.yaml
Example output:
secret/odr-s3secret-primary created
- Copy the following S3 secret YAML format for the Secondary managed cluster to filename odr-s3secret-secondary.yaml.

  apiVersion: v1
  data:
    AWS_ACCESS_KEY_ID: <secondary cluster base-64 encoded access key>
    AWS_SECRET_ACCESS_KEY: <secondary cluster base-64 encoded secret access key>
  kind: Secret
  metadata:
    name: odr-s3secret-secondary
    namespace: openshift-dr-system
Replace <secondary cluster base-64 encoded access key> and <secondary cluster base-64 encoded secret access key> with actual values retrieved in an earlier step.
Create this secret on the Primary managed cluster and the Secondary managed cluster.
$ oc create -f odr-s3secret-secondary.yaml
Example output:
secret/odr-s3secret-secondary created
ImportantThe values for the access and secret key must be base-64 encoded. The encoded values for the keys were retrieved in an earlier step.
Configure OpenShift DR Cluster Operator ConfigMaps on each of the managed clusters.
- Search for the external S3 endpoint (the s3CompatibleEndpoint route) for MCG on each managed cluster by using the following command.
$ oc get route s3 -n openshift-storage -o jsonpath --template="https://{.spec.host}{'\n'}"
Example output:
https://s3-openshift-storage.apps.perf1.example.com
  Important: The unique s3CompatibleEndpoint routes, s3-openshift-storage.apps.<primary clusterID>.<baseDomain> and s3-openshift-storage.apps.<secondary clusterID>.<baseDomain>, must be retrieved for the Primary managed cluster and the Secondary managed cluster respectively.

- Search for the odrbucket OBC bucket name.

  $ oc get configmap odrbucket -n openshift-dr-system -o jsonpath='{.data.BUCKET_NAME}{"\n"}'
Example output:
odrbucket-2f2d44e4-59cb-4577-b303-7219be809dcd
  Important: The unique s3Bucket names odrbucket-<your value1> and odrbucket-<your value2> must be retrieved on the Primary managed cluster and the Secondary managed cluster respectively.
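Because the endpoint and bucket values are copied between clusters, a small pattern check can catch transposition mistakes before they are pasted into the operator ConfigMap. This is a minimal sketch; the function name looks_valid is illustrative, and the sample values come from the example outputs above.

```python
# Illustrative check (not part of OpenShift DR): verify that an MCG S3
# route and an OBC bucket name match the patterns shown in the example
# outputs above before pasting them into the operator ConfigMap.
import re

def looks_valid(endpoint: str, bucket: str) -> bool:
    # Route pattern: https://s3-openshift-storage.apps.<clusterID>.<baseDomain>
    endpoint_ok = re.fullmatch(
        r"https://s3-openshift-storage\.apps\.[A-Za-z0-9.-]+", endpoint)
    # Bucket pattern: the generateBucketName prefix plus a generated suffix.
    bucket_ok = re.fullmatch(r"odrbucket-[0-9a-f-]+", bucket)
    return bool(endpoint_ok and bucket_ok)

print(looks_valid("https://s3-openshift-storage.apps.perf1.example.com",
                  "odrbucket-2f2d44e4-59cb-4577-b303-7219be809dcd"))  # True
```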
- Modify the ConfigMap ramen-dr-cluster-operator-config to add the new content.

  $ oc edit configmap ramen-dr-cluster-operator-config -n openshift-dr-system
- Add the following new content starting at s3StoreProfiles to the ConfigMap on the Primary managed cluster and the Secondary managed cluster.

  [...]
  data:
    ramen_manager_config.yaml: |
      apiVersion: ramendr.openshift.io/v1alpha1
      kind: RamenConfig
      [...]
      ramenControllerType: "dr-cluster"
      ### Start of new content to be added
      s3StoreProfiles:
      - s3ProfileName: s3-primary
        s3CompatibleEndpoint: https://s3-openshift-storage.apps.<primary clusterID>.<baseDomain>
        s3Region: primary
        s3Bucket: odrbucket-<your value1>
        s3SecretRef:
          name: odr-s3secret-primary
          namespace: openshift-dr-system
      - s3ProfileName: s3-secondary
        s3CompatibleEndpoint: https://s3-openshift-storage.apps.<secondary clusterID>.<baseDomain>
        s3Region: secondary
        s3Bucket: odrbucket-<your value2>
        s3SecretRef:
          name: odr-s3secret-secondary
          namespace: openshift-dr-system
  [...]