Chapter 1. Security

Manage security and role-based access control (RBAC) for Red Hat Advanced Cluster Management for Kubernetes components. Govern your cluster with defined policies and processes to identify and minimize risks. Use policies to define rules and set controls.

Prerequisite: You must configure authentication service requirements for Red Hat Advanced Cluster Management for Kubernetes to onboard workloads to Identity and Access Management (IAM). For more information, see Understanding authentication in the OpenShift Container Platform documentation.

Review the following topics to learn more about securing your cluster:

1.1. Role-based access control

Red Hat Advanced Cluster Management for Kubernetes supports role-based access control (RBAC). Your role determines the actions that you can perform. RBAC is based on the authorization mechanisms in Kubernetes, similar to Red Hat OpenShift Container Platform. For more information about RBAC, see the OpenShift RBAC overview in the OpenShift Container Platform documentation.

Note: Action buttons are disabled in the console if the user's role does not permit the action.

View the following sections for details of supported RBAC by component:

1.1.1. Overview of roles

Some product resources are cluster-wide and some are namespace-scoped. You must apply cluster role bindings and namespace role bindings to your users for consistent access controls. View the following table of role definitions that are supported in Red Hat Advanced Cluster Management for Kubernetes:

Table 1.1. Role definition table

  • cluster-admin: A user with cluster-wide binding to the cluster-admin role is an OpenShift Container Platform super user, who has all access.

  • open-cluster-management:cluster-manager-admin: A user with cluster-wide binding to the cluster-manager-admin role is a Red Hat Advanced Cluster Management for Kubernetes super user, who has all access. This role allows the user to create a ManagedCluster resource.

  • open-cluster-management:managed-cluster-x (admin): A user with cluster-wide binding to the managed-cluster-x role has administrator access to the managedcluster resource named “X”.

  • open-cluster-management:managed-cluster-x (viewer): A user with cluster-wide binding to the managed-cluster-x role has view access to the managedcluster resource named “X”.

  • open-cluster-management:subscription-admin: A user with the subscription-admin role can create Git subscriptions that deploy resources to multiple namespaces. The resources are specified in Kubernetes resource YAML files in the subscribed Git repository. Note: When a non-subscription-admin user creates a subscription, all resources are deployed into the subscription namespace regardless of the namespaces that are specified in the resources. For more information, see the Application lifecycle RBAC section.

  • admin, edit, view: Admin, edit, and view are OpenShift Container Platform default roles. A user with a namespace-scoped binding to these roles has access to open-cluster-management resources in a specific namespace, while cluster-wide binding to the same roles gives access to all of the open-cluster-management resources cluster-wide.

Important:

  • Any user can create projects from OpenShift Container Platform, which grants the user administrator role permissions for that namespace.
  • If a user does not have role access to a cluster, the cluster name is not visible. The cluster name is displayed with the following symbol: -.

1.1.2. RBAC implementation

RBAC is validated at the console level and at the API level. Actions in the console can be enabled or disabled based on user access role permissions. View the following sections for more information on RBAC for specific lifecycles in the product.

1.1.2.1. Cluster lifecycle RBAC

View the following cluster lifecycle RBAC operations.

To create and administer all managed clusters:

  • Create a cluster role binding to the cluster role open-cluster-management:cluster-manager-admin. This is a super-user role that has access to all resources and actions. The role allows you to create cluster-scoped managedcluster resources, the namespace for the resources that manage the managed cluster, and the resources in the namespace. This role also allows access to provider connections and to bare metal assets that are used to create managed clusters.

    oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:cluster-manager-admin

To administer a managed cluster named cluster-name:

  • Create a cluster role binding to the cluster role open-cluster-management:admin:<cluster-name>. This role allows read/write access to the cluster-scoped managedcluster resource. This is needed because the managedcluster is a cluster-scoped resource and not a namespace-scoped resource.

    oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:admin:<cluster-name>
  • Create a namespace role binding to the cluster role admin. This role allows read/write access to the resources in the namespace of the managed cluster.

    oc create rolebinding <role-binding-name> -n <cluster-name> --clusterrole=admin

To view a managed cluster named cluster-name:

  • Create a cluster role binding to the cluster role open-cluster-management:view:<cluster-name>. This role allows read access to the cluster-scoped managedcluster resource. This is needed because the managedcluster is a cluster-scoped resource and not a namespace-scoped resource.

    oc create clusterrolebinding <role-binding-name> --clusterrole=open-cluster-management:view:<cluster-name>
  • Create a namespace role binding to the cluster role view. This role allows read-only access to the resources in the namespace of the managed cluster.

    oc create rolebinding <role-binding-name> -n <cluster-name> --clusterrole=view
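For example, to grant a hypothetical user named user1 view access to a managed cluster named cluster1 (both names are placeholders), you might run the following commands:

oc create clusterrolebinding user1-cluster1-view --clusterrole=open-cluster-management:view:cluster1 --user=user1
oc create rolebinding user1-cluster1-view -n cluster1 --clusterrole=view --user=user1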

View the following console and API RBAC tables for cluster lifecycle:

Table 1.2. Console RBAC table for cluster lifecycle

Action               | Admin                        | Edit                         | View
Clusters             | read, update, delete         | read, update                 | read
Provider connections | create, read, update, delete | create, read, update, delete | read
Bare metal asset     | create, read, update, delete | read, update                 | read

Table 1.3. API RBAC table for cluster lifecycle

API                                                      | Admin                        | Edit         | View
managedclusters.cluster.open-cluster-management.io       | create, read, update, delete | read, update | read
baremetalassets.inventory.open-cluster-management.io     | create, read, update, delete | read, update | read
klusterletaddonconfigs.agent.open-cluster-management.io  | create, read, update, delete | read, update | read
managedclusteractions.action.open-cluster-management.io  | create, read, update, delete | read, update | read
managedclusterviews.view.open-cluster-management.io      | create, read, update, delete | read, update | read
managedclusterinfos.internal.open-cluster-management.io  | create, read, update, delete | read, update | read
manifestworks.work.open-cluster-management.io            | create, read, update, delete | read, update | read

1.1.2.2. Application lifecycle RBAC

When you create an application, the subscription namespace is created and the configuration map is created in the subscription namespace. You must also have access to the channel namespace. When you want to apply a subscription, you must be a subscription administrator. For more information on managing applications, see Creating and managing subscriptions.
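One way to grant a user the subscription administrator role is to add the user as a subject of the subscription-admin cluster role binding. The following command is a sketch that assumes the default open-cluster-management:subscription-admin cluster role binding exists on your hub cluster:

oc edit clusterrolebinding open-cluster-management:subscription-admin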

To perform application lifecycle tasks, users with the admin role must have access to the application namespace where the application is created, and to the managed cluster namespace. For example, the required access to create applications in namespace "N" is a namespace-scoped binding to the admin role for namespace "N".
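A minimal sketch of that binding, assuming a hypothetical user named user1 and an application namespace named app-ns (placeholder names):

oc create rolebinding user1-app-admin -n app-ns --clusterrole=admin --user=user1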

View the following console and API RBAC tables for Application lifecycle:

Table 1.4. Console RBAC table for Application lifecycle

Action         | Admin                        | Edit                         | View
Application    | create, read, update, delete | create, read, update, delete | read
Channel        | create, read, update, delete | create, read, update, delete | read
Subscription   | create, read, update, delete | create, read, update, delete | read
Placement rule | create, read, update, delete | create, read, update, delete | read

Table 1.5. API RBAC table for application lifecycle

API                                            | Admin                        | Edit                         | View
applications.app.k8s.io                        | create, read, update, delete | create, read, update, delete | read
channels.apps.open-cluster-management.io       | create, read, update, delete | create, read, update, delete | read
deployables.apps.open-cluster-management.io    | create, read, update, delete | create, read, update, delete | read
helmreleases.apps.open-cluster-management.io   | create, read, update, delete | create, read, update, delete | read
placementrules.apps.open-cluster-management.io | create, read, update, delete | create, read, update, delete | read
subscriptions.apps.open-cluster-management.io  | create, read, update, delete | create, read, update, delete | read
configmaps                                     | create, read, update, delete | create, read, update, delete | read
secrets                                        | create, read, update, delete | create, read, update, delete | read
namespaces                                     | create, read, update, delete | create, read, update, delete | read

1.1.2.3. Governance lifecycle RBAC

To perform governance lifecycle operations, users must have access to the namespace where the policy is created, along with access to the managedcluster namespace where the policy is applied.

View the following examples:

  • To view policies in namespace "N", the following role is required:

    • A namespace-scoped binding to the view role for namespace "N".
  • To create a policy in namespace "N" and apply it on managedcluster "X", the following roles are required, as shown in the sketch after this list:

    • A namespace-scoped binding to the admin role for namespace "N".
    • A namespace-scoped binding to the admin role for namespace "X".
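A minimal sketch of those bindings for a hypothetical user named user1, with policy-ns standing in for namespace "N" and cluster-x for the namespace of managedcluster "X" (all placeholder names):

oc create rolebinding user1-policy-admin -n policy-ns --clusterrole=admin --user=user1
oc create rolebinding user1-policy-admin -n cluster-x --clusterrole=admin --user=user1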

View the following console and API RBAC tables for Governance lifecycle:

Table 1.6. Console RBAC table for governance lifecycle

Action            | Admin                        | Edit         | View
Policies          | create, read, update, delete | read, update | read
PlacementBindings | create, read, update, delete | read, update | read
PlacementRules    | create, read, update, delete | read, update | read

Table 1.7. API RBAC table for Governance lifecycle

API                                                 | Admin                        | Edit         | View
policies.policy.open-cluster-management.io          | create, read, update, delete | read, update | read
placementbindings.policy.open-cluster-management.io | create, read, update, delete | read, update | read

1.1.2.4. Observability RBAC

To view the observability metrics for a managed cluster, you must have view access to that managed cluster on the hub cluster. View the following list of observability features:

  • Access managed cluster metrics.

    Users are denied access to managed cluster metrics if they are not assigned to the view role for the managed cluster on the hub cluster.

  • Search for resources.

To view observability data in Grafana, you must have a RoleBinding resource in the same namespace as the managed cluster. View the following RoleBinding example:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: <replace-with-name-of-rolebinding>
  namespace: <replace-with-name-of-managedcluster-namespace>
subjects:
  - kind: <replace-with-User|Group|ServiceAccount>
    apiGroup: rbac.authorization.k8s.io
    name: <replace-with-name-of-User|Group|ServiceAccount>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view

See Role binding policy for more information. See Customizing observability to configure observability.

  • Use the Visual Web Terminal if you have access to the managed cluster.

You must have the required access to create, update, and delete the MultiClusterObservability custom resource. View the following RBAC table:

Table 1.8. API RBAC table for observability

API                                                                   | Admin                        | Edit | View
multiclusterobservabilities.observability.open-cluster-management.io | create, read, update, delete | -    | -

To continue to learn more about securing your cluster, see Security.

1.2. Credentials

You can rotate your credentials for your Red Hat Advanced Cluster Management for Kubernetes clusters when your cloud provider access credentials have changed. Continue reading for the procedure to manually propagate your updated cloud provider credentials.

Required access: Cluster administrator

1.2.1. Provider credentials

Connection secrets for a cloud provider can be rotated. See the following list of provider credentials:

1.2.1.1. Amazon Web Services

  • aws_access_key_id: Your provisioned cluster access key.
  • aws_secret_access_key: Your provisioned secret access key.

    1. View the resources in the namespace that has the same name as the cluster with the expired credential.
    2. Find the secret name <cluster_name>-<cloud_provider>-creds. For example: my-cluster-aws-creds.
    3. Edit the secret to replace the existing value with the updated value.
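A sketch of those steps for a hypothetical AWS cluster named my-cluster (a placeholder name):

oc get secrets -n my-cluster
oc edit secret my-cluster-aws-creds -n my-cluster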

1.2.2. Agents

Agents are responsible for maintaining connections between managed clusters and the hub cluster. See how you can rotate the following credentials:

  • registration-agent: Connects the registration agent to the hub cluster.
  • work-agent: Connects the work agent to the hub cluster.

    To rotate credentials, delete the hub-kubeconfig secret to restart the registration pods.

  • APIServer: Connects agents and add-ons to the hub cluster.

    1. On the hub cluster, extract the import.yaml file by entering the following command:

      oc get secret -n ${CLUSTER_NAME} ${CLUSTER_NAME}-import -ojsonpath='{.data.import\.yaml}' | base64 --decode  > import.yaml
    2. On the managed cluster, apply the import.yaml file. Run the following command: oc apply -f import.yaml.
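For the registration-agent and work-agent rotation that is noted earlier, a minimal sketch, assuming the agents run in the open-cluster-management-agent namespace on the managed cluster and the secret is named hub-kubeconfig-secret (verify both names on your system):

# On the managed cluster, deleting the secret restarts the registration pods,
# which re-create the secret with fresh credentials:
oc delete secret hub-kubeconfig-secret -n open-cluster-management-agent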

1.3. Certificates

Various certificates are created and used throughout Red Hat Advanced Cluster Management for Kubernetes.

You can bring your own certificates. You must create a Kubernetes TLS Secret for your certificate. After you create your certificates, you can replace certain certificates that are created by the Red Hat Advanced Cluster Management installer.

Required access: Cluster administrator or team administrator.

Note: Replacing certificates is supported only on native Red Hat Advanced Cluster Management installations.

All certificates required by services that run on Red Hat Advanced Cluster Management are created during the installation of Red Hat Advanced Cluster Management. Certificates are created and managed by the Red Hat Advanced Cluster Management Certificate manager (cert-manager) service. The Red Hat Advanced Cluster Management Root Certificate Authority (CA) certificate is stored within the Kubernetes Secret multicloud-ca-cert in the hub cluster namespace. The certificate can be imported into your client truststores to access Red Hat Advanced Cluster Management Platform APIs.

See the following topics to replace certificates:

1.3.1. List managed certificates

You can view a list of managed certificates that use cert-manager internally by running the following command:

oc get certificates.certmanager.k8s.io -n open-cluster-management

Note: If observability is enabled, there are additional namespaces where certificates are created.

1.3.2. Refresh a managed certificate

You can refresh a managed certificate by running the command in the List managed certificates section. When you identify the certificate that you need to refresh, delete the secret that is associated with the certificate. For example, you can delete a secret by running the following command:

oc delete secret grc-0c925-grc-secrets -n open-cluster-management

1.3.3. Refresh managed certificates for Red Hat Advanced Cluster Management for Kubernetes

You can refresh all managed certificates that are issued by the Red Hat Advanced Cluster Management CA. During the refresh, the Kubernetes secret that is associated with each cert-manager certificate is deleted. The service restarts automatically to use the certificate. Run the following command:

oc delete secret -n open-cluster-management $(oc get certificates.certmanager.k8s.io -n open-cluster-management -o wide | grep multicloud-ca-issuer | awk '{print $3}')

The Red Hat OpenShift Container Platform certificate is not included in the Red Hat Advanced Cluster Management for Kubernetes management ingress. For more information, see the Security known issues.

1.3.4. Refresh internal certificates

You can refresh internal certificates, which are certificates that are used by Red Hat Advanced Cluster Management webhooks and the proxy server.

Complete the following steps to refresh internal certificates:

  1. Delete the secret that is associated with the internal certificate by running the following command:

    oc delete secret -n open-cluster-management ocm-webhook-secret

    Note: Some services might not have a secret that needs to be deleted.

  2. Restart the services that are associated with the internal certificate(s) by running the following command:

    oc delete po -n open-cluster-management ocm-webhook-679444669c-5cg76

    Remember: Many services run multiple replicas; every pod for a service must be restarted.
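As an alternative to deleting pods one by one, the following sketch restarts all replicas at once, assuming the service is backed by a Deployment of the same name (the ocm-webhook name is carried over from the previous example):

oc rollout restart deployment ocm-webhook -n open-cluster-management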

View the following table for a summarized list of the pods that contain certificates and whether a secret needs to be deleted prior to restarting the pod:

Table 1.9. Pods that contain internal certificates

Service name                                      | Namespace                   | Sample pod name                                     | Secret name (if applicable)
channels-apps-open-cluster-management-webhook-svc | open-cluster-management     | multicluster-operators-application-8c446664c-5lbfk | -
multicluster-operators-application-svc            | open-cluster-management     | multicluster-operators-application-8c446664c-5lbfk | -
multiclusterhub-operator-webhook                  | open-cluster-management     | multiclusterhub-operator-bfd948595-mnhjc           | -
ocm-webhook                                       | open-cluster-management     | ocm-webhook-679444669c-5cg76                       | ocm-webhook-secret
cluster-manager-registration-webhook              | open-cluster-management-hub | cluster-manager-registration-webhook-fb7b99c-d8wfc | registration-webhook-serving-cert
cluster-manager-work-webhook                      | open-cluster-management-hub | cluster-manager-work-webhook-89b8d7fc-f4pv8        | work-webhook-serving-cert

1.3.4.1. Rotating the gatekeeper webhook certificate

Complete the following steps to rotate the gatekeeper webhook certificate:

  1. Edit the secret that contains the certificate with the following command:

    oc edit secret -n openshift-gatekeeper-system gatekeeper-webhook-server-cert
  2. Delete the following content in the data section: ca.crt, ca.key, tls.crt, and tls.key.
  3. Restart the gatekeeper webhook service by deleting the gatekeeper-controller-manager pods with the following command:

    oc delete po -n openshift-gatekeeper-system -l control-plane=controller-manager

The gatekeeper webhook certificate is rotated.
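To confirm the rotation, you might verify that the secret data was repopulated and check the new certificate dates; this is an optional sketch, not part of the required procedure:

oc get secret -n openshift-gatekeeper-system gatekeeper-webhook-server-cert -o jsonpath='{.data.tls\.crt}' | base64 --decode | openssl x509 -noout -dates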

1.3.4.2. Rotating the integrity shield webhook certificate (Technology preview)

Complete the following steps to rotate the integrity shield webhook certificate:

  1. Edit the IntegrityShield custom resource and add the integrity-shield-operator-system namespace to the excluded list of namespaces in the inScopeNamespaceSelector setting. Run the following command to edit the resource:

    oc edit integrityshield integrity-shield-server -n integrity-shield-operator-system
  2. Delete the secret that contains the integrity shield certificate by running the following command:

    oc delete secret -n integrity-shield-operator-system ishield-server-tls
  3. Delete the operator so that the secret is recreated. Be sure that the operator pod name matches the pod name on your system. Run the following command:

    oc delete po -n integrity-shield-operator-system integrity-shield-operator-controller-manager-64549569f8-v4pz6
  4. Delete the integrity shield server pod to begin using the new certificate with the following command:

    oc delete po -n integrity-shield-operator-system integrity-shield-server-5fbdfbbbd4-bbfbz

1.3.4.3. Observability certificates

When Red Hat Advanced Cluster Management is installed, there are additional namespaces where certificates are managed. The open-cluster-management-observability namespace and the managed cluster namespaces contain certificates managed by cert-manager for the observability service.

Observability certificates are automatically refreshed upon expiration. View the following list to understand the effects when certificates are automatically renewed:

  • Components on your hub cluster automatically restart to retrieve the refreshed certificate.
  • Red Hat Advanced Cluster Management sends the refreshed certificates to managed clusters.
  • The metrics-collector restarts to mount the renewed certificates.

    Note: metrics-collector can push metrics to the hub cluster before and after certificates are removed. For more information about refreshing certificates across your clusters, see the Refresh internal certificates section. Be sure to specify the appropriate namespace when you refresh a certificate.

1.3.4.4. Channel certificates

CA certificates can be associated with Git channels that are part of Red Hat Advanced Cluster Management application management. See Using custom CA certificates for a secure HTTPS connection for more details.

Helm channels allow you to disable certificate validation. A Helm channel with certificate validation disabled must be used only in development environments, because disabling certificate validation introduces security risks.
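A sketch of a Helm channel with certificate validation disabled; the channel name, namespace, and repository URL are placeholders, and the insecureSkipVerify field is an assumption to verify against your channel API version:

apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: sample-helm-channel
  namespace: sample-ns
spec:
  type: HelmRepo
  pathname: https://example.com/charts
  insecureSkipVerify: true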

1.3.4.5. Managed cluster certificates

Certificates are used to authenticate managed clusters with the hub. Therefore, it is important to be aware of troubleshooting scenarios associated with these certificates. View Troubleshooting imported clusters offline after certificate change for more details.

The managed cluster certificates are refreshed automatically.

Use the certificate policy controller to create and manage certificate policies on managed clusters. See Policy controllers to learn more about controllers. Return to the Security page for more information.

1.3.5. Replacing the root CA certificate

You can replace the root CA certificate.

1.3.5.1. Prerequisites for root CA certificate

Verify that your Red Hat Advanced Cluster Management for Kubernetes cluster is running.

Back up the existing Red Hat Advanced Cluster Management for Kubernetes certificate resource by running the following command:

oc get cert multicloud-ca-cert -n open-cluster-management -o yaml > multicloud-ca-cert-backup.yaml

1.3.5.2. Creating the root CA certificate with OpenSSL

Complete the following steps to create a root CA certificate with OpenSSL:

  1. Generate your certificate authority (CA) RSA private key by running the following command:

    openssl genrsa -out ca.key 4096
  2. Generate a self-signed CA certificate by using your CA key. Run the following command:

    openssl req -x509 -new -nodes -key ca.key -days 400 -out ca.crt -config req.cnf

    Your req.cnf file might resemble the following file:

    [ req ]                   # Main settings
    default_bits = 4096       # Default key size in bits.
    prompt = no               # Disables prompting for certificate values so the configuration file values are used.
    default_md = sha256       # Specifies the digest algorithm.
    distinguished_name = dn   # Specifies the section that includes the distinguished name information.
    x509_extensions = v3_ca   # The extensions to add to the self-signed certificate.
    
    [ dn ]                    # Distinguished name settings
    C = US                    # Country
    ST = North Carolina       # State or province
    L = Raleigh               # Locality
    O = Red Hat OpenShift     # Organization
    OU = Red Hat Advanced Cluster Management  # Organizational unit
    CN = www.redhat.com       # Common name
    
    [ v3_ca ]                 # x509v3 extensions
    basicConstraints=critical,CA:TRUE # Indicates whether the certificate is a CA certificate during the certificate chain verification process.
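To confirm that the generated root CA carries the expected subject and the CA:TRUE constraint, you can inspect it; this check is optional:

openssl x509 -noout -text -in ca.crt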

1.3.5.3. Replacing root CA certificates

  1. Create a new secret with the CA certificate by running the following command:

    kubectl -n open-cluster-management create secret tls byo-ca-cert --cert ./ca.crt --key ./ca.key
  2. Edit the CA issuer to point to the BYO certificate. Run the following command:

    oc edit issuer -n open-cluster-management multicloud-ca-issuer
  3. Replace the string multicloud-ca-cert with byo-ca-cert. Save the issuer and quit the editor.
  4. Edit the management ingress deployment to reference the Bring Your Own (BYO) CA certificate. Run the following command:

    oc edit deployment management-ingress-435ab
  5. Replace the multicloud-ca-cert string with the byo-ca-cert. Save your deployment and quit the editor.
  6. Validate that the custom CA is in use by logging in to the console and viewing the details of the certificate that is being used.
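One command-line way to check which CA issued the serving certificate, assuming <console-route> is the console hostname that is returned by oc get route -n open-cluster-management (a sketch):

openssl s_client -connect <console-route>:443 -showcerts </dev/null 2>/dev/null | openssl x509 -noout -issuer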

1.3.5.4. Refreshing cert-manager certificates

After the root CA is replaced, all certificates that are signed by the root CA must be refreshed, and the services that use those certificates must be restarted. Cert-manager creates the default issuer from the root CA, so all of the certificates that are issued by cert-manager and signed by the default ClusterIssuer must also be refreshed.

Delete the Kubernetes secrets associated with each cert-manager certificate to refresh the certificate and restart the services that use the certificate. Run the following command:

oc delete secret -n open-cluster-management $(oc get cert -n open-cluster-management -o wide | grep multicloud-ca-issuer | awk '{print $3}')

1.3.5.5. Restoring root CA certificates

To restore the root CA certificate, update the CA issuer by completing the following steps:

  1. Edit the CA issuer. Run the following command:

    oc edit issuer -n open-cluster-management multicloud-ca-issuer
  2. Replace the byo-ca-cert string with multicloud-ca-cert in the editor. Save the issuer and quit the editor.
  3. Edit the management ingress deployment to reference the original CA certificate. Run the following command:

    oc edit deployment management-ingress-435ab
  4. Replace the byo-ca-cert string with the multicloud-ca-cert string. Save your deployment and quit the editor.
  5. Delete the BYO CA certificate. Run the following command:

    oc delete secret -n open-cluster-management byo-ca-cert

Refresh all cert-manager certificates that use the CA. For more information, see the previous section, Refreshing cert-manager certificates.

See Certificates for more information about certificates that are created and managed by Red Hat Advanced Cluster Management for Kubernetes.

1.3.6. Replacing the management ingress certificates

You can replace management ingress certificates.

1.3.6.1. Prerequisites to replace management ingress certificate

Prepare and have your management-ingress certificates and private keys ready. If needed, you can generate a TLS certificate by using OpenSSL. Set the common name parameter, CN, on the certificate to management-ingress. If you are generating the certificate, include the following settings:

  • Add the following entries to your certificate Subject Alternative Name (SAN) list:

    • The service name for the management ingress: management-ingress.
    • Include the route name for Red Hat Advanced Cluster Management for Kubernetes.

      Get the route name by running the following command:

      oc get route -n open-cluster-management

      You might receive the following response:

      multicloud-console.apps.grchub2.dev08.red-chesterfield.com
    • Add the localhost IP address: 127.0.0.1.
    • Add the localhost entry: localhost.
1.3.6.1.1. Example configuration file for generating a certificate

The following example configuration file and OpenSSL commands provide an example for how to generate a TLS certificate by using OpenSSL. View the following csr.cnf configuration file, which defines the configuration settings for generating certificates with OpenSSL.

[ req ]                   # Main settings
default_bits = 2048       # Default key size in bits.
prompt = no               # Disables prompting for certificate values so the configuration file values are used.
default_md = sha256       # Specifies the digest algorithm.
req_extensions = req_ext  # Specifies the configuration file section that includes any extensions.
distinguished_name = dn   # Specifies the section that includes the distinguished name information.

[ dn ]                    # Distinguished name settings
C = US                    # Country
ST = North Carolina       # State or province
L = Raleigh               # Locality
O = Red Hat OpenShift     # Organization
OU = Red Hat Advanced Cluster Management  # Organizational unit
CN = management-ingress   # Common name

[ req_ext ]               # Extensions
subjectAltName = @alt_names # Subject alternative names

[ alt_names ]             # Subject alternative names
DNS.1 = management-ingress
DNS.2 = multicloud-console.apps.grchub2.dev08.red-chesterfield.com
DNS.3 = localhost
DNS.4 = 127.0.0.1

[ v3_ext ]                # x509v3 extensions
authorityKeyIdentifier=keyid,issuer:always  # Specifies the public key that corresponds to the private key that is used to sign a certificate.
basicConstraints=CA:FALSE                   # Indicates whether the certificate is a CA certificate during the certificate chain verification process.
#keyUsage=keyEncipherment,dataEncipherment  # Defines the purpose of the key that is contained in the certificate.
extendedKeyUsage=serverAuth                 # Defines the purposes for which the public key can be used.
subjectAltName=@alt_names                   # Identifies the subject alternative names for the identity that is bound to the public key by the CA.

Note: Be sure to update the SAN labeled DNS.2 with the correct hostname for your management ingress.

1.3.6.1.2. OpenSSL commands for generating a certificate

The following OpenSSL commands are used with the preceding configuration file to generate the required TLS certificate.

  1. Generate your certificate authority (CA) RSA private key:

    openssl genrsa -out ca.key 4096
  2. Generate a self-signed CA certificate by using your CA key:

    openssl req -x509 -new -nodes -key ca.key -subj "/C=US/ST=North Carolina/L=Raleigh/O=Red Hat OpenShift" -days 400 -out ca.crt
  3. Generate the RSA private key for your certificate:

    openssl genrsa -out ingress.key 4096
  4. Generate the Certificate Signing request (CSR) by using the private key:

    openssl req -new -key ingress.key -out ingress.csr -config csr.cnf
  5. Generate a signed certificate by using your CA certificate and key and CSR:

    openssl x509 -req -in ingress.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out ingress.crt -sha256 -days 300 -extensions v3_ext -extfile csr.cnf
  6. Examine the certificate contents:

    openssl x509  -noout -text -in ./ingress.crt

1.3.6.2. Replace the Bring Your Own (BYO) ingress certificate

Complete the following steps to replace your BYO ingress certificate:

  1. Create the byo-ingress-tls secret by using your certificate and private key. Run the following command:

    kubectl -n open-cluster-management create secret tls byo-ingress-tls-secret --cert ./ingress.crt --key ./ingress.key
  2. Verify that the secret is created in the correct namespace with the following command:

    kubectl get secret -n open-cluster-management | grep -e byo-ingress-tls-secret -e byo-ca-cert
  3. Create a secret containing the CA certificate by running the following command:

    kubectl -n open-cluster-management create secret tls byo-ca-cert --cert ./ca.crt --key ./ca.key
  4. Edit the management ingress deployment and get the name of the deployment with the following commands:

    export MANAGEMENT_INGRESS=$(oc get deployment -o custom-columns=:.metadata.name | grep management-ingress)
    
    oc edit deployment $MANAGEMENT_INGRESS -n open-cluster-management
    • Replace the multicloud-ca-cert string with byo-ca-cert.
    • Replace the $MANAGEMENT_INGRESS-tls-secret string with byo-ingress-tls-secret.
    • Save your deployment and close the editor. The management ingress automatically restarts.
  5. Verify that the current certificate is your certificate, and that all console access and login functionality remain the same.

1.3.6.3. Restore the default self-signed certificate for management ingress

  1. Edit the management ingress deployment and get the name of the deployment with the following commands:

    export MANAGEMENT_INGRESS=$(oc get deployment -o custom-columns=:.metadata.name | grep management-ingress)
    
    oc edit deployment $MANAGEMENT_INGRESS -n open-cluster-management
    1. Replace the byo-ca-cert string with multicloud-ca-cert.
    2. Replace the byo-ingress-tls-secret string with the $MANAGEMENT_INGRESS-tls-secret.
    3. Save your deployment and close the editor. The management ingress automatically restarts.
  2. After all pods are restarted, navigate to the Red Hat Advanced Cluster Management for Kubernetes console from your browser.
  3. Verify that the current certificate is your certificate, and that all console access and login functionality remain the same.
  4. Delete the Bring Your Own (BYO) ingress secret and ingress CA certificate by running the following commands:

    oc delete secret -n open-cluster-management byo-ingress-tls-secret
    oc delete secret -n open-cluster-management byo-ca-cert

See Certificates for more information about certificates that are created and managed by Red Hat Advanced Cluster Management. Return to the Security page for more information on securing your cluster.