1.9. etcd tasks

Back up etcd, enable or disable etcd encryption, or defragment etcd data.

1.9.1. About etcd encryption

By default, etcd data is not encrypted in OpenShift Container Platform. You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect against the loss of sensitive data if an etcd backup is exposed to unintended parties.

When you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted:

  • Secrets
  • Config maps
  • Routes
  • OAuth access tokens
  • OAuth authorize tokens

When you enable etcd encryption, encryption keys are created. These keys are rotated on a weekly basis. You must have these keys in order to restore from an etcd backup.
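
Key rotation is handled automatically by the cluster; you do not rotate these keys manually. As a hedged illustration only, assuming the rotated keys are surfaced as encryption-key-* secrets in the openshift-config-managed namespace (the exact secret names and location can vary by release), you can list them with a standard query:

    $ oc get secrets -n openshift-config-managed | grep encryption-key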

1.9.2. Enabling etcd encryption

You can enable etcd encryption to encrypt sensitive resources in your cluster.

Warning

It is not recommended to take a backup of etcd until the initial encryption process is complete. If the encryption process has not completed, the backup might be only partially encrypted.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.

Procedure

  1. Modify the APIServer object (a non-interactive oc patch alternative is sketched after this procedure):

    $ oc edit apiserver
  2. Set the encryption field type to aescbc:

    spec:
      encryption:
        type: aescbc 1
    1
    The aescbc type means that AES-CBC with PKCS#7 padding and a 32-byte key is used to perform the encryption.
  3. Save the file to apply the changes.

    The encryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster.

  4. Verify that etcd encryption was successful.

    1. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully encrypted:

      $ oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'

      The output shows EncryptionCompleted upon successful encryption:

      EncryptionCompleted
      All resources encrypted: routes.route.openshift.io, oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io

      If the output shows EncryptionInProgress, this means that encryption is still in progress. Wait a few minutes and try again.

    2. Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully encrypted:

      $ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'

      The output shows EncryptionCompleted upon successful encryption:

      EncryptionCompleted
      All resources encrypted: secrets, configmaps

      If the output shows EncryptionInProgress, this means that encryption is still in progress. Wait a few minutes and try again.
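
If you prefer a non-interactive workflow, the same change can be applied with a merge patch instead of oc edit. The following is a minimal sketch that is equivalent to steps 1 through 3 of this procedure; the same approach works with type: identity when disabling encryption:

    $ oc patch apiserver cluster --type=merge -p '{"spec":{"encryption":{"type":"aescbc"}}}'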

1.9.3. Disabling etcd encryption

You can disable encryption of etcd data in your cluster.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.

Procedure

  1. Modify the APIServer object:

    $ oc edit apiserver
  2. Set the encryption field type to identity:

    spec:
      encryption:
        type: identity 1
    1
    The identity type is the default value and means that no encryption is performed.
  3. Save the file to apply the changes.

    The decryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster.

  4. Verify that etcd decryption was successful.

    1. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully decrypted:

      $ oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'

      The output shows DecryptionCompleted upon successful decryption:

      DecryptionCompleted
      Encryption mode set to identity and everything is decrypted

      If the output shows DecryptionInProgress, this means that decryption is still in progress. Wait a few minutes and try again.

    2. Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully decrypted:

      $ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'

      The output shows DecryptionCompleted upon successful decryption:

      DecryptionCompleted
      Encryption mode set to identity and everything is decrypted

      If the output shows DecryptionInProgress, this means that decryption is still in progress. Wait a few minutes and try again.

1.9.4. Backing up etcd data

Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd.

Important

Only save a backup from a single master host. Do not take a backup from each master host in the cluster.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have checked whether the cluster-wide proxy is enabled.

    Tip

    You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
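
    For example, the following jsonpath query is a minimal sketch that assumes the effective values are reported in the status fields of the Proxy object. If all three printed lines are empty, the proxy is not enabled:

    $ oc get proxy cluster -o jsonpath='{.status.httpProxy}{"\n"}{.status.httpsProxy}{"\n"}{.status.noProxy}{"\n"}'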

Procedure

  1. Start a debug session for a master node:

    $ oc debug node/<node_name>
  2. Change your root directory to the host:

    sh-4.2# chroot /host
  3. If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables (a sketch is shown after this procedure).
  4. Run the cluster-backup.sh script and pass in the location to save the backup to.

    Tip

    The cluster-backup.sh script is maintained as a component of the etcd Cluster Operator and is a wrapper around the etcdctl snapshot save command.

    sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup

    Example script output

    1bf371f1b5a483927cd01bb593b0e12cff406eb8d7d0acf4ab079c36a0abd3f7
    etcdctl version: 3.3.18
    API version: 3.3
    found latest kube-apiserver-pod: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-7
    found latest kube-controller-manager-pod: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-8
    found latest kube-scheduler-pod: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6
    found latest etcd-pod: /etc/kubernetes/static-pod-resources/etcd-pod-2
    Snapshot saved at /home/core/assets/backup/snapshot_2020-03-18_220218.db
    snapshot db and kube resources are successfully saved to /home/core/assets/backup

    In this example, two files are created in the /home/core/assets/backup/ directory on the master host:

    • snapshot_<datetimestamp>.db: This file is the etcd snapshot.
    • static_kuberesources_<datetimestamp>.tar.gz: This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot.

      Note

      If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required in order to restore from the etcd snapshot.

      Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted.
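
The following is a minimal sketch of the export described in step 3, run inside the chroot environment on the master node. The placeholder values are hypothetical; substitute the proxy endpoints and exclusion list from your cluster's Proxy object:

    sh-4.4# export HTTP_PROXY=http://<proxy_host>:<proxy_port>
    sh-4.4# export HTTPS_PROXY=http://<proxy_host>:<proxy_port>
    sh-4.4# export NO_PROXY=<no_proxy_list>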

1.9.5. Defragmenting etcd data

Manual defragmentation must be performed periodically to reclaim disk space after etcd history compaction and other events cause disk fragmentation.

History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system.

Because etcd writes data to disk, its performance strongly depends on disk performance. Consider defragmenting etcd every month, twice a month, or as needed for your cluster. You can also monitor the etcd_db_total_size_in_bytes metric to determine whether defragmentation is necessary.
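
As a rough on-host check, which is a hedged sketch rather than the documented metric-based approach, you can inspect the size of the etcd data directory on a master node. The path is the same data directory referenced later in this document:

    $ oc debug node/<node_name> -- chroot /host du -sh /var/lib/etcd/member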

Warning

Defragmenting etcd is a blocking action. The etcd member does not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover.

Follow this procedure to defragment etcd data on each etcd member.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

Procedure

  1. Determine which etcd member is the leader, because the leader should be defragmented last.

    1. Get the list of etcd pods:

      $ oc get pods -n openshift-etcd -o wide | grep etcd

      Example output

      etcd-ip-10-0-159-225.example.redhat.com                3/3     Running     0          175m   10.0.159.225   ip-10-0-159-225.example.redhat.com   <none>           <none>
      etcd-ip-10-0-191-37.example.redhat.com                 3/3     Running     0          173m   10.0.191.37    ip-10-0-191-37.example.redhat.com    <none>           <none>
      etcd-ip-10-0-199-170.example.redhat.com                3/3     Running     0          176m   10.0.199.170   ip-10-0-199-170.example.redhat.com   <none>           <none>

    2. Choose a pod and run the following command to determine which etcd member is the leader:

      $ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table

      Example output

      Defaulting container name to etcdctl.
      Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod.
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      |         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      |  https://10.0.191.37:2379 | 251cd44483d811c3 |   3.4.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
      | https://10.0.159.225:2379 | 264c7c58ecbdabee |   3.4.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
      | https://10.0.199.170:2379 | 9ac311f93915cc79 |   3.4.9 |  104 MB |      true |      false |         7 |      91624 |              91624 |        |
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

      Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the previous step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com.

  2. Defragment an etcd member.

    1. Connect to the running etcd container, passing in the name of a pod that is not the leader:

      $ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com
    2. Unset the ETCDCTL_ENDPOINTS environment variable:

      sh-4.4# unset ETCDCTL_ENDPOINTS
    3. Defragment the etcd member:

      sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag

      Example output

      Finished defragmenting etcd member[https://localhost:2379]

      If a timeout error occurs, increase the value for --command-timeout until the command succeeds.

    4. Verify that the database size was reduced:

      sh-4.4# etcdctl endpoint status -w table --cluster

      Example output

      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      |         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
      |  https://10.0.191.37:2379 | 251cd44483d811c3 |   3.4.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
      | https://10.0.159.225:2379 | 264c7c58ecbdabee |   3.4.9 |   41 MB |     false |      false |         7 |      91624 |              91624 |        | 1
      | https://10.0.199.170:2379 | 9ac311f93915cc79 |   3.4.9 |  104 MB |      true |      false |         7 |      91624 |              91624 |        |
      +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

      This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB.

    5. Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last.

      Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond.

  3. If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them.

    1. Check if there are any NOSPACE alarms:

      sh-4.4# etcdctl alarm list

      Example output

      memberID:12345678912345678912 alarm:NOSPACE

    2. Clear the alarms:

      sh-4.4# etcdctl alarm disarm
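
As an optional follow-up, and assuming your etcdctl version supports the --cluster flag for the endpoint health command, you can confirm that every member is healthy from the same session before exiting:

    sh-4.4# etcdctl endpoint health --cluster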

1.9.6. Restoring to a previous cluster state

You can use a saved etcd backup to restore your cluster to a previous state. You use the etcd backup to restore a single control plane host. Then the etcd cluster Operator handles scaling to the remaining master hosts.

Important

When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.5.2 cluster must use an etcd backup that was taken from 4.5.2.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.
  • SSH access to master hosts.
  • A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.

Procedure

  1. Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on.
  2. Establish SSH connectivity to each of the control plane nodes, including the recovery host (a connection sketch is shown after this procedure).

    The Kubernetes API server becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal.

    Important

    If you do not complete this step, you will not be able to access the master hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.

  3. Copy the etcd backup directory to the recovery control plane host.

    This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host.

  4. Stop the static pods on all other control plane nodes.

    Note

    It is not required to manually stop the pods on the recovery host. The recovery script will stop the pods on the recovery host.

    1. Access a control plane host that is not the recovery host.
    2. Move the existing etcd pod file out of the kubelet manifest directory:

      [core@ip-10-0-154-194 ~]$ sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp
    3. Verify that the etcd pods are stopped.

      [core@ip-10-0-154-194 ~]$ sudo crictl ps | grep etcd | grep -v operator

      The output of this command should be empty. If it is not empty, wait a few minutes and check again.

    4. Move the existing Kubernetes API server pod file out of the kubelet manifest directory:

      [core@ip-10-0-154-194 ~]$ sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp
    5. Verify that the Kubernetes API server pods are stopped.

      [core@ip-10-0-154-194 ~]$ sudo crictl ps | grep kube-apiserver | grep -v operator

      The output of this command should be empty. If it is not empty, wait a few minutes and check again.

    6. Move the etcd data directory to a different location:

      [core@ip-10-0-154-194 ~]$ sudo mv /var/lib/etcd/ /tmp
    7. Repeat this step on each of the other master hosts that are not the recovery host.
  5. Access the recovery control plane host.
  6. If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.

    Tip

    You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.

  7. Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory:

    [core@ip-10-0-143-125 ~]$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup

    Example script output

    ...stopping kube-scheduler-pod.yaml
    ...stopping kube-controller-manager-pod.yaml
    ...stopping etcd-pod.yaml
    ...stopping kube-apiserver-pod.yaml
    Waiting for container etcd to stop
    .complete
    Waiting for container etcdctl to stop
    .............................complete
    Waiting for container etcd-metrics to stop
    complete
    Waiting for container kube-controller-manager to stop
    complete
    Waiting for container kube-apiserver to stop
    ..........................................................................................complete
    Waiting for container kube-scheduler to stop
    complete
    Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup
    starting restore-etcd static pod
    starting kube-apiserver-pod.yaml
    static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml
    starting kube-controller-manager-pod.yaml
    static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml
    starting kube-scheduler-pod.yaml
    static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml

  8. Restart the kubelet service on all master hosts.

    1. From the recovery host, run the following command:

      [core@ip-10-0-143-125 ~]$ sudo systemctl restart kubelet.service
    2. Repeat this step on all other master hosts.
  9. Verify that the single-member control plane has started successfully.

    1. From the recovery host, verify that the etcd container is running.

      [core@ip-10-0-143-125 ~]$ sudo crictl ps | grep etcd | grep -v operator

      Example output

      3ad41b7908e32       36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009                                                         About a minute ago   Running             etcd                                          0                   7c05f8af362f0

    2. From the recovery host, verify that the etcd pod is running.

      [core@ip-10-0-143-125 ~]$ oc get pods -n openshift-etcd | grep etcd

      Note

      If you attempt to run oc login prior to running this command and receive the following error, wait a few moments for the authentication controllers to start and try again.

      Unable to connect to the server: EOF

      Example output

      NAME                                             READY   STATUS      RESTARTS   AGE
      etcd-ip-10-0-143-125.ec2.internal                1/1     Running     1          2m47s

      If the status is Pending, or the output lists more than one running etcd pod, wait a few minutes and check again.

  10. Force etcd redeployment.

    In a terminal that has access to the cluster as a cluster-admin user, run the following command:

    $ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge 1
    1
    The forceRedeploymentReason value must be unique, which is why a timestamp is appended.

    When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up.

  11. Verify all nodes are updated to the latest revision.

    In a terminal that has access to the cluster as a cluster-admin user, run the following command:

    $ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

    Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

    AllNodesAtLatestRevision
    3 nodes are at revision 7 1
    1
    In this example, the latest revision number is 7.

    If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

  12. After etcd is redeployed, force new rollouts for the control plane. The Kubernetes API server will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer.

    In a terminal that has access to the cluster as a cluster-admin user, run the following commands.

    1. Update the kubeapiserver:

      $ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

      Verify all nodes are updated to the latest revision.

      $ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

      Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

      AllNodesAtLatestRevision
      3 nodes are at revision 7 1
      1
      In this example, the latest revision number is 7.

      If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

    2. Update the kubecontrollermanager:

      $ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

      Verify all nodes are updated to the latest revision.

      $ oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

      Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

      AllNodesAtLatestRevision
      3 nodes are at revision 7 1
      1
      In this example, the latest revision number is 7.

      If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

    3. Update the kubescheduler:

      $ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

      Verify all nodes are updated to the latest revision.

      $ oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

      Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

      AllNodesAtLatestRevision
      3 nodes are at revision 7 1
      1
      In this example, the latest revision number is 7.

      If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

  13. Verify that all master hosts have started and joined the cluster.

    In a terminal that has access to the cluster as a cluster-admin user, run the following command:

    $ oc get pods -n openshift-etcd | grep etcd

    Example output

    etcd-ip-10-0-143-125.ec2.internal                2/2     Running     0          9h
    etcd-ip-10-0-154-194.ec2.internal                2/2     Running     0          9h
    etcd-ip-10-0-173-171.ec2.internal                2/2     Running     0          9h

Note that it might take several minutes after completing this procedure for all services to be restored. For example, authentication by using oc login might not work until the OAuth server pods are restarted.
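
The following is a minimal sketch of the SSH connections described in step 2 of the restore procedure. The key path and host address are hypothetical placeholders; RHCOS hosts are accessed as the core user, as shown in the prompts throughout this procedure:

    $ ssh -i <ssh_key_path> core@<master_host_address>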