Security and compliance

OpenShift Container Platform 4.6

Learning about and managing security for OpenShift Container Platform

Red Hat OpenShift Documentation Team


This document discusses container security, configuring certificates, and enabling encryption to help secure the cluster.

Chapter 1. Compliance Operator

1.1. Understanding the Compliance Operator

The Compliance Operator lets OpenShift Container Platform administrators describe the desired compliance state of a cluster and provides them with an overview of gaps and ways to remediate them. The Compliance Operator assesses compliance of both the Kubernetes API resources of OpenShift Container Platform, as well as the nodes running the cluster. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce security policies provided by the content.


The Compliance Operator is available for Red Hat Enterprise Linux CoreOS (RHCOS) deployments only.

1.1.1. Compliance Operator profiles

There are several profiles available as part of the Compliance Operator installation.

View the available profiles:

$ oc get -n <namespace> profiles.compliance

Example output

NAME              AGE
ocp4-cis          4h52m
ocp4-cis-node     4h52m
ocp4-e8           4h52m
ocp4-moderate     4h52m
ocp4-ncp          4h52m
rhcos4-e8         4h52m
rhcos4-moderate   4h52m
rhcos4-ncp        4h52m

These profiles represent different compliance benchmarks.

View the details of a profile:

$ oc get -n <namespace> -oyaml profiles.compliance <profile name>

Example output

description: |-
  This profile contains configuration checks for Red Hat
  Enterprise Linux CoreOS that align to the Australian
  Cyber Security Centre (ACSC) Essential Eight.
  A copy of the Essential Eight in Linux Environments guide can
  be found at the ACSC website: ...
id: xccdf_org.ssgproject.content_profile_e8
kind: Profile
metadata:
  annotations:
    compliance.openshift.io/product: redhat_enterprise_linux_coreos_4
    compliance.openshift.io/product-type: Node
  creationTimestamp: "2020-09-07T11:42:51Z"
  generation: 1
  labels:
    compliance.openshift.io/profile-bundle: rhcos4
  name: rhcos4-e8
  namespace: openshift-compliance
rules:
- rhcos4-accounts-no-uid-except-zero
- rhcos4-audit-rules-dac-modification-chmod
- rhcos4-audit-rules-dac-modification-chown
- rhcos4-audit-rules-execution-chcon
- rhcos4-audit-rules-execution-restorecon
- rhcos4-audit-rules-execution-semanage
- rhcos4-audit-rules-execution-setfiles
- rhcos4-audit-rules-execution-setsebool
- rhcos4-audit-rules-execution-seunshare
- rhcos4-audit-rules-kernel-module-loading
- rhcos4-audit-rules-login-events
- rhcos4-audit-rules-login-events-faillock
- rhcos4-audit-rules-login-events-lastlog
- rhcos4-audit-rules-login-events-tallylog
- rhcos4-audit-rules-networkconfig-modification
- rhcos4-audit-rules-sysadmin-actions
- rhcos4-audit-rules-time-adjtimex
- rhcos4-audit-rules-time-clock-settime
- rhcos4-audit-rules-time-settimeofday
- rhcos4-audit-rules-time-stime
- rhcos4-audit-rules-time-watch-localtime
- rhcos4-audit-rules-usergroup-modification
- rhcos4-auditd-data-retention-flush
- rhcos4-auditd-freq
- rhcos4-auditd-local-events
- rhcos4-auditd-log-format
- rhcos4-auditd-name-format
- rhcos4-auditd-write-logs
- rhcos4-configure-crypto-policy
- rhcos4-configure-ssh-crypto-policy
- rhcos4-no-empty-passwords
- rhcos4-selinux-policytype
- rhcos4-selinux-state
- rhcos4-service-auditd-enabled
- rhcos4-sshd-disable-empty-passwords
- rhcos4-sshd-disable-gssapi-auth
- rhcos4-sshd-disable-rhosts
- rhcos4-sshd-disable-root-login
- rhcos4-sshd-disable-user-known-hosts
- rhcos4-sshd-do-not-permit-user-env
- rhcos4-sshd-enable-strictmodes
- rhcos4-sshd-print-last-log
- rhcos4-sshd-set-loglevel-info
- rhcos4-sshd-use-priv-separation
- rhcos4-sysctl-kernel-dmesg-restrict
- rhcos4-sysctl-kernel-kexec-load-disabled
- rhcos4-sysctl-kernel-kptr-restrict
- rhcos4-sysctl-kernel-randomize-va-space
- rhcos4-sysctl-kernel-unprivileged-bpf-disabled
- rhcos4-sysctl-kernel-yama-ptrace-scope
- rhcos4-sysctl-net-core-bpf-jit-harden
title: Australian Cyber Security Centre (ACSC) Essential Eight

View the rules within a desired profile:

$ oc get -n <namespace> -oyaml rules.compliance <rule_name>

Example output

description: |-
  If the auditd daemon is configured to use the augenrules program to read
  audit rules during daemon startup (the default), add the following lines
  to a file with suffix .rules in the directory /etc/audit/rules.d in order
  to watch for attempted manual edits of files involved in storing logon
  events:

    -w /var/log/tallylog -p wa -k logins
    -w /var/run/faillock -p wa -k logins
    -w /var/log/lastlog -p wa -k logins

  If the auditd daemon is configured to use the auditctl utility to read
  audit rules during daemon startup, add the same lines to the
  /etc/audit/audit.rules file in order to watch for attempted manual edits
  of files involved in storing logon events.
id: xccdf_org.ssgproject.content_rule_audit_rules_login_events
kind: Rule
metadata:
  annotations:
    compliance.openshift.io/rule: audit-rules-login-events
    control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a)
    policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a)
    policies.open-cluster-management.io/standards: NIST-800-53
  creationTimestamp: "2020-09-07T11:43:03Z"
  generation: 1
  labels:
    compliance.openshift.io/profile-bundle: rhcos4
  name: rhcos4-audit-rules-login-events
  namespace: openshift-compliance
rationale: |-
  Manual editing of these files may indicate nefarious activity,
  such as an attacker attempting to remove evidence of an intrusion.
severity: medium
title: Record Attempts to Alter Logon and Logout Events
warning: |-
  This rule checks for multiple syscalls related to login
  events and was written with DISA STIG in mind.
  Other policies should use a separate rule for
  each syscall that needs to be checked.

Each profile has the product name that it applies to added as a prefix to the profile’s name. ocp4-e8 applies the Essential 8 benchmark to the OpenShift Container Platform product, while rhcos4-e8 applies the Essential 8 benchmark to the Red Hat Enterprise Linux CoreOS (RHCOS) product.

1.2. Managing the Compliance Operator

This section describes the lifecycle of security content, including how to use an updated version of compliance content and how to create a custom ProfileBundle object.

1.2.1. Updating security content

Security content is shipped as container images that the ProfileBundle objects refer to. To accurately track updates to ProfileBundles and the custom resources parsed from the bundles such as rules or profiles, identify the container image with the compliance content using a digest instead of a tag:

Example ProfileBundle

apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
metadata:
  name: rhcos4
spec:
  contentImage: <security_container_image> 1
  contentFile: ssg-rhcos4-ds.xml

1 Security container image.

Each ProfileBundle is backed by a deployment. When the Compliance Operator detects that the container image digest has changed, the deployment is updated to reflect the change and parse the content again. Using the digest instead of a tag ensures that you use a stable and predictable set of profiles.
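
For example, a moving tag can be resolved to a digest with skopeo before it is referenced in the ProfileBundle. The repository path below is purely illustrative:

$ skopeo inspect docker://quay.io/example/ocp4-openscap-content:latest | jq -r '.Digest'

The printed sha256 digest can then be used in the contentImage field in place of the tag.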

1.2.2. Using image streams

If the contentImage reference points to a valid ImageStreamTag, the Compliance Operator ensures that the content stays up to date automatically.


ProfileBundle objects also accept ImageStream references.

Example image stream

$ oc get is

Example output

NAME           	   IMAGE REPOSITORY                                                                       	TAGS     UPDATED
openscap-ocp4-ds   image-registry.openshift-image-registry.svc:5000/openshift-compliance/openscap-ocp4-ds   latest   32 seconds ago


  1. Ensure that the lookup policy is set to local:

    $ oc patch is openscap-ocp4-ds \
        -p '{"spec":{"lookupPolicy":{"local":true}}}' \
        --type=merge
  2. Use the name of the ImageStreamTag for the ProfileBundle by retrieving the istag name:

    $ oc get istag

    Example output

    NAME                  	IMAGE REFERENCE                                                                                                                                              	UPDATED
    openscap-ocp4-ds:latest   image-registry.openshift-image-registry.svc:5000/openshift-compliance/openscap-ocp4-ds@sha256:46d7ca9b7055fe56ade818ec3e62882cfcc2d27b9bf0d1cbae9f4b6df2710c96   3 minutes ago

  3. Create the ProfileBundle:

    $ cat << EOF | oc create -f -
    apiVersion: compliance.openshift.io/v1alpha1
    kind: ProfileBundle
    metadata:
      name: mybundle
    spec:
      contentImage: openscap-ocp4-ds:latest
      contentFile: ssg-rhcos4-ds.xml
    EOF

This ProfileBundle will track the image, and any changes that are applied to it, such as updating the tag to point to a different hash, are immediately reflected in the ProfileBundle.
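
As an illustration, assuming updated compliance content has been pushed and you know its digest, re-pointing the image stream tag is enough for the Compliance Operator to pick up the new content. The source image path and digest below are placeholders:

$ oc tag quay.io/example/ocp4-openscap-content@sha256:<new_digest> openscap-ocp4-ds:latest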

1.2.3. ProfileBundle example

The bundle object needs two pieces of information: the URL of the container image that contains the compliance content (contentImage) and the file within that image that contains the content (contentFile). The contentFile parameter is relative to the root of the file system. The built-in rhcos4 ProfileBundle can be defined as shown in the example below:

apiVersion: compliance.openshift.io/v1alpha1
kind: ProfileBundle
metadata:
  name: rhcos4
spec:
  contentImage: <security_container_image> 1
  contentFile: ssg-rhcos4-ds.xml 2

1 Content image location.
2 File containing the compliance content location.

The base image used for the content images must include coreutils.

1.3. Managing Compliance Operator remediation

Each ComplianceCheckResult represents a result of one compliance rule check. If the rule can be remediated automatically, a ComplianceRemediation object with the same name, owned by the ComplianceCheckResult, is created. Unless requested, the remediations are not applied automatically, which gives an OpenShift Container Platform administrator the opportunity to review what the remediation does and only apply a remediation once it has been verified.
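
To get an overview of what the scans found and which findings have automatic remediations, you can list both object types. These are the same commands used later in this chapter; the namespace is assumed to be openshift-compliance:

$ oc get -n openshift-compliance compliancecheckresults
$ oc get -n openshift-compliance complianceremediations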

1.3.1. Reviewing a remediation

Review both the ComplianceRemediation object and the ComplianceCheckResult object that owns the remediation. The ComplianceCheckResult object contains human-readable descriptions of what the check does and what the hardening is trying to prevent, as well as other metadata like the severity and the associated security controls. The ComplianceRemediation object represents a way to fix the problem described in the ComplianceCheckResult.

Below is an example of a check and a remediation called sysctl-net-ipv4-conf-all-accept-redirects. This example is redacted to show only spec and status; metadata is omitted:

spec:
  apply: false
  current:
    object:
      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      spec:
        config:
          ignition:
            version: 2.2.0
          storage:
            files:
            - contents:
                source: data:,net.ipv4.conf.all.accept_redirects%3D0
              filesystem: root
              mode: 420
              path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf
  outdated: {}
status:
  applicationState: NotApplied

The remediation payload is stored in the spec.current attribute. The payload can be any Kubernetes object, but because this remediation was produced by a node scan, the remediation payload in the above example is a MachineConfig object. For Platform scans, the remediation payload is often a different kind of object (for example, a ConfigMap or Secret object), but typically applying that remediation is up to the administrator, because otherwise the Compliance Operator would have required a very broad set of permissions to manipulate any generic Kubernetes object. An example of remediating a Platform check is provided later in the text.

To see exactly what the remediation does when applied, note that the MachineConfig object contents use the Ignition objects for the configuration. Refer to the Ignition specification for further information about the format. In our example, the files[0].path attribute specifies the file that is being created by this remediation (/etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf) and the files[0].contents.source attribute specifies the contents of that file.


The contents of the files are URL-encoded.

Use the following Python script to view the contents:

$ echo "net.ipv4.conf.all.accept_redirects%3D0" | python3 -c "import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))"

Example output

net.ipv4.conf.all.accept_redirects=0

1.3.2. Applying a remediation

The boolean attribute spec.apply controls whether the remediation should be applied by the Compliance Operator. You can apply the remediation by setting the attribute to true:

$ oc patch complianceremediations/<scan_name>-sysctl-net-ipv4-conf-all-accept-redirects --patch '{"spec":{"apply":true}}' --type=merge

After the Compliance Operator processes the applied remediation, the status.ApplicationState attribute changes to Applied, or to Error if the remediation is incorrect. When a machine config remediation is applied, that remediation along with all other applied remediations are rendered into a MachineConfig object named 75-$scan-name-$suite-name. That MachineConfig object is subsequently rendered by the Machine Config Operator and finally applied to all the nodes in a machine config pool by an instance of the machine config daemon running on each node.

Note that when the Machine Config Operator applies a new MachineConfig object to nodes in a pool, all the nodes belonging to the pool are rebooted. This might be inconvenient when applying multiple remediations, each of which re-renders the composite 75-$scan-name-$suite-name MachineConfig object. To prevent applying the remediation immediately, you can pause the machine config pool by setting the .spec.paused attribute of a MachineConfigPool object to true.
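
As a sketch, pausing and later resuming the worker pool can be done with a merge patch; the pool name here is an example:

$ oc patch machineconfigpools/worker --patch '{"spec":{"paused":true}}' --type=merge

Set paused back to false after all remediations are applied so that the composite machine config is rolled out with a single reboot.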

The Compliance Operator can apply remediations automatically. Set autoApplyRemediations: true in the ScanSetting top-level object.
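
A minimal ScanSetting sketch with automatic remediation enabled, assuming the same apiVersion as the other Compliance Operator resources and reusing the schedule and roles fields shown later in this chapter:

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: auto-remediation-setting
  namespace: openshift-compliance
autoApplyRemediations: true
schedule: '0 1 * * *'
roles:
- worker
- master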


Applying remediations automatically should only be done with careful consideration.

1.3.3. Remediating a platform check manually

Checks for Platform scans typically have to be remediated manually by the administrator for two reasons:

  • It is not always possible to automatically determine the value that must be set. One of the checks requires that a list of allowed registries is provided, but the scanner has no way of knowing which registries the organization wants to allow.
  • Different checks modify different API objects, requiring automated remediation to possess root or superuser access to modify objects in the cluster, which is not advised.


  1. The example below uses the ocp4-ocp-allowed-registries-for-import rule, which would fail on a default OpenShift Container Platform installation. Inspect the rule with oc get rule.compliance/ocp4-ocp-allowed-registries-for-import -oyaml. The rule limits the registries that users are allowed to import images from by setting the allowedRegistriesForImport attribute. The warning attribute of the rule also shows the API object being checked, so it can be modified to remediate the issue:

    $ oc edit image.config.openshift.io/cluster

    Example output

    apiVersion: config.openshift.io/v1
    kind: Image
    metadata:
      annotations:
        release.openshift.io/create-only: "true"
      creationTimestamp: "2020-09-10T10:12:54Z"
      generation: 2
      name: cluster
      resourceVersion: "363096"
      selfLink: /apis/config.openshift.io/v1/images/cluster
      uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e
    spec:
      allowedRegistriesForImport:
      - domainName: <allowed_registry_domain>
    status:
      internalRegistryHostname: image-registry.openshift-image-registry.svc:5000

  2. Re-run the scan:

    $ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=

1.3.4. Updating remediations

When a new version of compliance content is used, it might deliver a new and different version of a remediation than the previous version. The Compliance Operator keeps the old version of the remediation applied, and the OpenShift Container Platform administrator is notified of the new version to review and apply. A ComplianceRemediation object that was applied earlier, but was then updated, changes its status to Outdated. The outdated objects are labeled so that they can be searched for easily.

The previously applied remediation contents are then stored in the spec.outdated attribute of a ComplianceRemediation object and the new, updated contents are stored in the spec.current attribute. After updating the content to a newer version, the administrator needs to review the remediation. As long as the spec.outdated attribute exists, it is used to render the resulting MachineConfig object. After the spec.outdated attribute is removed, the Compliance Operator re-renders the resulting MachineConfig object, which causes the Operator to push the configuration to the nodes.


  1. Search for any outdated remediations:

    $ oc get complianceremediations

    Example output

    NAME                              STATE
    workers-scan-no-empty-passwords   Outdated

    The currently applied remediation is stored in the spec.outdated attribute and the new, unapplied remediation is stored in the spec.current attribute. If you are satisfied with the new version, remove the spec.outdated field. If you want to keep the updated content, remove the spec.current and spec.outdated attributes.

  2. Apply the newer version of the remediation:

    $ oc patch complianceremediations workers-scan-no-empty-passwords --type json -p '[{"op":"remove", "path":"/spec/outdated"}]'
  3. The remediation state will switch from Outdated to Applied:

    $ oc get complianceremediations workers-scan-no-empty-passwords

    Example output

    NAME                              STATE
    workers-scan-no-empty-passwords   Applied

  4. The nodes will apply the newer remediation version and reboot.

1.3.5. Unapplying a remediation

It might be required to unapply a remediation that was previously applied.


  1. Toggle the flag to false:

    $ oc patch complianceremediations/<scan_name>-sysctl-net-ipv4-conf-all-accept-redirects --patch '{"spec":{"apply":false}}' --type=merge
  2. The remediation status will change to NotApplied and the composite MachineConfig object would be re-rendered to not include the remediation.


    All affected nodes with the remediation will be rebooted.

1.3.6. Inconsistent remediations

The ScanSetting lists the node roles that the compliance scans generated from the ScanSetting or ScanSettingBinding would scan. Each node role usually maps to a machine config pool.


It is expected that all machines in a machine config pool are identical and all scan results from the nodes in a Pool should be identical.

If some of the results are different from others, the Compliance Operator flags the ComplianceCheckResult object as INCONSISTENT; some of the nodes report a different status than the most common one. These ComplianceCheckResult objects are also labeled so that they can be found easily.

Because the number of machines in a pool might be quite large, the Compliance Operator attempts to find the most common state and list the nodes that differ from it. The most common state is stored in an annotation on the check result, and another annotation contains hostname:status pairs for the check statuses that differ from the most common status. If no common state can be found, all the hostname:status pairs are listed in the annotation.

If possible, a remediation is still created so that the cluster can converge to a compliant status. However, this might not always be possible and correcting the difference between nodes must be done manually. The ComplianceScan must be re-run to get a consistent result by annotating the scan with the compliance.openshift.io/rescan= option:

$ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=

1.4. Tailoring the Compliance Operator

While the Compliance Operator comes with ready-to-use profiles, they often need to be modified to fit the organization’s needs and requirements. The process of modifying a profile is called tailoring.

The Compliance Operator provides an object to easily tailor profiles called a TailoredProfile. This assumes that you are extending a pre-existing profile, and allows you to enable and disable rules and values which come from the ProfileBundle.


You will only be able to use rules and variables that are available as part of the ProfileBundle that the profile you want to extend belongs to.

1.4.1. Using tailored profiles

While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenSCAP previously, you may have an existing XCCDF tailoring file and can reuse it.

The ComplianceSuite object contains an optional TailoringConfigMap attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap attribute is a name of a config map, which must contain a key called tailoring.xml and the value of this key is the tailoring contents.


  1. Browse the available rules for the Red Hat Enterprise Linux CoreOS (RHCOS) ProfileBundle:

    $ oc get rules.compliance -l compliance.openshift.io/profile-bundle=rhcos4
  2. Browse the available variables in the same ProfileBundle:

    $ oc get variables.compliance -l compliance.openshift.io/profile-bundle=rhcos4
  3. Choose which rules you want to add to the TailoredProfile. This TailoredProfile example disables two rules and changes one value. Use the rationale value to describe why these changes were made:

    Example TailoredProfile

    apiVersion: compliance.openshift.io/v1alpha1
    kind: TailoredProfile
    metadata:
      name: nist-moderate-modified
    spec:
      extends: rhcos4-moderate
      title: My modified NIST moderate profile
      disableRules:
      - name: rhcos4-file-permissions-node-config
        rationale: This breaks X application.
      - name: rhcos4-account-disable-post-pw-expiration
        rationale: No need to check this as it comes from the IdP
      setValues:
      - name: rhcos4-var-selinux-state
        rationale: Organizational requirements
        value: permissive

  4. Add the profile to the ScanSettingBinding object:

    $ cat nist-moderate-modified.yaml

    Example output

    apiVersion: compliance.openshift.io/v1alpha1
    kind: ScanSettingBinding
    metadata:
      name: nist-moderate-modified
    profiles:
    - apiGroup: compliance.openshift.io/v1alpha1
      kind: Profile
      name: ocp4-moderate
    - apiGroup: compliance.openshift.io/v1alpha1
      kind: TailoredProfile
      name: nist-moderate-modified
    settingsRef:
      apiGroup: compliance.openshift.io/v1alpha1
      kind: ScanSetting
      name: default

  5. Create the TailoredProfile:

    $ oc create -n <namespace> -f <file-name>.yaml

    Example output

    scansettingbinding.compliance.openshift.io/nist-moderate-modified created

1.5. Retrieving Compliance Operator raw results

When proving compliance for your OpenShift Container Platform cluster, you might need to provide the scan results for auditing purposes.

1.5.1. Obtaining Compliance Operator raw results from a persistent volume


The Compliance Operator generates and stores the raw results in a persistent volume. These results are in Asset Reporting Format (ARF).

  1. Explore the ComplianceSuite object:

    $ oc get compliancesuites nist-moderate-modified -o json \
        | jq '.status.scanStatuses[].resultsStorage'

    Example output

    {
      "name": "rhcos4-moderate-worker",
      "namespace": "openshift-compliance"
    }
    {
      "name": "rhcos4-moderate-master",
      "namespace": "openshift-compliance"
    }

    This shows the persistent volume claims where the raw results are accessible.

  2. Verify the raw data location by using the name and namespace of one of the results:

    $ oc get pvc -n openshift-compliance rhcos4-moderate-worker

    Example output

    NAME                 	STATUS   VOLUME                                 	CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    rhcos4-moderate-worker   Bound	pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a   1Gi    	RWO        	gp2        	92m

  3. Fetch the raw results by spawning a pod that mounts the volume and copying the results:

    Example pod

    apiVersion: "v1"
    kind: Pod
    metadata:
      name: pv-extract
    spec:
      containers:
        - name: pv-extract-pod
          image: registry.access.redhat.com/ubi8/ubi:latest
          command: ["sleep", "3000"]
          volumeMounts:
          - mountPath: "/workers-scan-results"
            name: workers-scan-vol
      volumes:
      - name: workers-scan-vol
        persistentVolumeClaim:
          claimName: rhcos4-moderate-worker

  4. After the pod is running, download the results:

    $ oc cp pv-extract:/workers-scan-results .

    Spawning a pod that mounts the persistent volume keeps the claim as Bound. If the volume's access mode is ReadWriteOnce, the volume is only mountable by one pod at a time. You must delete the pod upon completion, otherwise it will not be possible for the Operator to schedule a pod and continue storing results in this location.

  5. After the extraction is complete, the pod can be deleted:

    $ oc delete pod pv-extract

1.6. Performing advanced Compliance Operator tasks

The Compliance Operator includes options for advanced users for the purpose of debugging or integration with existing tooling.

1.6.1. Using the ComplianceSuite and ComplianceScan objects directly

While it is recommended that users take advantage of the ScanSetting and ScanSettingBinding objects to define the suites and scans, there are valid use cases to define the ComplianceSuite objects directly:

  • Specifying only a single rule to scan. This can be useful for debugging together with the debug: true attribute which increases the OpenSCAP scanner verbosity, as the debug mode tends to get quite verbose otherwise. Limiting the test to one rule helps to lower the amount of debug information.
  • Providing a custom nodeSelector. In order for a remediation to be applicable, the nodeSelector must match a pool.
  • Pointing the Scan to a bespoke config map with a tailoring file.
  • For testing or development when the overhead of parsing profiles from bundles is not required.

The following example shows a ComplianceSuite that scans the worker machines with only a single rule:

apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: workers-compliancesuite
spec:
  scans:
    - name: workers-scan
      profile: xccdf_org.ssgproject.content_profile_moderate
      content: ssg-rhcos4-ds.xml
      debug: true
      rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins
      nodeSelector:
        node-role.kubernetes.io/worker: ""

The ComplianceSuite object and the ComplianceScan objects referred to above specify several attributes in a format that OpenSCAP expects.

To find out the profile, content, or rule values, you can start by creating a similar Suite from ScanSetting and ScanSettingBinding or inspect the objects parsed from the ProfileBundle objects like rules or profiles. Those objects contain the xccdf_org identifiers you can use to refer to them from a ComplianceSuite.
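
For example, the id field of a parsed Profile object holds the identifier in the format that OpenSCAP expects, so a simple jsonpath query can look it up; the profile name here is illustrative:

$ oc get -n openshift-compliance profiles.compliance rhcos4-e8 -o jsonpath='{.id}'

Example output

xccdf_org.ssgproject.content_profile_e8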

1.6.2. Using raw tailored profiles

While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has been using OpenSCAP previously, you may have an existing XCCDF tailoring file and can reuse it.

The ComplianceSuite object contains an optional TailoringConfigMap attribute that you can point to a custom tailoring file. The value of the TailoringConfigMap attribute is a name of a config map which must contain a key called tailoring.xml and the value of this key is the tailoring contents.


  1. Create the ConfigMap object from a file:

    $ oc create configmap <scan_name> --from-file=tailoring.xml=/path/to/the/tailoringFile.xml
  2. Reference the tailoring file in a scan that belongs to a suite:

    apiVersion: compliance.openshift.io/v1alpha1
    kind: ComplianceSuite
    metadata:
      name: workers-compliancesuite
    spec:
      debug: true
      scans:
        - name: workers-scan
          profile: xccdf_org.ssgproject.content_profile_moderate
          content: ssg-rhcos4-ds.xml
          debug: true
          tailoringConfigMap:
            name: <scan_name>
          nodeSelector:
            node-role.kubernetes.io/worker: ""

1.6.3. Performing a rescan

Typically you will want to re-run a scan on a defined schedule, like every Monday or daily. It can also be useful to re-run a scan once after fixing a problem on a node. To trigger a one-off rescan, annotate the scan with the compliance.openshift.io/rescan= option:

$ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=

1.6.4. Setting custom storage size for results

While the custom resources such as ComplianceCheckResult represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), so it is impractical to store them in a Kubernetes resource backed by the etcd key-value store. Instead, every scan creates a persistent volume (PV) which defaults to 1GB size. Depending on your environment, you may want to increase the PV size accordingly. This is done using the rawResultStorage.size attribute that is exposed in both the ScanSetting and ComplianceScan resources.
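
As a rough, illustrative estimate: with the default retention of three scans and roughly 100 MB of raw ARF data per node per scan, a scan that targets a three-node pool needs on the order of 3 x 3 x 100 MB, or close to 1 GB, so the default size leaves little headroom and a larger rawResultStorage.size is advisable for bigger pools.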

A related parameter is rawResultStorage.rotation, which controls how many scans are retained in the PV before the older scans are rotated. The default value is 3; setting the rotation policy to 0 disables the rotation. Given the default rotation policy and an estimate of 100 MB per raw ARF scan report, you can calculate the right PV size for your environment.

Using custom result storage values

Because OpenShift Container Platform can be deployed in a variety of public clouds or on bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator tries to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the rawResultStorage.storageClassName attribute.


If your cluster does not specify a default storage class, this attribute must be set.
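
To check whether your cluster has a default storage class, and which one it is, list the storage classes; the default class is marked with (default) next to its name:

$ oc get storageclass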

Configure the ScanSetting custom resource to use a standard storage class and create persistent volumes that are 10GB in size and keep the last 10 results:

Example ScanSetting CR

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: default
  namespace: openshift-compliance
rawResultStorage:
  storageClassName: standard
  rotation: 10
  size: 10Gi
roles:
- worker
- master
scanTolerations:
- effect: NoSchedule
  operator: Exists
schedule: '0 1 * * *'

1.7. Troubleshooting the Compliance Operator

This section describes how to troubleshoot the Compliance Operator. The information can be useful either to diagnose a problem or provide information in a bug report. Some general tips:

  • The Compliance Operator emits Kubernetes events when something important happens. You can either view all events in the cluster using the command:

     $ oc get events -n openshift-compliance

    Or view events for an object like a scan using the command:

    $ oc describe compliancescan/<scan_name>
  • The Compliance Operator consists of several controllers, approximately one per API object. It could be useful to filter only those controllers that correspond to the API object having issues. If a ComplianceRemediation cannot be applied, view the messages from the remediationctrl controller. You can filter the messages from a single controller by parsing with jq:

    $ oc logs compliance-operator-775d7bddbd-gj58f | jq -c 'select(.logger == "profilebundlectrl")'
  • The timestamps are logged as seconds since UNIX epoch in UTC. To convert them to a human-readable date, use date -d @timestamp --utc, for example:

    $ date -d @1596184628.955853 --utc
  • Many custom resources, most importantly ComplianceSuite and ScanSetting, allow the debug option to be set. Enabling this option increases the verbosity of the OpenSCAP scanner pods, as well as some other helper pods (see the example after this list).
  • If a single rule is passing or failing unexpectedly, it could be helpful to run a single scan or a suite with only that rule. Find the rule ID from the corresponding ComplianceCheckResult object and use it as the rule attribute value in a Scan CR. Then, with the debug option enabled, the scanner container logs in the scanner pod show the raw OpenSCAP logs.
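
As an example of the debug tip above, the debug option can be enabled on the default ScanSetting with a merge patch; the field placement follows the ScanSetting example shown later in this section:

$ oc patch scansettings/default -n openshift-compliance --patch '{"debug":true}' --type=merge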

1.7.1. Anatomy of a scan

The following sections outline the components and stages of Compliance Operator scans.

Compliance sources

The compliance content is stored in Profile objects that are generated from a ProfileBundle. The Compliance Operator creates a ProfileBundle for the cluster and another for the cluster nodes.

$ oc get profilebundle.compliance
$ oc get profile.compliance

The ProfileBundle objects are processed by deployments labeled with the Bundle name. To troubleshoot an issue with the Bundle, you can find the deployment and view logs of the pods in a deployment:

$ oc logs -lprofile-bundle=ocp4 -c profileparser
$ oc get deployments,pods -lprofile-bundle=ocp4
$ oc logs pods/<pod-name>
$ oc describe pod/<pod-name> -c profileparser

The ScanSetting and ScanSettingBinding lifecycle and debugging

With valid compliance content sources, the high-level ScanSetting and ScanSettingBinding objects can be used to generate ComplianceSuite and ComplianceScan objects:

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: my-companys-constraints
debug: true
# For each role, a separate scan will be created pointing
# to a node-role specified in roles
roles:
  - worker
---
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: my-companys-compliance-requirements
profiles:
  # Node checks
  - name: rhcos4-e8
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
  # Cluster checks
  - name: ocp4-e8
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: my-companys-constraints
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1

Both ScanSetting and ScanSettingBinding objects are handled by the same controller tagged with logger=scansettingbindingctrl. These objects have no status. Any issues are communicated in the form of events:

  Type     Reason        Age    From                    Message
  ----     ------        ----   ----                    -------
  Normal   SuiteCreated  9m52s  scansettingbindingctrl  ComplianceSuite openshift-compliance/my-companys-compliance-requirements created

Now a ComplianceSuite object is created. The flow continues to reconcile the newly created ComplianceSuite.

ComplianceSuite lifecycle and debugging

The ComplianceSuite CR is a wrapper around ComplianceScan CRs. The ComplianceSuite CR is handled by the controller tagged with logger=suitectrl. This controller handles creating scans from a suite, reconciling and aggregating individual Scan statuses into a single Suite status. If a suite is set to execute periodically, the suitectrl also handles creating a CronJob CR that re-runs the scans in the suite after the initial run is done:

$ oc get cronjobs

Example output

NAME                                           SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
<cron_name>                                    0 1 * * *   False     0        <none>          151m

For the most important issues, events are emitted. View them with oc describe compliancesuites/<name>. The Suite objects also have a Status subresource that is updated when any of the Scan objects that belong to this suite update their Status subresource. After all expected scans are created, control is passed to the scan controller.

ComplianceScan lifecycle and debugging

The ComplianceScan CRs are handled by the scanctrl controller. This is also where the actual scans happen and the scan results are created. Each scan goes through several phases.

Pending phase

The scan is validated for correctness in this phase. If some parameters like storage size are invalid, the scan transitions to DONE with the ERROR result, otherwise it proceeds to the Launching phase.

Launching phase

In this phase, several config maps are created that contain either the environment for the scanner pods or directly the script that the scanner pods will be evaluating. List the config maps:

$ oc get cm -l compliance.openshift.io/scan-name=<scan_name>

These config maps will be used by the scanner pods. If you ever need to modify the scanner behavior, change the scanner debug level, or print the raw results, modifying the config maps is the way to go. Afterwards, a persistent volume claim is created per scan in order to store the raw ARF results:

$ oc get pvc <scan_name>

The PVCs are mounted by a per-scan ResultServer deployment. A ResultServer is a simple HTTP server where the individual scanner pods upload the full ARF results to. Each server can run on a different node. The full ARF results might be very large and you cannot presume that it would be possible to create a volume that could be mounted from multiple nodes at the same time. After the scan is finished, the ResultServer deployment is scaled down. The PVC with the raw results can be mounted from another custom pod and the results can be fetched or inspected. The traffic between the scanner pods and the ResultServer is protected by mutual TLS protocols.

Finally, the scanner pods are launched in this phase; one scanner pod for a Platform scan instance and one scanner pod per matching node for a node scan instance. The per-node pods are labeled with the node name. Each pod is always labeled with the ComplianceScan name:

$ oc get pods -l workload=scanner --show-labels

Example output

NAME                 READY   STATUS      RESTARTS   AGE   LABELS
<scanner_pod_name>   0/2     Completed   0          39m   ...,workload=scanner

At this point, the scan proceeds to the Running phase.

Running phase

The running phase waits until the scanner pods finish. The following terms and processes are in use in the running phase:

  • init container: There is one init container called content-container. It runs the contentImage container and executes a single command that copies the contentFile to the /content directory shared with the other containers in this pod.
  • scanner: This container runs the scan. For node scans, the container mounts the node filesystem as /host and mounts the content delivered by the init container. The container also mounts the entrypoint ConfigMap created in the Launching phase and executes it. The default script in the entrypoint ConfigMap executes OpenSCAP and stores the result files in the /results directory shared between the pod’s containers. Logs from this pod can be viewed to determine what the OpenSCAP scanner checked. More verbose output can be viewed with the debug flag.
  • logcollector: The logcollector container waits until the scanner container finishes. Then, it uploads the full ARF results to the ResultServer and separately uploads the XCCDF results along with scan result and OpenSCAP result code as a ConfigMap. These result config maps are labeled with the scan name (<scan_name>):

          $ oc describe cm/<result_configmap_name>

          Example output

          Name:         <result_configmap_name>
          Namespace:    openshift-compliance
          Annotations:  compliance-remediations/processed:
          ...
          <?xml version="1.0" encoding="UTF-8"?>
          ...

Scanner pods for Platform scans are similar, except:

  • There is one extra init container called api-resource-collector that reads the OpenSCAP content provided by the content-container init container, figures out which API resources the content needs to examine and stores those API resources to a shared directory where the scanner container would read them from.
  • The scanner container does not need to mount the host file system.

When the scanner pods are done, the scans move on to the Aggregating phase.

Aggregating phase

In the aggregating phase, the scan controller spawns yet another pod called the aggregator pod. Its purpose is to take the result ConfigMap objects, read the results and, for each check result, create the corresponding Kubernetes object. If the check failure can be automatically remediated, a ComplianceRemediation object is created. To provide human-readable metadata for the checks and remediations, the aggregator pod also mounts the OpenSCAP content using an init container.

When a config map is processed by an aggregator pod, it is labeled with the compliance-remediations/processed label. The results of this phase are ComplianceCheckResult objects:

$ oc get compliancecheckresults
NAME                                                       STATUS   SEVERITY
rhcos4-e8-worker-accounts-no-uid-except-zero               PASS     high
rhcos4-e8-worker-audit-rules-dac-modification-chmod        FAIL     medium

and ComplianceRemediation objects:

$ oc get complianceremediations
NAME                                                       STATE
rhcos4-e8-worker-audit-rules-dac-modification-chmod        NotApplied
rhcos4-e8-worker-audit-rules-dac-modification-chown        NotApplied
rhcos4-e8-worker-audit-rules-execution-chcon               NotApplied
rhcos4-e8-worker-audit-rules-execution-restorecon          NotApplied
rhcos4-e8-worker-audit-rules-execution-semanage            NotApplied
rhcos4-e8-worker-audit-rules-execution-setfiles            NotApplied

After these CRs are created, the aggregator pod exits and the scan moves on to the Done phase.

Done phase

In the final scan phase, the scan resources are cleaned up if needed and the ResultServer deployment is either scaled down (if the scan was one-time) or deleted if the scan is continuous; the next scan instance would then recreate the deployment again.

It is also possible to trigger a re-run of a scan in the Done phase by annotating it:

$ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=

After the scan reaches the Done phase, nothing else happens on its own unless the remediations are set to be applied automatically with autoApplyRemediations: true. The OpenShift Container Platform administrator would now review the remediations and apply them as needed. If the remediations are set to be applied automatically, the ComplianceSuite controller takes over in the Done phase, pauses the machine config pool to which the scan maps and applies all the remediations in one go. If a remediation is applied, the ComplianceRemediation controller takes over.

ComplianceRemediation lifecycle and debugging

The example scan has reported some findings. One of the remediations can be enabled by toggling its apply attribute to true:

$ oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{"spec":{"apply":true}}' --type=merge

The ComplianceRemediation controller (logger=remediationctrl) reconciles the modified object. The result of the reconciliation is a change of the status of the remediation object that is reconciled, but also a change of the rendered per-suite MachineConfig object that contains all the applied remediations.

The MachineConfig object always begins with 75- and is named after the scan and the suite:

$ oc get mc | grep 75-
75-rhcos4-e8-worker-my-companys-compliance-requirements                                                2.2.0             2m46s

The remediations that the machine config currently consists of are listed in the machine config’s annotations:

$ oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements
Name:         75-rhcos4-e8-worker-my-companys-compliance-requirements
Annotations:  remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod:

The ComplianceRemediation controller’s algorithm works like this:

  • All currently applied remediations are read into an initial remediation set.
  • If the reconciled remediation is supposed to be applied, it is added to the set.
  • A MachineConfig object is rendered from the set and annotated with names of remediations in the set. If the set is empty (the last remediation was unapplied), the rendered MachineConfig object is removed.
  • If and only if the rendered machine config is different from the one already applied in the cluster, the applied MC is updated (or created, or deleted).
  • Creating or modifying a MachineConfig object triggers a reboot of the nodes in the matching machine config pool; see the Machine Config Operator documentation for more details.

The remediation loop ends once the rendered machine config is updated, if needed, and the reconciled remediation object status is updated. In our case, applying the remediation would trigger a reboot. After the reboot, annotate the scan to re-run it:

$ oc annotate compliancescans/<scan_name> compliance.openshift.io/rescan=

The scan will run and finish. Check for the remediation to pass:

$ oc get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod
NAME                                                  STATUS   SEVERITY
rhcos4-e8-worker-audit-rules-dac-modification-chmod   PASS     medium

1.7.2. Useful labels

Each pod that is spawned by the compliance-operator is labeled specifically with the scan it belongs to and the work it does. The scan identifier is stored in the compliance.openshift.io/scan-name label. The workload identifier is stored in the workload label.

The compliance-operator schedules the following workloads:

  • scanner: Performs the compliance scan.
  • resultserver: Stores the raw results for the compliance scan.
  • aggregator: Aggregates the results, detects inconsistencies and outputs result objects (checkresults and remediations).
  • suitererunner: Will tag a suite to be re-run (when a schedule is set).
  • profileparser: Parses a datastream and creates the appropriate profiles, rules and variables.

When debugging and logs are required for a certain workload, run:

$ oc logs -l workload=<workload_name> -c <container-name>

1.7.3. Getting support

If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal. From the Customer Portal, you can:

  • Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products.
  • Submit a support case to Red Hat Support.
  • Access other product documentation.

To identify issues with your cluster, you can use Insights in Red Hat OpenShift Cluster Manager. Insights provides details about issues and, if available, information on how to solve a problem.

If you have a suggestion for improving this documentation or have found an error, please submit a Bugzilla report against the OpenShift Container Platform product for the Documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version.

Chapter 2. File Integrity Operator

2.1. Understanding the File Integrity Operator

The File Integrity Operator is an OpenShift Container Platform Operator that continually runs file integrity checks on the cluster nodes. It deploys a daemon set that initializes and runs privileged advanced intrusion detection environment (AIDE) containers on each node, providing a status object with a log of files that are modified during the initial run of the daemon set pods.


Currently, only Red Hat Enterprise Linux CoreOS (RHCOS) nodes are supported.

2.1.1. Understanding the FileIntegrity custom resource

An instance of a FileIntegrity custom resource (CR) represents a set of continuous file integrity scans for one or more nodes.

Each FileIntegrity CR is backed by a daemon set running AIDE on the nodes matching the FileIntegrity CR specification.

The following example FileIntegrity CR enables scans on only the worker nodes, but otherwise uses the defaults.

Example FileIntegrity CR

apiVersion: fileintegrity.openshift.io/v1alpha1
kind: FileIntegrity
metadata:
  name: worker-fileintegrity
  namespace: openshift-file-integrity
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  config: {}

2.1.2. Checking the FileIntegrity custom resource status

The FileIntegrity custom resource (CR) reports its status through the .status.phase subresource.


  1. To query the FileIntegrity CR status, run:

    $ oc get fileintegrities/worker-fileintegrity  -o jsonpath="{ .status.phase }"

    Example output

    Active

2.1.3. FileIntegrity custom resource phases

  • Pending - The phase after the custom resource (CR) is created.
  • Active - The phase when the backing daemon set is up and running.
  • Initializing - The phase when the AIDE database is being reinitialized.

2.1.4. Understanding the FileIntegrityNodeStatuses object

The scan results of the FileIntegrity CR are reported in another object called FileIntegrityNodeStatuses.

$ oc get fileintegritynodestatuses

Example output

NAME                                                AGE
worker-fileintegrity-ip-10-0-130-192.ec2.internal   101s
worker-fileintegrity-ip-10-0-147-133.ec2.internal   109s
worker-fileintegrity-ip-10-0-165-160.ec2.internal   102s


FileIntegrityNodeStatus might not be created until the second run of the scanner is finished. The period is configurable.

There is one result object per node. The nodeName attribute of each FileIntegrityNodeStatus object corresponds to the node being scanned. The status of the file integrity scan is represented in the results array, which holds scan conditions.

$ oc get fileintegritynodestatuses -ojsonpath='{.items[*].results}' | jq

2.1.5. FileIntegrityNodeStatus status types

These conditions are reported in the results array of the corresponding FileIntegrityNodeStatus:

  • Succeeded - The integrity check passed; the files and directories covered by the AIDE check have not been modified since the database was last initialized.
  • Failed - The integrity check failed; some files or directories covered by the AIDE check have been modified since the database was last initialized.
  • Error - The AIDE scanner encountered an internal error.

FileIntegrityNodeStatus success status example

Example output of a condition with a success status

    "condition": "Succeeded",
    "lastProbeTime": "2020-09-15T12:45:57Z"
    "condition": "Succeeded",
    "lastProbeTime": "2020-09-15T12:46:03Z"
    "condition": "Succeeded",
    "lastProbeTime": "2020-09-15T12:45:48Z"

In this case, all three scans succeeded and so far there are no other conditions.

FileIntegrityNodeStatus failure status example

To simulate a failure condition, modify one of the files AIDE tracks. For example, modify /etc/resolv.conf on one of the worker nodes:

$ oc debug node/ip-10-0-130-192.ec2.internal

Example output

Creating debug namespace/openshift-debug-node-ldfbj ...
Starting pod/ip-10-0-130-192ec2internal-debug ...
To use host binaries, run `chroot /host`
Pod IP:
If you don't see a command prompt, try pressing enter.
sh-4.2# echo "# integrity test" >> /host/etc/resolv.conf
sh-4.2# exit

Removing debug pod ...
Removing debug namespace/openshift-debug-node-ldfbj ...

After some time, the Failed condition was reported in the results array of the corresponding FileIntegrityNodeStatus. The previous Succeeded condition is retained, which allows you to pinpoint the time the check failed.

$ oc get fileintegritynodestatuses/worker-fileintegrity-ip-10-0-130-192.ec2.internal -ojsonpath='{.results}' | jq -r

Alternatively, to list the results of all objects without specifying an object name, run:

$ oc get fileintegritynodestatuses -ojsonpath='{.items[*].results}' | jq

Example output

    "condition": "Succeeded",
    "lastProbeTime": "2020-09-15T12:54:14Z"
    "condition": "Failed",
    "filesChanged": 1,
    "lastProbeTime": "2020-09-15T12:57:20Z",
    "resultConfigMapName": "aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed",
    "resultConfigMapNamespace": "openshift-file-integrity"

The Failed condition points to a config map that gives more details about what exactly failed and why:

$ oc describe cm aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed

Example output

Name:         aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed
Namespace:    openshift-file-integrity
Annotations: 0


AIDE 0.15.1 found differences between database and filesystem!!
Start timestamp: 2020-09-15 12:58:15

  Total number of files:  31553
  Added files:                0
  Removed files:            0
  Changed files:            1

Changed files:

changed: /hostroot/etc/resolv.conf

Detailed information about changes:

File: /hostroot/etc/resolv.conf
 SHA512   : sTQYpB/AL7FeoGtu/1g7opv6C+KT1CBJ , qAeM+a8yTgHPnIHMaRlS+so61EN8VOpg

Events:  <none>

2.2. Configuring the Custom File Integrity Operator

2.2.1. Viewing FileIntegrity object attributes

As with any Kubernetes custom resources (CRs), you can run oc explain fileintegrity, and then look at the individual attributes using:

$ oc explain fileintegrity.spec
$ oc explain fileintegrity.spec.config

2.2.2. Important attributes

Table 2.1. Important spec and spec.config attributes

spec.nodeSelector

A map of key-value pairs that must match the node’s labels in order for the AIDE pods to be schedulable on that node. The typical use is to set only a single key-value pair where node-role.kubernetes.io/worker: "" schedules AIDE on all worker nodes and node.openshift.io/os_id: "rhcos" schedules on all Red Hat Enterprise Linux CoreOS (RHCOS) nodes.

spec.debug

A boolean attribute. If set to true, the daemon running in the AIDE daemon set’s pods would output extra information.

spec.tolerations

Specify tolerations to schedule on nodes with custom taints. When not specified, a default toleration is applied, which allows tolerations to run on master nodes.

spec.config.gracePeriod

The number of seconds to pause in between AIDE integrity checks. Frequent AIDE checks on a node can be resource intensive, so it can be useful to specify a longer interval. Defaults to 900, or 15 minutes.

spec.config.name, spec.config.namespace, spec.config.key

These three attributes allow you to set a custom AIDE configuration. When the name or namespace are unset, the File Integrity Operator generates a configuration suitable for RHCOS systems. The name and namespace attributes point to the config map; the key points to a key inside that config map. Use the key attribute to specify a custom key that contains the actual configuration; it defaults to aide.conf.
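
A minimal sketch that combines these attributes into one FileIntegrity object; the attribute names follow the table above and the config map name is illustrative:

apiVersion: fileintegrity.openshift.io/v1alpha1
kind: FileIntegrity
metadata:
  name: worker-fileintegrity
  namespace: openshift-file-integrity
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  debug: false
  config:
    gracePeriod: 900
    name: worker-aide-conf
    namespace: openshift-file-integrity
    key: aide.conf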

2.2.3. Examine the default configuration

The default File Integrity Operator configuration is stored in a config map with the same name as the FileIntegrity CR.


  • To examine the default config, run:

    $ oc describe cm/worker-fileintegrity

2.2.4. Understanding the default File Integrity Operator configuration

Below is an excerpt from the aide.conf key of the config map:

@@define DBDIR /hostroot/etc/kubernetes
@@define LOGDIR /hostroot/etc/kubernetes
PERMS = p+u+g+acl+selinux+xattrs
CONTENT_EX = sha512+ftype+p+u+g+n+acl+selinux+xattrs

/hostroot/boot/    	CONTENT_EX
/hostroot/root/\..* PERMS
/hostroot/root/   CONTENT_EX

The default configuration for a FileIntegrity instance provides coverage for files under the following directories:

  • /root
  • /boot
  • /usr
  • /etc

The following directories are not covered:

  • /var
  • /opt
  • Some OpenShift-specific excludes under /etc/

2.2.5. Supplying a custom AIDE configuration

Any entries that configure AIDE internal behavior, such as DBDIR, LOGDIR, database, and database_out, are overwritten by the Operator. The Operator adds the /hostroot/ prefix before all paths to be watched for integrity changes. This makes it easier to reuse existing AIDE configurations that might not be tailored for a containerized environment and that start from the root directory.


/hostroot is the directory where the pods running AIDE mount the host’s file system. Changing the configuration triggers a reinitialization of the database.

2.2.6. Defining a custom File Integrity Operator configuration

This example focuses on defining a custom configuration for a scanner that runs on the master nodes based on the default configuration provided for the worker-fileintegrity CR. This workflow might be useful if you are planning to deploy a custom software running as a daemon set and storing its data under /opt/mydaemon on the master nodes.


  1. Make a copy of the default configuration.
  2. Edit the default configuration with the files that must be watched or excluded.
  3. Store the edited contents in a new config map.
  4. Point the FileIntegrity object to the new config map through the attributes in spec.config.
  5. Extract the default configuration:

    $ oc extract cm/worker-fileintegrity --keys=aide.conf

    This creates a file named aide.conf that you can edit. To illustrate how the Operator post-processes the paths, this example adds an exclude directory without the prefix:

    $ vim aide.conf

    Exclude a path specific to master nodes:

    !/opt/mydaemon/

    Store the other content in /etc:

    /hostroot/etc/	CONTENT_EX
  6. Create a config map based on this file:

    $ oc create cm master-aide-conf --from-file=aide.conf
  7. Define a FileIntegrity CR manifest that references the config map:

    apiVersion: fileintegrity.openshift.io/v1alpha1
    kind: FileIntegrity
    metadata:
      name: master-fileintegrity
      namespace: openshift-file-integrity
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      config:
        name: master-aide-conf
        namespace: openshift-file-integrity

    The Operator processes the provided config map file and stores the result in a config map with the same name as the FileIntegrity object:

    $ oc describe cm/master-fileintegrity | grep /opt/mydaemon

    Example output

    !/hostroot/opt/mydaemon

2.2.7. Changing the custom File Integrity configuration

To change the File Integrity configuration, never change the generated config map. Instead, change the config map that is linked to the FileIntegrity object through the spec.config.name, spec.config.namespace, and spec.config.key attributes.

Currently, you must also annotate the FileIntegrity object in order to force reconciliation of the object and its config map. For example:

$ oc annotate fileintegrities/worker-fileintegrity <annotation_key>=<value>

The value of the annotation, and which annotation key is used, does not really matter; the purpose of this operation is to force reconciliation of the FileIntegrity object and re-render the processed config map with the generated configuration.

2.3. Performing advanced Custom File Integrity Operator tasks

2.3.1. Reinitializing the database

If the File Integrity Operator detects a change that was planned, it might be required to reinitialize the database. To do that, annotate the FileIntegrity custom resource (CR) with the file-integrity.openshift.io/re-init annotation:

$ oc annotate fileintegrities/worker-fileintegrity file-integrity.openshift.io/re-init=

The old database and log files are backed up and a new database is initialized. The old database and logs are retained on the nodes under /etc/kubernetes, as seen in the following output from a pod spawned using oc debug:

Example output

 ls -lR /host/etc/kubernetes/aide.*
-rw-------. 1 root root 1839782 Sep 17 15:08 /host/etc/kubernetes/aide.db.gz
-rw-------. 1 root root 1839783 Sep 17 14:30 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_38
-rw-------. 1 root root   73728 Sep 17 15:07 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_55
-rw-r--r--. 1 root root       0 Sep 17 15:08 /host/etc/kubernetes/aide.log
-rw-------. 1 root root     613 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_38
-rw-r--r--. 1 root root       0 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_55

To preserve a record of results, the resulting config maps are not owned by the FileIntegrity object, so manual cleanup is necessary. As a result, any previous integrity failures are still visible in the FileIntegrityNodeStatus object.

2.3.2. Machine config integration

In OpenShift Container Platform 4, the cluster node configuration is delivered through MachineConfig objects. You can assume that the changes to files that are caused by a MachineConfig object are expected and should not cause the file integrity scan to fail. To suppress changes to files caused by MachineConfig object updates, the File Integrity Operator watches the node objects; when a node is being updated, the AIDE scans are suspended for the duration of the update. When the update finishes, the database is reinitialized and the scans resume.

This pause and resume logic only applies to updates through the MachineConfig API, as they are reflected in the node object annotations.

2.3.3. Exploring the daemon sets

Each FileIntegrity object represents a scan on a number of nodes. The scan itself is performed by pods managed by a daemon set.

To find the daemon set that represents a FileIntegrity object, run:

$ oc get ds/aide-ds-<file_integrity_object_name>

To list the pods in that daemon set, run:

$ oc get pods -l app=<daemon_set_name>

To view logs of a single AIDE pod, call oc logs on one of the pods.
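For example, where the pod name is a placeholder for one of the pods listed by the previous command:

$ oc logs -n openshift-file-integrity <aide_pod_name>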

Example output

debug: aide files locked by aideLoop
running aide check
aide check returned status 0
debug: aide files unlocked by aideLoop
debug: Getting FileIntegrity openshift-file-integrity/worker-fileintegrity
Created OK configMap ''

The config maps created by the AIDE daemon are not retained and are deleted after the File Integrity Operator processes them. However, on failure and error, the contents of these config maps are copied to the config map that the FileIntegrityNodeStatus object points to.

2.4. Troubleshooting the File Integrity Operator

2.4.1. General troubleshooting

You want to generally troubleshoot issues with the File Integrity Operator.
Enable the debug flag in the FileIntegrity object. The debug flag increases the verbosity of the daemons that run in the DaemonSet pods and run the AIDE checks.
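A minimal sketch of turning the flag on with a merge patch, assuming the debug field sits directly under spec in the FileIntegrity custom resource (verify the field against the CRD installed in your cluster):

$ oc patch fileintegrities/worker-fileintegrity -n openshift-file-integrity \
    --type merge -p '{"spec":{"debug":true}}'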

2.4.2. Checking the AIDE configuration

You want to check the AIDE configuration.
The AIDE configuration is stored in a config map with the same name as the FileIntegrity object. All AIDE configuration config maps are labeled with file-integrity.openshift.io/aide-conf.
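For example, to view the rendered AIDE configuration for the worker-fileintegrity object:

$ oc describe cm/worker-fileintegrity -n openshift-file-integrity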

2.4.3. Determining the FileIntegrity object’s phase

You want to determine if the FileIntegrity object exists and see its current status.

To see the FileIntegrity object’s current status, run:

$ oc get fileintegrities/worker-fileintegrity  -o jsonpath="{ .status }"

Once the FileIntegrity object and the backing daemon set are created, the status should switch to Active. If it does not, check the Operator pod logs.
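For example, to locate the Operator pod and inspect its logs (the pod name is a placeholder taken from the output of the first command):

$ oc get pods -n openshift-file-integrity
$ oc logs -n openshift-file-integrity <file_integrity_operator_pod>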

2.4.4. Determining that the daemon set’s pods are running on the expected nodes

You want to confirm that the daemon set exists and that its pods are running on the nodes you expect them to run on.


$ oc get pods -l app=aide-ds-<FIO_NAME>
  • <FIO_NAME> is the name of the FileIntegrity object.
  • Adding -owide includes the IP address of the node that each pod is running on.
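For example, for the worker-fileintegrity object used throughout this chapter:

$ oc get pods -n openshift-file-integrity -l app=aide-ds-worker-fileintegrity -owide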

To check the logs of the daemon pods, run oc logs

Check the return value of the AIDE command to see if the check passed or failed.

Chapter 3. Container security

3.1. Understanding container security

Securing a containerized application relies on multiple levels of security:

  • Container security begins with a trusted base container image and continues through the container build process as it moves through your CI/CD pipeline.
  • When a container is deployed, its security depends on it running on secure operating systems and networks, and establishing firm boundaries between the container itself and the users and hosts that interact with it.
  • Continued security relies on being able to scan container images for vulnerabilities and having an efficient way to correct and replace vulnerable images.

Beyond what a platform such as OpenShift Container Platform offers out of the box, your organization will likely have its own security demands. Some level of compliance verification might be needed before you can even bring OpenShift Container Platform into your data center.

Likewise, you may need to add your own agents, specialized hardware drivers, or encryption features to OpenShift Container Platform, before it can meet your organization’s security standards.

This guide provides a high-level walkthrough of the container security measures available in OpenShift Container Platform, including solutions for the host layer, the container and orchestration layer, and the build and application layer. It then points you to specific OpenShift Container Platform documentation to help you achieve those security measures.

This guide contains the following information:

  • Why container security is important and how it compares with existing security standards.
  • Which container security measures are provided by the host (RHCOS and RHEL) layer and which are provided by OpenShift Container Platform.
  • How to evaluate your container content and sources for vulnerabilities.
  • How to design your build and deployment process to proactively check container content.
  • How to control access to containers through authentication and authorization.
  • How networking and attached storage are secured in OpenShift Container Platform.
  • Containerized solutions for API management and SSO.

The goal of this guide is to help you understand the security benefits of using OpenShift Container Platform for your containerized workloads and how the entire Red Hat ecosystem plays a part in making and keeping containers secure. It also helps you understand how you can engage with OpenShift Container Platform to achieve your organization’s security goals.

3.1.1. What are containers?

Containers package an application and all its dependencies into a single image that can be promoted from development, to test, to production, without change. A container might be part of a larger application that works closely with other containers.

Containers provide consistency across environments and multiple deployment targets: physical servers, virtual machines (VMs), and private or public cloud.

Some of the benefits of using containers include:

  • Sandboxed application processes on a shared Linux operating system kernel
  • Package your application and all of its dependencies
  • Simpler, lighter, and denser than virtual machines
  • Deploy to any environment in seconds and enable CI/CD
  • Portable across different environments
  • Easily access and share containerized components

See Understanding Linux containers from the Red Hat customer portal to find out more about Linux containers. To learn about RHEL container tools, see Building, running, and managing containers in the RHEL product documentation.

3.1.2. What is OpenShift Container Platform?

Automating how containerized applications are deployed, run, and managed is the job of a platform such as OpenShift Container Platform. At its core, OpenShift Container Platform relies on the Kubernetes project to provide the engine for orchestrating containers across many nodes in scalable data centers.

Kubernetes is an open source project that can run using different operating systems and add-on components, which offer no guarantee of supportability from the project. As a result, the security of different Kubernetes platforms can vary.

OpenShift Container Platform is designed to lock down Kubernetes security and integrate the platform with a variety of extended components. To do this, OpenShift Container Platform draws on the extensive Red Hat ecosystem of open source technologies that include the operating systems, authentication, storage, networking, development tools, base container images, and many other components.

OpenShift Container Platform can leverage Red Hat’s experience in uncovering and rapidly deploying fixes for vulnerabilities in the platform itself as well as the containerized applications running on the platform. Red Hat’s experience also extends to efficiently integrating new components with OpenShift Container Platform as they become available and adapting technologies to individual customer needs.

3.2. Understanding host and VM security

Both containers and virtual machines provide ways of separating applications running on a host from the operating system itself. Understanding RHCOS, which is the operating system used by OpenShift Container Platform, will help you see how the host systems protect containers and hosts from each other.

3.2.1. Securing containers on Red Hat Enterprise Linux CoreOS (RHCOS)

Containers simplify the act of deploying many applications to run on the same host, using the same kernel and container runtime to spin up each container. The applications can be owned by many users and, because they are kept separate, can run different, and even incompatible, versions of those applications at the same time without issue.

In Linux, containers are just a special type of process, so securing containers is similar in many ways to securing any other running process. An environment for running containers starts with an operating system that can secure the host kernel from containers and other processes running on the host, as well as secure containers from each other.

Because OpenShift Container Platform 4.6 runs on RHCOS hosts, with the option of using Red Hat Enterprise Linux (RHEL) as worker nodes, the following concepts apply by default to any deployed OpenShift Container Platform cluster. These RHEL security features are at the core of what makes running containers in OpenShift more secure:

  • Linux namespaces enable creating an abstraction of a particular global system resource to make it appear as a separate instance to processes within a namespace. Consequently, several containers can use the same computing resource simultaneously without creating a conflict. Container namespaces that are separate from the host by default include mount table, process table, network interface, user, control group, UTS, and IPC namespaces. Those containers that need direct access to host namespaces need to have elevated permissions to request that access. See Overview of Containers in Red Hat Systems from the RHEL 7 container documentation for details on the types of namespaces.
  • SELinux provides an additional layer of security to keep containers isolated from each other and from the host. SELinux allows administrators to enforce mandatory access controls (MAC) for every user, application, process, and file.
  • CGroups (control groups) limit, account for, and isolate the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. CGroups are used to ensure that containers on the same host are not impacted by each other.
  • Secure computing mode (seccomp) profiles can be associated with a container to restrict available system calls. See page 94 of the OpenShift Security Guide for details about seccomp.
  • Deploying containers using RHCOS reduces the attack surface by minimizing the host environment and tuning it for containers. The CRI-O container engine further reduces that attack surface by implementing only those features required by Kubernetes and OpenShift to run and manage containers, as opposed to other container engines that implement desktop-oriented standalone features.

RHCOS is a version of Red Hat Enterprise Linux (RHEL) that is specially configured to work as control plane (master) and worker nodes on OpenShift Container Platform clusters. So RHCOS is tuned to efficiently run container workloads, along with Kubernetes and OpenShift services.

To further protect RHCOS systems in OpenShift Container Platform clusters, most containers, except those managing or monitoring the host system itself, should run as a non-root user. Dropping the privilege level or creating containers with the least amount of privileges possible is recommended best practice for protecting your own OpenShift Container Platform clusters.

3.2.2. Comparing virtualization and containers

Traditional virtualization provides another way to keep application environments separate on the same physical host. However, virtual machines work in a different way than containers. Virtualization relies on a hypervisor spinning up guest virtual machines (VMs), each of which has its own operating system (OS), represented by a running kernel, as well as the running application and its dependencies.

With VMs, the hypervisor isolates the guests from each other and from the host kernel. Fewer individuals and processes have access to the hypervisor, reducing the attack surface on the physical server. That said, security must still be monitored: one guest VM might be able to use hypervisor bugs to gain access to another VM or the host kernel. And, when the OS needs to be patched, it must be patched on all guest VMs using that OS.

Containers can be run inside guest VMs, and there might be use cases where this is desirable. For example, you might be deploying a traditional application in a container, perhaps in order to lift-and-shift an application to the cloud.

Container separation on a single host, however, provides a more lightweight, flexible, and easier-to-scale deployment solution. This deployment model is particularly appropriate for cloud-native applications. Containers are generally much smaller than VMs and consume less memory and CPU.

See Linux Containers Compared to KVM Virtualization in the RHEL 7 container documentation to learn about the differences between containers and VMs.

3.2.3. Securing OpenShift Container Platform

When you deploy OpenShift Container Platform, you have the choice of an installer-provisioned infrastructure (there are several available platforms) or your own user-provisioned infrastructure. Some low-level security-related configuration, such as enabling FIPS compliance or adding kernel modules required at first boot, might benefit from a user-provisioned infrastructure. Likewise, user-provisioned infrastructure is appropriate for disconnected OpenShift Container Platform deployments.

Keep in mind that, when it comes to making security enhancements and other configuration changes to OpenShift Container Platform, the goals should include:

  • Keeping the underlying nodes as generic as possible. You want to be able to easily throw away and spin up similar nodes quickly and in prescriptive ways.
  • Managing modifications to nodes through OpenShift Container Platform as much as possible, rather than making direct, one-off changes to the nodes.

In pursuit of those goals, most node changes should be done during installation through Ignition or later using MachineConfigs that are applied to sets of nodes by the Machine Config Operator. Examples of security-related configuration changes you can do in this way include:

  • Adding kernel arguments
  • Adding kernel modules
  • Enabling support for FIPS cryptography
  • Configuring disk encryption
  • Configuring the chrony time service
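For example, the following minimal MachineConfig sketch applies one such change by adding a kernel argument to all worker nodes. The object name and the audit=1 argument are illustrative; adapt them to your own requirements:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 05-worker-kernelarg-audit
spec:
  config:
    ignition:
      version: 3.1.0
  kernelArguments:
  - audit=1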

Besides the Machine Config Operator, there are several other Operators available to configure OpenShift Container Platform infrastructure that are managed by the Cluster Version Operator (CVO). The CVO is able to automate many aspects of OpenShift Container Platform cluster updates.

3.3. Hardening RHCOS

RHCOS was created and tuned to be deployed in OpenShift Container Platform with few if any changes needed to RHCOS nodes. Every organization adopting OpenShift Container Platform has its own requirements for system hardening. As a RHEL system with OpenShift-specific modifications and features added (such as Ignition, ostree, and a read-only /usr to provide limited immutability), RHCOS can be hardened just as you would any RHEL system. Differences lie in the ways you manage the hardening.

A key feature of OpenShift Container Platform and its Kubernetes engine is to be able to quickly scale applications and infrastructure up and down as needed. Unless it is unavoidable, you do not want to make direct changes to RHCOS by logging into a host and adding software or changing settings. You want to have the OpenShift Container Platform installer and control plane manage changes to RHCOS so new nodes can be spun up without manual intervention.

So, if you are setting out to harden RHCOS nodes in OpenShift Container Platform to meet your security needs, you should consider both what to harden and how to go about doing that hardening.

3.3.1. Choosing what to harden in RHCOS

The RHEL 8 Security Hardening guide describes how you should approach security for any RHEL system.

Use this guide to learn how to approach cryptography, evaluate vulnerabilities, and assess threats to various services. Likewise, you can learn how to scan for compliance standards, check file integrity, perform auditing, and encrypt storage devices.

With the knowledge of what features you want to harden, you can then decide how to harden them in RHCOS.

3.3.2. Choosing how to harden RHCOS

Direct modification of RHCOS systems in OpenShift Container Platform is discouraged. Instead, you should think of modifying systems in pools of nodes, such as worker nodes and master nodes. When a new node is needed, in non-bare metal installs, you can request a new node of the type you want and it will be created from an RHCOS image plus the modifications you created earlier.

There are opportunities for modifying RHCOS before installation, during installation, and after the cluster is up and running.

Hardening before installation

For bare metal installations, you can add hardening features to RHCOS before beginning the OpenShift Container Platform installation. For example, you can add kernel options when you boot the RHCOS installer to turn security features on or off, such as SELinux or various low-level settings, such as symmetric multithreading.

Although bare metal RHCOS installations are more difficult, they offer the opportunity of getting operating system changes in place before starting the OpenShift Container Platform installation. This can be important when you need to ensure that certain features, such as disk encryption or special networking settings, be set up at the earliest possible moment.

Hardening during installation

You can interrupt the OpenShift installation process and change Ignition configs. Through Ignition configs, you can add your own files and systemd services to the RHCOS nodes. You can also make some basic security-related changes to the install-config.yaml file used for installation. Contents added in this way are available at each node’s first boot.

Hardening after the cluster is running

After the OpenShift Container Platform cluster is up and running, there are several ways to apply hardening features to RHCOS:

  • DaemonSet: If you need a service to run on every node, you can add that service with a Kubernetes DaemonSet.
  • MachineConfig: MachineConfig objects contain a subset of Ignition configs in the same format. By applying MachineConfigs to all worker or control plane nodes, you can ensure that the next node of the same type that is added to the cluster has the same changes applied.

All of the features noted here are described in the OpenShift Container Platform product documentation.

3.4. Understanding compliance

For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards or the organization’s corporate governance framework.

3.4.1. Understanding compliance and risk management

FIPS compliance is one of the most critical components required in highly secure environments, to ensure that only supported cryptographic technologies are allowed on nodes. To understand Red Hat’s view of OpenShift Container Platform compliance frameworks, refer to the Risk Management and Regulatory Readiness chapter of the OpenShift Security Guide Book.


3.5. Securing container content

To ensure the security of the content inside your containers you need to start with trusted base images, such as Red Hat Universal Base Images, and add trusted software. To check the ongoing security of your container images, there are both Red Hat and third-party tools for scanning images.

3.5.1. Securing inside the container

Applications and infrastructures are composed of readily available components, many of which are open source packages, such as the Linux operating system, JBoss Web Server, PostgreSQL, and Node.js.

Containerized versions of these packages are also available. However, you need to know where the packages originally came from, what versions are used, who built them, and whether there is any malicious code inside them.

Some questions to answer include:

  • Will what is inside the containers compromise your infrastructure?
  • Are there known vulnerabilities in the application layer?
  • Are the runtime and operating system layers current?

By building your containers from Red Hat Universal Base Images (UBI) you are assured of a foundation for your container images that consists of the same RPM-packaged software that is included in Red Hat Enterprise Linux. No subscriptions are required to either use or redistribute UBI images.

To assure ongoing security of the containers themselves, security scanning features, used directly from RHEL or added to OpenShift Container Platform, can alert you when an image you are using has vulnerabilities. OpenSCAP image scanning is available in RHEL and the Container Security Operator can be added to check container images used in OpenShift Container Platform.

3.5.2. Creating redistributable images with UBI

To create containerized applications, you typically start with a trusted base image that offers the components that are usually provided by the operating system. These include the libraries, utilities, and other features the application expects to see in the operating system’s file system.

Red Hat Universal Base Images (UBI) were created to encourage anyone building their own containers to start with one that is made entirely from Red Hat Enterprise Linux RPM packages and other content. These UBI images are updated regularly to keep up with security patches and are free to use and redistribute with container images built to include your own software.

Search the Red Hat Ecosystem Catalog to both find and check the health of different UBI images. As someone creating secure container images, you might be interested in these two general types of UBI images:

  • UBI: There are standard UBI images for RHEL 7 and 8 (ubi7/ubi and ubi8/ubi), as well as minimal images based on those systems (ubi7/ubi-minimal and ubi8/ubi-minimal). All of these images are preconfigured to point to free repositories of RHEL software that you can add to the container images you build, using standard yum and dnf commands (see the sketch after this list). Red Hat encourages people to use these images on other distributions, such as Fedora and Ubuntu.
  • Red Hat Software Collections: Search the Red Hat Ecosystem Catalog for rhscl/ to find images created to use as base images for specific types of applications. For example, there are Apache httpd (rhscl/httpd-*), Python (rhscl/python-*), Ruby (rhscl/ruby-*), Node.js (rhscl/nodejs-*) and Perl (rhscl/perl-*) rhscl images.
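As referenced in the UBI item above, the following is a minimal Containerfile sketch that builds on the standard ubi8/ubi image and layers in a package with dnf. The httpd package and the index.html file are illustrative; any content in your build context can be added the same way:

FROM registry.access.redhat.com/ubi8/ubi
# Install packages from the freely available UBI repositories
RUN dnf install -y httpd && dnf clean all
# Add your own application content on top of the UBI base
COPY index.html /var/www/html/index.html
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]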

Keep in mind that while UBI images are freely available and redistributable, Red Hat support for these images is only available through Red Hat product subscriptions.

See Using Red Hat Universal Base Images in the Red Hat Enterprise Linux documentation for information on how to use and build on standard, minimal and init UBI images.

3.5.3. Security scanning in RHEL

For Red Hat Enterprise Linux (RHEL) systems, OpenSCAP scanning is available from the openscap-utils package. In RHEL, you can use the oscap-podman command to scan images for vulnerabilities. See Scanning containers and container images for vulnerabilities in the Red Hat Enterprise Linux documentation.

OpenShift Container Platform enables you to leverage RHEL scanners with your CI/CD process. For example, you can integrate static code analysis tools that test for security flaws in your source code and software composition analysis tools that identify open source libraries in order to provide metadata on those libraries, such as known vulnerabilities.

Scanning OpenShift images

For the container images that are running in OpenShift Container Platform and are pulled from Red Hat Quay registries, you can use an Operator to list the vulnerabilities of those images. The Container Security Operator can be added to OpenShift Container Platform to provide vulnerability reporting for images added to selected namespaces.

Container image scanning for Red Hat Quay is performed by the Clair security scanner. In Red Hat Quay, Clair can search for and report vulnerabilities in images built from RHEL, CentOS, Oracle, Alpine, Debian, and Ubuntu operating system software.

3.5.4. Integrating external scanning

OpenShift Container Platform makes use of object annotations to extend functionality. External tools, such as vulnerability scanners, can annotate image objects with metadata to summarize results and control pod execution. This section describes the recognized format of this annotation so it can be reliably used in consoles to display useful data to users.

Image metadata

There are different types of image quality data, including package vulnerabilities and open source software (OSS) license compliance. Additionally, there may be more than one provider of this metadata. To that end, the following annotation format has been reserved:

quality.images.openshift.io/<qualityType>.<providerId>: {}

Table 3.1. Annotation key format

Component       Description          Acceptable values
<qualityType>   Metadata type        For example, vulnerability or license
<providerId>    Provider ID string   A provider-specific string, for example openscap, redhatcatalog, or jfrog

Example annotation keys

quality.images.openshift.io/vulnerability.openscap: {}
quality.images.openshift.io/vulnerability.redhatcatalog: {}
quality.images.openshift.io/vulnerability.jfrog: {}

The value of the image quality annotation is structured data that must adhere to the following format:

Table 3.2. Annotation value format

Field            Description                                                                           Type
name             Provider display name                                                                 String
timestamp        Scan timestamp                                                                        String
description      Short description                                                                     String
reference        URL of information source or more details. Required so user may validate the data.    String
scannerVersion   Scanner version                                                                       String
compliant        Compliance pass or fail                                                               Boolean
summary          Summary of issues found                                                               List (see table below)

The summary field must adhere to the following format:

Table 3.3. Summary field value format

Field            Description                                                                                            Type
label            Display label for component (for example, "critical," "important," "moderate," "low," or "health")    String
data             Data for this component (for example, count of vulnerabilities found or score)                         String
severityIndex    Component index allowing for ordering and assigning graphical representation. The value is range 0..3 where 0 = low.   Integer
reference        URL of information source or more details. Optional.                                                   String

Example annotation values

This example shows an OpenSCAP annotation for an image with vulnerability summary data and a compliance boolean:

OpenSCAP annotation

  "name": "OpenSCAP",
  "description": "OpenSCAP vulnerability score",
  "timestamp": "2016-09-08T05:04:46Z",
  "reference": "",
  "compliant": true,
  "scannerVersion": "1.2",
  "summary": [
    { "label": "critical", "data": "4", "severityIndex": 3, "reference": null },
    { "label": "important", "data": "12", "severityIndex": 2, "reference": null },
    { "label": "moderate", "data": "8", "severityIndex": 1, "reference": null },
    { "label": "low", "data": "26", "severityIndex": 0, "reference": null }

This example shows the Red Hat Container Catalog annotation for an image with health index data with an external URL for additional details:

Red Hat Container Catalog annotation

  "name": "Red Hat Container Catalog",
  "description": "Container health index",
  "timestamp": "2016-09-08T05:04:46Z",
  "reference": "",
  "compliant": null,
  "scannerVersion": "1.2",
  "summary": [
    { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null }
} Annotating image objects

While image stream objects are what an end user of OpenShift Container Platform operates against, image objects are annotated with security metadata. Image objects are cluster-scoped, pointing to a single image that may be referenced by many image streams and tags.

Example annotate CLI command

Replace <image> with an image digest, for example sha256:401e359e0f45bfdcf004e258b72e253fd07fba8cc5c6f2ed4f4608fb119ecc2:

$ oc annotate image <image> \
    quality.images.openshift.io/vulnerability.redhatcatalog='{ \
    "name": "Red Hat Container Catalog", \
    "description": "Container health index", \
    "timestamp": "2020-06-01T05:04:46Z", \
    "compliant": null, \
    "scannerVersion": "1.2", \
    "reference": "", \
    "summary": "[ \
      { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ]" }'

Controlling pod execution

Use the image policy to programmatically control if an image can be run.

Example annotation

annotations:
  images.openshift.io/deny-execution: true

Integration reference

In most cases, external tools such as vulnerability scanners develop a script or plug-in that watches for image updates, performs scanning, and annotates the associated image object with the results. Typically this automation calls the OpenShift Container Platform 4.6 REST APIs to write the annotation. See OpenShift Container Platform REST APIs for general information on the REST APIs.

Example REST API call

The following example call using curl overrides the value of the annotation. Be sure to replace the values for <token>, <openshift_server>, <image_id>, and <image_annotation>.

Patch API call

$ curl -X PATCH \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/merge-patch+json" \
  https://<openshift_server>:8443/oapi/v1/images/<image_id> \
  --data '{ <image_annotation> }'

The following is an example of PATCH payload data:

Patch call data

"metadata": {
  "annotations": {
       "{ 'name': 'Red Hat Container Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': '', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }"


3.6. Using container registries securely

Container registries store container images to:

  • Make images accessible to others
  • Organize images into repositories that can include multiple versions of an image
  • Optionally limit access to images, based on different authentication methods, or make them publicly available

There are public container registries, such as Docker Hub, where many people and organizations share their images. The Red Hat Registry offers supported Red Hat and partner images, while the Red Hat Ecosystem Catalog offers detailed descriptions and health checks for those images. To manage your own registry, you could purchase a container registry such as Red Hat Quay.

From a security standpoint, some registries provide special features to check and improve the health of your containers. For example, Red Hat Quay offers container vulnerability scanning with Clair security scanner, build triggers to automatically rebuild images when source code changes in GitHub and other locations, and the ability to use role-based access control (RBAC) to secure access to images.

3.6.1. Knowing where containers come from?

There are tools you can use to scan and track the contents of your downloaded and deployed container images. However, there are many public sources of container images. When using public container registries, you can add a layer of protection by using trusted sources.

3.6.2. Immutable and certified containers

Consuming security updates is particularly important when managing immutable containers. Immutable containers are containers that will never be changed while running. When you deploy immutable containers, you do not step into the running container to replace one or more binaries. From an operational standpoint, you rebuild and redeploy an updated container image to replace a container instead of changing it.

Red Hat certified images are:

  • Free of known vulnerabilities in the platform components or layers
  • Compatible across the RHEL platforms, from bare metal to cloud
  • Supported by Red Hat

The list of known vulnerabilities is constantly evolving, so you must track the contents of your deployed container images, as well as newly downloaded images, over time. You can use Red Hat Security Advisories (RHSAs) to alert you to any newly discovered issues in Red Hat certified container images, and direct you to the updated image. Alternatively, you can go to the Red Hat Ecosystem Catalog to look up that and other security-related issues for each Red Hat image.

3.6.3. Getting containers from Red Hat Registry and Ecosystem Catalog

Red Hat lists certified container images for Red Hat products and partner offerings from the Container Images section of the Red Hat Ecosystem Catalog. From that catalog, you can see details of each image, including CVE, software packages listings, and health scores.

Red Hat images are actually stored in what is referred to as the Red Hat Registry, which is represented by a public container registry (registry.access.redhat.com) and an authenticated registry (registry.redhat.io). Both include basically the same set of container images, with registry.redhat.io including some additional images that require authentication with Red Hat subscription credentials.

Container content is monitored for vulnerabilities by Red Hat and updated regularly. When Red Hat releases security updates, such as fixes to glibc, DROWN, or Dirty Cow, any affected container images are also rebuilt and pushed to the Red Hat Registry.

Red Hat uses a health index to reflect the security risk for each container provided through the Red Hat Ecosystem Catalog. Because containers consume software provided by Red Hat and the errata process, old, stale containers are insecure whereas new, fresh containers are more secure.

To illustrate the age of containers, the Red Hat Container Catalog uses a grading system. A freshness grade is a measure of the oldest and most severe security errata available for an image. "A" is more up to date than "F". See Container Health Index grades as used inside the Red Hat Container Catalog for more details on this grading system.

Refer to the Red Hat Product Security Center for details on security updates and vulnerabilities related to Red Hat software. Check out Red Hat Security Advisories to search for specific advisories and CVEs.

3.6.4. OpenShift Container Registry

OpenShift Container Platform includes the OpenShift Container Registry, a private registry running as an integrated component of the platform that you can use to manage your container images. The OpenShift Container Registry provides role-based access controls that allow you to manage who can pull and push which container images.

OpenShift Container Platform also supports integration with other private registries that you might already be using, such as Red Hat Quay.

3.6.5. Storing containers using Red Hat Quay

Red Hat Quay is an enterprise-quality container registry product from Red Hat. Development for Red Hat Quay is done through the upstream Project Quay. Red Hat Quay is available to deploy on-premise or through the hosted version of Red Hat Quay at Quay.io.

Security-related features of Red Hat Quay include:

  • Time Machine: Allows images with older tags to expire after a set period of time or based on a user-selected expiration time.
  • Repository mirroring: Lets you mirror other registries for security reasons, such as hosting a public repository on Red Hat Quay behind a company firewall, or for performance reasons, to keep registries closer to where they are used.
  • Action log storage: Save Red Hat Quay logging output to Elasticsearch storage to allow for later search and analysis.
  • Clair security scanning: Scan images against a variety of Linux vulnerability databases, based on the origins of each container image.
  • Internal authentication: Use the default local database to handle RBAC authentication to Red Hat Quay or choose from LDAP, Keystone (OpenStack), JWT Custom Authentication, or External Application Token authentication.
  • External authorization (OAuth): Allow authorization to Red Hat Quay from GitHub, GitHub Enterprise, or Google Authentication.
  • Access settings: Generate tokens to allow access to Red Hat Quay from docker, rkt, anonymous access, user-created accounts, encrypted client passwords, or prefix username autocompletion.

Ongoing integration of Red Hat Quay with OpenShift Container Platform continues, with several OpenShift Container Platform Operators of particular interest. The Quay Bridge Operator lets you replace the internal OpenShift Container Platform registry with Red Hat Quay. The Quay Container Security Operator lets you check vulnerabilities of images running in OpenShift Container Platform that were pulled from Red Hat Quay registries.

3.7. Securing the build process

In a container environment, the software build process is the stage in the life cycle where application code is integrated with the required runtime libraries. Managing this build process is key to securing the software stack.

3.7.1. Building once, deploying everywhere

Using OpenShift Container Platform as the standard platform for container builds enables you to guarantee the security of the build environment. Adhering to a "build once, deploy everywhere" philosophy ensures that the product of the build process is exactly what is deployed in production.

It is also important to maintain the immutability of your containers. You should not patch running containers, but rebuild and redeploy them.

As your software moves through the stages of building, testing, and production, it is important that the tools making up your software supply chain be trusted. The following figure illustrates the process and tools that could be incorporated into a trusted software supply chain for containerized software:

OpenShift Container Platform can be integrated with trusted code repositories (such as GitHub) and development platforms (such as Che) for creating and managing secure code. Unit testing could rely on Cucumber and JUnit. You could inspect your containers for vulnerabilities and compliance issues with Anchore or Twistlock, and use image scanning tools such as AtomicScan or Clair. Tools such as Sysdig could provide ongoing monitoring of your containerized applications.

3.7.2. Managing builds

You can use Source-to-Image (S2I) to combine source code and base images. Builder images make use of S2I to enable your development and operations teams to collaborate on a reproducible build environment. With Red Hat S2I images available as Universal Base Image (UBI) images, you can now freely redistribute your software with base images built from real RHEL RPM packages. Red Hat has removed subscription restrictions to allow this.
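For example, a minimal sketch of an S2I build that combines the nodejs builder image stream with application source; the repository URL and application name are placeholders:

$ oc new-app nodejs~https://github.com/<your_org>/<your_app>.git --name=<your_app>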

When developers commit code with Git for an application using build images, OpenShift Container Platform can perform the following functions:

  • Trigger, either by using webhooks on the code repository or other automated continuous integration (CI) process, to automatically assemble a new image from available artifacts, the S2I builder image, and the newly committed code.
  • Automatically deploy the newly built image for testing.
  • Promote the tested image to production where it can be automatically deployed using a CI process.
Source-to-Image Builds

You can use the integrated OpenShift Container Registry to manage access to final images. Both S2I and native build images are automatically pushed to your OpenShift Container Registry.

In addition to the included Jenkins for CI, you can also integrate your own build and CI environment with OpenShift Container Platform using RESTful APIs, as well as use any API-compliant image registry.

3.7.3. Securing inputs during builds

In some scenarios, build operations require credentials to access dependent resources, but it is undesirable for those credentials to be available in the final application image produced by the build. You can define input secrets for this purpose.

For example, when building a Node.js application, you can set up your private mirror for Node.js modules. In order to download modules from that private mirror, you must supply a custom .npmrc file for the build that contains a URL, user name, and password. For security reasons, you do not want to expose your credentials in the application image.

Using this example scenario, you can add an input secret to a new BuildConfig:

  1. Create the secret, if it does not exist:

    $ oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc

    This creates a new secret named secret-npmrc, which contains the base64 encoded content of the ~/.npmrc file.

  2. Add the secret to the source section in the existing BuildConfig:

      source:
        git:
          uri: <source_repository_url>
        secrets:
        - destinationDir: .
          secret:
            name: secret-npmrc
  3. To include the secret in a new BuildConfig, run the following command:

    $ oc new-build \
        openshift/nodejs-010-centos7~<source_repository_url> \
        --build-secret secret-npmrc

3.7.4. Designing your build process

You can design your container image management and build process to use container layers so that you can separate control.

Designing Your Build Process

For example, an operations team manages base images, while architects manage middleware, runtimes, databases, and other solutions. Developers can then focus on the application layer and on writing code.

Because new vulnerabilities are identified daily, you need to proactively check container content over time. To do this, you should integrate automated security testing into your build or CI process. For example:

  • SAST / DAST – Static and Dynamic security testing tools.
  • Scanners for real-time checking against known vulnerabilities. Tools like these catalog the open source packages in your container, notify you of any known vulnerabilities, and update you when new vulnerabilities are discovered in previously scanned packages.

Your CI process should include policies that flag builds with issues discovered by security scans so that your team can take appropriate action to address those issues. You should sign your custom built containers to ensure that nothing is tampered with between build and deployment.

Using GitOps methodology, you can use the same CI/CD mechanisms to manage not only your application configurations, but also your OpenShift Container Platform infrastructure.

3.7.5. Building Knative serverless applications

Relying on Kubernetes and Kourier, you can build, deploy and manage serverless applications using Knative in OpenShift Container Platform. As with other builds, you can use S2I images to build your containers, then serve them using Knative services. View Knative application builds through the Topology view of the OpenShift Container Platform web console.

3.8. Deploying containers

You can use a variety of techniques to make sure that the containers you deploy hold the latest production-quality content and that they have not been tampered with. These techniques include setting up build triggers to incorporate the latest code and using signatures to ensure that the container comes from a trusted source and has not been modified.

3.8.1. Controlling container deployments with triggers

If something happens during the build process, or if a vulnerability is discovered after an image has been deployed, you can use tooling for automated, policy-based deployment to remediate. You can use triggers to rebuild and replace images, preserving the immutable container model, instead of patching running containers, which is not recommended.

Secure Deployments

For example, you build an application using three container image layers: core, middleware, and applications. An issue is discovered in the core image and that image is rebuilt. After the build is complete, the image is pushed to your OpenShift Container Registry. OpenShift Container Platform detects that the image has changed and automatically rebuilds and deploys the application image, based on the defined triggers. This change incorporates the fixed libraries and ensures that the production code is identical to the most current image.

You can use the oc set triggers command to set a deployment trigger. For example, to set a trigger for a deployment called deployment-example:

$ oc set triggers deploy/deployment-example \
    --from-image=example:latest \
    --containers=<container_name>

3.8.2. Controlling what image sources can be deployed

It is important that the intended images are actually being deployed, that the images, including the contained content, are from trusted sources, and that they have not been altered. Cryptographic signing provides this assurance. OpenShift Container Platform enables cluster administrators to apply security policy that is broad or narrow, reflecting deployment environment and security requirements. Two parameters define this policy:

  • one or more registries, with optional project namespace
  • trust type, such as accept, reject, or require public key(s)

You can use these policy parameters to allow, deny, or require a trust relationship for entire registries, parts of registries, or individual images. Using trusted public keys, you can ensure that the source is cryptographically verified. The policy rules apply to nodes. Policy may be applied uniformly across all nodes or targeted for different node workloads (for example, build, zone, or environment).

Example image signature policy file

    "default": [{"type": "reject"}],
    "transports": {
        "docker": {
            "": [
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
        "atomic": {
            "": [
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
            "": [
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/etc/pki/"
            "": [{"type": "reject"}]

The policy can be saved onto a node as /etc/containers/policy.json. Saving this file to a node is best accomplished using a new MachineConfig object. This example enforces the following rules:

  • Require images from the Red Hat Registry (registry.access.redhat.com) to be signed by the Red Hat public key.
  • Require images from your OpenShift Container Registry in the openshift namespace to be signed by the Red Hat public key.
  • Require images from your OpenShift Container Registry in the production namespace to be signed by your own organization’s public key.
  • Reject all other registries not specified by the global default definition.

3.8.3. Using signature transports

A signature transport is a way to store and retrieve the binary signature blob. There are two types of signature transports.

  • atomic: Managed by the OpenShift Container Platform API.
  • docker: Served as a local file or by a web server.

The OpenShift Container Platform API manages signatures that use the atomic transport type. You must store the images that use this signature type in your OpenShift Container Registry. Because the docker/distribution extensions API auto-discovers the image signature endpoint, no additional configuration is required.

Signatures that use the docker transport type are served by local file or web server. These signatures are more flexible; you can serve images from any container image registry and use an independent server to deliver binary signatures.

However, the docker transport type requires additional configuration. You must configure the nodes with the URI of the signature server by placing arbitrarily-named YAML files into a directory on the host system, /etc/containers/registries.d by default. The YAML configuration files contain a registry URI and a signature server URI, or sigstore:

Example registries.d file

docker:
    registry.access.redhat.com:
        sigstore: https://access.redhat.com/webassets/docker/content/sigstore

In this example, the Red Hat Registry, access.redhat.com, is the signature server that provides signatures for the docker transport type. Its URI is defined in the sigstore parameter. You might name this file /etc/containers/registries.d/registry.access.redhat.com.yaml and use the Machine Config Operator to automatically place the file on each node in your cluster. No service restart is required since policy and registries.d files are dynamically loaded by the container runtime.

3.8.4. Creating Secrets and ConfigMaps

The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, dockercfg files, and private source repository credentials. Secrets decouple sensitive content from pods. You can mount secrets into containers using a volume plug-in or the system can use secrets to perform actions on behalf of a pod.

For example, to add a secret to your deployment configuration so that it can access a private image repository, do the following:


  1. Log in to the OpenShift Container Platform web console.
  2. Create a new project.
  3. Navigate to Resources → Secrets and create a new secret. Set Secret Type to Image Secret and Authentication Type to Image Registry Credentials to enter credentials for accessing a private image repository.
  4. When creating a deployment configuration (for example, from the Add to Project → Deploy Image page), set the Pull Secret to your new secret.
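Alternatively, you can create the same kind of pull secret from the CLI. This is a minimal sketch with placeholder registry and credential values:

$ oc create secret docker-registry my-pull-secret \
    --docker-server=<registry_server> \
    --docker-username=<user_name> \
    --docker-password=<password> \
    --docker-email=<email>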

ConfigMaps are similar to secrets, but are designed to support working with strings that do not contain sensitive information. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers.
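For example, a config map holding non-sensitive configuration values can be created directly from the CLI; the key-value pair shown is illustrative:

$ oc create configmap app-settings --from-literal=log_level=info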

3.8.5. Automating continuous deployment

You can integrate your own continuous deployment (CD) tooling with OpenShift Container Platform.

By leveraging CI/CD and OpenShift Container Platform, you can automate the process of rebuilding the application to incorporate the latest fixes, testing, and ensuring that it is deployed everywhere within the environment.


3.9. Securing the container platform

OpenShift Container Platform and Kubernetes APIs are key to automating container management at scale. APIs are used to:

  • Validate and configure the data for pods, services, and replication controllers.
  • Perform project validation on incoming requests and invoke triggers on other major system components.

Security-related features in OpenShift Container Platform that are based on Kubernetes include:

  • Multitenancy, which combines Role-Based Access Controls and network policies to isolate containers at multiple levels.
  • Admission plug-ins, which form boundaries between an API and those making requests to the API.

OpenShift Container Platform uses Operators to automate and simplify the management of Kubernetes-level security features.

3.9.1. Isolating containers with multitenancy

Multitenancy allows applications on an OpenShift Container Platform cluster that are owned by multiple users, and run across multiple hosts and namespaces, to remain isolated from each other and from outside attacks. You obtain multitenancy by applying role-based access control (RBAC) to Kubernetes namespaces.

In Kubernetes, namespaces are areas where applications can run in ways that are separate from other applications. OpenShift Container Platform uses and extends namespaces by adding extra annotations, including MCS labeling in SELinux, and identifying these extended namespaces as projects. Within the scope of a project, users can maintain their own cluster resources, including service accounts, policies, constraints, and various other objects.

RBAC objects are assigned to projects to authorize selected users to have access to those projects. That authorization takes the form of rules, roles, and bindings:

  • Rules define what a user can create or access in a project.
  • Roles are collections of rules that you can bind to selected users or groups.
  • Bindings define the association between users or groups and roles.

Local RBAC roles and bindings attach a user or group to a particular project. Cluster RBAC can attach cluster-wide roles and bindings to all projects in a cluster. There are default cluster roles that can be assigned to provide admin, basic-user, cluster-admin, and cluster-status access.
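For example, a minimal sketch of local RBAC in action, granting the default admin cluster role to a user within a single project (the user and project names are placeholders):

$ oc adm policy add-role-to-user admin <user_name> -n <project_name>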

3.9.2. Protecting control plane with admission plug-ins

While RBAC controls access rules between users and groups and available projects, admission plug-ins define access to the OpenShift Container Platform master API. Admission plug-ins form a chain of rules that consist of:

  • Default admission plug-ins: These implement a default set of policies and resource limits that are applied to components of the OpenShift Container Platform control plane.
  • Mutating admission plug-ins: These plug-ins dynamically extend the admission chain. They call out to a webhook server and can both authenticate a request and modify the selected resource.
  • Validating admission plug-ins: These validate requests for a selected resource and can both validate the request and ensure that the resource does not change again.

API requests go through admission plug-ins in a chain, with any failure along the way causing the request to be rejected. Each admission plug-in is associated with particular resources and only responds to requests for those resources.

Security context constraints (SCCs)

You can use security context constraints (SCCs) to define a set of conditions that a pod must run with in order to be accepted into the system.

Some aspects that can be managed by SCCs include:

  • Running of privileged containers
  • Capabilities a container can request to be added
  • Use of host directories as volumes
  • SELinux context of the container
  • Container user ID
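To see which SCCs exist on a cluster and the conditions each one enforces, you can run, for example:

$ oc get scc
$ oc describe scc restricted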

If you have the required permissions, you can adjust the default SCC policies to be more permissive, if required. Granting roles to service accounts

You can assign roles to service accounts, in the same way that users are assigned role-based access. There are three default service accounts created for each project. A service account:

  • is limited in scope to a particular project
  • derives its name from its project
  • is automatically assigned an API token and credentials to access the OpenShift Container Registry

Service accounts associated with platform components automatically have their keys rotated.

3.9.3. Authentication and authorization

Controlling access using OAuth

You can use API access control via authentication and authorization for securing your container platform. The OpenShift Container Platform master includes a built-in OAuth server. Users can obtain OAuth access tokens to authenticate themselves to the API.
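For example, after logging in to the cluster, you can display the OAuth access token issued for the current session; the API server URL and user name are placeholders:

$ oc login https://api.<cluster_domain>:6443 -u <user_name>
$ oc whoami -t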

As an administrator, you can configure OAuth to authenticate using an identity provider, such as LDAP, GitHub, or Google. The identity provider is used by default for new OpenShift Container Platform deployments, but you can configure this at initial installation time or post-installation.

API access control and management

Applications can have multiple, independent API services which have different endpoints that require management. OpenShift Container Platform includes a containerized version of the 3scale API gateway so that you can manage your APIs and control access.

3scale gives you a variety of standard options for API authentication and security, which can be used alone or in combination to issue credentials and control access: standard API keys, application ID and key pair, and OAuth 2.0.

You can restrict access to specific endpoints, methods, and services and apply access policy for groups of users. Application plans allow you to set rate limits for API usage and control traffic flow for groups of developers.

For a tutorial on using APIcast v2, the containerized 3scale API Gateway, see Running APIcast on Red Hat OpenShift in the 3scale documentation.

Red Hat Single Sign-On

The Red Hat Single Sign-On server enables you to secure your applications by providing web single sign-on capabilities based on standards, including SAML 2.0, OpenID Connect, and OAuth 2.0. The server can act as a SAML or OpenID Connect–based identity provider (IdP), mediating with your enterprise user directory or third-party identity provider for identity information and your applications using standards-based tokens. You can integrate Red Hat Single Sign-On with LDAP-based directory services including Microsoft Active Directory and Red Hat Enterprise Linux Identity Management.

Secure self-service web console

OpenShift Container Platform provides a self-service web console to ensure that teams do not access other environments without authorization. OpenShift Container Platform ensures a secure multitenant master by providing the following:

  • Access to the master uses Transport Layer Security (TLS)
  • Access to the API Server uses X.509 certificates or OAuth access tokens
  • Project quota limits the damage that a rogue token could do
  • The etcd service is not exposed directly to the cluster

3.9.4. Managing certificates for the platform

OpenShift Container Platform has multiple components within its framework that use REST-based HTTPS communication leveraging encryption via TLS certificates. OpenShift Container Platform’s installer configures these certificates during installation. There are some primary components that generate this traffic:

  • masters (API server and controllers)
  • etcd
  • nodes
  • registry
  • router

Configuring custom certificates

You can configure custom serving certificates for the public host names of the API server and web console during initial installation or when redeploying certificates. You can also use a custom CA.

3.10. Securing networks

Network security can be managed at several levels. At the pod level, network namespaces can prevent containers from seeing other pods or the host system by restricting network access. Network policies give you control over allowing and rejecting connections. You can manage ingress and egress traffic to and from your containerized applications.

3.10.1. Using network namespaces

OpenShift Container Platform uses software-defined networking (SDN) to provide a unified cluster network that enables communication between containers across the cluster.

Network policy mode, by default, makes all pods in a project accessible from other pods and network endpoints. To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Using multitenant mode, you can provide project-level isolation for pods and services.

3.10.2. Isolating pods with network policies

Using network policies, you can isolate pods from each other in the same project. Network policies can deny all network access to a pod, only allow connections for the ingress controller, reject connections from pods in other projects, or set similar rules for how networks behave.
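
For example, the following NetworkPolicy object denies all incoming traffic to the pods in the project where it is created. This is a minimal sketch that you would typically combine with additional policies that allow specific connections:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
spec:
  podSelector: {}
  ingress: []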

Additional resources

3.10.3. Using multiple pod networks

Each running container has only one network interface by default. The Multus CNI plug-in lets you create multiple CNI networks, and then attach any of those networks to your pods. In that way, you can do things like separate private data onto a more restricted network and have multiple network interfaces on each node.
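
For example, after an administrator defines an additional network, a pod can request attachment to it by using the k8s.v1.cni.cncf.io/networks annotation. This is a sketch; net1 and the container image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: net1
spec:
  containers:
  - name: example-container
    image: registry.example.com/example:latest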

Additional resources

3.10.4. Isolating applications

OpenShift Container Platform enables you to segment network traffic on a single cluster to make multitenant clusters that isolate users, teams, applications, and environments from non-global resources.

3.10.5. Securing ingress traffic

There are many security implications related to how you configure access to your Kubernetes services from outside of your OpenShift Container Platform cluster. Besides exposing HTTP and HTTPS routes, ingress routing allows you to set up NodePort or LoadBalancer ingress types. NodePort exposes an application’s service API object from each cluster worker. LoadBalancer lets you assign an external load balancer to an associated service API object in your OpenShift Container Platform cluster.
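
As an illustration, the following command creates an edge-terminated, TLS-secured route for a service. This is a sketch; the service name frontend, the certificate files, and the host name are placeholders:

$ oc create route edge --service=frontend \
     --cert=tls.crt --key=tls.key \
     --hostname=www.example.com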

3.10.6. Securing egress traffic

OpenShift Container Platform provides the ability to control egress traffic using either a router or firewall method. For example, you can use IP whitelisting to control database access. A cluster administrator can assign one or more egress IP addresses to a project in an OpenShift Container Platform SDN network provider. Likewise, a cluster administrator can prevent egress traffic from going outside of an OpenShift Container Platform cluster using an egress firewall.

By assigning a fixed egress IP address, you can have all outgoing traffic assigned to that IP address for a particular project. With the egress firewall, you can prevent a pod from connecting to an external network, prevent a pod from connecting to an internal network, or limit a pod’s access to specific internal subnets.
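
For example, with the OpenShift SDN network provider, an egress firewall similar to the following sketch allows connections to one subnet and denies all other external traffic; the CIDR values are placeholders:

apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 192.168.1.0/24
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0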

3.11. Securing attached storage

OpenShift Container Platform supports multiple types of storage, both for on-premise and cloud providers. In particular, OpenShift Container Platform can use storage types that support the Container Storage Interface.

3.11.1. Persistent volume plug-ins

Containers are useful for both stateless and stateful applications. Protecting attached storage is a key element of securing stateful services. Using the Container Storage Interface (CSI), OpenShift Container Platform can incorporate storage from any storage back end that supports the CSI interface.

OpenShift Container Platform provides plug-ins for multiple types of storage, including:

  • Red Hat OpenShift Container Storage *
  • AWS Elastic Block Stores (EBS) *
  • AWS Elastic File System (EFS) *
  • Azure Disk *
  • Azure File *
  • OpenStack Cinder *
  • GCE Persistent Disks *
  • VMware vSphere *
  • Network File System (NFS)
  • FlexVolume
  • Fibre Channel
  • iSCSI

Plug-ins for those storage types with dynamic provisioning are marked with an asterisk (*). Data in transit is encrypted via HTTPS for all OpenShift Container Platform components communicating with each other.

You can mount a PersistentVolume (PV) on a host in any way supported by your storage type. Different types of storage have different capabilities and each PV’s access modes are set to the specific modes supported by that particular volume.

For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV has its own set of access modes describing that specific PV’s capabilities, such as ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.

3.11.2. Shared storage

For shared storage providers like NFS, the PV registers its group ID (GID) as an annotation on the PV resource. Then, when the PV is claimed by the pod, the annotated GID is added to the supplemental groups of the pod, giving that pod access to the contents of the shared storage.
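
For example, an NFS PV might carry the GID annotation as in the following sketch; the server, path, and GID value 5555 are placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  annotations:
    pv.beta.kubernetes.io/gid: "5555"
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: nfs.example.com
    path: /exports/share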

3.11.3. Block storage

For block storage providers like AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI, OpenShift Container Platform uses SELinux capabilities to secure the root of the mounted volume for non-privileged pods, making the mounted volume owned by and only visible to the container with which it is associated.

3.12. Monitoring cluster events and logs

The ability to monitor and audit an OpenShift Container Platform cluster is an important part of safeguarding the cluster and its users against inappropriate usage.

There are two main sources of cluster-level information that are useful for this purpose: events and logging.

3.12.1. Watching cluster events

Cluster administrators are encouraged to familiarize themselves with the Event resource type and review the list of system events to determine which events are of interest. Events are associated with a namespace, either the namespace of the resource they are related to or, for cluster events, the default namespace. The default namespace holds relevant events for monitoring or auditing a cluster, such as Node events and resource events related to infrastructure components.

The master API and oc command do not provide parameters to scope a listing of events to only those related to nodes. A simple approach would be to use grep:

$ oc get event -n default | grep Node

Example output

1h         20h         3         origin-node-1.example.local   Node      Normal    NodeHasDiskPressure   ...

A more flexible approach is to output the events in a form that other tools can process. For example, the following example uses the jq tool against JSON output to extract only NodeHasDiskPressure events:

$ oc get events -n default -o json \
  | jq '.items[] | select(.involvedObject.kind == "Node" and .reason == "NodeHasDiskPressure")'

Example output

  "apiVersion": "v1",
  "count": 3,
  "involvedObject": {
    "kind": "Node",
    "name": "origin-node-1.example.local",
    "uid": "origin-node-1.example.local"
  "kind": "Event",
  "reason": "NodeHasDiskPressure",

Events related to resource creation, modification, or deletion can also be good candidates for detecting misuse of the cluster. The following query, for example, can be used to look for excessive pulling of images:

$ oc get events --all-namespaces -o json \
  | jq '[.items[] | select(.involvedObject.kind == "Pod" and .reason == "Pulling")] | length'

Example output



When a namespace is deleted, its events are deleted as well. Events can also expire and are deleted to prevent filling up etcd storage. Events are not stored as a permanent record and frequent polling is necessary to capture statistics over time.

3.12.2. Logging

Using the oc logs command, you can view container logs, build configs, and deployment configs in real time. Different users have different levels of access to logs:

  • Users who have access to a project are able to see the logs for that project by default.
  • Users with admin roles can access all container logs.

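For example, the following commands follow the logs of a pod and of a build configuration; the names are placeholders:

$ oc logs -f <pod-name>

$ oc logs -f bc/<buildconfig-name>
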
To save your logs for further audit and analysis, you can enable the cluster-logging add-on feature to collect, manage, and view system, container, and audit logs. You can deploy, manage, and upgrade cluster logging through the Elasticsearch Operator and Cluster Logging Operator.

3.12.3. Audit logs

With audit logs, you can follow a sequence of activities associated with how a user, administrator, or other OpenShift Container Platform component is behaving. API audit logging is done on each server.

Chapter 4. Configuring certificates

4.1. Replacing the default ingress certificate

4.1.1. Understanding the default ingress certificate

By default, OpenShift Container Platform uses the Ingress Operator to create an internal CA and issue a wildcard certificate that is valid for applications under the .apps sub-domain. Both the web console and CLI use this certificate as well.

The internal infrastructure CA certificates are self-signed. While this process might be perceived as bad practice by some security or PKI teams, any risk here is minimal. The only clients that implicitly trust these certificates are other components within the cluster. Replacing the default wildcard certificate with one that is issued by a public CA already included in the CA bundle as provided by the container userspace allows external clients to connect securely to applications running under the .apps sub-domain.

4.1.2. Replacing the default ingress certificate

You can replace the default ingress certificate for all applications under the .apps subdomain. After you replace the certificate, all applications, including the web console and CLI, will have encryption provided by the specified certificate.

Prerequisites
  • You must have a wildcard certificate for the fully qualified .apps subdomain and its corresponding private key. Each should be in a separate PEM format file.
  • The private key must be unencrypted. If your key is encrypted, decrypt it before importing it into OpenShift Container Platform.
  • The certificate must include the subjectAltName extension showing *.apps.<clustername>.<domain>.
  • The certificate file can contain one or more certificates in a chain. The wildcard certificate must be the first certificate in the file. It can then be followed with any intermediate certificates, and the file should end with the root CA certificate.
  • Copy the root CA certificate into an additional PEM format file.

Procedure
  1. Create a ConfigMap that includes only the root CA certificate used to sign the wildcard certificate:

    $ oc create configmap custom-ca \
         --from-file=ca-bundle.crt=</path/to/example-ca.crt> \1
         -n openshift-config
    </path/to/example-ca.crt> is the path to the root CA certificate file on your local file system.
  2. Update the cluster-wide proxy configuration with the newly created ConfigMap:

    $ oc patch proxy/cluster \
         --type=merge \
         -p '{"spec":{"trustedCA":{"name":"custom-ca"}}}'
  3. Create a secret that contains the wildcard certificate chain and key:

    $ oc create secret tls <secret> \1
         --cert=</path/to/cert.crt> \2
         --key=</path/to/cert.key> \3
         -n openshift-ingress
    <secret> is the name of the secret that will contain the certificate chain and private key.
    </path/to/cert.crt> is the path to the certificate chain on your local file system.
    </path/to/cert.key> is the path to the private key associated with this certificate.
  4. Update the Ingress Controller configuration with the newly created secret:

    $ oc patch ingresscontroller.operator default \
         --type=merge -p \
         '{"spec":{"defaultCertificate": {"name": "<secret>"}}}' \1
         -n openshift-ingress-operator
    Replace <secret> with the name used for the secret in the previous step.

4.2. Adding API server certificates

The default API server certificate is issued by an internal OpenShift Container Platform cluster CA. Clients outside of the cluster will not be able to verify the API server’s certificate by default. This certificate can be replaced by one that is issued by a CA that clients trust.

4.2.1. Add an API server named certificate

The default API server certificate is issued by an internal OpenShift Container Platform cluster CA. You can add one or more alternative certificates that the API server will return based on the fully qualified domain name (FQDN) requested by the client, for example when a reverse proxy or load balancer is used.

Prerequisites
  • You must have a certificate for the FQDN and its corresponding private key. Each should be in a separate PEM format file.
  • The private key must be unencrypted. If your key is encrypted, decrypt it before importing it into OpenShift Container Platform.
  • The certificate must include the subjectAltName extension showing the FQDN.
  • The certificate file can contain one or more certificates in a chain. The certificate for the API server FQDN must be the first certificate in the file. It can then be followed with any intermediate certificates, and the file should end with the root CA certificate.

Do not provide a named certificate for the internal load balancer (host name api-int.<cluster_name>.<base_domain>). Doing so will leave your cluster in a degraded state.

Procedure
  1. Create a secret that contains the certificate chain and private key in the openshift-config namespace.

    $ oc create secret tls <secret> \1
         --cert=</path/to/cert.crt> \2
         --key=</path/to/cert.key> \3
         -n openshift-config
    <secret> is the name of the secret that will contain the certificate chain and private key.
    </path/to/cert.crt> is the path to the certificate chain on your local file system.
    </path/to/cert.key> is the path to the private key associated with this certificate.
  2. Update the API server to reference the created secret.

    $ oc patch apiserver cluster \
         --type=merge -p \
         '{"spec":{"servingCerts": {"namedCertificates":
         [{"names": ["<FQDN>"], 1
         "servingCertificate": {"name": "<secret>"}}]}}}' 2
    Replace <FQDN> with the FQDN that the API server should provide the certificate for.
    Replace <secret> with the name used for the secret in the previous step.
  3. Examine the apiserver/cluster object and confirm the secret is now referenced.

    $ oc get apiserver cluster -o yaml

    Example output

        - names:
          - <FQDN>
            name: <secret>

4.3. Securing service traffic using service serving certificate secrets

4.3.1. Understanding service serving certificates

Service serving certificates are intended to support complex middleware applications that require encryption. These certificates are issued as TLS web server certificates.

The service-ca controller uses the x509.SHA256WithRSA signature algorithm to generate service certificates.

The generated certificate and key are in PEM format, stored in tls.crt and tls.key respectively, within a created secret. The certificate and key are automatically replaced when they get close to expiration.

The service CA certificate, which issues the service certificates, is valid for 26 months and is automatically rotated when there is less than six months validity left. After rotation, the previous service CA configuration is still trusted until its expiration. This allows a grace period for all affected services to refresh their key material before the expiration. If you do not upgrade your cluster during this grace period, which restarts services and refreshes their key material, you might need to manually restart services to avoid failures after the previous service CA expires.


You can use the following command to manually restart all pods in the cluster. Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted.

$ for I in $(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \
      do oc delete pods --all -n $I; \
      sleep 1; \
      done

4.3.2. Add a service certificate

To secure communication to your service, generate a signed serving certificate and key pair into a secret in the same namespace as the service.


The generated certificate is only valid for the internal service DNS name <service.name>.<service.namespace>.svc and is only valid for internal communications.

Prerequisites
  • You must have a service defined.

Procedure
  1. Annotate the service with service.beta.openshift.io/serving-cert-secret-name=<secret-name>:

    $ oc annotate service <service-name> \1
         service.beta.openshift.io/serving-cert-secret-name=<secret-name> 2
    Replace <service-name> with the name of the service to secure.
    <secret-name> will be the name of the generated secret containing the certificate and key pair. For convenience, it is recommended that this be the same as <service-name>.

    For example, use the following command to annotate the service foo:

    $ oc annotate service foo service.beta.openshift.io/serving-cert-secret-name=foo
  2. Examine the service to confirm that the annotations are present:

    $ oc describe service <service-name>

    Example output

    Annotations:     service.beta.openshift.io/serving-cert-secret-name: <service-name>

  3. After the cluster generates a secret for your service, your PodSpec can mount it, and the Pod will run after it becomes available, as shown in the following example.
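
    For example, a pod template might mount the generated secret as in the following sketch; the container image, mount path, and <secret-name> are placeholders:

    spec:
      containers:
      - name: app
        image: registry.example.com/app:latest
        volumeMounts:
        - name: serving-cert
          mountPath: /etc/serving-cert
          readOnly: true
      volumes:
      - name: serving-cert
        secret:
          secretName: <secret-name>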

4.3.3. Add the service CA bundle to a ConfigMap

A Pod can access the service CA certificate by mounting a ConfigMap that is annotated with service.beta.openshift.io/inject-cabundle=true. Once annotated, the cluster automatically injects the service CA certificate into the service-ca.crt key on the ConfigMap. Access to this CA certificate allows TLS clients to verify connections to services using service serving certificates.


After adding this annotation to a ConfigMap, all existing data in it is deleted. It is recommended to use a separate ConfigMap to contain the service-ca.crt, instead of using the same ConfigMap that stores your Pod’s configuration.

Procedure
  1. Annotate the ConfigMap with service.beta.openshift.io/inject-cabundle=true:

    $ oc annotate configmap <configmap-name> \1
         service.beta.openshift.io/inject-cabundle=true
    Replace <configmap-name> with the name of the ConfigMap to annotate.

    Explicitly referencing the service-ca.crt key in a volumeMount will prevent a Pod from starting until the ConfigMap has been injected with the CA bundle. This behavior can be overridden by setting the optional field to true for the volume’s serving certificate configuration.

    For example, use the following command to annotate the ConfigMap foo:

    $ oc annotate configmap foo service.beta.openshift.io/inject-cabundle=true
  2. View the ConfigMap to ensure that the service CA bundle has been injected:

    $ oc get configmap <configmap-name> -o yaml

    The CA bundle is displayed as the value of the service-ca.crt key in the YAML output:

    apiVersion: v1
    kind: ConfigMap
    data:
      service-ca.crt: |
        -----BEGIN CERTIFICATE-----
    ...

4.3.4. Add the service CA bundle to an APIService

You can annotate an APIService with service.beta.openshift.io/inject-cabundle=true to have its spec.caBundle field populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint.

Procedure
  1. Annotate the APIService with service.beta.openshift.io/inject-cabundle=true:

    $ oc annotate apiservice <apiservice-name> \1
         service.beta.openshift.io/inject-cabundle=true
    Replace <apiservice-name> with the name of the APIService to annotate.

    For example, use the following command to annotate the APIService foo:

    $ oc annotate apiservice foo service.beta.openshift.io/inject-cabundle=true
  2. View the APIService to ensure that the service CA bundle has been injected:

    $ oc get apiservice <apiservice-name> -o yaml

    The CA bundle is displayed in the spec.caBundle field in the YAML output:

    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      annotations:
        service.beta.openshift.io/inject-cabundle: "true"
    ...
    spec:
      caBundle: <CA_BUNDLE>
    ...

4.3.5. Add the service CA bundle to a Custom Resource Definition

You can annotate a Custom Resource Definition (CRD) with service.beta.openshift.io/inject-cabundle=true to have its spec.conversion.webhook.clientConfig.caBundle field populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint.


The service CA bundle will only be injected into the CRD if the CRD is configured to use a webhook for conversion. It is only useful to inject the service CA bundle if a CRD’s webhook is secured with a service CA certificate.

Procedure
  1. Annotate the CRD with service.beta.openshift.io/inject-cabundle=true:

    $ oc annotate crd <crd-name> \1
         service.beta.openshift.io/inject-cabundle=true
    Replace <crd-name> with the name of the CRD to annotate.

    For example, use the following command to annotate the CRD foo:

    $ oc annotate crd foo service.beta.openshift.io/inject-cabundle=true
  2. View the CRD to ensure that the service CA bundle has been injected:

    $ oc get crd <crd-name> -o yaml

    The CA bundle is displayed in the spec.conversion.webhook.clientConfig.caBundle field in the YAML output:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      annotations:
        service.beta.openshift.io/inject-cabundle: "true"
    ...
    spec:
      conversion:
        strategy: Webhook
        webhook:
          clientConfig:
            caBundle: <CA_BUNDLE>
    ...

4.3.6. Add the service CA bundle to a MutatingWebhookConfiguration

You can annotate a MutatingWebhookConfiguration with service.beta.openshift.io/inject-cabundle=true to have the clientConfig.caBundle field of each webhook populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint.


Do not set this annotation for admission webhook configurations that need to specify different CA bundles for different webhooks. If you do, then the service CA bundle will be injected for all webhooks.

Procedure
  1. Annotate the MutatingWebhookConfiguration with service.beta.openshift.io/inject-cabundle=true:

    $ oc annotate mutatingwebhookconfigurations <mutatingwebhook-name> \1
         service.beta.openshift.io/inject-cabundle=true
    Replace <mutatingwebhook-name> with the name of the MutatingWebhookConfiguration to annotate.

    For example, use the following command to annotate the MutatingWebhookConfiguration foo:

    $ oc annotate mutatingwebhookconfigurations foo service.beta.openshift.io/inject-cabundle=true
  2. View the MutatingWebhookConfiguration to ensure that the service CA bundle has been injected:

    $ oc get mutatingwebhookconfigurations <mutatingwebhook-name> -o yaml

    The CA bundle is displayed in the clientConfig.caBundle field of all webhooks in the YAML output:

    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      annotations:
        service.beta.openshift.io/inject-cabundle: "true"
    ...
    webhooks:
    - myWebhook:
      - v1beta1
        caBundle: <CA_BUNDLE>
    ...

4.3.7. Add the service CA bundle to a ValidatingWebhookConfiguration

You can annotate a ValidatingWebhookConfiguration with service.beta.openshift.io/inject-cabundle=true to have the clientConfig.caBundle field of each webhook populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint.


Do not set this annotation for admission webhook configurations that need to specify different CA bundles for different webhooks. If you do, then the service CA bundle will be injected for all webhooks.

Procedure
  1. Annotate the ValidatingWebhookConfiguration with service.beta.openshift.io/inject-cabundle=true:

    $ oc annotate validatingwebhookconfigurations <validatingwebhook-name> \1
         service.beta.openshift.io/inject-cabundle=true
    Replace <validatingwebhook-name> with the name of the ValidatingWebhookConfiguration to annotate.

    For example, use the following command to annotate the ValidatingWebhookConfiguration foo:

    $ oc annotate validatingwebhookconfigurations foo service.beta.openshift.io/inject-cabundle=true
  2. View the ValidatingWebhookConfiguration to ensure that the service CA bundle has been injected:

    $ oc get validatingwebhookconfigurations <validatingwebhook-name> -o yaml

    The CA bundle is displayed in the clientConfig.caBundle field of all webhooks in the YAML output:

    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      annotations:
        service.beta.openshift.io/inject-cabundle: "true"
    ...
    webhooks:
    - myWebhook:
      - v1beta1
        caBundle: <CA_BUNDLE>
    ...

4.3.8. Manually rotate the generated service certificate

You can rotate the service certificate by deleting the associated secret. Deleting the secret results in a new one being automatically created, resulting in a new certificate.

Prerequisites
  • A secret containing the certificate and key pair must have been generated for the service.

Procedure
  1. Examine the service to determine the secret containing the certificate. This is found in the serving-cert-secret-name annotation, as seen below.

    $ oc describe service <service-name>

    Example output

    ...
    Annotations:     service.beta.openshift.io/serving-cert-secret-name: <secret>
    ...

  2. Delete the generated secret for the service. This process will automatically recreate the secret.

    $ oc delete secret <secret> 1
    Replace <secret> with the name of the secret from the previous step.
  3. Confirm that the certificate has been recreated by obtaining the new secret and examining the AGE.

    $ oc get secret <service-name>

    Example output

    NAME             TYPE                DATA   AGE
    <service-name>   kubernetes.io/tls   2      1s

4.3.9. Manually rotate the service CA certificate

The service CA is valid for 26 months and is automatically refreshed when there is less than six months validity left.

If necessary, you can manually refresh the service CA by using the following procedure.


A manually-rotated service CA does not maintain trust with the previous service CA. You might experience a temporary service disruption until the pods in the cluster are restarted, which ensures that pods are using service serving certificates issued by the new service CA.

Prerequisites
  • You must be logged in as a cluster admin.

Procedure
  1. View the expiration date of the current service CA certificate by using the following command.

    $ oc get secrets/signing-key -n openshift-service-ca \
         -o template='{{index .data "tls.crt"}}' \
         | base64 -d \
         | openssl x509 -noout -enddate
  2. Manually rotate the service CA. This process generates a new service CA which will be used to sign the new service certificates.

    $ oc delete secret/signing-key -n openshift-service-ca
  3. To apply the new certificates to all services, restart all the pods in your cluster. This command ensures that all services use the updated certificates.

    $ for I in $(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \
          do oc delete pods --all -n $I; \
          sleep 1; \
          done

    This command will cause a service interruption, as it goes through and deletes every running pod in every namespace. These pods will automatically restart after they are deleted.

Chapter 5. Certificate types and descriptions

5.1. User-provided certificates for the API server

5.1.1. Purpose

The API server is accessible by clients external to the cluster at api.<cluster_name>.<base_domain>. You might want clients to access the API server at a different host name or without the need to distribute the cluster-managed certificate authority (CA) certificates to the clients. The administrator must set a custom default certificate to be used by the API server when serving content.

5.1.2. Location

The user-provided certificates must be provided in a type Secret in the openshift-config namespace. Update the API server cluster configuration, the apiserver/cluster resource, to enable the use of the user-provided certificate.

5.1.3. Management

User-provided certificates are managed by the user.

5.1.4. Expiration

API server client certificate expiration is less than five minutes.

User-provided certificates are managed by the user.

5.1.5. Customization

Update the secret containing the user-managed certificate as needed.

Additional resources

5.2. Proxy certificates

5.2.1. Purpose

Proxy certificates allow users to specify one or more custom certificate authority (CA) certificates used by platform components when making egress connections.

The trustedCA field of the Proxy object is a reference to a ConfigMap that contains a user-provided trusted certificate authority (CA) bundle. This bundle is merged with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle and injected into the trust store of platform components that make egress HTTPS calls. For example, image-registry-operator calls an external image registry to download images. If trustedCA is not specified, only the RHCOS trust bundle is used for proxied HTTPS connections. Provide custom CA certificates to the RHCOS trust bundle if you want to use your own certificate infrastructure.

The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from required key ca-bundle.crt and copying it to a ConfigMap named trusted-ca-bundle in the openshift-config-managed namespace. The namespace for the ConfigMap referenced by trustedCA is openshift-config:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-ca-bundle
  namespace: openshift-config
data:
  ca-bundle.crt: |
    Custom CA certificate bundle.
Additional resources

5.2.2. Managing proxy certificates during installation

The additionalTrustBundle value of the installer configuration is used to specify any proxy-trusted CA certificates during installation. For example:

$ cat install-config.yaml

Example output

proxy:
  httpProxy: http://<>
  httpsProxy: https://<>
  noProxy: <,>
additionalTrustBundle: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----

5.2.3. Location

The user-provided trust bundle is represented as a ConfigMap. The ConfigMap is mounted into the file system of platform components that make egress HTTPS calls. Typically, Operators mount the ConfigMap to /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem, but this is not required by the proxy. A proxy can modify or inspect the HTTPS connection. In either case, the proxy must generate and sign a new certificate for the connection.

Complete proxy support means connecting to the specified proxy and trusting any signatures it has generated. Therefore, it is necessary to let the user specify a trusted root, such that any certificate chain connected to that trusted root is also trusted.

If using the RHCOS trust bundle, place CA certificates in /etc/pki/ca-trust/source/anchors.

See Using shared system certificates in the Red Hat Enterprise Linux documentation for more information.

5.2.4. Expiration

The user sets the expiration term of the user-provided trust bundle.

The default expiration term is defined by the CA certificate itself. It is up to the CA administrator to configure this for the certificate before it can be used by OpenShift Container Platform or RHCOS.


Red Hat does not monitor for when CAs expire. However, due to the long life of CAs, this is generally not an issue. You might still need to periodically update the trust bundle.

5.2.5. Services

By default, all platform components that make egress HTTPS calls will use the RHCOS trust bundle. If trustedCA is defined, it will also be used.

Any service that is running on the RHCOS node is able to use the trust bundle of the node.

5.2.6. Management

These certificates are managed by the system and not the user.

5.2.7. Customization

Updating the user-provided trust bundle consists of either:

  • updating the PEM-encoded certificates in the ConfigMap referenced by trustedCA, or
  • creating a ConfigMap in the namespace openshift-config that contains the new trust bundle and updating trustedCA to reference the name of the new ConfigMap.
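
For example, the second option can be performed with commands similar to the following sketch; user-ca-bundle and the certificate path are placeholders:

$ oc create configmap user-ca-bundle \
     --from-file=ca-bundle.crt=</path/to/example-ca.crt> \
     -n openshift-config

$ oc patch proxy/cluster \
     --type=merge \
     -p '{"spec":{"trustedCA":{"name":"user-ca-bundle"}}}'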

The mechanism for writing CA certificates to the RHCOS trust bundle is exactly the same as writing any other file to RHCOS, which is done through the use of MachineConfigs. When the Machine Config Operator (MCO) applies the new MachineConfig that contains the new CA certificates, the node is rebooted. During the next boot, the service coreos-update-ca-trust.service runs on the RHCOS nodes, which automatically update the trust bundle with the new CA certificates. For example:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 50-examplecorp-ca-cert
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>
        mode: 0644
        overwrite: true
        path: /etc/pki/ca-trust/source/anchors/examplecorp-ca.crt

The trust store of machines must also support updating the trust store of nodes.

5.2.8. Renewal

There are no Operators that can auto-renew certificates on the RHCOS nodes.


Red Hat does not monitor for when CAs expire. However, due to the long life of CAs, this is generally not an issue. You might still need to periodically update the trust bundle.

5.3. Service CA certificates

5.3.1. Purpose

service-ca is an Operator that creates a self-signed CA when an OpenShift Container Platform cluster is deployed.

5.3.2. Expiration

A custom expiration term is not supported. The self-signed CA is stored in a secret with qualified name service-ca/signing-key in fields tls.crt (certificate(s)), tls.key (private key), and ca-bundle.crt (CA bundle).

Other services can request a service serving certificate by annotating a service resource with service.beta.openshift.io/serving-cert-secret-name=<secret name>. In response, the Operator generates a new certificate, as tls.crt, and private key, as tls.key, to the named secret. The certificate is valid for two years.

Other services can request that the CA bundle for the service CA be injected into APIService or ConfigMap resources by annotating with service.beta.openshift.io/inject-cabundle=true to support validating certificates generated from the service CA. In response, the Operator writes its current CA bundle to the CABundle field of an APIService or as service-ca.crt to a ConfigMap.

As of OpenShift Container Platform 4.3.5, automated rotation is supported and is backported to some 4.2.z and 4.3.z releases. For any release supporting automated rotation, the service CA is valid for 26 months and is automatically refreshed when there is less than 13 months validity left. If necessary, you can manually refresh the service CA.

The service CA expiration of 26 months is longer than the expected upgrade interval for a supported OpenShift Container Platform cluster, such that non-control plane consumers of service CA certificates will be refreshed after CA rotation and prior to the expiration of the pre-rotation CA.


A manually-rotated service CA does not maintain trust with the previous service CA. You might experience a temporary service disruption until the Pods in the cluster are restarted, which ensures that Pods are using service serving certificates issued by the new service CA.

5.3.3. Management

These certificates are managed by the system and not the user.

5.3.4. Services

Services that use service CA certificates include:

  • cluster-autoscaler-operator
  • cluster-monitoring-operator
  • cluster-authentication-operator
  • cluster-image-registry-operator
  • cluster-ingress-operator
  • cluster-kube-apiserver-operator
  • cluster-kube-controller-manager-operator
  • cluster-kube-scheduler-operator
  • cluster-networking-operator
  • cluster-openshift-apiserver-operator
  • cluster-openshift-controller-manager-operator
  • cluster-samples-operator
  • machine-config-operator
  • console-operator
  • insights-operator
  • machine-api-operator
  • operator-lifecycle-manager

This is not a comprehensive list.

Additional resources

5.4. Node certificates

5.4.1. Purpose

Node certificates are signed by the cluster; they come from a certificate authority (CA) that is generated by the bootstrap process. Once the cluster is installed, the node certificates are auto-rotated.
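
For example, you can inspect the certificate signing requests (CSRs) that drive this process and, if a CSR is not automatically approved, approve it manually. This is a sketch; <csr_name> is a placeholder:

$ oc get csr

$ oc adm certificate approve <csr_name>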

5.4.2. Management

These certificates are managed by the system and not the user.

Additional resources

5.5. Bootstrap certificates

5.5.1. Purpose

The kubelet, in OpenShift Container Platform 4 and later, uses the bootstrap certificate located in /etc/kubernetes/kubeconfig to initially bootstrap. This is followed by the bootstrap initialization process and authorization of the kubelet to create a CSR.

In that process, the kubelet generates a CSR while communicating over the bootstrap channel. The controller manager signs the CSR, resulting in a certificate that the kubelet manages.

5.5.2. Management

These certificates are managed by the system and not the user.

5.5.3. Expiration

This bootstrap CA is valid for 10 years.

The kubelet-managed certificate is valid for one year and rotates automatically at around the 80 percent mark of that one year.

5.5.4. Customization

You cannot customize the bootstrap certificates.

5.6. etcd certificates

5.6.1. Purpose

etcd certificates are signed by the etcd-signer; they come from a certificate authority (CA) that is generated by the bootstrap process.

5.6.2. Expiration

The CA certificates are valid for 10 years. The peer, client, and server certificates are valid for three years.

5.6.3. Management

These certificates are managed by the system and not the user.

5.6.4. Services

etcd certificates are used for encrypted communication between etcd member peers, as well as encrypted client traffic. The following certificates are generated and used by etcd and other processes that communicate with etcd:

  • Peer certificates: Used for communication between etcd members.
  • Client certificates: Used for encrypted server-client communication. Client certificates are currently used by the API server only, and no other service should connect to etcd directly except for the proxy. Client secrets (etcd-client, etcd-metric-client, etcd-metric-signer, and etcd-signer) are added to the openshift-config, openshift-monitoring, and openshift-kube-apiserver namespaces.
  • Server certificates: Used by the etcd server for authenticating client requests.
  • Metric certificates: All metric consumers connect to proxy with metric-client certificates.
Additional resources

5.7. OLM certificates

5.7.1. Management

All certificates for OpenShift Lifecycle Manager (OLM) components (olm-operator, catalog-operator, packageserver, and marketplace-operator) are managed by the system.

Operators installed via OLM can have certificates generated for them if they are providing API services. packageserver is one example.

Certificates in the openshift-operator-lifecycle-manager namespace are managed by OLM with the exception of certificates used by Operators that require a validating or mutating webhook.

Operators that install validating or mutating webhooks must currently manage those certificates themselves. They do not require the user to manage the certificates.

OLM will not update the certificates of Operators that it manages in proxy environments. These certificates must be managed by the user via the subscription config.

5.8. User-provided certificates for default ingress

5.8.1. Purpose

Applications are usually exposed at <route_name>.apps.<cluster_name>.<base_domain>. The <cluster_name> and <base_domain> come from the installation config file. <route_name> is the host field of the route, if specified, or the route name. For example, hello-openshift is the name of the route and the route is in the default namespace. You might want clients to access the applications without the need to distribute the cluster-managed CA certificates to the clients. The administrator must set a custom default certificate when serving application content.


The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use operator-generated default certificates in production clusters.

5.8.2. Location

The user-provided certificates must be provided in a tls type Secret resource in the openshift-ingress namespace. Update the IngressController CR in the openshift-ingress-operator namespace to enable the use of the user-provided certificate. For more information on this process, see Setting a custom default certificate.

5.8.3. Management

User-provided certificates are managed by the user.

5.8.4. Expiration

User-provided certificates are managed by the user.

5.8.5. Services

Applications deployed on the cluster use user-provided certificates for default ingress.

5.8.6. Customization

Update the secret containing the user-managed certificate as needed.

Additional resources

5.9. Ingress certificates

5.9.1. Purpose

The Ingress Operator uses certificates for:

  • Securing access to metrics for Prometheus.
  • Securing access to routes.

5.9.2. Location

To secure access to Ingress Operator and Ingress Controller metrics, the Ingress Operator uses service serving certificates. The Operator requests a certificate from the service-ca controller for its own metrics, and the service-ca controller puts the certificate in a secret named metrics-tls in the openshift-ingress-operator namespace. Additionally, the Ingress Operator requests a certificate for each Ingress Controller, and the service-ca controller puts the certificate in a secret named router-metrics-certs-<name>, where <name> is the name of the Ingress Controller, in the openshift-ingress namespace.

Each Ingress Controller has a default certificate that it uses for secured routes that do not specify their own certificates. Unless you specify a custom certificate, the Operator uses a self-signed certificate by default. The Operator uses its own self-signed signing certificate to sign any default certificate that it generates. The Operator generates this signing certificate and puts it in a secret named router-ca in the openshift-ingress-operator namespace. When the Operator generates a default certificate, it puts the default certificate in a secret named router-certs-<name> (where <name> is the name of the Ingress Controller) in the openshift-ingress namespace.


The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use Operator-generated default certificates in production clusters.

5.9.3. Workflow

Figure 5.1. Custom certificate workflow

Figure 5.2. Default certificate workflow

The callouts in these workflow diagrams describe the following items:

  • An empty defaultCertificate field causes the Ingress Operator to use its self-signed CA to generate a serving certificate for the specified domain.
  • The default CA certificate and key generated by the Ingress Operator. Used to sign Operator-generated default serving certificates.
  • In the default workflow, the wildcard default serving certificate, created by the Ingress Operator and signed using the generated default CA certificate. In the custom workflow, this is the user-provided certificate.
  • The router deployment. Uses the certificate in secrets/router-certs-default as its default front-end server certificate.
  • In the default workflow, the contents of the wildcard default serving certificate (public and private parts) are copied here to enable OAuth integration. In the custom workflow, this is the user-provided certificate.
  • The public (certificate) part of the default serving certificate. Replaces the configmaps/router-ca resource.
  • The user updates the cluster proxy configuration with the CA certificate that signed the ingresscontroller serving certificate. This enables components like auth, console, and the registry to trust the serving certificate.
  • The cluster-wide trusted CA bundle containing the combined Red Hat Enterprise Linux CoreOS (RHCOS) and user-provided CA bundles or an RHCOS-only bundle if a user bundle is not provided.
  • The custom CA certificate bundle, which instructs other components (for example, auth and console) to trust an ingresscontroller configured with a custom certificate.
  • The trustedCA field is used to reference the user-provided CA bundle.
  • The Cluster Network Operator injects the trusted CA bundle into the proxy-ca ConfigMap.
  • OpenShift Container Platform 4.6 and newer use default-ingress-cert.

5.9.4. Expiration

The expiration terms for the Ingress Operator’s certificates are as follows:

  • The expiration date for metrics certificates that the service-ca controller creates is two years after the date of creation.
  • The expiration date for the Operator’s signing certificate is two years after the date of creation.
  • The expiration date for default certificates that the Operator generates is two years after the date of creation.

You cannot specify custom expiration terms on certificates that the Ingress Operator or service-ca controller creates.

You cannot specify expiration terms when installing OpenShift Container Platform for certificates that the Ingress Operator or service-ca controller creates.

5.9.5. Services

Prometheus uses the certificates that secure metrics.

The Ingress Operator uses its signing certificate to sign default certificates that it generates for Ingress Controllers for which you do not set custom default certificates.

Cluster components that use secured routes may use the default Ingress Controller’s default certificate.

Ingress to the cluster via a secured route uses the default certificate of the Ingress Controller by which the route is accessed unless the route specifies its own certificate.

5.9.6. Management

Ingress certificates are managed by the user. See Replacing the default ingress certificate for more information.

5.9.7. Renewal

The service-ca controller automatically rotates the certificates that it issues. However, it is possible to use oc delete secret <secret> to manually rotate service serving certificates.

The Ingress Operator does not rotate its own signing certificate or the default certificates that it generates. Operator-generated default certificates are intended as placeholders for custom default certificates that you configure.

5.10. Monitoring and cluster logging Operator component certificates

5.10.1. Expiration

Monitoring components secure their traffic with service CA certificates. These certificates are valid for 2 years and are replaced automatically on rotation of the service CA, which is every 13 months.

If the certificate lives in the openshift-monitoring or openshift-logging namespace, it is system managed and rotated automatically.

5.10.2. Management

These certificates are managed by the system and not the user.

5.11. Control plane certificates

5.11.1. Location

Control plane certificates are included in these namespaces:

  • openshift-config-managed
  • openshift-kube-apiserver
  • openshift-kube-apiserver-operator
  • openshift-kube-controller-manager
  • openshift-kube-controller-manager-operator
  • openshift-kube-scheduler

5.11.2. Management

Control plane certificates are managed by the system and rotated automatically.

In the rare case that your control plane certificates have expired, see Recovering from expired control plane certificates.

Chapter 6. Viewing audit logs

Audit provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system.

6.1. About the API audit log

Audit works at the API server level, logging all requests coming to the server. Each audit log contains the following information:

Table 6.1. Audit log fields

level
    The audit level at which the event was generated.

auditID
    A unique audit ID, generated for each request.

stage
    The stage of the request handling when this event instance was generated.

requestURI
    The request URI as sent by the client to a server.

verb
    The Kubernetes verb associated with the request. For non-resource requests, this is the lowercase HTTP method.

user
    The authenticated user information.

impersonatedUser
    Optional. The impersonated user information, if the request is impersonating another user.

sourceIPs
    Optional. The source IPs, from where the request originated and any intermediate proxies.

userAgent
    Optional. The user agent string reported by the client. Note that the user agent is provided by the client, and must not be trusted.

objectRef
    Optional. The object reference this request is targeted at. This does not apply for List-type requests, or non-resource requests.

responseStatus
    Optional. The response status, populated even when the ResponseObject is not a Status type. For successful responses, this will only include the code. For non-status type error responses, this will be auto-populated with the error message.

requestObject
    Optional. The API object from the request, in JSON format. The RequestObject is recorded as is in the request (possibly re-encoded as JSON), prior to version conversion, defaulting, admission or merging. It is an external versioned object type, and might not be a valid object on its own. This is omitted for non-resource requests and is only logged at request level and higher.

responseObject
    Optional. The API object returned in the response, in JSON format. The ResponseObject is recorded after conversion to the external type, and serialized as JSON. This is omitted for non-resource requests and is only logged at response level.

requestReceivedTimestamp
    The time that the request reached the API server.

stageTimestamp
    The time that the request reached the current audit stage.

annotations
    Optional. An unstructured key value map stored with an audit event that may be set by plugins invoked in the request serving chain, including authentication, authorization and admission plugins. Note that these annotations are for the audit event, and do not correspond to the metadata.annotations of the submitted object. Keys should uniquely identify the informing component to avoid name collisions. Values should be short. Annotations are included in the metadata level.

Example output for the Kubernetes API server:

{"kind":"Event","apiVersion":"","level":"Metadata","auditID":"ad209ce1-fec7-4130-8192-c4cc63f1d8cd","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s","verb":"update","user":{"username":"system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client","uid":"dd4997e3-d565-4e37-80f8-7fc122ccd785","groups":["system:serviceaccounts","system:serviceaccounts:openshift-kube-controller-manager","system:authenticated"]},"sourceIPs":["::1"],"userAgent":"cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/$Format","objectRef":{"resource":"configmaps","namespace":"openshift-kube-controller-manager","name":"cert-recovery-controller-lock","uid":"5c57190b-6993-425d-8101-8337e48c7548","apiVersion":"v1","resourceVersion":"574307"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2020-04-02T08:27:20.200962Z","stageTimestamp":"2020-04-02T08:27:20.206710Z","annotations":{"":"allow","":"RBAC: allowed by ClusterRoleBinding \"system:openshift:operator:kube-controller-manager-recovery\" of ClusterRole \"cluster-admin\" to ServiceAccount \"localhost-recovery-client/openshift-kube-controller-manager\""}}

6.2. Viewing the audit log

You can view logs for the OpenShift Container Platform API server or the Kubernetes API server for each master node.


To view the audit log:

  1. View the OpenShift Container Platform API server logs

    1. If necessary, get the node name of the log you want to view:

      $ oc adm node-logs --role=master --path=openshift-apiserver/

      Example output

      ip-10-0-140-97.ec2.internal audit-2019-04-09T00-12-19.834.log
      ip-10-0-140-97.ec2.internal audit-2019-04-09T11-13-00.469.log
      ip-10-0-140-97.ec2.internal audit.log
      ip-10-0-153-35.ec2.internal audit-2019-04-09T00-11-49.835.log
      ip-10-0-153-35.ec2.internal audit-2019-04-09T11-08-30.469.log
      ip-10-0-153-35.ec2.internal audit.log
      ip-10-0-170-165.ec2.internal audit-2019-04-09T00-13-00.128.log
      ip-10-0-170-165.ec2.internal audit-2019-04-09T11-10-04.082.log
      ip-10-0-170-165.ec2.internal audit.log

    2. View the OpenShift Container Platform API server log for a specific master node and timestamp or view all the logs for that master:

      $ oc adm node-logs <node-name> --path=openshift-apiserver/<log-name>

      For example:

      $ oc adm node-logs ip-10-0-140-97.ec2.internal --path=openshift-apiserver/audit-2019-04-08T13-09-01.227.log
      $ oc adm node-logs ip-10-0-140-97.ec2.internal --path=openshift-apiserver/audit.log

      The output appears similar to the following:

      Example output

      {"kind":"Event","apiVersion":"","level":"Metadata","auditID":"ad209ce1-fec7-4130-8192-c4cc63f1d8cd","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s","verb":"update","user":{"username":"system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client","uid":"dd4997e3-d565-4e37-80f8-7fc122ccd785","groups":["system:serviceaccounts","system:serviceaccounts:openshift-kube-controller-manager","system:authenticated"]},"sourceIPs":["::1"],"userAgent":"cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/$Format","objectRef":{"resource":"configmaps","namespace":"openshift-kube-controller-manager","name":"cert-recovery-controller-lock","uid":"5c57190b-6993-425d-8101-8337e48c7548","apiVersion":"v1","resourceVersion":"574307"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2020-04-02T08:27:20.200962Z","stageTimestamp":"2020-04-02T08:27:20.206710Z","annotations":{"":"allow","":"RBAC: allowed by ClusterRoleBinding \"system:openshift:operator:kube-controller-manager-recovery\" of ClusterRole \"cluster-admin\" to ServiceAccount \"localhost-recovery-client/openshift-kube-controller-manager\""}}

  2. View the Kubernetes API server logs:

    1. If necessary, get the node name of the log you want to view:

      $ oc adm node-logs --role=master --path=kube-apiserver/

      Example output

      ip-10-0-140-97.ec2.internal audit-2019-04-09T14-07-27.129.log
      ip-10-0-140-97.ec2.internal audit-2019-04-09T19-18-32.542.log
      ip-10-0-140-97.ec2.internal audit.log
      ip-10-0-153-35.ec2.internal audit-2019-04-09T19-24-22.620.log
      ip-10-0-153-35.ec2.internal audit-2019-04-09T19-51-30.905.log
      ip-10-0-153-35.ec2.internal audit.log
      ip-10-0-170-165.ec2.internal audit-2019-04-09T18-37-07.511.log
      ip-10-0-170-165.ec2.internal audit-2019-04-09T19-21-14.371.log
      ip-10-0-170-165.ec2.internal audit.log

    2. View the Kubernetes API server log for a specific master node and timestamp or view all the logs for that master:

      $ oc adm node-logs <node-name> --path=kube-apiserver/<log-name>

      For example:

      $ oc adm node-logs ip-10-0-140-97.ec2.internal --path=kube-apiserver/audit-2019-04-09T14-07-27.129.log
      $ oc adm node-logs ip-10-0-170-165.ec2.internal --path=kube-apiserver/audit.log

      The output appears similar to the following:

      Example output

      {"kind":"Event","apiVersion":"","level":"Metadata","auditID":"ad209ce1-fec7-4130-8192-c4cc63f1d8cd","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s","verb":"update","user":{"username":"system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client","uid":"dd4997e3-d565-4e37-80f8-7fc122ccd785","groups":["system:serviceaccounts","system:serviceaccounts:openshift-kube-controller-manager","system:authenticated"]},"sourceIPs":["::1"],"userAgent":"cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/$Format","objectRef":{"resource":"configmaps","namespace":"openshift-kube-controller-manager","name":"cert-recovery-controller-lock","uid":"5c57190b-6993-425d-8101-8337e48c7548","apiVersion":"v1","resourceVersion":"574307"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2020-04-02T08:27:20.200962Z","stageTimestamp":"2020-04-02T08:27:20.206710Z","annotations":{"":"allow","":"RBAC: allowed by ClusterRoleBinding \"system:openshift:operator:kube-controller-manager-recovery\" of ClusterRole \"cluster-admin\" to ServiceAccount \"localhost-recovery-client/openshift-kube-controller-manager\""}}

Chapter 7. Configuring the audit log policy

You can control the amount of information that is logged to the API server audit logs by choosing the audit log policy profile to use.

7.1. About audit log policy profiles

Audit log profiles define how to log requests that come to the OpenShift API server, the Kubernetes API server, and the OAuth API server.

OpenShift Container Platform provides the following predefined audit policy profiles:

Default
    Logs only metadata for read and write requests; does not log request bodies. This is the default policy.

WriteRequestBodies
    In addition to logging metadata for all requests, logs request bodies for every write request to the API servers (create, update, patch). This profile has more resource overhead than the Default profile. [1]

AllRequestBodies
    In addition to logging metadata for all requests, logs request bodies for every read and write request to the API servers (get, list, create, update, patch). This profile has the most resource overhead. [1]

  1. Sensitive resources, such as Secret, Route, and OAuthClient objects, are never logged past the metadata level. OAuth tokens are not logged at all if your cluster was upgraded from OpenShift Container Platform 4.5, because their object names might contain secret information.

By default, OpenShift Container Platform uses the Default audit log profile. You can use another audit policy profile that also logs request bodies, but be aware of the increased resource usage (CPU, memory, and I/O).
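
To see which profile is currently configured, you can read the field directly from the APIServer resource. A minimal sketch; depending on your cluster, the output is the configured profile name, or it is empty when the field has never been set explicitly, in which case the Default profile applies:

$ oc get apiserver cluster -o jsonpath='{.spec.audit.profile}{"\n"}'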

7.2. Configuring the audit log policy

You can configure the audit log policy to use when logging requests that come to the API servers.


Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.


Procedure

  1. Edit the APIServer resource:

    $ oc edit apiserver cluster
  2. Update the spec.audit.profile field:

      apiVersion: config.openshift.io/v1
      kind: APIServer
      ...
      spec:
        audit:
          profile: WriteRequestBodies 1
    Set to Default, WriteRequestBodies, or AllRequestBodies. The default profile is Default.
  3. Save the file to apply the changes.
  4. Verify that a new revision of the Kubernetes API server pods has rolled out. This will take several minutes.

    $ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

    Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

    3 nodes are at revision 12

    If the output shows a message similar to one of the following, this means that the update is still in progress. Wait a few minutes and try again.

    • 3 nodes are at revision 11; 0 nodes have achieved new revision 12
    • 2 nodes are at revision 11; 1 nodes are at revision 12
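
As an alternative to editing the APIServer resource interactively, you can apply the same spec.audit.profile change with a single oc patch command; a minimal sketch that sets the WriteRequestBodies profile. After patching, verify the rollout with the oc get kubeapiserver command shown in step 4:

$ oc patch apiserver cluster --type=merge -p '{"spec":{"audit":{"profile":"WriteRequestBodies"}}}'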

Chapter 8. Allowing JavaScript-based access to the API server from additional hosts

8.1. Allowing JavaScript-based access to the API server from additional hosts

The default OpenShift Container Platform configuration only allows the OpenShift web console to send requests to the API server.

If you need to access the API server or OAuth server from a JavaScript application using a different host name, you can configure additional host names to allow.


Prerequisites

  • Access to the cluster as a user with the cluster-admin role.


Procedure

  1. Edit the APIServer resource:

    $ oc edit apiserver cluster
  2. Add the additionalCORSAllowedOrigins field under the spec section and specify one or more additional host names:

    apiVersion: config.openshift.io/v1
    kind: APIServer
    metadata:
      annotations:
        release.openshift.io/create-only: "true"
      creationTimestamp: "2019-07-11T17:35:37Z"
      generation: 1
      name: cluster
      resourceVersion: "907"
      selfLink: /apis/
      uid: 4b45a8dd-a402-11e9-91ec-0219944e0696
    spec:
      additionalCORSAllowedOrigins:
      - (?i)//my\.subdomain\.domain\.com(:|\z) 1
    The host name is specified as a Golang regular expression that matches against CORS headers from HTTP requests against the API server and OAuth server.

    This example uses the following syntax:

    • The (?i) makes it case-insensitive.
    • The // pins to the beginning of the domain and matches the double slash following http: or https:.
    • The \. escapes dots in the domain name.
    • The (:|\z) matches the end of the domain name (\z) or a port separator (:).
  3. Save the file to apply the changes.
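
To confirm the change, you can read back the configured origins and, optionally, send a request with an Origin header to check the CORS headers in the response. A minimal sketch, reusing the example host name from above; the API server URL and bearer token are taken from your current oc session:

$ oc get apiserver cluster -o jsonpath='{.spec.additionalCORSAllowedOrigins}{"\n"}'

$ curl -k -s -i \
    -H "Origin: https://my.subdomain.domain.com" \
    -H "Authorization: Bearer $(oc whoami -t)" \
    "$(oc whoami --show-server)/version" | grep -i '^access-control'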

Chapter 9. Encrypting etcd data

9.1. About etcd encryption

By default, etcd data is not encrypted in OpenShift Container Platform. You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect against the loss of sensitive data if an etcd backup is exposed to the incorrect parties.

When you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted:

  • Secrets
  • ConfigMaps
  • Routes
  • OAuth access tokens
  • OAuth authorize tokens

When you enable etcd encryption, encryption keys are created. These keys are rotated on a weekly basis. You must have these keys in order to restore from an etcd backup.

9.2. Enabling etcd encryption

You can enable etcd encryption to encrypt sensitive resources in your cluster.


It is not recommended to take a backup of etcd until the initial encryption process is complete. If the encryption process has not completed, the backup might be only partially encrypted.


Prerequisites

  • Access to the cluster as a user with the cluster-admin role.


Procedure

  1. Modify the API server object:

    $ oc edit apiserver
  2. Set the encryption field type to aescbc:

    spec:
      encryption:
        type: aescbc 1
    The aescbc type means that AES-CBC with PKCS#7 padding and a 32-byte key is used to perform the encryption.
  3. Save the file to apply the changes.

    The encryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster.

  4. Verify that etcd encryption was successful.

    1. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully encrypted:

      $ oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'

      The output shows EncryptionCompleted upon successful encryption:

      All resources encrypted: routes.route.openshift.io, oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io

      If the output shows EncryptionInProgress, this means that encryption is still in progress. Wait a few minutes and try again.

    2. Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully encrypted:

      $ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'

      The output shows EncryptionCompleted upon successful encryption:

      All resources encrypted: secrets, configmaps

      If the output shows EncryptionInProgress, this means that encryption is still in progress. Wait a few minutes and try again.
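
To check both API servers in one pass, you can loop over the two resources with the same jsonpath query used in the previous steps; a minimal sketch:

$ for api in openshiftapiserver kubeapiserver; do oc get "${api}" -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'; done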

9.3. Disabling etcd encryption

You can disable encryption of etcd data in your cluster.


Prerequisites

  • Access to the cluster as a user with the cluster-admin role.


Procedure

  1. Modify the API server object:

    $ oc edit apiserver
  2. Set the encryption field type to identity:

    spec:
      encryption:
        type: identity 1
    The identity type is the default value and means that no encryption is performed.
  3. Save the file to apply the changes.

    The decryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster.

  4. Verify that etcd decryption was successful.

    1. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully decrypted:

      $ oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'

      The output shows DecryptionCompleted upon successful decryption:

      Encryption mode set to identity and everything is decrypted

      If the output shows DecryptionInProgress, this means that decryption is still in progress. Wait a few minutes and try again.

    2. Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully decrypted:

      $ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}'

      The output shows DecryptionCompleted upon successful decryption:

      Encryption mode set to identity and everything is decrypted

      If the output shows DecryptionInProgress, this means that decryption is still in progress. Wait a few minutes and try again.

Chapter 10. Scanning pods for vulnerabilities

Using the Container Security Operator (CSO), you can access vulnerability scan results from the OpenShift Container Platform web console for container images used in active pods on the cluster. The CSO:

  • Watches containers associated with pods on all or specified namespaces
  • Queries the container registry where the containers came from for vulnerability information, provided an image’s registry is running image scanning (such as Quay.io or a Red Hat Quay registry with Clair scanning)
  • Exposes vulnerabilities via the ImageManifestVuln object in the Kubernetes API

Following the instructions here installs the CSO in the openshift-operators namespace, so it is available to all namespaces on your OpenShift cluster.

10.1. Running the Container Security Operator

You can start the Container Security Operator from the OpenShift Container Platform web console by selecting and installing that Operator from the Operator Hub, as described here.


Prerequisites

  • Have administrator privileges to the OpenShift Container Platform cluster
  • Have containers that come from a Red Hat Quay or Quay.io registry running on your cluster


Procedure

  1. Navigate to Operators → OperatorHub and select Security.
  2. Select the Container Security Operator, then select Install to go to the Create Operator Subscription page.
  3. Check the settings. All namespaces and automatic approval strategy are selected by default.
  4. Select Install. The Container Security Operator appears after a few moments on the Installed Operators screen.
  5. Optionally, you can add custom certificates to the CSO. In this example, create a certificate named quay.crt in the current directory. Then run the following command to add the cert to the CSO:

    $ oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators
  6. If you added a custom certificate, restart the Operator pod for the new certs to take effect.
  7. Open the OpenShift Dashboard (Home → Overview). A link to Quay Image Security appears under the status section, with a listing of the number of vulnerabilities found so far. Select the link to see a Quay Image Security breakdown, as shown in the following figure:

    Access image scanning data from OpenShift Container Platform dashboard

  8. You can do one of two things at this point to follow up on any detected vulnerabilities:

    • Select the link to the vulnerability. You are taken to the container registry that the container came from, where you can see information about the vulnerability. The following figure shows an example of detected vulnerabilities from a registry:

      The CSO points you to a registry containing the vulnerable image

    • Select the namespaces link to go to the ImageManifestVuln screen, where you can see the name of the selected image and all namespaces where that image is running. The following figure indicates that a particular vulnerable image is running in the quay-enterprise namespace:

      View namespaces a vulnerable image is running in

At this point, you know what images are vulnerable, what you need to do to fix those vulnerabilities, and every namespace that the image was run in. So you can:

  • Alert anyone running the image that they need to correct the vulnerability
  • Stop the images from running by deleting the deployment or other object that started the pod that the image is in

Note that if you do delete the pod, it may take several minutes for the vulnerability to reset on the dashboard.
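
Before deleting anything, it can help to list every pod that still references the vulnerable image, independent of the CSO view. A minimal sketch; the image reference at the end is a placeholder to replace with the image reported on the ImageManifestVuln screen:

$ oc get pods --all-namespaces \
    -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}' \
    | grep 'example.registry/myorg/myimage'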

10.2. Querying image vulnerabilities from the CLI

Using the oc command, you can display information about vulnerabilities detected by the Container Security Operator.


Prerequisites

  • Be running the Container Security Operator on your OpenShift Container Platform instance


Procedure

  • To query for detected container image vulnerabilities, type:

    $ oc get vuln --all-namespaces

    Example output

    NAMESPACE     NAME              AGE
    default       sha256.ca90...    6m56s
    skynet        sha256.ca90...    9m37s

  • To display details for a particular vulnerability, provide the vulnerability name and its namespace to the oc describe command. This example shows an active container whose image includes an RPM package with a vulnerability:

    $ oc describe vuln --namespace mynamespace sha256.ac50e3752...

    Example output

    Name:         sha256.ac50e3752...
    Namespace:    quay-enterprise
        Name:            nss-util
        Namespace Name:  centos:7
        Version:         3.44.0-3.el7
        Versionformat:   rpm
          Description: Network Security Services (NSS) is a set of libraries...
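
If you want a quick summary of how many vulnerable images the CSO has flagged per namespace, you can post-process the oc get vuln listing with standard shell tools; a minimal sketch:

$ oc get vuln --all-namespaces --no-headers | awk '{print $1}' | sort | uniq -c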

Legal Notice

Copyright © 2020 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution-Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.