BSI Quick Check: Guidance to ensure that your OCP deployment complies with BSI IT-Grundschutz blocks SYS.1.6 Containerization and APP.4.4 Kubernetes

Ansgar Kückes, Steffen Lützenkirchen

Updated: 2023-07-24

Summary

This article explains the considerations needed to ensure that your OCP deployment complies with BSI (Bundesamt für Sicherheit in der Informationstechnik, Germany’s Federal Office for Information Security) IT-Grundschutz blocks SYS.1.6 Containerization and APP.4.4 Kubernetes. Some of these considerations are addressed by the features of the product itself, and some must be implemented organizationally. It also describes how other parts of OpenShift Platform Plus can be used in conjunction with OpenShift Container Platform to facilitate compliance.

Read the original German-language document (DOCX format)

Contents

Disclaimer

Definitions

Block SYS.1.6 Containerization

Block APP.4.4 Kubernetes

Disclaimer

We would like to point out that the implementation of the measures described in this document and the use of the technologies mentioned do not guarantee compliance with the BSI guidelines. Rather, the document is intended as a starting point to define the necessary measures depending on the respective organizational and technological requirements, and the respective protection needs.

Any liability for the completeness, accuracy, timeliness or reliability of the content provided is excluded.

Copyright © 2023 Red Hat, Inc. Red Hat, Red Hat Enterprise Linux, the Red Hat logo and JBoss are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the USA and other countries.

Definitions

This document attempts to follow the terminology used in the BSI modules. In some places, additional terms are needed for clarification. Terms not defined in the [Kubernetes glossary] are defined in the following table:

Term: Definition

ACM / Advanced Cluster Management for Kubernetes: A Red Hat product that enables the management of multiple clusters at the same time using policies and supports standardized operation of container clusters.

ACS / Advanced Cluster Security for Kubernetes: A Red Hat product that supports compliance with security configurations through policies and compliance rules.

Compliance Operator: The OpenShift Compliance Operator checks nodes and the platform itself (API resources) and compares the results against a definable compliance profile (tailored profile).

CSI / Container Storage Interface: An API specification that enables the integration of different storage solutions within Kubernetes/OpenShift using plug-ins.

Infra node: A special compute node that is used exclusively for non-application-related tasks (infrastructure tasks).

OADP / OpenShift APIs for Data Protection: An operator that provides various APIs that can be used to back up and restore cluster resources (YAML), internal images, and persistent volume data.

OpenShift GitOps: GitOps functionality integrated into OpenShift to implement continuous deployment processes.

OpenShift Sandboxes: Isolated runtime environments provided in OpenShift based on hypervisors (based on Kata Containers).

Project (client): OpenShift manages clients within a cluster in the form of so-called “Projects”. Each project has, among other things, its own namespace and its own administration role. Projects are encapsulated from each other and cannot access another client's resources without explicit permission. Typically, a separate project (client) is used for each application.

Prometheus: Monitoring tool for exposing system performance data.

Quay: A Red Hat product that provides an enterprise registry.

RHCOS / Red Hat CoreOS: An operating system designed for running containers that follows immutability principles.

SBOM / Software Bill of Materials: A machine-readable document that lists and makes verifiable the individual software artifacts contained in a piece of software.

Active network: The network in which the (compute) nodes are placed.

Worker node: A compute node on which an application or an application service runs.

Block SYS.1.6 Containerization

SYS.1.6 Containerization

Consulted persons:

Last update:

BSI implementation instructions:

SYS.1.6.A1 Planning the use of containers Basic requirement
Status: Implementation by: Responsible:

Before containers are deployed, the goal of the container deployment (e.g. scaling, availability, disposable containers for security or CI/CD) MUST first be determined so that all security-related aspects of installation, operation and decommissioning can be planned.

This requirement must be implemented organizationally.

When planning, the operating costs that arise from container use or mixed operation SHOULD also be taken into account.

This requirement must be implemented organizationally.

The planning MUST be adequately documented.

This requirement must be implemented organizationally.

OpenShift supports all of the goals mentioned. Comprehensive guides are available for planning and documenting container use, security and compliance, architecture, and installation on OpenShift. [SecGuide]

SYS.1.6.A2 Planning the management of containers Basic requirement
Status: Implementation by: Responsible:

The containers MAY ONLY be managed after appropriate planning.

This requirement must be implemented organizationally.

This planning MUST cover the entire life cycle from commissioning to decommissioning, including operation and updates.

This requirement must be implemented organizationally.

Through OpenShift GitOps, OpenShift technically supports this requirement with a standardized approach to deployment, change handling and deprovisioning via kustomize or Helm charts. OpenShift provides further support through operator-based applications and platform management that automates the processes of commissioning, decommissioning and updates.

When planning administration, it MUST be taken into account that the creator of a container should be viewed in part like an administrator due to the impact on operations.

This requirement must be implemented organizationally.

Starting, stopping and monitoring the containers MUST be done via the management software used.

Starting, stopping and monitoring containers are basic functions of OpenShift. It is not possible to bypass the OpenShift mechanisms for starting and stopping. For monitoring, OpenShift itself offers Prometheus-based monitoring. Using Advanced Cluster Security for Kubernetes (ACS), policy-based rules can additionally be used to monitor the containers.

SYS.1.6.A3 Secure deployment of containerized IT systems Basic requirement
Status: Implementation by: Responsible:

For containerized IT systems, it MUST be taken into account how containerization affects the IT systems and applications being operated, in particular the administration and suitability of the applications.

This requirement must be implemented organizationally.

Note: This requirement is actively supported by OpenShift. For example, OpenShift by default does not allow applications with fixed UID/GID settings; instead, these IDs are assigned dynamically (security-by-design). Administrators can adjust this behavior, for example for system tasks.

Based on the protection needs of the applications, it MUST be checked whether the requirements for isolation and encapsulation of the containerized IT systems and the virtual networks as well as the applications operated are sufficiently met.

This requirement must be implemented organizationally.

Note: OpenShift supports the requirements through strict client separation based on “Projects” (an extension of the Kubernetes namespace). The containers are separated from each other and from the host system via cgroups and SELinux. As long as all applications run under the “restricted” Security Context Constraint (SCC), OpenShift maintains strict client separation.

The operating system's own mechanisms SHOULD be included in this test.

This requirement must be implemented organizationally.

OpenShift supports this requirement by leveraging SELinux and cgroups to create the container sandbox.


For virtual networks, the host performs the function of a network component. The building blocks of the sub-layers NET.1 networks and NET.3 network components MUST be taken into account accordingly.

This requirement must be implemented organizationally.

Logical and overlay networks MUST also be considered and modeled.

This requirement must be implemented organizationally.

Note: OpenShift supports different network infrastructures via the CNI plug-in interface (e.g. OVN-Kubernetes, OpenShift SDN, hardware networks). The underlying network is abstracted by the network model in OpenShift and usage is consistent across containers. This allows OpenShift to uniformly implement network security features such as firewall rules via network policies.

Furthermore, the containerized IT systems used MUST meet the requirements for availability and data throughput.

This requirement must be implemented organizationally.

Note: OpenShift provides fine-grained metrics for external capacity management via monitoring.

During ongoing operations, the performance and condition of the containerized IT systems SHOULD be monitored (so-called health checks).

OpenShift offers automated checks for the availability and health of an application. If the LivenessProbe (Health) repeatedly receives a negative result or is not reachable, the affected pod with the container is restarted. Using ReadinessProbe, a container can control whether it is ready to accept HTTP(S) based requests or not.

Note: Monitoring is considered in APP.4.4.A11.
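To illustrate the health checks described above, here is a minimal sketch of liveness and readiness probes in a pod specification; the image name and HTTP endpoints are hypothetical and would need to match the application:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: registry.example.com/my-app:1.0   # hypothetical image
    ports:
    - containerPort: 8080
    livenessProbe:           # pod is restarted if this check fails repeatedly
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
      failureThreshold: 3
    readinessProbe:          # pod only receives traffic while this check succeeds
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```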

SYS.1.6.A4 Planning the deployment and distribution of images Basic requirement
Status: Implementation by: Responsible:

The process for deploying and distributing images MUST be planned and appropriately documented.

This requirement must be implemented organizationally.

Note: OpenShift supports the requirement through built-in functionality and enables a high degree of automation. On the one hand, CI/CD tools are delivered with OpenShift Pipelines and integrated into the platform. On the other hand, pre-configured build processes based on Red Hat experience and built on Source-to-Image (S2I) are available, which support planning.

The built-in registry allows you to store images and other associated information, such as Helm charts or SBOMs.

The abstractions available in OpenShift allow the entire image distribution process to be documented and controlled as code. This also allows the image distribution process to be managed via OpenShift GitOps.

SYS.1.6.A5 Separation of the administration and access networks for containers Basic requirement
Status: Implementation by: Responsible:

The networks for the administration of the host, the administration of the containers and their access networks MUST be separated appropriately to the protection requirements.

Hosts and containers are controlled via the Kubernetes API. This is addressed via api.<cluster-fqdn>. The load balancer used for this is located in the administration network. The load balancer for *.apps.<cluster-fqdn> is set up separately in the active network. This means that the administration is appropriately separated.

The Console (the OpenShift web UI) is used by all users. Authorization takes place at the API level and is secured via RBAC.

The control plane is to be located in an administration network.

In principle, at least, administration of the host SHOULD only be possible from the administration network.

The web UI can be configured on an additional router that terminates on the administration load balancer and is therefore only accessible from the administration network. This means that it can no longer be reached from the active network.

Only the communication relationships necessary for operation SHOULD be permitted.

This is a standard OpenShift feature. The OpenShift documentation [OpenShiftDocs] contains the necessary communication paths between control plane, infrastructure and worker nodes, as well as the necessary firewall activations of the underlying network stack at hardware or IaaS level. Communication between containers or pods within a client (“Project”) is not restricted by default, but can be regulated with micro-segmentation if necessary, or implemented as a service mesh with mTLS authentication (see APP.4.4.A18).

Externally exposed services can receive their own IP and thus data traffic can also be separated outside the platform. Inter-node communication is carried out via suitable tunnel protocols (VXLAN, GENEVE) and can also be encrypted using IPSec.
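As a sketch of the micro-segmentation mentioned above, a default-deny network policy can be combined with an explicit allow rule; the namespace and pod labels are hypothetical:

```yaml
# Deny all ingress traffic in the namespace by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app          # hypothetical namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
# Explicitly allow only the frontend to reach the backend on port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: my-app
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```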

SYS.1.6.A6 Use of secure images Standard requirement
Status: Implementation by: Responsible:

It MUST be ensured that all images used only come from trustworthy sources.

This requirement must be implemented organizationally.

Note: OpenShift supports the requirement by allowing only certain sources. This allows the sources from which images come to be restricted and new sources to be added in a controlled process.

Quay can also be used to provide your own registry in an open environment as a trustworthy delivery point for external software. Images can also be checked for vulnerabilities here.
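A minimal sketch of restricting image sources via the cluster-wide Image configuration resource; the registry list is an assumption and would need to match the organization's trusted sources:

```yaml
# Restrict the registries from which images may be pulled.
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  registrySources:
    allowedRegistries:
    - registry.redhat.io                                 # Red Hat registry
    - quay.io                                            # hypothetical trusted source
    - image-registry.openshift-image-registry.svc:5000   # internal registry
```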

The creator of the images MUST be clearly identifiable.

This requirement must be implemented organizationally.

Note: OpenShift makes it possible to verify the signatures of images before use and thus enforce the identification requirement. Red Hat Advanced Cluster Security for Kubernetes (ACS) can check and optionally enforce signatures as well as certain labels (e.g. MAINTAINER) for images.

For images delivered by Red Hat via the official Red Hat Registry, the MAINTAINER label of the container images is always maintained, through which Red Hat can be identified as the creator of the images. Images are also signed with GPG keys.

The source MUST be selected so that the creator of the image regularly checks the included software for security problems, fixes and documents them, and assures its customers of this.

This requirement must be implemented organizationally.

Note: Images from the Red Hat Registry are regularly checked for security vulnerabilities and updated accordingly. The security status of the images is indicated via a health indicator.
ACS can perform technical checks through regular scans and report conspicuous containers or containers with identified vulnerabilities, thereby supporting the implementation of the requirement.

The version of base images used MUST NOT be deprecated.

This requirement must be implemented organizationally.

Note: For discontinued images with appropriate identification (e.g. through labels), policies implemented in ACS can report these violations. ACS also provides policies that report when images have not been scanned for more than 30/60/90 days. However, this means that an image must be built and rolled out at this interval so that the scans during the build process are effective. With a CI/CD pipeline with a high level of automation, this usually does not represent any increased effort.

Unique version numbers MUST be provided.

This requirement must be implemented organizationally.

Note: Image-level labels or image tags could be used here.

If an image with a newer version number is available, patch and change management MUST check whether and how it can be rolled out.

This requirement must be implemented organizationally for integrated software or self-created software.

Note: OpenShift supports this via the Operator Lifecycle Manager (OLM). Software managed using OLM or a cluster operator receives updates via the OperatorHub or cluster updates. An automated check with automatic or, alternatively, manual release is possible for this software.

SYS.1.6.A7 Persistence of container logging data Standard requirement
Status: Implementation by: Responsible:

Storage of container logging data MUST occur outside of the container, at least on the container host.

OpenShift Logging stores the containers' logging data in a separate log management system. The containers' output is captured as long as it is written to STDOUT and STDERR. The logging data can also be forwarded to different log stores depending on the source (e.g. infrastructure or application).

It is the responsibility of the applications to write log output to STDOUT and errors to STDERR.
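As a hedged sketch, log forwarding to an external log system could be configured via the ClusterLogForwarder resource of OpenShift Logging; the syslog endpoint is hypothetical:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: central-syslog
    type: syslog
    url: tls://syslog.example.com:6514   # hypothetical external log host
  pipelines:
  - name: forward-logs
    inputRefs:
    - application        # forward application logs ...
    - infrastructure     # ... and infrastructure logs
    outputRefs:
    - central-syslog
```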

SYS.1.6.A8 Secure storage of access data for containers Standard requirement
Status: Implementation by: Responsible:

Credentials MUST be stored and managed so that only authorized people and containers can access them.

OpenShift offers secrets that are only available to the containers and the people authorized via RBAC in the tenant or project (client).

In particular, it MUST be ensured that access data is only stored in specially protected locations and not in the images.

This requirement must be enforced as part of application development. OpenShift offers suitable mechanisms (secrets) with encryption of the etcd store if necessary.

The credential management mechanisms provided by the container service management software SHOULD be used.

OpenShift offers corresponding mechanisms (secrets). Unless the secrets are dynamically generated, third-party/community tools such as SealedSecrets or HashiCorp Vault can help securely deploy the secrets.

At least the following credentials MUST be stored securely:

  • passwords of all accounts,

  • API keys for services used by the application,

  • keys for symmetric encryption, and

  • private keys for public-key authentication.

This requirement must be implemented organizationally.

All of this information can and should be managed in Secrets.
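For illustration, a minimal sketch of storing an API key in a Secret and exposing it to a container as an environment variable; all names are hypothetical:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
  namespace: my-app         # hypothetical project/namespace
type: Opaque
stringData:
  api-key: "REPLACE_ME"     # never bake credentials into the image
---
apiVersion: v1
kind: Pod
metadata:
  name: example-app
  namespace: my-app
spec:
  containers:
  - name: app
    image: registry.example.com/my-app:1.0
    env:
    - name: API_KEY
      valueFrom:
        secretKeyRef:
          name: app-credentials
          key: api-key
```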

SYS.1.6.A9 Suitability for container operation Standard requirement
Status: Implementation by: Responsible:

The application or service that is to be operated in the container SHOULD be suitable for container operation.

This requirement must be implemented organizationally.

It SHOULD be taken into account that containers can more often terminate unexpectedly for the application running within them.

This requirement must be ensured as part of application development.

The results of the test according to SYS.1.6.A3 Secure use of containerized IT systems SHOULD be documented in a comprehensible manner.

This requirement must be implemented organizationally.

Suppliers must be contractually obliged to comply.

SYS.1.6.A10 Policy for images and container operations Standard requirement
Status: Implementation by: Responsible:

A policy SHOULD be created and applied that specifies the requirements for the operation of the containers and the images allowed. The policy SHOULD also include requirements for operating and deploying the images.

This requirement must be implemented organizationally.

ACS and ACM can support the implementation of the policy. Technical parts of the policy can also be defined using SCCs (Security Context Constraints) and enforced natively in OpenShift. OpenShift already contains various SCCs by default, which can serve as the basis for the technical part of the policy.

SYS.1.6.A11 Only one service per container Standard requirement
Status: Implementation by: Responsible:

Each container SHOULD only provide one service at a time.

This requirement must be solved as part of application development.

ACS can check or enforce this rule using a policy.

SYS.1.6.A12 Distribution of secure images Standard requirement
Status: Implementation by: Responsible:

There SHOULD be adequate documentation of which image sources have been classified as trustworthy and why.

This requirement must be implemented organizationally.

In addition, the process SHOULD be adequately documented as to how images or the software components contained in the image are obtained from trustworthy sources and ultimately made available for production use.

This requirement must be implemented organizationally.

The images used SHOULD have metadata that makes the function and history of the image understandable.

This requirement is addressed using image labels. Red Hat images contain the labels io.k8s.description, summary, vendor, version, url, vcs-ref and vcs-type, which make the delivered images transparent in their function and history. For internal images, the existence of the labels can be ensured during application development.

The existence of the corresponding labels can be ensured via ACS.

Digital signatures SHOULD secure every image against change.

OpenShift can be configured to require a digital signature for each approved registry. OpenShift then only executes images from such a registry that are secured with this signature.

Images delivered by Red Hat via the official Red Hat Registry are signed with GPG keys.

SYS.1.6.A13 Release of images Standard requirement
Status: Implementation by: Responsible:

Like software products, all images for production use SHOULD go through a testing and release process in accordance with module OPS.1.1.6 Software testing and releases.

This requirement must be solved organizationally.

Note: OpenShift offers various CI/CD solutions that can be used for automation. OpenShift Pipelines (Tekton-based) and traditional Jenkins are available directly in OpenShift. If GitLab CI or GitHub Actions is used, the runners can be executed in OpenShift. If the release process requires specific artifacts such as SBOMs, or the ability to statically analyze Dockerfiles, Quay and ACS can provide the necessary functionality.

SYS.1.6.A14 Updating images Standard requirement
Status: Implementation by: Responsible:

When creating the concept for patch and change management in accordance with OPS.1.1.3 Patch and change management, it SHOULD be decided when and how the updates to the images or the software or service operated will be rolled out.

This requirement must be solved organizationally.

Note: Best practices use multiple environments (either separate clusters or multiple namespaces on a cluster) to support this process and enable automated testing (e.g. via OpenShift Pipelines or Jenkins).

For persistent containers, it SHOULD be checked whether, in exceptional cases, an update of the respective container is more suitable than completely re-provisioning the container.

Note: “Persistent” containers contradict the cloud-native principle and do not represent good practice. There is also a contradiction with APP.4.4.A21 “Regular restart of pods”. Accordingly, OpenShift does not support updates at the container level. Changes to the container image always result in the pod being stopped and a new pod being started. With the recommended use of GitOps, this is a re-provisioning of the changed elements and also documents the state of the application at a given point in time. Due to the high level of automation, this usually does not involve any increased effort.

SYS.1.6.A15 Limitation of resources per container Standard requirement
Status: Implementation by: Responsible:

For each container, resources on the host system, such as CPU, volatile and persistent memory, and network bandwidth, SHOULD be appropriately reserved and limited.

OpenShift supports the configuration of quotas for a project (client). Applications can have their resources appropriately limited using limits/requests.

Network bandwidth is limited at the pod level and can be determined separately according to incoming and outgoing network bandwidth. In addition, outgoing traffic (egress) can be marked at the namespace level with differentiated services code point (DSCP) classifications in order to assign quality of service classes to the outgoing packets in the physical network.
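A minimal sketch of such limits, with a quota at the project level plus requests/limits and bandwidth annotations at the pod level; all values are assumptions:

```yaml
# Project-level quota limiting total resource consumption.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota
  namespace: my-app
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
# Pod-level reservation (requests) and ceiling (limits),
# plus ingress/egress bandwidth limits via annotations.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
  namespace: my-app
  annotations:
    kubernetes.io/ingress-bandwidth: 10M
    kubernetes.io/egress-bandwidth: 10M
spec:
  containers:
  - name: app
    image: registry.example.com/my-app:1.0   # hypothetical image
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        cpu: "1"
        memory: 512Mi
```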

It SHOULD be defined and documented how the system reacts if these limitations are exceeded.

This requirement must be implemented organizationally.

Note: The behavior of OpenShift fully replicates the standard behavior of Kubernetes. If CPU limits are exceeded, the process is throttled. If volatile memory is exceeded, the process is stopped and restarted by the scheduler. Exceeding persistent storage is handled by the persistent storage management itself; OpenShift does not enforce or limit anything here. Compliance with the configured network bandwidth is enforced by dropping packets that exceed the limit.

SYS.1.6.A16 Remote administrative access to containers Standard requirement
Status: Implementation by: Responsible:

Administrative access from a container to the container host and vice versa SHOULD in principle be viewed as administrative remote access.

Application containers can only access administrative services remotely. Privileged containers can gain access to the host, the host's file system, or the host's network. This is necessary, for example, for the infrastructure services of OpenShift (ingress router). Normal applications (application containers) may not receive such permissions.

There SHOULD NOT be remote administrative access to the container host from a container.

This requirement must be partially implemented organizationally and should be part of the policy defined in SYS.1.6.A10. There may be exceptions for applications that should or need to make configurations to Kubernetes resources; these have administrative remote access to the corresponding Kubernetes resources. Remote access is controlled by Kubernetes and secured via the Kubernetes functionality (see module APP.4.4). The operating system, including Mandatory Access Control, is optimized as a runtime environment for Kubernetes. In general, it is possible to limit the provision or post-installation of remote access programs in the container.

Application containers SHOULD not contain any remote maintenance access.

This requirement should also be included in the policy described in SYS.1.6.A10. OpenShift only allows access to the configured ports. A container that provides remote maintenance access via these ports must not be approved for release. Application containers should be administered exclusively via the container runtime. Using a policy, known remote access ports (e.g. 22, RDP, etc.) can be reported via ACS and their use prevented.

Administrative access to application containers SHOULD always take place via the container runtime.

This is standard in OpenShift environments. OpenShift offers a terminal login via the oc administration tool. Communication runs via the control plane to the container and is both authenticated and authorized.

SYS.1.6.A17 Execution of containers without privileges Standard requirement
Status: Implementation by: Responsible:

The container runtime and all instantiated containers SHOULD only be run by a non-privileged system account that does not have or can obtain elevated rights to the container service or the host system's operating system.

With OpenShift, application containers run in the Security Context Constraint (SCC) “restricted” by default.

The container runtime SHOULD be encapsulated through additional measures, such as using CPU virtualization extensions.

OpenShift supports encapsulation by using SELinux. If necessary, entire nodes can also be encapsulated via underlying virtualization. This is always necessary when application containers require extended security context constraints (SCCs).

With the sandbox function based on Kata Containers, OpenShift provides a convenient way to isolate containers using virtualization technology.

If containers are to take over tasks of the host system in exceptional cases, the privileges on the host system SHOULD be limited to the necessary minimum.

OpenShift offers several SCCs to restrict access to the network, file system or the host itself. These should only be allowed for administrative applications such as SIEM scanners or other infrastructure services that require access to the host. These SCCs should never be given to application containers.

Exceptions SHOULD be appropriately documented.

These exceptions must be documented in the operational documentation. A list of pods with the corresponding SCC annotation can serve as the basis for the documentation.

SYS.1.6.A18 Application services accounts Standard requirement
Status: Implementation by: Responsible:

The system accounts within a container SHOULD not have permissions on the host system.

With OpenShift, accounts within the container are separated from the host system by SELinux. This includes preventing the use of privileged user and group IDs as well as corresponding privilege escalation (set-UID and set-GID bits). A range of UIDs/GIDs is provided for use in containers.

Where this authorization is necessary for operational reasons, it SHOULD only apply to absolutely necessary data and system access.

Security Context Constraints (SCCs) allow accounts in the container to gain controlled access.

The account in the container that is necessary for this data exchange SHOULD be known in the host system.

The host system Red Hat CoreOS is immutable. Changes to the host operating system should only be made by OpenShift itself and only where necessary, so that the hardening provided by Red Hat is not inadvertently undermined.

Since, in contrast to an unprotected container runtime environment, SELinux enforces the separation between the container runtime and the operating system, this mirroring of account names is not necessary.

SYS.1.6.A19 Integrating data stores into containers Standard requirement
Status: Implementation by: Responsible:

The containers SHOULD ONLY be able to access the mass storage and directories necessary for operation.

Applications can access persistent volumes (PVs) and temporary (ephemeral) storage in OpenShift. Persistent volumes are attached as network storage; ephemeral storage serves primarily as volatile, short-lived mass storage and is allocated within the container file system. The configuration determines which PVs can be reached, and the use of ephemeral storage is separated per pod. This means that each pod has its own volatile mass storage. Volumes can be limited in size.
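As a sketch, a persistent volume claim plus a size-limited ephemeral volume; storage sizes, names and paths are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  namespace: my-app
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: example-app
  namespace: my-app
spec:
  containers:
  - name: app
    image: registry.example.com/my-app:1.0
    volumeMounts:
    - name: data
      mountPath: /var/lib/app
    - name: scratch
      mountPath: /tmp/scratch
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data
  - name: scratch
    emptyDir:
      sizeLimit: 1Gi     # volatile, per-pod scratch space
```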

Permissions SHOULD be granted explicitly only if they are needed.

OpenShift implements the principle of least privilege. The definition is made via an explicit configuration at the deployment level.

If the container runtime includes local storage for a container, the access rights in the file system SHOULD be restricted to the container's service account.

By default, no local storage is included. For reasons of reliability, this is explicitly not recommended.

If network storage is used, the permissions SHOULD be set on the network storage itself.

The network storage dictates the permissions. OpenShift supports this with the dynamically assigned UID/GID of the projects (clients).

SYS.1.6.A20 Securing configuration data Standard requirement
Status: Implementation by: Responsible:

The description of the container configuration data SHOULD be versioned.

OpenShift only maintains the current version of the configuration. It is therefore recommended to use GitOps, in which the configuration is transferred from a Git repository to the OpenShift cluster. OpenShift includes OpenShift GitOps (based on the community project Argo CD), which supports easy implementation of a GitOps-based administration concept.

Changes SHOULD be clearly documented.

With a GitOps approach, all changes are documented in git.
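A hedged sketch of such a setup with an OpenShift GitOps (Argo CD) Application resource; repository URL, paths and namespaces are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-config
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://git.example.com/org/app-config.git  # hypothetical repository
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual changes back to the Git state
```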

SYS.1.6.A21 Advanced security policies Requirements for increased protection needs
Status: Implementation by: Responsible:

Advanced policies SHOULD limit container permissions.

By default, OpenShift restricts the containers' permissions (security-by-default).

Mandatory Access Control (MAC) or comparable technology SHOULD enforce these policies.

OpenShift already uses SELinux Mandatory Access Control to restrict permissions by default. Using the Security Profiles Operator [SecurityProfile], workload-dependent SELinux and Seccomp profiles can be created and managed.
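As an illustrative sketch, a SeccompProfile managed by the Security Profiles Operator; the syscall allow-list here is an assumption and would need to be derived per workload:

```yaml
apiVersion: security-profiles-operator.x-k8s.io/v1beta1
kind: SeccompProfile
metadata:
  name: restricted-workload
  namespace: my-app
spec:
  defaultAction: SCMP_ACT_ERRNO    # deny everything not explicitly allowed
  syscalls:
  - action: SCMP_ACT_ALLOW
    names:
    - read
    - write
    - openat
    - close
    - exit_group
    - futex
    - epoll_wait
```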

Policies SHOULD restrict at least the following access:

  • incoming and outgoing network connections,

  • file system accesses and

  • kernel requests (syscalls).

These permissions are managed in OpenShift and controlled via Security Context Constraints (SCCs). For tool-based policy management, ACS or Red Hat Advanced Cluster Management (ACM) (with Kyverno or Open Policy Agent) can be used.

The runtime SHOULD start the containers in such a way that the host system kernel prevents all activities of the containers that are not permitted by the policy (e.g. by setting up local packet filters or revoking permissions) or at least appropriately reports violations.

OpenShift already meets this requirement as standard (security-by-design).

SYS.1.6.A22 Provision for examinations Requirements for increased protection needs
Status: Implementation by: Responsible:

In order to have containers available for later investigation if necessary, an image of the state SHOULD be created according to defined rules.

The OpenShift container runtime environment used does not provide a function for creating a memory image of a running container. The running containers can be listed and different parameters can be queried and saved for them. Further data (such as running processes) can be queried via the host. Using the operating system, memory dumps (core dump) or file system data (ephemeral and persistent) can also be backed up. The memory dumps can also be created with third-party operators [CoreDump].

SYS.1.6.A23 Container immutability Requirements for increased protection needs
Status: Implementation by: Responsible:

Containers SHOULD not be able to change their file system at runtime.

This requirement must be implemented organizationally.

Note: By default, Red Hat recommends building containers so that the runtime UID does not have write permissions in the container. If the file system is changed (e.g. for a file-system-based cache), this change will be lost on restart, as the immutable image is loaded again.

File systems SHOULD not be mounted with write permissions.

By default, local file systems are not mounted in containers. Containers access PVs that are integrated via OpenShift. This fulfills the requirement. Alternatively, ephemeral volumes can be used as volatile storage.

The container's root file system can be restricted to ReadOnly via the SecurityContext. Verification of this configuration can be carried out using ACS.
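A minimal sketch of this SecurityContext setting at the container level, with a writable scratch volume where the application needs one; names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: registry.example.com/my-app:1.0   # hypothetical image
    securityContext:
      readOnlyRootFilesystem: true   # container cannot modify its root filesystem
    volumeMounts:
    - name: tmp
      mountPath: /tmp                # writable scratch space, if needed
  volumes:
  - name: tmp
    emptyDir: {}
```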

SYS.1.6.A24 Host-based intrusion detection Requirements for increased protection needs
Status: Implementation by: Responsible:

The behavior of the containers and the applications or services operating within them SHOULD be monitored.

ACS offers policies that monitor behavior. Baselining enables the definition of the desired behavior, and policies enable the reaction to undesirable behavior (i.e. behavior not present in the baseline).

Deviations from normal behavior SHOULD be noticed and reported.

The policies provided by ACS alert via OpenShift Monitoring. Furthermore, ACS maintains a history of all violations.

Reports SHOULD be handled appropriately in the central security incident handling process.

This requirement must be implemented organizationally.

Note: The alerts from OpenShift monitoring must be forwarded to the system used by the central process for handling security incidents. The usual Alertmanager methods are available for this. OpenShift provides email and Slack integration. Further integrations, such as Microsoft Teams, are available from the community. If necessary, an integration can be developed that receives the Alertmanager webhook and forwards it appropriately to the external system.

The behavior to be monitored SHOULD include at least:

  • network connections,

  • created processes,

  • file system accesses and

  • kernel requests (syscalls).

At the host level, Red Hat CoreOS supports auditd, which is enabled by default. Policies for auditd can include network connections, created processes, file accesses and syscalls. Red Hat CoreOS provides many sample policies that cover all of the areas described.

ACS offers alerting on network connections, created processes and kernel requests. File access is not covered by ACS policies.

In addition, the files on the RHCOS nodes can be checked cryptographically using the Advanced Intrusion Detection Environment (AIDE) using the file integrity operator provided by Red Hat and changes to files can be detected [FileIntegrity].
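As a sketch, a FileIntegrity resource that activates the AIDE checks on the worker nodes; the grace period value is an assumption:

```yaml
apiVersion: fileintegrity.openshift.io/v1alpha1
kind: FileIntegrity
metadata:
  name: worker-fileintegrity
  namespace: openshift-file-integrity
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""   # check all worker nodes
  config:
    gracePeriod: 900                     # seconds between AIDE runs
```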

SYS.1.6.A25 High availability of containerized applications Requirements for increased protection needs
Status: Implementation by: Responsible:

If containerized applications have high availability requirements, it SHOULD be decided at which level availability should be implemented (e.g. redundant at the host level).

OpenShift offers this by default (replicas and pod anti-affinities). The applications must support this high availability. Clusters can also be distributed across multiple fire compartments (failure zones) within a region or location.

SYS.1.6.A26 Further isolation and encapsulation of containers Requirements for increased protection needs
Status: Implementation by: Responsible:

If further isolation and encapsulation of containers is required, the following measures SHOULD be examined based on increasing effectiveness:

  • fixed assignment of containers to container hosts,

  • execution of the individual containers and/or the container host with hypervisors,

  • fixed mapping of a single container to a single container host.

OpenShift offers the option of binding containers (in pods) to specific nodes using node labels and node selectors in the deployment descriptors. These can also be made available as virtual machines via hypervisors (via IaaS or via OpenShift Sandboxes). This implements all three assignments mentioned in the requirement.
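A minimal sketch of such a fixed assignment via node labels and a node selector; the label and names are hypothetical, and the target nodes would need to be labeled accordingly beforehand:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: isolated-app
  namespace: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: isolated-app
  template:
    metadata:
      labels:
        app: isolated-app
    spec:
      nodeSelector:
        workload-class: high-isolation   # pods only schedule onto nodes with this label
      containers:
      - name: app
        image: registry.example.com/my-app:1.0
```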

Block APP.4.4 Kubernetes

APP.4.4 Kubernetes

Consulted persons:

Last update:

BSI implementation instructions:

APP.4.4.A1 Planning the separation of applications Basic requirement
Status: Partially Implementation by: Responsible:

Before commissioning, it MUST be planned how the applications operated in the pods and their different test and production operating environments are separated. Based on the protection needs of the applications, the planning MUST determine which architecture of namespaces, meta tags, clusters and networks adequately addresses the risks and whether virtualized servers and networks should also be used.

The planning MUST contain regulations for network, CPU and permanent memory separation. The separation SHOULD also take into account the network zone concept and the protection requirements and be tailored to these.

Applications SHOULD each run in their own Kubernetes namespace that includes all of the application's programs. Only applications with similar protection needs and similar potential attack vectors SHOULD share a Kubernetes cluster.

These requirements must be implemented organizationally. OpenShift fully supports them.

OpenShift simplifies the implementation of the stated requirements for separating applications as well as development and production environments by setting up projects (tenants). Namespaces, networks/network separation, meta tags as well as CPU and memory separation are already configured by OpenShift as required (security-by-design). Special requirements for protection and network zone concepts can also be flexibly and easily mapped using additional measures. This particularly includes the ability to define application classes, operate in multiple, separate clusters, and automatically distribute workloads to protection zones and fire compartments. Particularly in the case of separate clusters, ACM can support rule-based distribution of applications using labels.

APP.4.4.A2 Planning automation with CI/CD Basic requirement
Status: Partially Implementation by: Responsible:

If automation of the operation of applications in Kubernetes takes place using CI/CD, it MUST ONLY be done after appropriate planning.

This requirement must be implemented organizationally.

Planning MUST cover the entire life cycle from commissioning to decommissioning, including development, testing, operation, monitoring and updates.

The protective measure is primarily of an organizational nature. OpenShift fully supports it. With the integrated CI/CD technologies Jenkins, Tekton and OpenShift GitOps, OpenShift already offers preconfigured solutions for automated CI/CD pipelines. Other technologies such as GitLab CI and GitHub Actions can of course also be integrated.

The role and rights concept as well as securing Kubernetes secrets MUST be part of the planning.

Kubernetes secrets are secured by a role-based access control (RBAC) system. Depending on the protection requirement, Kubernetes secrets can be secured via an (encrypted) etcd metadata store or additionally via an integration of Vault components or sealed secrets for CD and GitOps mechanisms.

Secrets and roles can also be managed centrally using ACM and rolled out consistently to the managed clusters using policies.

APP.4.4.A3 Identity and permission management in Kubernetes Basic requirement
Status: Implementation by: Responsible:

Kubernetes and all other control plane applications MUST authenticate and authorize every action by a user or, in automated operation, corresponding software, regardless of whether the actions take place via a client, a web interface or via an appropriate interface (API). Administrative actions MUST NOT be carried out anonymously.

In the default configuration, OpenShift restricts the use of the web console and APIs only to authenticated and authorized users. Connection to external directory services (LDAP, OIDC and others) is possible.

Each user MUST ONLY receive the absolutely necessary rights. Permissions without restrictions MUST be granted very restrictively.

Only a small group of people SHOULD be allowed to define automation processes.

OpenShift already offers roles for a least privilege concept. The RBAC roles can be adapted or supplemented with new roles. The preconfigured roles enable easy authorization assignment according to the least-privilege and need-to-know principles. User actions can be tracked via the audit log.
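For illustration, a minimal sketch of a namespace-local role and binding following the least-privilege principle; names, resources and verbs are assumptions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-viewer
  namespace: my-app
rules:
- apiGroups: ["", "apps"]
  resources: [pods, deployments]
  verbs: [get, list, watch]        # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-viewer-binding
  namespace: my-app
subjects:
- kind: User
  name: jane.doe                   # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-viewer
  apiGroup: rbac.authorization.k8s.io
```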

Only selected administrators SHOULD be given the right to create or change persistent volume shares in Kubernetes.

In the default configuration, persistent storage can only be integrated by cluster administrators. For dynamically provisioned storage, the corresponding provisioners have the necessary authorizations. These provisioners must be set up and configured by an admin. Storage requirements are controlled and restricted using quota mechanisms.

APP.4.4.A4 Separation of pods Basic requirement
Status: Implementation by: Responsible:

The operating system kernel of the nodes MUST have isolation mechanisms to limit the visibility and resource usage of the pods among themselves (see Linux namespaces and cgroups). The separation MUST include at least process IDs, inter-process communication, user IDs, file system and network including host name.

OpenShift uses Red Hat Enterprise Linux CoreOS, which is aimed at container operations, for the nodes. Optionally, Red Hat Enterprise Linux (RHEL) can also be used for worker nodes. In both configurations, CRI-O is the container runtime. At the system level in particular,

  • cgroups

  • Seccomp

  • SELinux in 'enforcing' mode

enforce the separation of the pods. OpenShift already operates according to the principle of least privilege and the need-to-know principle, and automatically applies these by default together with predefined security profiles (Security Context Constraints [SCC]) as part of security-by-design and security-by-default. The separation is already implemented in OpenShift; no further measures are usually required.

APP.4.4.A5 Data backup in the cluster Basic requirement
Status: Implementation by: Responsible:

The cluster MUST be backed up. Data backup MUST include:

  • permanent storage (Persistent Volumes),

  • configuration files from Kubernetes and other control plane programs,

  • the current state of the Kubernetes cluster including the extensions,

  • configuration databases, specifically here etcd,

  • all infrastructure applications that are necessary to operate the cluster and the services within it and

  • the data storage of the code and image registries.

Snapshots for the operation of the applications SHOULD also be considered. Snapshots MUST NOT replace data backup.

The data backup of a cluster must be individually defined in the system architecture as part of the operating model. The areas of responsibility for the container platform (cluster administration), the infrastructure services (system administration) and the application management (technical administration) should be considered separately.

For data backup as part of cluster administration (Kubernetes configuration, current state of the Kubernetes cluster, configuration database) the integrated functions or methods of OpenShift must be used. System administration and specialist administration must be carried out in accordance with the respective specifications.

Snapshots for persistent volumes are supported when using OpenShift's Container Storage Interface (CSI) drivers. OpenShift offers an easily configurable backup system with the OpenShift API for Data Protection (OADP).

Additional third-party solutions for backup are also available in the OperatorHub.
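A hedged sketch of a namespace backup via OADP, which builds on Velero; the namespace and retention period are assumptions:

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: my-app-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
  - my-app                  # hypothetical application namespace
  ttl: 720h0m0s             # retain the backup for 30 days
```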

APP.4.4.A6 Initialization of pods Standard requirement
Status: Implementation by: Responsible:

If initialization of an application, for example, takes place in the pod at the start, this SHOULD take place in its own init container. It SHOULD be ensured that the initialization terminates all processes that are already running. Kubernetes SHOULD ONLY start the additional containers if initialization is successful.

OpenShift provides the necessary resource configurations via Kubernetes. Kubernetes ensures the (process) dependencies between init containers and “normal” containers of a pod.

The requirement must be implemented by application development.

APP.4.4.A7 Separation of networks in Kubernetes Standard requirement
Status: Implementation by: Responsible:

The networks for the administration of the nodes, the control plane and the individual networks of the application services SHOULD be separated.

ONLY the network ports of the pods necessary for operation SHOULD be released into the intended networks. For multiple applications on a Kubernetes cluster, all network connections between the Kubernetes namespaces SHOULD initially be prohibited and only required network connections should be permitted (whitelisting). The network ports necessary for the administration of the nodes, the runtime and Kubernetes including its extensions SHOULD ONLY be accessible from the administration network and from pods that require them.

Only selected administrators SHOULD have permission in Kubernetes to manage the CNI and create or change rules for the network.

The requirements for restricting network ports and network connections between Kubernetes namespaces are already supported by OpenShift as standard using network policies and the option for default network policies (security by design).

The separation of the management network can also be implemented at the namespace level via network policies (incoming, the responsibility of the namespace administrator) and egress firewalls (outgoing, the responsibility of the cluster admins).

Externally exposed services can receive their own IP and thus data traffic can also be separated outside the platform. Inter-node communication is carried out via suitable tunnel protocols (VXLAN, GENEVE) and can also be encrypted using IPSec.

The determination of the necessary network policies for applications is supported by the network policy generator in ACS.
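To sketch the egress restriction mentioned above when using OVN-Kubernetes, an EgressFirewall resource can be used; the CIDRs are hypothetical:

```yaml
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default                    # must be named "default"
  namespace: my-app
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 10.0.20.0/24   # hypothetical database subnet
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0      # deny all other outgoing traffic
```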

APP.4.4.A8 Securing configuration files in Kubernetes Standard requirement
Status: Implementation by: Responsible:

The Kubernetes cluster configuration files, including all extensions and applications SHOULD be versioned and annotated.

Access rights to the management software of the configuration files SHOULD be assigned minimally. Access rights for reading and writing access to the control plane configuration files SHOULD be assigned and restricted particularly carefully.

This requirement must be implemented organizationally.

OpenShift is fully controlled via Kubernetes resources and Custom Resources (CR). All CRs that are executed after the initial cluster installation as part of "Day-1" or "Day-2" belong to the configuration files.

These CRs reside in system namespaces that only cluster administrators have access to.

Versioning is done using a version control system such as Git. Access restrictions must be implemented there. Red Hat OpenShift supports the rollout of configurations from Git, for example using OpenShift GitOps.

APP.4.4.A9 Use of Kubernetes service accounts Standard requirement
Status: Implementation by: Responsible:

Pods SHOULD NOT use the "default" service account. No rights SHOULD be granted to the “default” service account. Pods for different applications SHOULD each run under their own service accounts. Permissions for the service accounts of the application pods SHOULD be limited to those strictly necessary.

Pods that do not require a service account SHOULD not be able to view it or have access to corresponding tokens.

Only control plane pods and pods that absolutely need them SHOULD use privileged service accounts.

Automation programs SHOULD each receive their own tokens, even if they use a common service account due to similar tasks.

OpenShift has the necessary configuration options for this. On the OpenShift platform, the system namespaces (control plane and other system components) are generally only accessible to administrative users. System-side components also run with privileged permissions if necessary.

To implement the further requirements, please note the following when configuring the deployment:

  • Creating a specific service account for deployment

  • Definition of an individual role local to the namespace for deployment and assignment of the role to the service account

  • This role should be defined with the minimum set of permissions in accordance with the least-privilege principle

  • Analogous definition of service accounts, roles and role bindings for automation purposes

  • Use Red Hat Advanced Cluster Security for Kubernetes policies to prevent use of the "default" service account.

In principle, security context constraints [SCC], as a further element of authorization control similar to the RBAC model, can be used by the platform administration to provide extended privileges in a fine-grained manner for those components that require such extended authorizations. This is also possible according to the least privilege principle.

Pods should only mount the service account token if they actually use it. Otherwise, pods should deactivate automatic mounting via “automountServiceAccountToken: false”.
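A minimal sketch combining a dedicated service account with disabled token mounting; all names are hypothetical:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  namespace: my-app
automountServiceAccountToken: false   # no token unless a pod opts in
---
apiVersion: v1
kind: Pod
metadata:
  name: example-app
  namespace: my-app
spec:
  serviceAccountName: my-app-sa       # do not use the "default" account
  automountServiceAccountToken: false # this pod does not need the API token
  containers:
  - name: app
    image: registry.example.com/my-app:1.0
```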

APP.4.4.A10 Securing automation processes Standard requirement
Status: Implementation by: Responsible:

All automation software processes, such as CI/CD and their pipelines, SHOULD only work with absolutely necessary rights.

These requirements must be implemented organizationally.

The OpenShift CI/CD tooling (Jenkins, Tekton) meets the requirement through individual processes bound to the namespace that can be individually authorized. OpenShift GitOps has a tenant model, which enables separation of the GitOps automation software at the namespace level.

If different user groups can change the configuration or start pods via the automation software, this SHOULD be done for each group through separate processes that only have the rights necessary for the respective user group.

OpenShift GitOps can be connected to the OpenShift RBAC. Alternatively, each user group can run its own OpenShift GitOps instance, whose rights are further restricted.

APP.4.4.A11 Monitoring of containers Standard requirement
Status: Implementation by: Responsible:

In pods, each container SHOULD define a health check for startup and operation (“readiness” and “liveness”). These checks SHOULD provide information about the availability of the software running in the pod. The checks SHOULD fail if the monitored software cannot perform its tasks properly. Each of these controls SHOULD define a time period appropriate to the service operating in the pod. Based on these checks, Kubernetes SHOULD delete or restart the pods.

These requirements must be implemented organizationally.

The probes must be implemented during application development and configured in the deployment. OpenShift already provides the necessary functions for automatically stopping or deleting the pods and restarting them. The use of this functionality can be enforced using ACS.

APP.4.4.A12 Securing infrastructure applications Standard requirement
Status: Implementation by: Responsible:

If you use your own registry for images or software for automation, managing permanent memory, storing configuration files or similar, its security SHOULD at least consider:

  • use of personal and service accounts for access,

  • encrypted communication on all network ports,

  • minimal allocation of authorizations to users and service accounts,

  • logging of changes and

  • regular data backup.

This requirement must be implemented organizationally.

Securing the above-mentioned aspects is supported by OpenShift.

All operations on the infrastructure applications are secured via the OpenShift API using [RBAC] and are logged in the audit log. Control plane audit rules can log changes to the infrastructure applications via the OpenShift API [AUDIT]. At the host level, Red Hat CoreOS supports auditd, which is enabled by default. Policies for auditd can include network connections, created processes, file accesses and syscalls. Red Hat CoreOS provides many sample policies that cover all of the areas described. This can also be used to log direct changes that bypass the control plane.

Communication between the infrastructure applications is encrypted.

The following applies to the registry:

When using the Quay Registry, the standard requirements are met.

The data backup must be carried out separately for the internal registry (see APP.4.4.A5).

The following applies to automation software (Openshift GitOps / Openshift Pipelines):

Access to the software can be regulated using the Openshift RBAC mechanisms and deployment in individual clients. The software communicates via encrypted network ports and changes can be logged using the mechanisms already described. Data backup can be done via OADP.

The following applies to software for storing configuration data (etcd):

Personal accounts should not be given direct access. Network communication is encrypted as standard. Changes to etcd are logged via the control plane as shown. Changes to the host can be logged using auditd as shown.

APP.4.4.A13 Automated configuration auditing Requirements for increased protection needs
Status: Implementation by: Responsible:

There SHOULD be an automatic audit of the settings of the nodes, Kubernetes and the application pods against a defined list of permitted settings and against standardized benchmarks.

OpenShift provides an audit log of all actions carried out. When using multiple clusters, the audit configuration of the OpenShift API server and the nodes can be managed centrally from ACM on a policy basis. Alternatively, the audit settings should be configured and activated for each cluster.

Red Hat Advanced Cluster Security for Kubernetes (ACS) can check all managed resources against standardized and customized benchmarks. Violations are reported via OpenShift monitoring and documented in the violation log in ACS. Some benchmarks are included; others can be obtained from the community or supplemented with your own definitions.

In addition, the Compliance Operator is available as a tool that can automatically check the settings of the OpenShift cluster against a defined profile at configurable time intervals. This profile can be one of the profiles supplied (e.g. Essential 8, Center for Internet Security and others) or a profile tailored to your own needs.
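As a sketch, binding the supplied CIS profile to the Compliance Operator's default scan settings; the binding name is an assumption:

```yaml
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis-compliance
  namespace: openshift-compliance
profiles:
- name: ocp4-cis                     # supplied platform profile
  kind: Profile
  apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default                      # default scan schedule and storage
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
```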

Kubernetes SHOULD enforce the established rules in the cluster by connecting suitable tools.

Using Red Hat Advanced Cluster Management for Kubernetes (ACM), policies can be created for all managed resources, which are then enforced when the resources are created.

APP.4.4.A14 Use of dedicated nodes Requirements for increased protection needs
Status: Implementation by: Responsible:

In a Kubernetes cluster, the nodes SHOULD be assigned dedicated tasks and only operate pods that are assigned to the respective task.

This requirement must be implemented organizationally. OpenShift can bind applications to specific nodes or node groups via labels and node selectors. ACM can take over the labeling of nodes and ensure that they remain labeled accordingly.
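
A deployment can be bound to correspondingly labeled nodes via a node selector, as in the following sketch; the label key/value and image are illustrative:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          nodeSelector:
            workload-class: dedicated   # illustrative node label
          containers:
            - name: app
              image: registry.example.com/example-app:latest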

Bastion nodes SHOULD take over all incoming and outgoing data connections from applications to other networks.

OpenShift uses the concept of infra-nodes. Incoming connections can be bound to these nodes and, by using egress IPs, outgoing connections can be bound to them as well.
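
With the OVN-Kubernetes network plug-in, for example, the outgoing connections of a client can be bound to fixed source addresses via an EgressIP resource. A minimal sketch; the IP address and label are illustrative:

    apiVersion: k8s.ovn.org/v1
    kind: EgressIP
    metadata:
      name: egressip-app
    spec:
      egressIPs:
        - 192.0.2.10                  # illustrative egress source address
      namespaceSelector:
        matchLabels:
          env: production             # selects the namespaces that use this egress IP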

Management nodes SHOULD operate the control plane pods and they SHOULD only take over the control plane data connections.

OpenShift uses control plane nodes for management, on which no applications are running. Data connections between applications to the outside world and to one another are not routed via the control plane as standard. The necessary requirements must be taken into account as part of the planning.

If deployed, storage nodes SHOULD only operate the pods of the storage services in the cluster.

Using the OpenShift mechanisms described above, OpenShift Data Foundation (ODF) can be bound to dedicated infra-nodes that run only storage services. This can be implemented equivalently with other storage solutions.

APP.4.4.A15 Separation of applications at node and cluster levels Requirements for increased protection needs
Status: Implementation by: Responsible:

Applications with very high protection requirements SHOULD use their own Kubernetes clusters or dedicated nodes that are not available for other applications.

This requirement must be implemented organizationally. OpenShift supports implementation and enforcement reproducibly via multi-cluster management (Red Hat Advanced Cluster Management for Kubernetes) and the use of labels (see APP.4.4.A14).

APP.4.4.A16 Use of operators Requirements for increased protection needs
Status: Implementation by: Responsible:

The automation of operational tasks in operators SHOULD be used for particularly critical applications and the control plane programs.

OpenShift relies consistently on the application of the concept of operators. The platform itself is operated and managed 100% by operators, meaning that all internal components of the platform are rolled out and managed by operators.

Application-specific operators must be considered as part of application development and deployment.

APP.4.4.A17 Attestation of nodes Requirements for increased protection needs
Status: Implementation by: Responsible:

Nodes SHOULD send a secured status message to the control plane, verified cryptographically and, if possible, with a TPM.

RHCOS, an immutable operating system, is used on the nodes. The operating system is specified by the control plane, and its version is compared cryptographically (hash checksums) against the defined status. The configuration of the nodes is managed centrally by the control plane using MachineConfigs. Manual changes to the managed files are overwritten by the control plane.

The nodes are compared with the configuration stored in the control plane via cryptographically secured communication.

Using the file integrity operator provided by Red Hat, the files on the RHCOS nodes can be checked cryptographically using the Advanced Intrusion Detection Environment (AIDE) and changes to files can be detected [FileIntegrity]. The operator and its configuration can be managed using ACM.
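
A minimal FileIntegrity resource that activates the AIDE check on all worker nodes could look as follows (values correspond to the operator's documented defaults):

    apiVersion: fileintegrity.openshift.io/v1alpha1
    kind: FileIntegrity
    metadata:
      name: worker-fileintegrity
      namespace: openshift-file-integrity
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""   # check all worker nodes
      config:
        gracePeriod: 900                     # pause in seconds between AIDE runs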

OpenShift uses its own internal Certificate Authority (CA). The nodes communicate with the control plane via TLS connections (TLS v1.3), which are secured with node-specific certificates from this CA. The internal CA is created during the installation (bootstrap) process and managed in the control plane.

The control plane SHOULD ONLY include nodes in the cluster that have successfully proven their integrity.

As described, the control plane compares the nodes with the stored configuration. If a node is to be included, it establishes a connection to the control plane. The control plane enforces the central configuration. Only when the configuration phase is completed is the node released for workloads and therefore active.

All nodes must authenticate themselves using a certificate. This is transferred from the control plane during provisioning and is only valid for a very short time. Optionally, a cluster admin can be required to approve each certificate signing request (CSR).

APP.4.4.A18 Use of micro-segmentation Requirements for increased protection needs
Status: Implementation by: Responsible:

Even within a Kubernetes namespace, the pods SHOULD only be able to communicate with each other via the necessary network ports. There SHOULD be rules within the CNI that prevent all but the network connections necessary for operation within the Kubernetes namespace. These rules SHOULD clearly define the source and destination of the connections using at least one of the following criteria: Service name, metadata ("labels"), the Kubernetes service accounts or certificate-based authentication.

All criteria that serve as a designation for this connection SHOULD be secured in such a way that they can only be changed by authorized persons and administrative services.

A default deny network policy can be created and either implemented when onboarding new projects (tenants) or managed centrally using ACM. The OpenShift network plug-ins (OpenShift SDN, OVN-Kubernetes) support network policies that enable the corresponding micro-segmentation.
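
A minimal sketch of such a default deny policy, together with a label-based allow rule; namespace, labels and port are illustrative:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-all
      namespace: my-tenant-project    # illustrative tenant project
    spec:
      podSelector: {}                 # applies to all pods in the namespace
      policyTypes:
        - Ingress
        - Egress
    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-backend
      namespace: my-tenant-project
    spec:
      podSelector:
        matchLabels:
          app: backend                # destination pods
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend       # permitted source pods
          ports:
            - protocol: TCP
              port: 8080
      policyTypes:
        - Ingress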

If additional control is to be implemented at Layer 7, OpenShift Service Mesh (based on Istio) enables further separation by enforcing mTLS and other authorization features at the service level.
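
Within the mesh, strict mTLS can be enforced per namespace with a PeerAuthentication resource, for example (assuming the namespace is part of the service mesh):

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: my-tenant-project    # illustrative mesh member namespace
    spec:
      mtls:
        mode: STRICT                  # reject all traffic that is not mTLS-secured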

APP.4.4.A19 Kubernetes high availability Requirements for increased protection needs
Status: Implementation by: Responsible:

The operation SHOULD be structured in such a way that if one location fails, the clusters and thus the applications in the pods either continue to run without interruption or can restart at another location within a short period of time.

For the restart, all necessary configuration files, images, user data, network connections and other resources required for operation, including the hardware required for operation, SHOULD already be available at this location.

For the uninterrupted operation of the cluster, the Kubernetes control plane, the infrastructure applications of the cluster and the application pods SHOULD be distributed across several fire compartments based on location data from the nodes in such a way that the failure of one fire compartment does not lead to the failure of the application.

OpenShift supports topology labels to separate multiple fire compartments (so-called “failure zones”). Red Hat Advanced Cluster Management for Kubernetes also supports setting up complete fail-over clusters in order to distribute the applications across the available clusters. With its metro cluster configuration, OpenShift Data Foundation offers the basis for an appropriate storage design, which must be taken into account in the development and deployment of applications.

Failures of individual nodes or entire fire compartments do not lead to a cluster or application failure as long as the applications and infrastructure services are distributed accordingly. This distribution must be carried out when deploying the application and infrastructure services using affinity rules.
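
For example, the replicas of a deployment can be forced into different failure zones with a pod anti-affinity rule. The application name and image are illustrative; the nodes must carry the usual topology labels:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchLabels:
                      app: example-app
                  # schedule replicas into different zones (fire compartments)
                  topologyKey: topology.kubernetes.io/zone
          containers:
            - name: app
              image: registry.example.com/example-app:latest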

In addition, OpenShift Monitoring generates alerts if too few resources are available for the control plane due to the failure of a node. Such alerts can also be configured for workloads.

APP.4.4.A20 Encrypted data storage for pods Requirements for increased protection needs
Status: Implementation by: Responsible:

The file systems containing the persistent data of the control plane (here especially etcd) and the application services SHOULD be encrypted.

OpenShift fully supports this requirement. Encryption can be carried out at different levels (virtualization, node file system, etc.). The encryption of etcd can be activated from within ACM using a policy. FIPS 140-2 certified encryption modules can also be used.
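
Etcd encryption is activated via the cluster-wide APIServer resource, for example as follows (aescbc is one of the supported encryption types):

    apiVersion: config.openshift.io/v1
    kind: APIServer
    metadata:
      name: cluster
    spec:
      encryption:
        type: aescbc    # AES-CBC with PKCS#7 padding and a 32-byte key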

APP.4.4.A21 Regular restart of pods Requirements for increased protection needs
Status: Implementation by: Responsible:

If there is an increased risk of external influences and a very high need for protection, pods SHOULD be stopped and restarted regularly. No pod SHOULD run for more than 24 hours. The availability of the applications in the pod SHOULD be ensured.

A regular restart of pods can be forced using the OpenShift Descheduler [Descheduler]. This makes it possible to automatically evict pods after a configurable lifetime (default 24 hours).
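
A minimal sketch of the corresponding KubeDescheduler resource; the check interval is illustrative, and the descheduler operator must be installed:

    apiVersion: operator.openshift.io/v1
    kind: KubeDescheduler
    metadata:
      name: cluster
      namespace: openshift-kube-descheduler-operator
    spec:
      deschedulingIntervalSeconds: 3600   # illustrative check interval
      profiles:
        - LifecycleAndUtilization         # profile containing the PodLifeTime strategy
      profileCustomizations:
        podLifetime: 24h                  # evict pods that are older than 24 hours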

OpenShift provides sufficient mechanisms (e.g. rollout strategies, reconciliation loops, replica count) to ensure the availability of applications even after a forced restart. For non-cloud-native applications, long-running processes and the forced restart of pods require special consideration.
