BSI Quick Check: Guidance to ensure that your OCP deployment complies with BSI IT-Grundschutz blocks SYS.1.6 Containerization and APP.4.4 Kubernetes
Ansgar Kückes, Steffen Lützenkirchen
2023-07-24
Summary
This article explains the considerations needed to ensure that your OCP deployment complies with BSI (Bundesamt für Sicherheit in der Informationstechnik, Germany’s Federal Office for Information Security) IT-Grundschutz blocks SYS.1.6 Containerization and APP.4.4 Kubernetes. Some of these considerations are addressed by the features of the product itself, and some must be implemented organisationally. It also describes how other parts of OpenShift Platform Plus can be used in conjunction with OpenShift Container Platform to facilitate compliance.
Read the original German-language document (DOCX format)
Contents
Block SYS.1.6 Containerization
Block APP.4.4: Kubernetes
Disclaimer
We would like to point out that the implementation of the measures described in this document and the use of the technologies mentioned do not guarantee compliance with the BSI guidelines. Rather, the document is intended as a starting point to define the necessary measures depending on the respective organizational and technological requirements, and the respective protection needs.
Any liability for the completeness, accuracy, timeliness or reliability of the content provided is excluded.
Copyright © 2023 Red Hat, Inc. Red Hat, Red Hat Enterprise Linux, the Red Hat logo and JBoss are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the USA and other countries.
Definitions
This document follows the terminology used in the BSI modules wherever possible. In some places, additional terms are needed for clarification. Terms not defined in the [Kubernetes glossary] are defined in the following table:
Term | Definition |
---|---|
ACM / Advanced Cluster Management for Kubernetes | A product from Red Hat that enables the management of multiple clusters at the same time using policies and supports standardized operation of container clusters. |
ACS / Advanced Cluster Security for Kubernetes | A Red Hat product that supports compliance with security configurations through policies and compliance rules. |
Compliance Operator | The OpenShift Compliance Operator checks nodes and the platform itself (API resources) and compares the results against a definable compliance profile (tailored profile). |
CSI/Container Storage Interface | An API specification that enables the integration of different storage solutions within Kubernetes/OpenShift using plug-ins. |
Infra-Node | Special compute node that is used exclusively for non-application-related tasks (infrastructure tasks). |
OADP / OpenShift APIs for Data Protection | An operator that provides various APIs that can be used to backup and restore cluster resources (yaml), internal images, and persistent volume data. |
OpenShift GitOps | GitOps functionality integrated into OpenShift to implement continuous deployment processes. |
OpenShift Sandboxes | Isolated, hypervisor-based runtime environments provided in OpenShift (based on Kata Containers). |
Project (client) | OpenShift manages clients within a cluster in the form of so-called "Projects". Each project has, among other things, its own namespace and its own administration role. Projects are encapsulated from each other and cannot access another client's resources without explicit permission. Typically, a separate project (client) is used for each application. |
Prometheus | Monitoring tool for collecting and exposing system performance data. |
Quay | A Red Hat product that provides an enterprise registry. |
RHCOS/Red Hat CoreOS | An operating system designed for running containers that follows immutability principles. |
SBOM / Software Bill of Materials | A machine-readable document that lists the individual software artifacts contained in a piece of software and makes them verifiable. |
Active network | Network in which the (compute) nodes are placed. |
Worker node | Compute node on which an application or an application service runs. |
Block SYS.1.6 Containerization
SYS.1.6 Containerization
Consulted persons: | Last update: | BSI implementation instructions:
SYS.1.6.A1 Planning the use of containers | Basic requirement
Status: | Implementation by: | Responsible:
Before containers are deployed, the goal of the container deployment (e.g. scaling, availability, disposable containers for security or CI/CD) MUST first be determined so that all security-related aspects of installation, operation and decommissioning can be planned.
When planning, the operating costs that arise from container use or mixed operation SHOULD also be taken into account.
The planning MUST be adequately documented.
SYS.1.6.A2 Planning the management of containers | Basic requirement
Status: | Implementation by: | Responsible:
The containers MAY ONLY be managed after appropriate planning.
This planning MUST cover the entire life cycle from commissioning to decommissioning, including operation and updates.
When planning administration, it MUST be taken into account that the creator of a container should be viewed in part like an administrator due to the impact on operations.
Starting, stopping and monitoring the containers MUST be done via the management software used.
SYS.1.6.A3 Secure deployment of containerized IT systems | Basic requirement
Status: | Implementation by: | Responsible:
For containerized IT systems, it MUST be taken into account how containerization affects the IT systems and applications being operated, in particular the administration and suitability of the applications.
Based on the protection needs of the applications, it MUST be checked whether the requirements for isolation and encapsulation of the containerized IT systems and the virtual networks as well as the applications operated are sufficiently met.
The operating system's own mechanisms SHOULD be included in this test.
For virtual networks, the host performs the function of a network component. The building blocks of the sub-layers NET.1 networks and NET.3 network components MUST be taken into account accordingly.
Logical and overlay networks MUST also be considered and modeled.
Furthermore, the containerized IT systems used MUST meet the requirements for availability and data throughput.
During ongoing operations, the performance and condition of the containerized IT systems SHOULD be monitored (so-called health checks).
SYS.1.6.A4 Planning the deployment and distribution of images | Basic requirement
Status: | Implementation by: | Responsible:
The process for deploying and distributing images MUST be planned and appropriately documented.
SYS.1.6.A5 Separation of the administration and access networks for containers | Basic requirement
Status: | Implementation by: | Responsible:
The networks for the administration of the host, the administration of the containers and their access networks MUST be separated appropriately to the protection requirements.
In principle, at least, administration of the host SHOULD only be possible from the administration network.
Only the communication relationships necessary for operation SHOULD be permitted.
SYS.1.6.A6 Use of secure images | Standard requirement
Status: | Implementation by: | Responsible:
It MUST be ensured that all images used only come from trustworthy sources.
The creator of the images MUST be clearly identifiable.
The source MUST be selected so that the creator of the image regularly checks the included software for security problems, fixes and documents them, and assures its customers of this.
The version of base images used MUST NOT be deprecated.
Unique version numbers MUST be provided.
If an image with a newer version number is available, patch and change management MUST check whether and how it can be rolled out.
SYS.1.6.A7 Persistence of container logging data | Standard requirement
Status: | Implementation by: | Responsible:
Storage of container logging data MUST occur outside of the container, at least on the container host.
SYS.1.6.A8 Secure storage of access data for containers | Standard requirement
Status: | Implementation by: | Responsible:
Credentials MUST be stored and managed so that only authorized people and containers can access them.
In particular, it MUST be ensured that access data is only stored in specially protected locations and not in the images.
The credential management mechanisms provided by the container service management software SHOULD be used.
At least the following credentials MUST be stored securely:
SYS.1.6.A9 Suitability for container operation | Standard requirement
Status: | Implementation by: | Responsible:
The application or service that is to be operated in the container SHOULD be suitable for container operation.
It SHOULD be taken into account that containers can more often terminate unexpectedly for the application running within them.
The results of the test according to SYS.1.6.A3 Secure deployment of containerized IT systems SHOULD be documented in a comprehensible manner.
SYS.1.6.A10 Policy for images and container operations | Standard requirement
Status: | Implementation by: | Responsible:
A policy SHOULD be created and applied that specifies the requirements for the operation of the containers and the images allowed. The policy SHOULD also include requirements for operating and deploying the images.
SYS.1.6.A11 Only one service per container | Standard requirement
Status: | Implementation by: | Responsible:
Each container SHOULD only provide one service at a time.
SYS.1.6.A12 Distribution of secure images | Standard requirement
Status: | Implementation by: | Responsible:
There SHOULD be adequate documentation of which image sources have been classified as trustworthy and why.
In addition, the process SHOULD be adequately documented as to how images or the software components contained in the image are obtained from trustworthy sources and ultimately made available for production use.
The images used SHOULD have metadata that makes the function and history of the image understandable.
Digital signatures SHOULD secure every image against change.
SYS.1.6.A13 Release of images | Standard requirement
Status: | Implementation by: | Responsible:
Like software products, all images for production use SHOULD go through a testing and release process in accordance with module OPS.1.1.6 Software testing and releases.
SYS.1.6.A14 Updating images | Standard requirement
Status: | Implementation by: | Responsible:
When creating the concept for patch and change management in accordance with OPS.1.1.3 Patch and change management, it SHOULD be decided when and how the updates to the images or the software or service operated will be rolled out.
For persistent containers, it SHOULD be checked whether, in exceptional cases, an update of the respective container is more suitable than completely re-provisioning the container.
SYS.1.6.A15 Limitation of resources per container | Standard requirement
Status: | Implementation by: | Responsible:
For each container, resources on the host system, such as CPU, volatile and persistent memory, and network bandwidth, SHOULD be appropriately reserved and limited.
It SHOULD be defined and documented how the system reacts if these limitations are exceeded.
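As an illustration, such reservations and limits can be declared per container in the pod specification; the following is a minimal sketch in which the name, image and values are purely illustrative (cluster-wide defaults can additionally be enforced with LimitRange and ResourceQuota objects):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app                          # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.2.3    # illustrative image
    resources:
      requests:                  # reservation used by the scheduler
        cpu: "250m"
        memory: "256Mi"
      limits:                    # hard ceiling: memory overuse leads to an OOM kill, CPU is throttled
        cpu: "500m"
        memory: "512Mi"
```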
SYS.1.6.A16 Remote administrative access to containers | Standard requirement
Status: | Implementation by: | Responsible:
Administrative access from a container to the container host and vice versa SHOULD in principle be viewed as administrative remote access.
There SHOULD NOT be remote administrative access to the container host from a container.
Application containers SHOULD not contain any remote maintenance access.
Administrative access to application containers SHOULD always take place via the container runtime.
SYS.1.6.A17 Execution of containers without privileges | Standard requirement
Status: | Implementation by: | Responsible:
The container runtime and all instantiated containers SHOULD only be run by a non-privileged system account that does not have, and cannot obtain, elevated rights to the container service or the host system's operating system.
The container runtime SHOULD be encapsulated through additional measures, such as using CPU virtualization extensions.
If containers are to take over tasks of the host system in exceptional cases, the privileges on the host system SHOULD be limited to the necessary minimum.
Exceptions SHOULD be appropriately documented.
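A minimal sketch of a restrictive pod security context that keeps containers unprivileged (names and image are illustrative; on OpenShift, the restricted security context constraints already enforce comparable defaults):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: unprivileged-app
spec:
  securityContext:
    runAsNonRoot: true                 # refuse to start containers that run as UID 0
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.example.com/app:1.2.3
    securityContext:
      privileged: false
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]                  # no additional Linux capabilities
```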
SYS.1.6.A18 Application services accounts | Standard requirement
Status: | Implementation by: | Responsible:
The system accounts within a container SHOULD not have permissions on the host system.
Where this authorization is necessary for operational reasons, it SHOULD only apply to absolutely necessary data and system access.
The account in the container that is necessary for this data exchange SHOULD be known in the host system.
SYS.1.6.A19 Integrating data stores into containers | Standard requirement
Status: | Implementation by: | Responsible:
The containers SHOULD ONLY be able to access the mass storage and directories necessary for operation.
Permissions SHOULD be granted explicitly only if they are needed.
If the container runtime includes local storage for a container, the access rights in the file system SHOULD be restricted to the container's service account.
If network storage is used, the permissions SHOULD be set on the network storage itself.
SYS.1.6.A20 Securing configuration data | Standard requirement
Status: | Implementation by: | Responsible:
The description of the container configuration data SHOULD be versioned.
Changes SHOULD be clearly documented.
SYS.1.6.A21 Advanced security policies | Requirements for increased protection needs
Status: | Implementation by: | Responsible:
Advanced policies SHOULD limit container permissions.
Mandatory Access Control (MAC) or comparable technology SHOULD enforce these policies.
Policies SHOULD restrict at least the following access:
The runtime SHOULD start the containers in such a way that the host system kernel prevents all activities of the containers that are not permitted by the policy (e.g. by setting up local packet filters or revoking permissions) or at least appropriately reports violations.
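As a sketch of how such policies can be expressed at the workload level, a seccomp profile and SELinux options can be set in the security context; the label values are placeholders, and on OpenShift SELinux contexts are normally assigned automatically via security context constraints:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mac-restricted-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.2.3
    securityContext:
      seccompProfile:
        type: RuntimeDefault           # restrict the syscalls the container may issue
      seLinuxOptions:
        level: "s0:c123,c456"          # MCS categories isolating the workload (placeholder)
```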
SYS.1.6.A22 Provision for examinations | Requirements for increased protection needs
Status: | Implementation by: | Responsible:
In order to have containers available for later investigation if necessary, an image of the state SHOULD be created according to defined rules.
SYS.1.6.A23 Container immutability | Requirements for increased protection needs
Status: | Implementation by: | Responsible:
Containers SHOULD not be able to change their file system at runtime.
File systems SHOULD not be mounted with write permissions.
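A minimal sketch of an immutable container: the root file system is mounted read-only, and an explicit, writable emptyDir is granted only where the application genuinely needs scratch space (names and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: immutable-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.2.3
    securityContext:
      readOnlyRootFilesystem: true     # the container cannot change its own file system
    volumeMounts:
    - name: tmp
      mountPath: /tmp                  # explicitly granted, writable scratch space
  volumes:
  - name: tmp
    emptyDir: {}
```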
SYS.1.6.A24 Host-based intrusion detection | Requirements for increased protection needs
Status: | Implementation by: | Responsible:
The behavior of the containers and the applications or services operating within them SHOULD be monitored.
Deviations from normal behavior SHOULD be noticed and reported.
Reports SHOULD be handled appropriately in the central security incident handling process.
The behavior to be monitored SHOULD include at least:
SYS.1.6.A25 High availability of containerized applications | Requirements for increased protection needs
Status: | Implementation by: | Responsible:
If containerized applications have high availability requirements, it SHOULD be decided at which level availability should be implemented (e.g. redundant at the host level).
SYS.1.6.A26 Further isolation and encapsulation of containers | Requirements for increased protection needs
Status: | Implementation by: | Responsible:
If further isolation and encapsulation of containers is required, the following measures SHOULD be examined based on increasing effectiveness:
Block APP.4.4: Kubernetes
APP.4.4 Kubernetes
Consulted persons: | Last update: | BSI implementation instructions:
APP.4.4.A1 Planning the separation of applications | Basic requirement
Status: Partially | Implementation by: | Responsible:
Before commissioning, it MUST be planned how the applications operated in the pods and their different test and production environments will be separated. The planning MUST contain regulations for the separation of networks, CPU and persistent storage. The separation SHOULD also take into account the network zone concept and the protection requirements and be tailored to these. Applications SHOULD each run in their own Kubernetes namespace that includes all of the application's programs. Only applications with similar protection needs and similar potential attack vectors SHOULD share a Kubernetes cluster.
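A minimal sketch of this per-application separation: each application receives its own namespace (OpenShift project), labelled so that network zone and protection requirements can later be evaluated by policies (names and labels are purely illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments-prod                    # hypothetical application and environment
  labels:
    app.kubernetes.io/part-of: payments
    environment: production              # test and production instances live in separate namespaces
```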
APP.4.4.A2 Planning automation with CI/CD | Basic requirement
Status: Partially | Implementation by: | Responsible:
If automation of the operation of applications in Kubernetes takes place using CI/CD, it MUST ONLY be done after appropriate planning.
Planning MUST cover the entire life cycle from commissioning to decommissioning, including development, testing, operation, monitoring and updates.
The role and rights concept as well as securing Kubernetes secrets MUST be part of the planning.
APP.4.4.A3 Identity and permission management in Kubernetes | Basic requirement
Status: | Implementation by: | Responsible:
Kubernetes and all other control plane applications MUST authenticate and authorize every action by a user or, in automated operation, corresponding software, regardless of whether the actions take place via a client, a web interface or via an appropriate interface (API). Administrative actions MUST NOT be carried out anonymously.
Each user MUST ONLY receive the absolutely necessary rights. Permissions without restrictions MUST be granted very restrictively. Only a small group of people SHOULD be allowed to define automation processes.
Only selected administrators SHOULD be given the right to create or change persistent volume shares in Kubernetes.
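A sketch of least-privilege RBAC along these lines: a namespaced role with only the verbs a deployment team needs, bound to a single group from the identity provider (all names are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-deployer
  namespace: payments-prod
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "update", "patch"]   # no delete, no cluster-wide rights
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-deployer-binding
  namespace: payments-prod
subjects:
- kind: Group
  name: payments-team                   # hypothetical group from the identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-deployer
  apiGroup: rbac.authorization.k8s.io
```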
APP.4.4.A4 Separation of pods | Basic requirement
Status: | Implementation by: | Responsible:
The operating system kernel of the nodes MUST have isolation mechanisms to limit the visibility and resource usage of the pods among themselves (see Linux namespaces and cgroups). The separation MUST include at least process IDs, inter-process communication, user IDs, file system and network including host name.
APP.4.4.A5 Data backup in the cluster | Basic requirement
Status: | Implementation by: | Responsible:
The cluster MUST be backed up. Data backup MUST include:
Snapshots for the operation of the applications SHOULD also be considered. Snapshots MUST NOT replace data backup.
APP.4.4.A6 Initialization of pods | Standard requirement
Status: | Implementation by: | Responsible:
If an initialization, e.g. of an application, takes place in the pod at startup, it SHOULD take place in its own init container. It SHOULD be ensured that the initialization terminates all processes that are already running. Kubernetes SHOULD ONLY start the additional containers if the initialization is successful.
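A minimal sketch of such an init container; the main container is only started after the init container has terminated successfully (image and command are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
  - name: init-schema
    image: registry.example.com/app-init:1.2.3
    command: ["/bin/sh", "-c", "run-migrations"]     # hypothetical initialization step
  containers:
  - name: app
    image: registry.example.com/app:1.2.3            # started only after successful initialization
```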
APP.4.4.A7 Separation of networks in Kubernetes | Standard requirement
Status: | Implementation by: | Responsible:
The networks for the administration of the nodes, the control plane and the individual networks of the application services SHOULD be separated. ONLY the network ports of the pods necessary for operation SHOULD be released into the intended networks. If multiple applications run on one Kubernetes cluster, all network connections between the Kubernetes namespaces SHOULD initially be prohibited and only the required network connections SHOULD be permitted (whitelisting). The network ports necessary for the administration of the nodes, the runtime and Kubernetes, including its extensions, SHOULD ONLY be accessible from the administration network and from pods that require them. Only selected administrators SHOULD be given permission in Kubernetes to manage the CNI and create or change rules for the network.
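A sketch of the whitelisting approach: a default-deny NetworkPolicy per namespace blocks all traffic, and required connections are then opened with additional, more specific policies (the namespace name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments-prod
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
  - Ingress
  - Egress                   # no rules listed, so all ingress and egress traffic is denied
```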
APP.4.4.A8 Securing configuration files in Kubernetes | Standard requirement
Status: | Implementation by: | Responsible:
The configuration files of the Kubernetes cluster, including those of all extensions and applications, SHOULD be versioned and annotated. Access rights to the software managing the configuration files SHOULD be assigned minimally. Read and write access rights to the configuration files of the control plane SHOULD be assigned and restricted particularly carefully.
APP.4.4.A9 Use of Kubernetes service accounts | Standard requirement
Status: | Implementation by: | Responsible:
Pods SHOULD NOT use the "default" service account. No rights SHOULD be granted to the “default” service account. Pods for different applications SHOULD each run under their own service accounts. Permissions for the service accounts of the application pods SHOULD be limited to those strictly necessary. Pods that do not require a service account SHOULD not be able to view it or have access to corresponding tokens. Only control plane pods and pods that absolutely need them SHOULD use privileged service accounts. Automation programs SHOULD each receive their own tokens, even if they use a common service account due to similar tasks.
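A minimal sketch of a dedicated service account per application, with token automounting disabled for workloads that never call the Kubernetes API (all names are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payments-app
  namespace: payments-prod
automountServiceAccountToken: false      # pods get no API token unless they explicitly need one
---
apiVersion: v1
kind: Pod
metadata:
  name: payments-pod
  namespace: payments-prod
spec:
  serviceAccountName: payments-app       # do not fall back to the "default" service account
  containers:
  - name: app
    image: registry.example.com/app:1.2.3
```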
APP.4.4.A10 Securing automation processes | Standard requirement
Status: | Implementation by: | Responsible:
All automation software processes, such as CI/CD and their pipelines, SHOULD only work with absolutely necessary rights.
If different user groups can change the configuration or start pods via the automation software, this SHOULD be done for each group through separate processes that only have the rights necessary for the respective user group.
APP.4.4.A11 Monitoring of containers | Standard requirement
Status: | Implementation by: | Responsible:
In pods, each container SHOULD define a health check for startup and operation (“readiness” and “liveness”). These checks SHOULD provide information about the availability of the software running in the pod. The checks SHOULD fail if the monitored software cannot perform its tasks properly. Each of these controls SHOULD define a time period appropriate to the service operating in the pod. Based on these checks, Kubernetes SHOULD delete or restart the pods.
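A sketch of such readiness and liveness checks; paths, port and timings are illustrative and have to be tuned to the service operated in the pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: monitored-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.2.3
    readinessProbe:                # the pod only receives traffic while this check succeeds
      httpGet:
        path: /healthz/ready
        port: 8080
      periodSeconds: 10
      failureThreshold: 3
    livenessProbe:                 # the kubelet restarts the container when this check fails
      httpGet:
        path: /healthz/live
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
```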
APP.4.4.A12 Securing infrastructure applications | Standard requirement
Status: | Implementation by: | Responsible:
If separate infrastructure applications are used, e.g. a dedicated registry for images or software for automation, for managing persistent storage, for storing configuration files or similar, their security SHOULD at least consider:
APP.4.4.A13 Automated configuration auditing | Requirements for increased protection needs
Status: | Implementation by: | Responsible:
There SHOULD be an automatic audit of the settings of the nodes, Kubernetes and the application pods against a defined list of permitted settings and against standardized benchmarks.
Kubernetes SHOULD enforce the established rules in the cluster by connecting suitable tools.
APP.4.4.A14 Use of dedicated nodes | Requirements for increased protection needs
Status: | Implementation by: | Responsible:
In a Kubernetes cluster, the nodes SHOULD be assigned dedicated tasks and only operate pods that are assigned to the respective task.
Bastion nodes SHOULD take over all incoming and outgoing data connections from applications to other networks.
Management nodes SHOULD operate the control plane pods and they SHOULD only take over the control plane data connections.
If deployed, storage nodes SHOULD only operate the pods of the storage services in the cluster.
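A sketch of how pods can be pinned to dedicated nodes via a node selector and a matching toleration; the infra node label and taint follow the common OpenShift convention but are illustrative here:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: infra-component
spec:
  nodeSelector:
    node-role.kubernetes.io/infra: ""        # schedule only onto nodes labelled as infra nodes
  tolerations:
  - key: node-role.kubernetes.io/infra       # tolerate the taint that keeps application pods away
    operator: Exists
    effect: NoSchedule
  containers:
  - name: component
    image: registry.example.com/infra-component:1.2.3
```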
APP.4.4.A15 Separation of applications at node and cluster levels | Requirements for increased protection needs
Status: | Implementation by: | Responsible:
Applications with very high protection requirements SHOULD use their own Kubernetes clusters or dedicated nodes that are not available for other applications.
APP.4.4.A16 Use of operators | Requirements for increased protection needs
Status: | Implementation by: | Responsible:
For particularly critical applications and the control plane programs, the automation of operational tasks SHOULD be implemented using operators.
APP.4.4.A17 Attestation of nodes | Requirements for increased protection needs
Status: | Implementation by: | Responsible:
Nodes SHOULD send a secured status message to the control plane, verified cryptographically and, if possible, with a TPM.
The control plane SHOULD ONLY include nodes in the cluster that have successfully proven their integrity.
APP.4.4.A18 Use of micro-segmentation | Requirements for increased protection needs
Status: | Implementation by: | Responsible:
Even within a Kubernetes namespace, the pods SHOULD only be able to communicate with each other via the necessary network ports. There SHOULD be rules within the CNI that prevent all but the network connections necessary for operation within the Kubernetes namespace. These rules SHOULD clearly define the source and destination of the connections using at least one of the following criteria: Service name, metadata ("labels"), the Kubernetes service accounts or certificate-based authentication. All criteria that serve as a designation for this connection SHOULD be secured in such a way that they can only be changed by authorized persons and administrative services.
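A sketch of micro-segmentation within one namespace: only pods labelled as frontend may reach the database pods, and only on the database port (labels, namespace and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
  namespace: payments-prod
spec:
  podSelector:
    matchLabels:
      app: payments-db                   # destination: the database pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: payments-frontend         # source: only the frontend pods
    ports:
    - protocol: TCP
      port: 5432
```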
APP.4.4.A19 Kubernetes high availability | Requirements for increased protection needs
Status: | Implementation by: | Responsible:
The operation SHOULD be structured in such a way that if one location fails, the clusters and thus the applications in the pods either continue to run without interruption or can restart at another location within a short period of time. For the restart, all necessary configuration files, images, user data, network connections and other resources required for operation, including the hardware required for operation, SHOULD already be available at this location. For the uninterrupted operation of the cluster, the Kubernetes control plane, the infrastructure applications of the cluster and the application pods SHOULD be distributed across several fire compartments based on location data from the nodes in such a way that the failure of one fire compartment does not lead to the failure of the application.
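A sketch of distributing application replicas across fire compartments using the zone label of the nodes via topology spread constraints (application name, image and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: payments-frontend
  template:
    metadata:
      labels:
        app: payments-frontend
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone     # e.g. one zone per fire compartment
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: payments-frontend
      containers:
      - name: app
        image: registry.example.com/app:1.2.3
```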
APP.4.4.A20 Encrypted data storage for pods | Requirements for increased protection needs
Status: | Implementation by: | Responsible:
The file systems containing the persistent data of the control plane (here especially etcd) and the application services SHOULD be encrypted.
APP.4.4.A21 Regular restart of pods | Requirements for increased protection needs
Status: | Implementation by: | Responsible:
If there is an increased risk of external influences and a very high need for protection, pods SHOULD be stopped and restarted regularly. No pod SHOULD run for more than 24 hours. The availability of the applications in the pod SHOULD be ensured.