Configuring Clusters
OpenShift Container Platform 3.11 Installation and Configuration
Chapter 1. Overview
This guide covers further configuration options available for your OpenShift Container Platform cluster post-installation.
Chapter 2. Setting up the Registry
2.1. Internal Registry Overview
2.1.1. About the Registry
OpenShift Container Platform can build container images from your source code, deploy them, and manage their lifecycle. To enable this, OpenShift Container Platform provides an internal, integrated container image registry that can be deployed in your OpenShift Container Platform environment to locally manage images.
2.1.2. Integrated or Stand-alone Registries
During an initial installation of a full OpenShift Container Platform cluster, it is likely that the registry was deployed automatically during the installation process. If it was not, or if you want to further customize the configuration of your registry, see Deploying a Registry on Existing Clusters.
While it can be deployed to run as an integrated part of your full OpenShift Container Platform cluster, the OpenShift Container Platform registry can alternatively be installed separately as a stand-alone container image registry.
To install a stand-alone registry, follow Installing a Stand-alone Registry. This installation path deploys an all-in-one cluster running a registry and specialized web console.
2.2. Deploying a Registry on Existing Clusters
2.2.1. Overview
If the integrated registry was not previously deployed automatically during the initial installation of your OpenShift Container Platform cluster, or if it is no longer running successfully and you need to redeploy it on your existing cluster, see the following sections for options on deploying a new registry.
This topic is not required if you installed a stand-alone registry.
2.2.2. Setting the Registry Host Name
You can configure the host name and port the registry is known by for both internal and external references. By doing this, image streams will provide hostname based push and pull specifications for images, allowing consumers of the images to be isolated from changes to the registry service IP and potentially allowing image streams and their references to be portable between clusters.
To set the hostname used to reference the registry from within the cluster, set the internalRegistryHostname value in the imagePolicyConfig section of the master configuration file. The external host name is controlled by setting the externalRegistryHostname value in the same location.
Image Policy Configuration
imagePolicyConfig:
  internalRegistryHostname: docker-registry.default.svc.cluster.local:5000
  externalRegistryHostname: docker-registry.mycompany.com
The registry itself must be configured with the same internal hostname value. This can be accomplished by setting the REGISTRY_OPENSHIFT_SERVER_ADDR environment variable on the registry deployment configuration, or by setting the value in the OpenShift section of the registry configuration.
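For example, one way to set this on the deployment configuration might be with oc set env; this is a sketch, and the hostname value shown is the internal example used above:
$ oc set env dc/docker-registry \
    REGISTRY_OPENSHIFT_SERVER_ADDR=docker-registry.default.svc.cluster.local:5000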
If you have enabled TLS for your registry, the server certificate must include the hostnames by which you expect the registry to be referenced. See Securing the Registry for instructions on adding hostnames to the server certificate.
2.2.3. Deploying the Registry
To deploy the integrated container image registry, use the oc adm registry command as a user with cluster administrator privileges. For example:
$ oc adm registry --config=/etc/origin/master/admin.kubeconfig \
    --service-account=registry \
    --images='registry.redhat.io/openshift3/ose-${component}:${version}'
This creates a service and a deployment configuration, both called docker-registry. Once deployed successfully, a pod is created with a name similar to docker-registry-1-cpty9.
To see a full list of options that you can specify when creating the registry:
$ oc adm registry --help
The value for --fs-group must be permitted by the SCC used by the registry (typically, the restricted SCC).
2.2.4. Deploying the Registry as a DaemonSet
Use the oc adm registry command to deploy the registry as a DaemonSet with the --daemonset option.
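For example, a deployment might look like the following; this is a sketch that reuses the kubeconfig, service account, and images options shown earlier with the --daemonset option added:
$ oc adm registry --daemonset \
    --config=/etc/origin/master/admin.kubeconfig \
    --service-account=registry \
    --images='registry.redhat.io/openshift3/ose-${component}:${version}'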
DaemonSets ensure that when nodes are created, they contain copies of a specified pod. When the nodes are removed, the pods are garbage collected.
For more information on DaemonSets, see Using Daemonsets.
2.2.5. Registry Compute Resources
By default, the registry is created with no settings for compute resource requests or limits. For production, it is highly recommended that the deployment configuration for the registry be updated to set resource requests and limits for the registry pod. Otherwise, the registry pod will be considered a BestEffort pod.
See Compute Resources for more information on configuring requests and limits.
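For example, one way to set requests and limits on the existing deployment configuration might be with oc set resources; this is a sketch, and the CPU and memory values are illustrative and should be sized for your environment:
$ oc set resources dc/docker-registry \
    --requests=cpu=100m,memory=256Mi \
    --limits=cpu=500m,memory=512Mi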
2.2.6. Storage for the Registry
The registry stores container images and metadata. If you simply deploy a pod with the registry, it uses an ephemeral volume that is destroyed if the pod exits. Any images anyone has built or pushed into the registry would disappear.
This section lists the supported registry storage drivers. See the container image registry documentation for more information.
The following list includes storage drivers that need to be configured in the registry’s configuration file:
- Filesystem. Filesystem is the default and does not need to be configured.
- S3. See the CloudFront configuration documentation for more information.
- OpenStack Swift
- Google Cloud Storage (GCS)
- Microsoft Azure
- Aliyun OSS
General registry storage configuration options are supported. See the container image registry documentation for more information.
The following storage options need to be configured through the filesystem driver:
For more information on supported persistent storage drivers, see Configuring Persistent Storage and Persistent Storage Examples.
2.2.6.1. Production Use
For production use, attach a remote volume or define and use the persistent storage method of your choice.
For example, to use an existing persistent volume claim:
$ oc set volume deploymentconfigs/docker-registry --add --name=registry-storage -t pvc \
    --claim-name=<pvc_name> --overwrite
Testing shows issues with using the RHEL NFS server as a storage backend for the container image registry. This includes the OpenShift Container Registry and Quay. Therefore, using the RHEL NFS server to back PVs used by core services is not recommended.
Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift core components.
2.2.6.1.1. Use Amazon S3 as a Storage Back-end
There is also an option to use Amazon Simple Storage Service storage with the internal container image registry. It is secure cloud storage manageable through the AWS Management Console. To use it, the registry’s configuration file must be manually edited and mounted to the registry pod. However, before you start with the configuration, look at upstream’s recommended steps.
Take a default YAML configuration file as a base and replace the filesystem entry in the storage section with an s3 entry such as the one below. The resulting storage section may look like this:
storage:
  cache:
    layerinfo: inmemory
  delete:
    enabled: true
  s3:
    accesskey: awsaccesskey
    secretkey: awssecretkey
    region: us-west-1
    regionendpoint: http://myobjects.local
    bucket: bucketname
    encrypt: true
    keyid: mykeyid
    secure: true
    v4auth: false
    chunksize: 5242880
    rootdirectory: /s3/object/name/prefix
All of the s3 configuration options are documented in upstream’s driver reference documentation.
Overriding the registry configuration will take you through the additional steps on mounting the configuration file into the pod.
When the registry runs on the S3 storage back-end, there are reported issues.
If you want to use an S3 region that is not supported by the integrated registry you are using, see S3 Driver Configuration.
2.2.6.2. Non-Production Use
For non-production use, you can use the --mount-host=<path> option to specify a directory for the registry to use for persistent storage. The registry volume is then created as a host-mount at the specified <path>.
The --mount-host option mounts a directory from the node on which the registry container lives. If you scale up the docker-registry deployment configuration, it is possible that your registry pods and containers will run on different nodes, which can result in two or more registry containers, each with its own local storage. This will lead to unpredictable behavior, as subsequent requests to pull the same image repeatedly may not always succeed, depending on which container the request ultimately goes to.
The --mount-host option requires that the registry container run in privileged mode. This is automatically enabled when you specify --mount-host. However, not all pods are allowed to run privileged containers by default. If you still want to use this option, create the registry and specify that it use the registry service account that was created during installation:
$ oc adm registry --service-account=registry \
--config=/etc/origin/master/admin.kubeconfig \
--images='registry.redhat.io/openshift3/ose-${component}:${version}' \ 1
--mount-host=<path>
- 1
- Required to pull the correct image for OpenShift Container Platform. ${component} and ${version} are dynamically replaced during installation.
The container image registry pod runs as user 1001. This user must be able to write to the host directory. You may need to change directory ownership to user ID 1001 with this command:
$ sudo chown 1001:root <path>
2.2.7. Enabling the Registry Console
OpenShift Container Platform provides a web-based interface to the integrated registry. This registry console is an optional component for browsing and managing images. It is deployed as a stateless service running as a pod.
If you installed OpenShift Container Platform as a stand-alone registry, the registry console is already deployed and secured automatically during installation.
If Cockpit is already running, you’ll need to shut it down before proceeding in order to avoid a port conflict (9090 by default) with the registry console.
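For example, if Cockpit is running as the standard systemd socket-activated service on the host, stopping it might look like the following; this is a sketch, and the unit names assume a default Cockpit installation:
$ sudo systemctl stop cockpit.socket cockpit.service
$ sudo systemctl disable cockpit.socket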
2.2.7.1. Deploying the Registry Console
You must first have exposed the registry.
Create a passthrough route in the default project. You will need this when creating the registry console application in the next step.
$ oc create route passthrough --service registry-console \
    --port registry-console \
    -n default
Deploy the registry console application. Replace <openshift_oauth_url> with the URL of the OpenShift Container Platform OAuth provider, which is typically the master.
$ oc new-app -n default --template=registry-console \
    -p OPENSHIFT_OAUTH_PROVIDER_URL="https://<openshift_oauth_url>:8443" \
    -p REGISTRY_HOST=$(oc get route docker-registry -n default --template='{{ .spec.host }}') \
    -p COCKPIT_KUBE_URL=$(oc get route registry-console -n default --template='https://{{ .spec.host }}')
Note: If the redirection URL is wrong when you are trying to log in to the registry console, check your OAuth client with oc get oauthclients.
- Finally, use a web browser to view the console using the route URI.
2.2.7.2. Securing the Registry Console
By default, the registry console generates self-signed TLS certificates if deployed manually per the steps in Deploying the Registry Console. See Troubleshooting the Registry Console for more information.
Use the following steps to add your organization’s signed certificates as a secret volume. This assumes your certificates are available on the oc client host.
Create a .cert file containing the certificate and key. Format the file with:
- One or more BEGIN CERTIFICATE blocks for the server certificate and the intermediate certificate authorities
- A block containing a BEGIN PRIVATE KEY or similar for the key. The key must not be encrypted
For example:
-----BEGIN CERTIFICATE-----
MIIDUzCCAjugAwIBAgIJAPXW+CuNYS6QMA0GCSqGSIb3DQEBCwUAMD8xKTAnBgNV
BAoMIGI0OGE2NGNkNmMwNTQ1YThhZTgxOTEzZDE5YmJjMmRjMRIwEAYDVQQDDAls
...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIDUzCCAjugAwIBAgIJAPXW+CuNYS6QMA0GCSqGSIb3DQEBCwUAMD8xKTAnBgNV
BAoMIGI0OGE2NGNkNmMwNTQ1YThhZTgxOTEzZDE5YmJjMmRjMRIwEAYDVQQDDAls
...
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCyOJ5garOYw0sm
8TBCDSqQ/H1awGMzDYdB11xuHHsxYS2VepPMzMzryHR137I4dGFLhvdTvJUH8lUS
...
-----END PRIVATE KEY-----
The secured registry should contain the following Subject Alternative Names (SAN) list:
Two service hostnames.
For example:
docker-registry.default.svc.cluster.local docker-registry.default.svc
Service IP address.
For example:
172.30.124.220
Use the following command to get the container image registry service IP address:
oc get service docker-registry --template='{{.spec.clusterIP}}'
Public hostname.
For example:
docker-registry-default.apps.example.com
Use the following command to get the container image registry public hostname:
oc get route docker-registry --template '{{.spec.host}}'
For example, the server certificate should contain SAN details similar to the following:
X509v3 Subject Alternative Name: DNS:docker-registry-public.openshift.com, DNS:docker-registry.default.svc, DNS:docker-registry.default.svc.cluster.local, DNS:172.30.2.98, IP Address:172.30.2.98
The registry console loads a certificate from the /etc/cockpit/ws-certs.d directory. It uses the last file with a .cert extension in alphabetical order. Therefore, the .cert file should contain at least two PEM blocks formatted in the OpenSSL style.
If no certificate is found, a self-signed certificate is created using the openssl command and stored in the 0-self-signed.cert file.
Create the secret:
$ oc create secret generic console-secret \
    --from-file=/path/to/console.cert
Add the secrets to the registry-console deployment configuration:
$ oc set volume dc/registry-console --add --type=secret \
    --secret-name=console-secret -m /etc/cockpit/ws-certs.d
This triggers a new deployment of the registry console to include your signed certificates.
2.2.7.3. Troubleshooting the Registry Console
2.2.7.3.1. Debug Mode
The registry console debug mode is enabled using an environment variable. The following command redeploys the registry console in debug mode:
$ oc set env dc registry-console G_MESSAGES_DEBUG=cockpit-ws,cockpit-wrapper
Enabling debug mode allows more verbose logging to appear in the registry console’s pod logs.
2.2.7.3.2. Display SSL Certificate Path
To check which certificate the registry console is using, a command can be run from inside the console pod.
List the pods in the default project and find the registry console’s pod name:
$ oc get pods -n default
NAME                       READY     STATUS    RESTARTS   AGE
registry-console-1-rssrw   1/1       Running   0          1d
Using the pod name from the previous command, get the certificate path that the cockpit-ws process is using. This example shows the console using the auto-generated certificate:
$ oc exec registry-console-1-rssrw remotectl certificate
certificate: /etc/cockpit/ws-certs.d/0-self-signed.cert
2.3. Accessing the Registry
2.3.1. Viewing Logs
To view the logs for the container image registry, use the oc logs command with the deployment configuration:
$ oc logs dc/docker-registry
2015-05-01T19:48:36.300593110Z time="2015-05-01T19:48:36Z" level=info msg="version=v2.0.0+unknown"
2015-05-01T19:48:36.303294724Z time="2015-05-01T19:48:36Z" level=info msg="redis not configured" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002
2015-05-01T19:48:36.303422845Z time="2015-05-01T19:48:36Z" level=info msg="using inmemory layerinfo cache" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002
2015-05-01T19:48:36.303433991Z time="2015-05-01T19:48:36Z" level=info msg="Using OpenShift Auth handler"
2015-05-01T19:48:36.303439084Z time="2015-05-01T19:48:36Z" level=info msg="listening on :5000" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002
2.3.2. File Storage
Tag and image metadata is stored in OpenShift Container Platform, but the registry stores layer and signature data in a volume that is mounted into the registry container at /registry. As oc exec does not work on privileged containers, to view a registry’s contents you must manually SSH into the node housing the registry pod’s container, then run docker exec on the container itself:
List the current pods to find the pod name of your container image registry:
# oc get pods
Then, use oc describe to find the host name for the node running the container:
# oc describe pod <pod_name>
Log in to the desired node:
# ssh node.example.com
List the running containers from the default project on the node host and identify the container ID for the container image registry:
# docker ps --filter=name=registry_docker-registry.*_default_
List the registry contents using the oc rsh command:
# oc rsh dc/docker-registry find /registry
/registry/docker
/registry/docker/registry
/registry/docker/registry/v2
/registry/docker/registry/v2/blobs 1
/registry/docker/registry/v2/blobs/sha256
/registry/docker/registry/v2/blobs/sha256/ed
/registry/docker/registry/v2/blobs/sha256/ed/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810
/registry/docker/registry/v2/blobs/sha256/ed/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810/data 2
/registry/docker/registry/v2/blobs/sha256/a3
/registry/docker/registry/v2/blobs/sha256/a3/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
/registry/docker/registry/v2/blobs/sha256/a3/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4/data
/registry/docker/registry/v2/blobs/sha256/f7
/registry/docker/registry/v2/blobs/sha256/f7/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845
/registry/docker/registry/v2/blobs/sha256/f7/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845/data
/registry/docker/registry/v2/repositories 3
/registry/docker/registry/v2/repositories/p1
/registry/docker/registry/v2/repositories/p1/pause 4
/registry/docker/registry/v2/repositories/p1/pause/_manifests
/registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions
/registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256
/registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf
/registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures 5
/registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures/sha256
/registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures/sha256/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810
/registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures/sha256/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810/link 6
/registry/docker/registry/v2/repositories/p1/pause/_uploads 7
/registry/docker/registry/v2/repositories/p1/pause/_layers 8
/registry/docker/registry/v2/repositories/p1/pause/_layers/sha256
/registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
/registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4/link 9
/registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845
/registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845/link
- 1
- This directory stores all layers and signatures as blobs.
- 2
- This file contains the blob’s contents.
- 3
- This directory stores all the image repositories.
- 4
- This directory is for a single image repository p1/pause.
- 5
- This directory contains signatures for a particular image manifest revision.
- 6
- This file contains a reference back to a blob (which contains the signature data).
- 7
- This directory contains any layers that are currently being uploaded and staged for the given repository.
- 8
- This directory contains links to all the layers this repository references.
- 9
- This file contains a reference to a specific layer that has been linked into this repository via an image.
2.3.3. Accessing the Registry Directly
For advanced usage, you can access the registry directly to invoke docker commands. This allows you to push images to or pull them from the integrated registry directly using operations like docker push or docker pull. To do so, you must be logged in to the registry using the docker login command. The operations you can perform depend on your user permissions, as described in the following sections.
2.3.3.1. User Prerequisites
To access the registry directly, the user that you use must satisfy the following, depending on your intended usage:
For any direct access, you must have a regular user for your preferred identity provider. A regular user can generate an access token required for logging in to the registry. System users, such as system:admin, cannot obtain access tokens and, therefore, cannot access the registry directly.
For example, if you are using HTPASSWD authentication, you can create one using the following command:
# htpasswd /etc/origin/master/htpasswd <user_name>
For pulling images, for example when using the docker pull command, the user must have the registry-viewer role. To add this role:
$ oc policy add-role-to-user registry-viewer <user_name>
For writing or pushing images, for example when using the docker push command, the user must have the registry-editor role. To add this role:
$ oc policy add-role-to-user registry-editor <user_name>
For more information on user permissions, see Managing Role Bindings.
2.3.3.2. Logging in to the Registry
Ensure your user satisfies the prerequisites for accessing the registry directly.
To log in to the registry directly:
Ensure you are logged in to OpenShift Container Platform as a regular user:
$ oc login
Log in to the container image registry by using your access token:
$ docker login -u openshift -p $(oc whoami -t) <registry_ip>:<port>
You can pass any value for the username; the token contains all the necessary information. Passing a username that contains colons will result in a login failure.
2.3.3.3. Pushing and Pulling Images
After logging in to the registry, you can perform docker pull
and docker push
operations against your registry.
You can pull arbitrary images, but if you have the system:registry role added, you can only push images to the registry in your project.
In the following examples, we use:
Component | Value
<registry_ip> | 172.30.124.220
<port> | 5000
<project> | openshift
<image> | busybox
<tag> | omitted (defaults to latest)
Pull an arbitrary image:
$ docker pull docker.io/busybox
Tag the new image with the form <registry_ip>:<port>/<project>/<image>. The project name must appear in this pull specification for OpenShift Container Platform to correctly place and later access the image in the registry.
$ docker tag docker.io/busybox 172.30.124.220:5000/openshift/busybox
Note: Your regular user must have the system:image-builder role for the specified project, which allows the user to write or push an image. Otherwise, the docker push in the next step will fail. To test, you can create a new project to push the busybox image.
Push the newly-tagged image to your registry:
$ docker push 172.30.124.220:5000/openshift/busybox
...
cf2616975b4a: Image successfully pushed
Digest: sha256:3662dd821983bc4326bee12caec61367e7fb6f6a3ee547cbaff98f77403cab55
2.3.4. Accessing Registry Metrics
The OpenShift Container Registry provides an endpoint for Prometheus metrics. Prometheus is a stand-alone, open source systems monitoring and alerting toolkit.
The metrics are exposed at the /extensions/v2/metrics path of the registry endpoint. However, this route must first be enabled; see Extended Registry Configuration for instructions.
The following is a simple example of a metrics query:
$ curl -s -u <user>:<secret> \
http://172.30.30.30:5000/extensions/v2/metrics | grep openshift | head -n 10
# HELP openshift_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which OpenShift was built.
# TYPE openshift_build_info gauge
openshift_build_info{gitCommit="67275e1",gitVersion="v3.6.0-alpha.1+67275e1-803",major="3",minor="6+"} 1
# HELP openshift_registry_request_duration_seconds Request latency summary in microseconds for each operation
# TYPE openshift_registry_request_duration_seconds summary
openshift_registry_request_duration_seconds{name="test/origin-pod",operation="blobstore.create",quantile="0.5"} 0
openshift_registry_request_duration_seconds{name="test/origin-pod",operation="blobstore.create",quantile="0.9"} 0
openshift_registry_request_duration_seconds{name="test/origin-pod",operation="blobstore.create",quantile="0.99"} 0
openshift_registry_request_duration_seconds_sum{name="test/origin-pod",operation="blobstore.create"} 0
openshift_registry_request_duration_seconds_count{name="test/origin-pod",operation="blobstore.create"} 5
Another method to access the metrics is to use a cluster role. You still need to enable the endpoint, but you do not need to specify a <secret>. The part of the configuration file responsible for metrics should look like this:
openshift:
  version: 1.0
  metrics:
    enabled: true
...
You must create a cluster role if you do not already have one to access the metrics:
$ cat <<EOF |
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-scraper
rules:
- apiGroups:
  - image.openshift.io
  resources:
  - registry/metrics
  verbs:
  - get
EOF
oc create -f -
To add this role to a user, run the following command:
$ oc adm policy add-cluster-role-to-user prometheus-scraper <username>
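Assuming the user from the previous step is logged in, a metrics query using that user's bearer token instead of the shared secret might then look like the following; this is a sketch, and the registry address matches the earlier example:
$ curl -s -H "Authorization: Bearer $(oc whoami -t)" \
    http://172.30.30.30:5000/extensions/v2/metrics | grep openshift | head -n 10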
See the upstream Prometheus documentation for more advanced queries and recommended visualizers.
2.4. Securing and Exposing the Registry
2.4.1. Overview
By default, the OpenShift Container Platform registry is secured during cluster installation so that it serves traffic via TLS. A passthrough route is also created by default to expose the service externally.
If for any reason your registry has not been secured or exposed, see the following sections for steps on how to manually do so.
2.4.2. Manually Securing the Registry
To manually secure the registry to serve traffic via TLS:
- Deploy the registry.
Fetch the service IP and port of the registry:
$ oc get svc/docker-registry
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
docker-registry   ClusterIP   172.30.82.152   <none>        5000/TCP   1d
You can use an existing server certificate, or create a key and server certificate valid for specified IPs and host names, signed by a specified CA. To create a server certificate for the registry service IP and the docker-registry.default.svc.cluster.local host name, run the following command from the first master listed in the Ansible host inventory file, by default /etc/ansible/hosts:
$ oc adm ca create-server-cert \
    --signer-cert=/etc/origin/master/ca.crt \
    --signer-key=/etc/origin/master/ca.key \
    --signer-serial=/etc/origin/master/ca.serial.txt \
    --hostnames='docker-registry.default.svc.cluster.local,docker-registry.default.svc,172.30.124.220' \
    --cert=/etc/secrets/registry.crt \
    --key=/etc/secrets/registry.key
If the registry will be exposed externally, add the public route host name in the --hostnames flag:
--hostnames='mydocker-registry.example.com,docker-registry.default.svc.cluster.local,172.30.124.220' \
See Redeploying Registry and Router Certificates for additional details on updating the default certificate so that the route is externally accessible.
Note: The oc adm ca create-server-cert command generates a certificate that is valid for two years. This can be altered with the --expire-days option, but for security reasons, it is recommended to not make it greater than this value.
Create the secret for the registry certificates:
$ oc create secret generic registry-certificates \
    --from-file=/etc/secrets/registry.crt \
    --from-file=/etc/secrets/registry.key
Add the secret to the registry pod’s service accounts (including the default service account):
$ oc secrets link registry registry-certificates
$ oc secrets link default registry-certificates
Note: Limiting secrets to only the service accounts that reference them is disabled by default. This means that if serviceAccountConfig.limitSecretReferences is set to false (the default setting) in the master configuration file, linking secrets to a service is not required.
Pause the docker-registry service:
$ oc rollout pause dc/docker-registry
Add the secret volume to the registry deployment configuration:
$ oc set volume dc/docker-registry --add --type=secret \
    --secret-name=registry-certificates -m /etc/secrets
Enable TLS by adding the following environment variables to the registry deployment configuration:
$ oc set env dc/docker-registry \
    REGISTRY_HTTP_TLS_CERTIFICATE=/etc/secrets/registry.crt \
    REGISTRY_HTTP_TLS_KEY=/etc/secrets/registry.key
See the Configuring a registry section of the Docker documentation for more information.
Update the scheme used for the registry’s liveness probe from HTTP to HTTPS:
$ oc patch dc/docker-registry -p '{"spec": {"template": {"spec": {"containers":[{ "name":"registry", "livenessProbe": {"httpGet": {"scheme":"HTTPS"}} }]}}}}'
If your registry was initially deployed on OpenShift Container Platform 3.2 or later, update the scheme used for the registry’s readiness probe from HTTP to HTTPS:
$ oc patch dc/docker-registry -p '{"spec": {"template": {"spec": {"containers":[{ "name":"registry", "readinessProbe": {"httpGet": {"scheme":"HTTPS"}} }]}}}}'
Resume the docker-registry service:
$ oc rollout resume dc/docker-registry
Validate the registry is running in TLS mode. Wait until the latest docker-registry deployment completes and verify the Docker logs for the registry container. You should find an entry for listening on :5000, tls.
$ oc logs dc/docker-registry | grep tls
time="2015-05-27T05:05:53Z" level=info msg="listening on :5000, tls" instance.id=deeba528-c478-41f5-b751-dc48e4935fc2
Copy the CA certificate to the Docker certificates directory. This must be done on all nodes in the cluster:
$ dcertsdir=/etc/docker/certs.d
$ destdir_addr=$dcertsdir/172.30.124.220:5000
$ destdir_name=$dcertsdir/docker-registry.default.svc.cluster.local:5000
$ sudo mkdir -p $destdir_addr $destdir_name
$ sudo cp ca.crt $destdir_addr 1
$ sudo cp ca.crt $destdir_name
- 1
- The ca.crt file is a copy of /etc/origin/master/ca.crt on the master.
When using authentication, some versions of docker also require you to configure your cluster to trust the certificate at the OS level.
Copy the certificate:
$ cp /etc/origin/master/ca.crt /etc/pki/ca-trust/source/anchors/myregistrydomain.com.crt
Run:
$ update-ca-trust enable
Remove the --insecure-registry option only for this particular registry in the /etc/sysconfig/docker file. Then, reload the daemon and restart the docker service to reflect this configuration change:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
Validate the docker client connection. Running docker push to the registry or docker pull from the registry should succeed. Make sure you have logged into the registry.
$ docker tag|push <registry/image> <internal_registry/project/image>
For example:
$ docker pull busybox
$ docker tag docker.io/busybox 172.30.124.220:5000/openshift/busybox
$ docker push 172.30.124.220:5000/openshift/busybox
...
cf2616975b4a: Image successfully pushed
Digest: sha256:3662dd821983bc4326bee12caec61367e7fb6f6a3ee547cbaff98f77403cab55
2.4.3. Manually Exposing a Secure Registry
Instead of logging in to the OpenShift Container Platform registry from within the OpenShift Container Platform cluster, you can gain external access to it by first securing the registry and then exposing it with a route. This allows you to log in to the registry from outside the cluster using the route address, and to tag and push images using the route host.
Each of the following prerequisite steps are performed by default during a typical cluster installation. If they have not been, perform them manually:
A passthrough route should have been created by default for the registry during the initial cluster installation:
Verify whether the route exists:
$ oc get route/docker-registry -o yaml
apiVersion: v1
kind: Route
metadata:
  name: docker-registry
spec:
  host: <host>
  to:
    kind: Service
    name: docker-registry
  tls:
    termination: passthrough
Note: Re-encrypt routes are also supported for exposing the secure registry.
If it does not exist, create the route via the oc create route passthrough command, specifying the registry as the route’s service. By default, the name of the created route is the same as the service name:
Get the docker-registry service details:
$ oc get svc
NAME              CLUSTER_IP       EXTERNAL_IP   PORT(S)                 SELECTOR                  AGE
docker-registry   172.30.69.167    <none>        5000/TCP                docker-registry=default   4h
kubernetes        172.30.0.1       <none>        443/TCP,53/UDP,53/TCP   <none>                    4h
router            172.30.172.132   <none>        80/TCP                  router=router             4h
Create the route:
$ oc create route passthrough \
    --service=docker-registry \
    --hostname=<host>
route "docker-registry" created
Next, you must trust the certificates being used for the registry on your host system to allow the host to push and pull images. The certificates referenced were created when you secured your registry.
$ sudo mkdir -p /etc/docker/certs.d/<host>
$ sudo cp <ca_certificate_file> /etc/docker/certs.d/<host>
$ sudo systemctl restart docker
Log in to the registry using the information from securing the registry. However, this time point to the host name used in the route rather than your service IP. When logging in to a secured and exposed registry, make sure you specify the registry in the docker login command:
# docker login -e user@company.com \
    -u f83j5h6 \
    -p Ju1PeM47R0B92Lk3AZp-bWJSck2F7aGCiZ66aFGZrs2 \
    <host>
You can now tag and push images using the route host. For example, to tag and push a busybox image in a project called test:
$ oc get imagestreams -n test
NAME      DOCKER REPO   TAGS      UPDATED

$ docker pull busybox
$ docker tag busybox <host>/test/busybox
$ docker push <host>/test/busybox
The push refers to a repository [<host>/test/busybox] (len: 1)
8c2e06607696: Image already exists
6ce2e90b0bc7: Image successfully pushed
cf2616975b4a: Image successfully pushed
Digest: sha256:6c7e676d76921031532d7d9c0394d0da7c2906f4cb4c049904c4031147d8ca31

$ docker pull <host>/test/busybox
latest: Pulling from <host>/test/busybox
cf2616975b4a: Already exists
6ce2e90b0bc7: Already exists
8c2e06607696: Already exists
Digest: sha256:6c7e676d76921031532d7d9c0394d0da7c2906f4cb4c049904c4031147d8ca31
Status: Image is up to date for <host>/test/busybox:latest

$ oc get imagestreams -n test
NAME      DOCKER REPO                       TAGS      UPDATED
busybox   172.30.11.215:5000/test/busybox   latest    2 seconds ago
Note: Your image streams will have the IP address and port of the registry service, not the route name and port. See oc get imagestreams for details.
2.4.4. Manually Exposing a Non-Secure Registry
Instead of securing the registry in order to expose the registry, you can simply expose a non-secure registry for non-production OpenShift Container Platform environments. This allows you to have an external route to the registry without using SSL certificates.
Only non-production environments should expose a non-secure registry to external access.
To expose a non-secure registry:
Expose the registry:
# oc expose service docker-registry --hostname=<hostname> -n default
This creates a route with a definition similar to the following:
apiVersion: v1
kind: Route
metadata:
  creationTimestamp: null
  labels:
    docker-registry: default
  name: docker-registry
spec:
  host: registry.example.com
  port:
    targetPort: "5000"
  to:
    kind: Service
    name: docker-registry
status: {}
Verify that the route has been created successfully:
# oc get route
NAME              HOST/PORT              PATH      SERVICE           LABELS                    INSECURE POLICY   TLS TERMINATION
docker-registry   registry.example.com             docker-registry   docker-registry=default
Check the health of the registry:
$ curl -v http://registry.example.com/healthz
Expect an HTTP 200/OK message.
After exposing the registry, update your /etc/sysconfig/docker file by adding the port number to the OPTIONS entry. For example:
OPTIONS='--selinux-enabled --insecure-registry=172.30.0.0/16 --insecure-registry registry.example.com:80'
Important: The above options should be added on the client from which you are trying to log in.
Also, ensure that Docker is running on the client.
When logging in to the non-secured and exposed registry, make sure you specify the registry in the docker login command. For example:
# docker login -e user@company.com \
    -u f83j5h6 \
    -p Ju1PeM47R0B92Lk3AZp-bWJSck2F7aGCiZ66aFGZrs2 \
    <host>
2.5. Extended Registry Configuration
2.5.1. Maintaining the Registry IP Address
OpenShift Container Platform refers to the integrated registry by its service IP address, so if you decide to delete and recreate the docker-registry service, you can ensure a completely transparent transition by arranging to re-use the old IP address in the new service. If a new IP address cannot be avoided, you can minimize cluster disruption by rebooting only the masters.
- Re-using the Address
- To re-use the IP address, you must save the IP address of the old docker-registry service prior to deleting it, and arrange to replace the newly assigned IP address with the saved one in the new docker-registry service.
Make a note of the clusterIP for the service:
$ oc get svc/docker-registry -o yaml | grep clusterIP:
Delete the service:
$ oc delete svc/docker-registry dc/docker-registry
Create the registry definition in registry.yaml, replacing <options> with, for example, those used in step 3 of the instructions in the Non-Production Use section:
$ oc adm registry <options> -o yaml > registry.yaml
- Edit registry.yaml, find the Service there, and change its clusterIP to the address noted in step 1.
to the address noted in step 1. Create the registry using the modified registry.yaml:
$ oc create -f registry.yaml
- Rebooting the Masters
- If you are unable to re-use the IP address, any operation that uses a pull specification that includes the old IP address will fail. To minimize cluster disruption, you must reboot the masters:
# master-restart api
# master-restart controllers
This ensures that the old registry URL, which includes the old IP address, is cleared from the cache.
We recommend against rebooting the entire cluster because that incurs unnecessary downtime for pods and does not actually clear the cache.
2.5.2. Configuring an External Registry Search List
You can use the /etc/containers/registries.conf file to create a list of Docker registries to search for container images.
The /etc/containers/registries.conf file is a list of registry servers that OpenShift Container Platform should search against when a user pulls an image using the image short name, such as myimage:latest. You can customize the order of the search, specify secure and insecure registries, and define a blocked registry list. OpenShift Container Platform does not search or allow pulls from registries on the blocked list.
For example, if a user wants to pull the myimage:latest image, OpenShift Container Platform searches the registries in the order they appear in the list until it finds the myimage:latest image.
The registry search list allows you to curate a set of images and templates that are available for download by OpenShift Container Platform users. You can place these images in one or more Docker registries, add the registry to the list, and pull those images into your cluster.
When using the registry search list, OpenShift Container Platform will not pull images from a registry that is not in the search list.
To configure a registry search list:
Edit the /etc/containers/registries.conf file to add or edit the following parameters as needed:
[registries.search]
registries = ["reg1.example.com", "reg2.example.com"]

[registries.insecure]
registries = ["reg3.example.com"]

[registries.block]
registries = ['docker.io']
2.5.3. Setting the Registry Host Name
You can configure the host name and port the registry is known by for both internal and external references. By doing this, image streams will provide hostname based push and pull specifications for images, allowing consumers of the images to be isolated from changes to the registry service IP and potentially allowing image streams and their references to be portable between clusters.
To set the hostname used to reference the registry from within the cluster, set the internalRegistryHostname value in the imagePolicyConfig section of the master configuration file. The external host name is controlled by setting the externalRegistryHostname value in the same location.
Image Policy Configuration
imagePolicyConfig:
  internalRegistryHostname: docker-registry.default.svc.cluster.local:5000
  externalRegistryHostname: docker-registry.mycompany.com
The registry itself must be configured with the same internal hostname value. This can be accomplished by setting the REGISTRY_OPENSHIFT_SERVER_ADDR environment variable on the registry deployment configuration, or by setting the value in the OpenShift section of the registry configuration.
If you have enabled TLS for your registry, the server certificate must include the hostnames by which you expect the registry to be referenced. See Securing the Registry for instructions on adding hostnames to the server certificate.
2.5.4. Overriding the Registry Configuration
You can override the integrated registry’s default configuration, found by default at /config.yml in a running registry’s container, with your own custom configuration.
Upstream configuration options in this file may also be overridden using environment variables. The middleware section is an exception as there are just a few options that can be overridden using environment variables. Learn how to override specific configuration options.
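For example, the upstream registry maps configuration keys to REGISTRY_-prefixed environment variables, so a temporary log-level change might be applied like this; this is a sketch, REGISTRY_LOG_LEVEL follows the upstream naming convention and the value is illustrative:
$ oc set env dc/docker-registry REGISTRY_LOG_LEVEL=debug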
To enable management of the registry configuration file directly and deploy an updated configuration using a ConfigMap:
- Deploy the registry.
Edit the registry configuration file locally as needed. The initial YAML file deployed on the registry is provided below. Review supported options.
Registry Configuration File
version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /registry
  delete:
    enabled: true
auth:
  openshift:
    realm: openshift
middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
      options:
        acceptschema2: true
        pullthrough: true
        enforcequota: false
        projectcachettl: 1m
        blobrepositorycachettl: 10m
  storage:
    - name: openshift
openshift:
  version: 1.0
  metrics:
    enabled: false
    secret: <secret>
Create a ConfigMap holding the content of each file in this directory:
$ oc create configmap registry-config \
    --from-file=</path/to/custom/registry/config.yml>/
Add the registry-config ConfigMap as a volume to the registry’s deployment configuration to mount the custom configuration file at /etc/docker/registry/:
$ oc set volume dc/docker-registry --add --type=configmap \
    --configmap-name=registry-config -m /etc/docker/registry/
Update the registry to reference the configuration path from the previous step by adding the following environment variable to the registry’s deployment configuration:
$ oc set env dc/docker-registry \
    REGISTRY_CONFIGURATION_PATH=/etc/docker/registry/config.yml
This may be performed as an iterative process to achieve the desired configuration. For example, during troubleshooting, the configuration may be temporarily updated to put it in debug mode.
To update an existing configuration:
This procedure will overwrite the currently deployed registry configuration.
- Edit the local registry configuration file, config.yml.
Delete the registry-config configmap:
$ oc delete configmap registry-config
Recreate the configmap to reference the updated configuration file:
$ oc create configmap registry-config \
    --from-file=</path/to/custom/registry/config.yml>/
Redeploy the registry to read the updated configuration:
$ oc rollout latest docker-registry
Maintain configuration files in a source control repository.
2.5.5. Registry Configuration Reference
There are many configuration options available in the upstream docker distribution library. Not all configuration options are supported or enabled. Use this section as a reference when overriding the registry configuration.
Upstream configuration options in this file may also be overridden using environment variables. However, the middleware section may not be overridden using environment variables. Learn how to override specific configuration options.
2.5.5.1. Log
Upstream options are supported.
Example:
log:
  level: debug
  formatter: text
  fields:
    service: registry
    environment: staging
2.5.5.2. Hooks
Mail hooks are not supported.
2.5.5.3. Storage
This section lists the supported registry storage drivers. See the container image registry documentation for more information.
The following list includes storage drivers that need to be configured in the registry’s configuration file:
- Filesystem. Filesystem is the default and does not need to be configured.
- S3. See the CloudFront configuration documentation for more information.
- OpenStack Swift
- Google Cloud Storage (GCS)
- Microsoft Azure
- Aliyun OSS
General registry storage configuration options are supported. See the container image registry documentation for more information.
The following storage options need to be configured through the filesystem driver:
For more information on supported persistent storage drivers, see Configuring Persistent Storage and Persistent Storage Examples.
General Storage Configuration Options
storage:
delete:
enabled: true 1
redirect:
disable: false
cache:
blobdescriptor: inmemory
maintenance:
uploadpurging:
enabled: true
age: 168h
interval: 24h
dryrun: false
readonly:
enabled: false
- 1
- This entry is mandatory for image pruning to work properly.
2.5.5.4. Auth
Auth options should not be altered. The openshift extension is the only supported option.
auth:
  openshift:
    realm: openshift
2.5.5.5. Middleware
The repository middleware extension allows you to configure the OpenShift Container Platform middleware responsible for interaction with OpenShift Container Platform and image proxying.
middleware:
  registry:
    - name: openshift 1
  repository:
    - name: openshift 2
      options:
        acceptschema2: true 3
        pullthrough: true 4
        mirrorpullthrough: true 5
        enforcequota: false 6
        projectcachettl: 1m 7
        blobrepositorycachettl: 10m 8
  storage:
    - name: openshift 9
- 1 2 9
- These entries are mandatory. Their presence ensures required components are loaded. These values should not be changed.
- 3
- Allows you to store manifest schema v2 during a push to the registry. See below for more details.
- 4
- Allows the registry to act as a proxy for remote blobs. See below for more details.
- 5
- Allows the registry to cache blobs served from remote registries for fast access later. The mirroring starts when the blob is accessed for the first time. The option has no effect if pullthrough is disabled.
- 6
- Prevents blob uploads that exceed the size limit defined in the targeted project.
- 7
- An expiration timeout for limits cached in the registry. The lower the value, the less time it takes for the limit changes to propagate to the registry. However, the registry will query limits from the server more frequently and, as a consequence, pushes will be slower.
- 8
- An expiration timeout for remembered associations between a blob and a repository. The higher the value, the higher the probability of a fast lookup and more efficient registry operation. On the other hand, memory usage will rise, as will the risk of serving an image layer to a user who is no longer authorized to access it.
2.5.5.5.1. S3 Driver Configuration
If you want to use an S3 region that is not supported by the integrated registry you are using, then you can specify a regionendpoint to avoid the region validation error.
For more information about using Amazon Simple Storage Service storage, see Amazon S3 as a Storage Back-end.
For example:
version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    blobdescriptor: inmemory
  delete:
    enabled: true
  s3:
    accesskey: BJKMSZBRESWJQXRWMAEQ
    secretkey: 5ah5I91SNXbeoUXXDasFtadRqOdy62JzlnOW1goS
    bucket: docker.myregistry.com
    region: eu-west-3
    regionendpoint: https://s3.eu-west-3.amazonaws.com
auth:
  openshift:
    realm: openshift
middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
  storage:
    - name: openshift
Verify that the region and regionendpoint fields are consistent with each other. Otherwise the integrated registry will start, but it cannot read or write anything to the S3 storage.
The regionendpoint can also be useful if you use S3-compatible storage other than Amazon S3.
2.5.5.5.2. CloudFront Middleware
The CloudFront middleware extension can be added to support the AWS CloudFront CDN storage provider. CloudFront middleware speeds up distribution of image content internationally. The blobs are distributed to several edge locations around the world. The client is always directed to the edge with the lowest latency.
The CloudFront middleware extension can only be used with S3 storage. It is utilized only during blob serving. Therefore, only blob downloads can be sped up, not uploads.
The following is an example of minimal configuration of S3 storage driver with a CloudFront middleware:
version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    blobdescriptor: inmemory
  delete:
    enabled: true
  s3: 1
    accesskey: BJKMSZBRESWJQXRWMAEQ
    secretkey: 5ah5I91SNXbeoUXXDasFtadRqOdy62JzlnOW1goS
    region: us-east-1
    bucket: docker.myregistry.com
auth:
  openshift:
    realm: openshift
middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
  storage:
    - name: cloudfront 2
      options:
        baseurl: https://jrpbyn0k5k88bi.cloudfront.net/ 3
        privatekey: /etc/docker/cloudfront-ABCEDFGHIJKLMNOPQRST.pem 4
        keypairid: ABCEDFGHIJKLMNOPQRST 5
    - name: openshift
- 1
- The S3 storage must be configured the same way regardless of CloudFront middleware.
- 2
- The CloudFront storage middleware needs to be listed before OpenShift middleware.
- 3
- The CloudFront base URL. In the AWS management console, this is listed as Domain Name of CloudFront distribution.
- 4
- The location of your AWS private key on the filesystem. This must not be confused with an Amazon EC2 key pair. See the AWS documentation on creating CloudFront key pairs for your trusted signers. The file needs to be mounted as a secret into the registry pod.
- 5
- The ID of your CloudFront key pair.
2.5.5.5.3. Overriding Middleware Configuration Options
The middleware section cannot be overridden using environment variables. There are a few exceptions, however. For example:
middleware:
  repository:
    - name: openshift
      options:
        acceptschema2: true 1
        pullthrough: true 2
        mirrorpullthrough: true 3
        enforcequota: false 4
        projectcachettl: 1m 5
        blobrepositorycachettl: 10m 6
- 1
- A configuration option that can be overridden by the boolean environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ACCEPTSCHEMA2, which allows for the ability to accept manifest schema v2 on manifest put requests. Recognized values are true and false (which applies to all the other boolean variables below).
- 2
- A configuration option that can be overridden by the boolean environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_PULLTHROUGH, which enables a proxy mode for remote repositories.
- 3
- A configuration option that can be overridden by the boolean environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_MIRRORPULLTHROUGH, which instructs the registry to mirror blobs locally if serving remote blobs.
- 4
- A configuration option that can be overridden by the boolean environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA, which allows the ability to turn quota enforcement on or off. By default, quota enforcement is off.
- 5
- A configuration option that can be overridden by the environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_PROJECTCACHETTL, specifying an eviction timeout for project quota objects. It takes a valid time duration string (for example, 2m). If empty, you get the default timeout. If zero (0m), caching is disabled.
- 6
- A configuration option that can be overridden by the environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_BLOBREPOSITORYCACHETTL, specifying an eviction timeout for associations between a blob and the containing repository. The format of the value is the same as in the projectcachettl case.
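For example, one of the variables above could be applied to a running registry with oc set env; this is a sketch, and the chosen variable and value are illustrative:
$ oc set env dc/docker-registry \
    REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA=true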
2.5.5.5.4. Image Pullthrough
If enabled, the registry will attempt to fetch the requested blob from a remote registry unless the blob exists locally. The remote candidates are calculated from DockerImage entries stored in the status of the image stream that a client pulls from. All the unique remote registry references in such entries will be tried in turn until the blob is found.
Pullthrough will only occur if an image stream tag exists for the image being pulled. For example, if the image being pulled is docker-registry.default.svc:5000/yourproject/yourimage:prod, then the registry will look for an image stream tag named yourimage:prod in the project yourproject. If it finds one, it will attempt to pull the image using the dockerImageReference associated with that image stream tag.
When performing pullthrough, the registry will use pull credentials found in the project associated with the image stream tag that is being referenced. This capability also makes it possible for you to pull images that reside in a registry you do not have credentials to access, as long as you have access to the image stream tag that references the image.
You must ensure that your registry has appropriate certificates to trust any external registries you do a pullthrough against. The certificates need to be placed in the /etc/pki/tls/certs directory on the pod. You can mount the certificates using a configuration map or secret. Note that the entire /etc/pki/tls/certs directory must be replaced. You must include the new certificates and replace the system certificates in your secret or configuration map that you mount.
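A minimal sketch of mounting replacement certificates with a configuration map might look like the following; the configuration map name and source directory are hypothetical, and the directory must contain the system CA bundle plus your additional certificates:
$ oc create configmap registry-pullthrough-certs \
    --from-file=/path/to/complete/certs/directory/
$ oc set volume dc/docker-registry --add --type=configmap \
    --configmap-name=registry-pullthrough-certs -m /etc/pki/tls/certs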
Note that by default image stream tags use a reference policy type of Source, which means that when the image stream reference is resolved to an image pull specification, the specification used will point to the source of the image. For images hosted on external registries, this will be the external registry, and as a result the resource will reference and pull the image from the external registry (for example, registry.redhat.io/openshift3/jenkins-2-rhel7) and pullthrough will not apply. To ensure that resources referencing image streams use a pull specification that points to the internal registry, the image stream tag should use a reference policy type of Local. More information is available on Reference Policy.
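For example, one way to set an image stream tag to the Local reference policy might be with the oc tag command and its --reference-policy option; this is a sketch, and the stream and tag names are illustrative:
$ oc tag registry.redhat.io/openshift3/jenkins-2-rhel7:latest jenkins:latest \
    --reference-policy=local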
Pullthrough is on by default. However, it can be disabled using a configuration option.
By default, all the remote blobs served this way are stored locally for subsequent faster access unless mirrorpullthrough is disabled. The downside of this mirroring feature is increased storage usage.
The mirroring starts when a client tries to fetch at least a single byte of the blob. To pre-fetch a particular image into integrated registry before it is actually needed, you can run the following command:
$ oc get imagestreamtag/${IS}:${TAG} -o jsonpath='{ .image.dockerImageLayers[*].name }' | \
    xargs -n1 -I {} curl -H "Range: bytes=0-1" -u user:${TOKEN} \
    http://${REGISTRY_IP}:${PORT}/v2/default/mysql/blobs/{}
This OpenShift Container Platform mirroring feature should not be confused with the upstream registry pull through cache feature, which is a similar but distinct capability.
2.5.5.5.5. Manifest Schema v2 Support
Each image has a manifest describing its blobs, instructions for running it, and additional metadata. The manifest is versioned, with each version having a different structure and fields as it evolves over time. The same image can be represented by multiple manifest versions, and each version will have a different digest.
The registry currently supports manifest v2 schema 1 (schema1) and manifest v2 schema 2 (schema2). The former is being obsoleted but will be supported for an extended amount of time.
The default configuration is to store schema2.
You should be wary of compatibility issues with various Docker clients:
- Docker clients of version 1.9 or older support only schema1. Any manifest this client pulls or pushes will be of this legacy schema.
- Docker clients of version 1.10 support both schema1 and schema2. By default, they push the latter to the registry if it supports the newer schema.
The registry always returns an image stored with schema1 unchanged to the client. Schema2 is transferred unchanged only to newer Docker clients; for older clients, it is converted on-the-fly to schema1.
This has significant consequences. For example, an image pushed to the registry by a newer Docker client cannot be pulled by digest by an older Docker client, because the stored image's manifest is of schema2 and its digest can be used to pull only that version of the manifest.
Once you are confident that all the registry clients support schema2, it is safe to enable its support in the registry. See the middleware configuration reference above for the particular option.
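If your configuration follows the middleware layout referenced above, the option in question is acceptschema2 in the same openshift repository middleware section. A minimal sketch of enabling it:
middleware:
  repository:
    - name: openshift
      options:
        acceptschema2: true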
2.5.5.6. OpenShift
This section reviews the configuration of global settings for features specific to OpenShift Container Platform. In a future release, openshift
-related settings in the Middleware section will be obsoleted.
Currently, this section allows you to configure registry metrics collection:
openshift:
  version: 1.0 1
  server:
    addr: docker-registry.default.svc 2
  metrics:
    enabled: false 3
    secret: <secret> 4
  requests:
    read:
      maxrunning: 10 5
      maxinqueue: 10 6
      maxwaitinqueue: 2m 7
    write:
      maxrunning: 10 8
      maxinqueue: 10 9
      maxwaitinqueue: 2m 10
- 1: A mandatory entry specifying the configuration version of this section. The only supported value is 1.0.
- 2: The hostname of the registry. It should be set to the same value configured on the master. It can be overridden by the environment variable REGISTRY_OPENSHIFT_SERVER_ADDR.
- 3: Can be set to true to enable metrics collection. It can be overridden by the boolean environment variable REGISTRY_OPENSHIFT_METRICS_ENABLED.
- 4: A secret used to authorize client requests. Metrics clients must use it as a bearer token in the Authorization header. It can be overridden by the environment variable REGISTRY_OPENSHIFT_METRICS_SECRET.
- 5: Maximum number of simultaneous pull requests. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_READ_MAXRUNNING. Zero indicates no limit.
- 6: Maximum number of queued pull requests. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_READ_MAXINQUEUE. Zero indicates no limit.
- 7: Maximum time a pull request can wait in the queue before being rejected. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_READ_MAXWAITINQUEUE. Zero indicates no limit.
- 8: Maximum number of simultaneous push requests. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_WRITE_MAXRUNNING. Zero indicates no limit.
- 9: Maximum number of queued push requests. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_WRITE_MAXINQUEUE. Zero indicates no limit.
- 10: Maximum time a push request can wait in the queue before being rejected. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_WRITE_MAXWAITINQUEUE. Zero indicates no limit.
See Accessing Registry Metrics for usage information.
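As a quick sketch, assuming the default docker-registry deployment configuration, metrics can be enabled on an existing registry by overriding the environment variables called out above and then querying the metrics endpoint with the secret as a bearer token. The /extensions/v2/metrics path and the placeholder secret value below are assumptions for illustration:
$ oc set env dc/docker-registry REGISTRY_OPENSHIFT_METRICS_ENABLED=true \
    REGISTRY_OPENSHIFT_METRICS_SECRET=<secret>
$ curl -k -H "Authorization: Bearer <secret>" \
    https://docker-registry.default.svc:5000/extensions/v2/metrics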
2.5.5.7. Reporting
Reporting is unsupported.
2.5.5.8. HTTP
Upstream options are supported. Learn how to alter these settings via environment variables. Only the tls section should be altered. For example:
http:
  addr: :5000
  tls:
    certificate: /etc/secrets/registry.crt
    key: /etc/secrets/registry.key
2.5.5.9. Notifications
Upstream options are supported. The REST API Reference provides more comprehensive integration options.
Example:
notifications:
  endpoints:
    - name: registry
      disabled: false
      url: https://url:port/path
      headers:
        Accept:
          - text/plain
      timeout: 500
      threshold: 5
      backoff: 1000
2.5.5.10. Redis
Redis is not supported.
2.5.5.11. Health
Upstream options are supported. The registry deployment configuration provides an integrated health check at /healthz.
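For example, assuming the default internal service host name used elsewhere in this guide, the health endpoint can be probed directly with curl:
$ curl -k https://docker-registry.default.svc:5000/healthz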
2.5.5.12. Proxy
Proxy configuration should not be enabled. This functionality is provided by the OpenShift Container Platform repository middleware extension, pullthrough: true.
2.5.5.13. Cache
The integrated registry actively caches data to reduce the number of calls to slow external resources. There are two caches:
- The storage cache, which is used to cache blob metadata. This cache does not have an expiration time; the data stays until it is explicitly deleted.
- The application cache, which contains associations between blobs and repositories. The data in this cache has an expiration time.
In order to completely turn off the cache, you need to change the configuration:
version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    blobdescriptor: "" 1
openshift:
  version: 1.0
  cache:
    disabled: true 2
    blobrepositoryttl: 10m
- 1: Disables the cache of metadata accessed in the storage backend. Without this cache, the registry server constantly accesses the backend for metadata.
- 2: Disables the cache that contains the blob and repository associations. Without this cache, the registry server continually re-queries the data from the master API and recomputes the associations.
2.6. Known Issues
2.6.1. Overview
The following are the known issues when deploying or using the integrated registry.
2.6.2. Concurrent Build with Registry Pull-through
The local docker-registry deployment takes on additional load. By default, it now caches content from registry.redhat.io. The images from registry.redhat.io for STI builds are now stored in the local registry. Attempts to pull them result in pulls from the local docker-registry. As a result, there are circumstances where extreme numbers of concurrent builds can result in timeouts for the pulls and the build can possibly fail. To alleviate the issue, scale the docker-registry deployment to more than one replica. Check for timeouts in the builder pod’s logs.
2.6.3. Image Push Errors with Scaled Registry Using Shared NFS Volume
When using a scaled registry with a shared NFS volume, you may see one of the following errors during the push of an image:
-
digest invalid: provided digest did not match uploaded content
-
blob upload unknown
-
blob upload invalid
These errors are returned by an internal registry service when Docker attempts to push the image. Its cause originates in the synchronization of file attributes across nodes. Factors such as NFS client side caching, network latency, and layer size can all contribute to potential errors that might occur when pushing an image using the default round-robin load balancing configuration.
You can perform the following steps to minimize the probability of such a failure:
Ensure that the sessionAffinity of your docker-registry service is set to ClientIP:
$ oc get svc/docker-registry --template='{{.spec.sessionAffinity}}'
This should return ClientIP, which is the default in recent OpenShift Container Platform versions. If not, change it:
$ oc patch svc/docker-registry -p '{"spec":{"sessionAffinity": "ClientIP"}}'
-
Ensure that the NFS export line of your registry volume on your NFS server has the no_wdelay option listed; see the example export entry below. The no_wdelay option prevents the server from delaying writes, which greatly improves read-after-write consistency, a requirement of the registry.
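A minimal example export entry in /etc/exports, assuming a hypothetical /exports/registry directory on the NFS server, might look like the following:
/exports/registry *(rw,sync,no_wdelay,root_squash)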
Testing shows issues with using the RHEL NFS server as a storage backend for the container image registry. This includes the OpenShift Container Registry and Quay. Therefore, using the RHEL NFS server to back PVs used by core services is not recommended.
Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift core components.
2.6.4. Pull of Internally Managed Image Fails with "not found" Error
This error occurs when the pulled image is pushed to an image stream different from the one it is being pulled from. This is caused by re-tagging a built image into an arbitrary image stream:
$ oc tag srcimagestream:latest anyproject/pullimagestream:latest
And subsequently pulling from it, using an image reference such as:
internal.registry.url:5000/anyproject/pullimagestream:latest
During a manual Docker pull, this will produce a similar error:
Error: image anyproject/pullimagestream:latest not found
To prevent this, avoid the tagging of internally managed images completely, or re-push the built image to the desired namespace manually.
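A possible manual re-push, using the internal registry host name from the error above together with hypothetical source project and credential values, could look like this:
$ docker login -u <user> -p $(oc whoami -t) internal.registry.url:5000
$ docker pull internal.registry.url:5000/<srcproject>/srcimagestream:latest
$ docker tag internal.registry.url:5000/<srcproject>/srcimagestream:latest \
    internal.registry.url:5000/anyproject/pullimagestream:latest
$ docker push internal.registry.url:5000/anyproject/pullimagestream:latest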
2.6.5. Image Push Fails with "500 Internal Server Error" on S3 Storage
Problems have been reported when the registry runs on an S3 storage back end. Pushing to a container image registry occasionally fails with the following error:
Received unexpected HTTP status: 500 Internal Server Error
To debug this, you need to view the registry logs. In there, look for similar error messages occurring at the time of the failed push:
time="2016-03-30T15:01:21.22287816-04:00" level=error msg="unknown error completing upload: driver.Error{DriverName:\"s3\", Enclosed:(*url.Error)(0xc20901cea0)}" http.request.method=PUT
...
time="2016-03-30T15:01:21.493067808-04:00" level=error msg="response completed with error" err.code=UNKNOWN err.detail="s3: Put https://s3.amazonaws.com/oso-tsi-docker/registry/docker/registry/v2/blobs/sha256/ab/abe5af443833d60cf672e2ac57589410dddec060ed725d3e676f1865af63d2e2/data: EOF" err.message="unknown error" http.request.method=PUT
...
time="2016-04-02T07:01:46.056520049-04:00" level=error msg="error putting into main store: s3: The request signature we calculated does not match the signature you provided. Check your key and signing method." http.request.method=PUT
If you see such errors, contact your Amazon S3 support. There may be a problem in your region or with your particular bucket.
2.6.6. Image Pruning Fails
If you encounter the following error when pruning images:
BLOB sha256:49638d540b2b62f3b01c388e9d8134c55493b1fa659ed84e97cb59b87a6b8e6c error deleting blob
And your registry log contains the following information:
error deleting blob \"sha256:49638d540b2b62f3b01c388e9d8134c55493b1fa659ed84e97cb59b87a6b8e6c\": operation unsupported
It means that your custom configuration file lacks mandatory entries in the storage section, namely storage:delete:enabled
set to true. Add them, re-deploy the registry, and repeat your image pruning operation.
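The corresponding fragment of the registry configuration file is a short sketch like the following; after adding it, redeploy the registry (for example with oc rollout latest dc/docker-registry, assuming the default deployment configuration name) and repeat the pruning operation:
storage:
  delete:
    enabled: true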
Chapter 3. Setting up a Router
3.1. Router Overview
3.1.1. About Routers
There are many ways to get traffic into the cluster. The most common approach is to use the OpenShift Container Platform router as the ingress point for external traffic destined for services in your OpenShift Container Platform installation.
OpenShift Container Platform provides and supports the following router plug-ins:
- The HAProxy template router is the default plug-in. It uses the openshift3/ose-haproxy-router image to run an HAProxy instance alongside the template router plug-in inside a container on OpenShift Container Platform. It currently supports HTTP(S) traffic and TLS-enabled traffic via SNI. The router’s container listens on the host network interface, unlike most containers that listen only on private IPs. The router proxies external requests for route names to the IPs of actual pods identified by the service associated with the route.
- The F5 router integrates with an existing F5 BIG-IP® system in your environment to synchronize routes. F5 BIG-IP® version 11.4 or newer is required in order to have the F5 iControl REST API.
- Deploying a Default HAProxy Router
- Deploying a Custom HAProxy Router
- Configuring the HAProxy Router to Use PROXY Protocol
- Configuring Route Timeouts
3.1.2. Router Service Account
Before deploying an OpenShift Container Platform cluster, you must have a service account for the router, which is automatically created during cluster installation. This service account has permissions to a security context constraint (SCC) that allows it to specify host ports.
3.1.2.1. Permission to Access Labels
When namespace labels are used, for example in creating router shards, the service account for the router must have cluster-reader
permission.
$ oc adm policy add-cluster-role-to-user \
    cluster-reader \
    system:serviceaccount:default:router
With a service account in place, you can proceed to installing a default HAProxy router or a customized HAProxy router.
3.2. Using the Default HAProxy Router
3.2.1. Overview
The oc adm router
command is provided with the administrator CLI to simplify the tasks of setting up routers in a new installation. The oc adm router
command creates the service and deployment configuration objects. Use the --service-account
option to specify the service account the router will use to contact the master.
The router service account can be created in advance or created by the oc adm router --service-account
command.
Every form of communication between OpenShift Container Platform components is secured by TLS and uses various certificates and authentication methods. A .pem format file can be supplied with the --default-certificate option, or one is created by the oc adm router
command. When routes are created, the user can provide route certificates that the router will use when handling the route.
When deleting a router, ensure the deployment configuration, service, and secret are deleted as well.
Routers are deployed on specific nodes. This makes it easier for the cluster administrator and external network manager to coordinate which IP address will run a router and which traffic the router will handle. The routers are deployed on specific nodes by using node selectors.
Routers use host networking by default, and they directly attach to ports 80 and 443 on all interfaces on a host. Restrict routers to hosts where ports 80 and 443 are available and not being consumed by another service, and set this using node selectors and the scheduler configuration. As an example, you can achieve this by dedicating infrastructure nodes to run services such as routers.
It is recommended to use a separate, distinct openshift-router service account with your router. This can be provided using the --service-account
flag to the oc adm router
command.
$ oc adm router --dry-run --service-account=router 1
Router pods created using oc adm router
have default resource requests that a node must satisfy for the router pod to be deployed. In an effort to increase the reliability of infrastructure components, the default resource requests are used to raise the QoS tier of the router pods above pods without resource requests. The default values represent the observed minimum resources required for a basic router to be deployed. You can edit them in the router's deployment configuration, and you might want to increase them based on the load of the router.
3.2.2. Creating a Router
If the router does not exist, run the following to create a router:
$ oc adm router <router_name> --replicas=<number> --service-account=router --extended-logging=true
--replicas
is usually 1
unless a high availability configuration is being created.
--extended-logging=true
configures the router to forward logs that are generated by HAProxy to the syslog container.
To find the host IP address of the router:
$ oc get po <router-pod> --template={{.status.hostIP}}
You can also use router shards to ensure that the router is filtered to specific namespaces or routes, or set any environment variables after router creation. In this case create a router for each shard.
3.2.3. Other Basic Router Commands
- Checking the Default Router
- The default router service account, named router, is automatically created during cluster installations. To verify that this account already exists:
$ oc adm router --dry-run --service-account=router
- Viewing the Default Router
- To see what the default router would look like if created:
$ oc adm router --dry-run -o yaml --service-account=router
- Configuring the Router to Forward HAProxy Logs
-
You can configure the router to forward logs that are generated by HAProxy to an rsyslog sidecar container. The
--extended-logging=true
parameter appends the syslog container to forward HAProxy logs to standard output.
$ oc adm router --extended-logging=true
The following example is the configuration for a router that uses --extended-logging=true
:
$ oc get pod router-1-xhdb9 -o yaml

apiVersion: v1
kind: Pod
spec:
  containers:
  - env:
    ....
    - name: ROUTER_SYSLOG_ADDRESS 1
      value: /var/lib/rsyslog/rsyslog.sock
    ....
  - command: 2
    - /sbin/rsyslogd
    - -n
    - -i
    - /tmp/rsyslog.pid
    - -f
    - /etc/rsyslog/rsyslog.conf
    image: registry.redhat.io/openshift3/ose-haproxy-router:v3.11.188
    imagePullPolicy: IfNotPresent
    name: syslog
Use the following commands to view the HAProxy logs:
$ oc set env dc/test-router ROUTER_LOG_LEVEL=info 1
$ oc logs -f <pod-name> -c syslog 2
The HAProxy logs take the following form:
2020-04-14T03:05:36.629527+00:00 test-311-node-1 haproxy[43]: 10.0.151.166:59594 [14/Apr/2020:03:05:36.627] fe_no_sni~ be_secure:openshift-console:console/pod:console-b475748cb-t6qkq:console:10.128.0.5:8443 0/0/1/1/2 200 393 - - --NI 2/1/0/1/0 0/0 "HEAD / HTTP/1.1"
2020-04-14T03:05:36.633024+00:00 test-311-node-1 haproxy[43]: 10.0.151.166:59594 [14/Apr/2020:03:05:36.528] public_ssl be_no_sni/fe_no_sni 95/1/104 2793 -- 1/1/0/0/0 0/0
- Deploying the Router to a Labeled Node
- To deploy the router to any node(s) that match a specified node label:
$ oc adm router <router_name> --replicas=<number> --selector=<label> \ --service-account=router
For example, if you want to create a router named router
and have it placed on a node labeled with node-role.kubernetes.io/infra=true
:
$ oc adm router router --replicas=1 --selector='node-role.kubernetes.io/infra=true' \ --service-account=router
During cluster installation, the openshift_router_selector
and openshift_registry_selector
Ansible settings are set to node-role.kubernetes.io/infra=true
by default. The default router and registry will only be automatically deployed if a node exists that matches the node-role.kubernetes.io/infra=true
label.
For information on updating labels, see Updating Labels on Nodes.
Multiple instances are created on different hosts according to the scheduler policy.
- Using a Different Router Image
- To use a different router image and view the router configuration that would be used:
$ oc adm router <router_name> -o <format> --images=<image> \ --service-account=router
For example:
$ oc adm router region-west -o yaml --images=myrepo/somerouter:mytag \ --service-account=router
3.2.4. Filtering Routes to Specific Routers
Using the ROUTE_LABELS
environment variable, you can filter routes so that they are used only by specific routers.
For example, if you have multiple routers, and 100 routes, you can attach labels to the routes so that a portion of them are handled by one router, whereas the rest are handled by another.
After creating a router, use the ROUTE_LABELS environment variable to tag the router:
$ oc set env dc/<router_name> ROUTE_LABELS="key=value"
Add the label to the desired routes:
$ oc label route <route_name> key=value
To verify that the label has been attached to the route, check the route configuration:
$ oc describe route/<route_name>
- Setting the Maximum Number of Concurrent Connections
-
The router can handle a maximum of 20000 connections by default. You can change that limit depending on your needs. Having too few connections prevents the health check from working, which causes unnecessary restarts. You need to configure the system to support the maximum number of connections. The limits shown in 'sysctl fs.nr_open' and 'sysctl fs.file-max' must be large enough. Otherwise, HAProxy will not start.
When the router is created, the --max-connections=
option sets the desired limit:
$ oc adm router --max-connections=10000 ....
Edit the ROUTER_MAX_CONNECTIONS
environment variable in the router’s deployment configuration to change the value. The router pods are restarted with the new value. If ROUTER_MAX_CONNECTIONS
is not present, the default value of 20000 is used.
A connection includes the frontend and internal backend. This counts as two connections. Be sure to set ROUTER_MAX_CONNECTIONS
to double the number of connections you intend to create.
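For example, to allow roughly 20000 end-to-end connections, you could double the limit on an existing router (dc/router is the default deployment configuration name):
$ oc set env dc/router ROUTER_MAX_CONNECTIONS=40000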
3.2.5. HAProxy Strict SNI
The HAProxy strict-sni option
can be controlled through the ROUTER_STRICT_SNI
environment variable in the router’s deployment configuration. It can also be set when the router is created by using the --strict-sni
command line option.
$ oc adm router --strict-sni
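For an existing router, the same behavior can be toggled through the environment variable mentioned above, for example:
$ oc set env dc/router ROUTER_STRICT_SNI=true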
3.2.6. TLS Cipher Suites
Set the router cipher suite using the --ciphers
option when creating a router:
$ oc adm router --ciphers=modern ....
The values are: modern
, intermediate
, or old
, with intermediate
as the default. Alternatively, a set of ":" separated ciphers can be provided. The ciphers must be from the set displayed by:
$ openssl ciphers
Alternatively, use the ROUTER_CIPHERS
environment variable for an existing router.
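For example, assuming the default deployment configuration name dc/router, switching an existing router to the modern profile could look like:
$ oc set env dc/router ROUTER_CIPHERS=modern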
3.2.7. Mutual TLS Authentication
Client access to the router and the backend services can be restricted using mutual TLS authentication. The router will reject requests from clients not in its authenticated
set. Mutual TLS authentication relies on client certificates and can be controlled based on the certifying authorities (CAs) issuing the certificates, the certificate revocation list, and/or any certificate subject filters. Use the mutual TLS configuration options --mutual-tls-auth
, --mutual-tls-auth-ca
, --mutual-tls-auth-crl
and --mutual-tls-auth-filter
when creating a router:
$ oc adm router --mutual-tls-auth=required \ --mutual-tls-auth-ca=/local/path/to/cacerts.pem ....
The --mutual-tls-auth
values are required
, optional
, or none
, with none
as the default. The --mutual-tls-auth-ca
value specifies a file containing one or more CA certificates. These CA certificates are used by the router to verify a client’s certificate.
The --mutual-tls-auth-crl
option can be used to specify the certificate revocation list to handle cases where certificates (issued by valid certifying authorities) have been revoked.
$ oc adm router --mutual-tls-auth=required \ --mutual-tls-auth-ca=/local/path/to/cacerts.pem \ --mutual-tls-auth-filter='^/CN=my.org/ST=CA/C=US/O=Security/OU=OSE$' \ ....
The --mutual-tls-auth-filter
value can be used for fine-grained access control based on the certificate subject. The value is a regular expression that is used to match the certificate's subject.
The mutual TLS authentication filter example above shows you a restrictive regular expression (regex) — anchored with ^
and $
— that exactly
matches a certificate subject. If you decide to use a less restrictive regular expression, please be aware that this can potentially match certificates issued by any CAs you have deemed to be valid. It is recommended to also use the --mutual-tls-auth-ca
option so that you have finer control over the issued certificates.
Using --mutual-tls-auth=required
ensures that only authenticated clients are allowed access to the backend resources. This means that the client is always required to provide authentication information (that is, a client certificate). To make the mutual TLS authentication optional, use --mutual-tls-auth=optional (or use none to disable it; this is the default). Note here that optional means that a client is not required to present any authentication information, and if the client does provide any, it is simply passed on to the backend in the X-SSL*
HTTP headers.
$ oc adm router --mutual-tls-auth=optional \ --mutual-tls-auth-ca=/local/path/to/cacerts.pem \ ....
When mutual TLS authentication support is enabled (either using the required
or optional
value for the --mutual-tls-auth
flag), the client authentication information is passed to the backend in the form of X-SSL*
HTTP headers.
Examples of the X-SSL* HTTP headers:
- X-SSL-Client-DN: the full distinguished name (DN) of the certificate subject.
- X-SSL-Client-NotBefore: the client certificate start date in YYMMDDhhmmss[Z] format.
- X-SSL-Client-NotAfter: the client certificate end date in YYMMDDhhmmss[Z] format.
- X-SSL-Client-SHA1: the SHA-1 fingerprint of the client certificate.
- X-SSL-Client-DER: provides full access to the client certificate. Contains the DER formatted client certificate encoded in base-64 format.
3.2.8. Highly-Available Routers
You can set up a highly-available router on your OpenShift Container Platform cluster using IP failover. This setup has multiple replicas on different nodes so the failover software can switch to another replica if the current one fails.
3.2.9. Customizing the Router Service Ports
You can customize the service ports that a template router binds to by setting the environment variables ROUTER_SERVICE_HTTP_PORT
and ROUTER_SERVICE_HTTPS_PORT
. This can be done by creating a template router, then editing its deployment configuration.
The following example creates a router deployment with 0
replicas and customizes the router service HTTP and HTTPS ports, then scales it appropriately (to 1
replica).
$ oc adm router --replicas=0 --ports='10080:10080,10443:10443' 1
$ oc set env dc/router ROUTER_SERVICE_HTTP_PORT=10080 \
ROUTER_SERVICE_HTTPS_PORT=10443
$ oc scale dc/router --replicas=1
- 1: Ensures exposed ports are appropriately set for routers that use the container networking mode --host-network=false.
If you do customize the template router service ports, you also need to ensure that the nodes where the router pods run have those custom ports opened in the firewall (via Ansible, iptables, firewall-cmd, or any other custom method that you use).
The following is an example using iptables
to open the custom router service ports.
$ iptables -A OS_FIREWALL_ALLOW -p tcp --dport 10080 -j ACCEPT
$ iptables -A OS_FIREWALL_ALLOW -p tcp --dport 10443 -j ACCEPT
3.2.10. Working With Multiple Routers
An administrator can create multiple routers with the same definition to serve the same set of routes. Each router will be on a different node and will have a different IP address. The network administrator will need to get the desired traffic to each node.
Multiple routers can be grouped to distribute routing load in the cluster and separate tenants to different routers or shards. Each router or shard in the group admits routes based on the selectors in the router. An administrator can create shards over the whole cluster using ROUTE_LABELS
. A user can create shards over a namespace (project) by using NAMESPACE_LABELS
.
3.2.11. Adding a Node Selector to a Deployment Configuration
Making specific routers deploy on specific nodes requires two steps:
Add a label to the desired node:
$ oc label node 10.254.254.28 "router=first"
Add a node selector to the router deployment configuration:
$ oc edit dc <deploymentConfigName>
Add the template.spec.nodeSelector field with a key and value corresponding to the label:
...
  template:
    metadata:
      creationTimestamp: null
      labels:
        router: router1
    spec:
      nodeSelector: 1
        router: "first"
...
- 1: The key and value are router and first, respectively, corresponding to the router=first label.
3.2.12. Using Router Shards
Router sharding uses NAMESPACE_LABELS and ROUTE_LABELS to filter router namespaces and routes. This enables you to distribute subsets of routes over multiple router deployments. By using non-overlapping subsets, you can effectively partition the set of routes. Alternatively, you can define shards comprising overlapping subsets of routes.
By default, a router selects all routes from all projects (namespaces). Sharding involves adding labels to routes or namespaces and label selectors to routers. Each router shard comprises the routes that are selected by a specific set of label selectors or belong to the namespaces that are selected by a specific set of label selectors.
The router service account must have the cluster-reader permission set to allow access to labels in other namespaces.
Router Sharding and DNS
Because an external DNS server is needed to route requests to the desired shard, the administrator is responsible for making a separate DNS entry for each router in a project. A router will not forward unknown routes to another router.
Consider the following example:
-
Router A lives on host 192.168.0.5 and has routes with
*.foo.com
. -
Router B lives on host 192.168.1.9 and has routes with
*.example.com
.
Separate DNS entries must resolve *.foo.com to the node hosting Router A and *.example.com to the node hosting Router B:
-
*.foo.com A IN 192.168.0.5
-
*.example.com A IN 192.168.1.9
Router Sharding Examples
This section describes router sharding using namespace and route labels.
Figure 3.1. Router Sharding Based on Namespace Labels
Configure a router with a namespace label selector:
$ oc set env dc/router NAMESPACE_LABELS="router=r1"
Because the router has a selector on the namespace, the router will handle routes only for matching namespaces. In order to make this selector match a namespace, label the namespace accordingly:
$ oc label namespace default "router=r1"
Now, if you create a route in the default namespace, the route is available in the default router:
$ oc create -f route1.yaml
Create a new project (namespace) and create a route, route2:
$ oc new-project p1
$ oc create -f route2.yaml
Notice the route is not available in your router.
Label namespace p1 with router=r1:
$ oc label namespace p1 "router=r1"
Adding this label makes the route available in the router.
- Example
A router deployment finops-router is configured with the label selector NAMESPACE_LABELS="name in (finance, ops)", and a router deployment dev-router is configured with the label selector NAMESPACE_LABELS="name=dev".
If all routes are in namespaces labeled name=finance, name=ops, and name=dev, then this configuration effectively distributes your routes between the two router deployments.
In the above scenario, sharding becomes a special case of partitioning, with no overlapping subsets. Routes are divided between router shards.
The criteria for route selection govern how the routes are distributed. It is possible to have overlapping subsets of routes across router deployments.
- Example
In addition to finops-router and dev-router in the example above, you also have devops-router, which is configured with the label selector NAMESPACE_LABELS="name in (dev, ops)".
The routes in namespaces labeled name=dev or name=ops now are serviced by two different router deployments. This becomes a case in which you have defined overlapping subsets of routes, as illustrated in the procedure in Router Sharding Based on Namespace Labels.
In addition, this enables you to create more complex routing rules, allowing the diversion of higher priority traffic to the dedicated finops-router while sending lower priority traffic to devops-router.
Router Sharding Based on Route Labels
NAMESPACE_LABELS
allows filtering of the projects to service and selecting all the routes from those projects, but you may want to partition routes based on other criteria associated with the routes themselves. The ROUTE_LABELS
selector allows you to slice-and-dice the routes themselves.
- Example
A router deployment prod-router is configured with the label selector ROUTE_LABELS="mydeployment=prod", and a router deployment devtest-router is configured with the label selector ROUTE_LABELS="mydeployment in (dev, test)".
This configuration partitions routes between the two router deployments according to the routes' labels, irrespective of their namespaces.
The example assumes you have all the routes you want to be serviced tagged with a label "mydeployment=<tag>".
3.2.12.1. Creating Router Shards
This section describes an advanced example of router sharding. Suppose there are 26 routes, named a
— z
, with various labels:
Possible labels on routes
sla=high     geo=east    hw=modest   dept=finance
sla=medium   geo=west    hw=strong   dept=dev
sla=low                              dept=ops
These labels express the concepts including service level agreement, geographical location, hardware requirements, and department. The routes can have at most one label from each column. Some routes may have other labels or no labels at all.
Name(s) | SLA | Geo | HW | Dept | Other Labels |
---|---|---|---|---|---|
a | high | east | modest | finance | type=static |
b | | west | strong | | type=dynamic |
c, d, e | low | | modest | | type=static |
g — k | medium | | strong | dev | |
l — s | high | | modest | ops | |
t — z | | west | | | type=dynamic |
Here is a convenience script mkshard that illustrates how oc adm router
, oc set env
, and oc scale
can be used together to make a router shard.
#!/bin/bash
# Usage: mkshard ID SELECTION-EXPRESSION
id=$1
sel="$2"
router=router-shard-$id             1
oc adm router $router --replicas=0  2
dc=dc/router-shard-$id              3
oc set env $dc ROUTE_LABELS="$sel"  4
oc scale $dc --replicas=3           5
Running mkshard several times creates several routers:
Router | Selection Expression | Routes |
---|---|---|
router-shard-1 | sla=high | a, l — s |
router-shard-2 | geo=west | b, t — z |
router-shard-3 | dept=dev | g — k |
3.2.12.2. Modifying Router Shards
Because a router shard is a construct based on labels, you can modify either the labels (via oc label
) or the selection expression (via oc set env
).
This section extends the example started in the Creating Router Shards section, demonstrating how to change the selection expression.
Here is a convenience script modshard that modifies an existing router to use a new selection expression:
#!/bin/bash
# Usage: modshard ID SELECTION-EXPRESSION...
id=$1
shift
router=router-shard-$id    1
dc=dc/$router              2
oc scale $dc --replicas=0  3
oc set env $dc "$@"        4
oc scale $dc --replicas=3  5
- 1: The modified router has the name router-shard-<id>.
- 2: The deployment configuration where the modifications occur.
- 3: Scale it down.
- 4: Set the new selection expression using oc set env. Unlike mkshard from the Creating Router Shards section, the selection expression specified as the non-ID arguments to modshard must include the environment variable name as well as its value.
- 5: Scale it back up.
In modshard
, the oc scale
commands are not necessary if the deployment strategy for router-shard-<id>
is Rolling
.
For example, to expand the department for router-shard-3
to include ops
as well as dev
:
$ modshard 3 ROUTE_LABELS='dept in (dev, ops)'
The result is that router-shard-3
now selects routes g
— s
(the combined sets of g
— k
and l
— s
).
This example takes into account that there are only three departments in this example scenario, and specifies a department to leave out of the shard, thus achieving the same result as the preceding example:
$ modshard 3 ROUTE_LABELS='dept != finance'
This example specifies three comma-separated qualities, and results in only route b
being selected:
$ modshard 3 ROUTE_LABELS='hw=strong,type=dynamic,geo=west'
Similarly to ROUTE_LABELS
, which involves a route’s labels, you can select routes based on the labels of the route’s namespace using the NAMESPACE_LABELS
environment variable. This example modifies router-shard-3
to serve routes whose namespace has the label frequency=weekly
:
$ modshard 3 NAMESPACE_LABELS='frequency=weekly'
The last example combines ROUTE_LABELS
and NAMESPACE_LABELS
to select routes with label sla=low
and whose namespace has the label frequency=weekly
:
$ modshard 3 \
    NAMESPACE_LABELS='frequency=weekly' \
    ROUTE_LABELS='sla=low'
3.2.13. Finding the Host Name of the Router
When exposing a service, a user can use the same route from the DNS name that external users use to access the application. The network administrator of the external network must make sure the host name resolves to the name of a router that has admitted the route. The user can set up their DNS with a CNAME that points to this host name. However, the user may not know the host name of the router. When it is not known, the cluster administrator can provide it.
The cluster administrator can use the --router-canonical-hostname
option with the router’s canonical host name when creating the router. For example:
# oc adm router myrouter --router-canonical-hostname="rtr.example.com"
This creates the ROUTER_CANONICAL_HOSTNAME
environment variable in the router’s deployment configuration containing the host name of the router.
For routers that already exist, the cluster administrator can edit the router’s deployment configuration and add the ROUTER_CANONICAL_HOSTNAME
environment variable:
spec:
  template:
    spec:
      containers:
      - env:
        - name: ROUTER_CANONICAL_HOSTNAME
          value: rtr.example.com
The ROUTER_CANONICAL_HOSTNAME
value is displayed in the route status for all routers that have admitted the route. The route status is refreshed every time the router is reloaded.
When a user creates a route, all of the active routers evaluate the route and, if conditions are met, admit it. When a router that defines the ROUTER_CANONICAL_HOSTNAME
environment variable admits the route, the router places the value in the routerCanonicalHostname
field in the route status. The user can examine the route status to determine which, if any, routers have admitted the route, select a router from the list, and find the host name of the router to pass along to the network administrator.
status:
  ingress:
  - conditions:
    - lastTransitionTime: 2016-12-07T15:20:57Z
      status: "True"
      type: Admitted
    host: hello.in.mycloud.com
    routerCanonicalHostname: rtr.example.com
    routerName: myrouter
    wildcardPolicy: None
oc describe
includes the host name when available:
$ oc describe route/hello-route3
...
Requested Host: hello.in.mycloud.com exposed on router myroute (host rtr.example.com) 12 minutes ago
Using the above information, the user can ask the DNS administrator to set up a CNAME from the route’s host, hello.in.mycloud.com
, to the router’s canonical hostname, rtr.example.com
. This results in any traffic to hello.in.mycloud.com
reaching the user’s application.
3.2.14. Customizing the Default Routing Subdomain
You can customize the suffix used as the default routing subdomain for your environment by modifying the master configuration file (the /etc/origin/master/master-config.yaml file by default). Routes that do not specify a host name would have one generated using this default routing subdomain.
The following example shows how you can set the configured suffix to v3.openshift.test:
routingConfig:
  subdomain: v3.openshift.test
This change requires a restart of the master if it is running.
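On a default OpenShift Container Platform 3.11 installation, where the control plane runs as static pods, a restart could look like the following; these commands are an assumption based on a standard 3.11 control plane layout and are run on each master:
# master-restart api
# master-restart controllers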
With the OpenShift Container Platform master(s) running the above configuration, the generated host name for a route named no-route-hostname that is created without a host name in the namespace mynamespace would be:
no-route-hostname-mynamespace.v3.openshift.test
3.2.15. Forcing Route Host Names to a Custom Routing Subdomain
If an administrator wants to restrict all routes to a specific routing subdomain, they can pass the --force-subdomain
option to the oc adm router
command. This forces the router to override any host names specified in a route and generate one based on the template provided to the --force-subdomain
option.
The following example runs a router, which overrides the route host names using a custom subdomain template ${name}-${namespace}.apps.example.com
.
$ oc adm router --force-subdomain='${name}-${namespace}.apps.example.com'
3.2.16. Using Wildcard Certificates
A TLS-enabled route that does not include a certificate uses the router’s default certificate instead. In most cases, this certificate should be provided by a trusted certificate authority, but for convenience you can use the OpenShift Container Platform CA to create the certificate. For example:
$ CA=/etc/origin/master
$ oc adm ca create-server-cert --signer-cert=$CA/ca.crt \
    --signer-key=$CA/ca.key --signer-serial=$CA/ca.serial.txt \
    --hostnames='*.cloudapps.example.com' \
    --cert=cloudapps.crt --key=cloudapps.key
The oc adm ca create-server-cert
command generates a certificate that is valid for two years. This can be altered with the --expire-days
option, but for security reasons, it is recommended to not make it greater than this value.
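For example, to shorten the validity to one year, the command shown above could be run with the --expire-days option added:
$ oc adm ca create-server-cert --signer-cert=$CA/ca.crt \
    --signer-key=$CA/ca.key --signer-serial=$CA/ca.serial.txt \
    --hostnames='*.cloudapps.example.com' \
    --cert=cloudapps.crt --key=cloudapps.key \
    --expire-days=365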
Run oc adm
commands only from the first master listed in the Ansible host inventory file, by default /etc/ansible/hosts.
The router expects the certificate and key to be in PEM format in a single file:
$ cat cloudapps.crt cloudapps.key $CA/ca.crt > cloudapps.router.pem
From there you can use the --default-cert
flag:
$ oc adm router --default-cert=cloudapps.router.pem --service-account=router
Browsers only consider wildcards valid for subdomains one level deep. So in this example, the certificate would be valid for a.cloudapps.example.com but not for a.b.cloudapps.example.com.
3.2.17. Manually Redeploy Certificates
To manually redeploy the router certificates:
Check to see if a secret containing the default router certificate was added to the router:
$ oc set volume dc/router

deploymentconfigs/router
  secret/router-certs as server-certificate
    mounted at /etc/pki/tls/private
If the certificate is added, skip the following step and overwrite the secret.
Make sure that you have a default certificate directory set for the DEFAULT_CERTIFICATE_DIR variable:
$ oc set env dc/router --list
DEFAULT_CERTIFICATE_DIR=/etc/pki/tls/private
If not, create the directory using the following command:
$ oc set env dc/router DEFAULT_CERTIFICATE_DIR=/etc/pki/tls/private
Export the certificate to PEM format:
$ cat custom-router.key custom-router.crt custom-ca.crt > custom-router.pem
Overwrite or create a router certificate secret:
If the certificate secret was added to the router, overwrite the secret. If not, create a new secret.
To overwrite the secret, run the following command:
$ oc create secret generic router-certs --from-file=tls.crt=custom-router.pem \
    --from-file=tls.key=custom-router.key --type=kubernetes.io/tls \
    -o json --dry-run | oc replace -f -
To create a new secret, run the following commands:
$ oc create secret generic router-certs --from-file=tls.crt=custom-router.pem \
    --from-file=tls.key=custom-router.key --type=kubernetes.io/tls
$ oc set volume dc/router --add --mount-path=/etc/pki/tls/private \
    --secret-name='router-certs' --name router-certs
Deploy the router.
$ oc rollout latest dc/router
3.2.18. Using Secured Routes
Currently, password protected key files are not supported. HAProxy prompts for a password upon starting and does not have a way to automate this process. To remove a passphrase from a keyfile, you can run:
# openssl rsa -in <passwordProtectedKey.key> -out <new.key>
Here is an example of how to use a secure edge terminated route with TLS termination occurring on the router before traffic is proxied to the destination. The secure edge terminated route specifies the TLS certificate and key information. The TLS certificate is served by the router front end.
First, start up a router instance:
# oc adm router --replicas=1 --service-account=router
Next, create a private key, certificate signing request (CSR), and certificate for our edge secured route. The instructions on how to do that are specific to your certificate authority and provider. For a simple self-signed certificate for a domain named www.example.test
, see the example shown below:
# sudo openssl genrsa -out example-test.key 2048
#
# sudo openssl req -new -key example-test.key -out example-test.csr \
    -subj "/C=US/ST=CA/L=Mountain View/O=OS3/OU=Eng/CN=www.example.test"
#
# sudo openssl x509 -req -days 366 -in example-test.csr \
    -signkey example-test.key -out example-test.crt
Generate a route using the above certificate and key.
$ oc create route edge --service=my-service \
    --hostname=www.example.test \
    --key=example-test.key --cert=example-test.crt
route "my-service" created
Look at its definition.
$ oc get route/my-service -o yaml

apiVersion: v1
kind: Route
metadata:
  name: my-service
spec:
  host: www.example.test
  to:
    kind: Service
    name: my-service
  tls:
    termination: edge
    key: |
      -----BEGIN PRIVATE KEY-----
      [...]
      -----END PRIVATE KEY-----
    certificate: |
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----
Make sure your DNS entry for www.example.test
points to your router instance(s) so that the route to your domain is available. The example below uses curl along with a local resolver to simulate the DNS lookup:
# routerip="4.1.1.1"  # replace with IP address of one of your router instances.
# curl -k --resolve www.example.test:443:$routerip https://www.example.test/
3.2.19. Using Wildcard Routes (for a Subdomain)
The HAProxy router has support for wildcard routes, which are enabled by setting the ROUTER_ALLOW_WILDCARD_ROUTES
environment variable to true
. Any routes with a wildcard policy of Subdomain
that pass the router admission checks will be serviced by the HAProxy router. Then, the HAProxy router exposes the associated service (for the route) per the route’s wildcard policy.
To change a route’s wildcard policy, you must remove the route and recreate it with the updated wildcard policy. Editing only the route’s wildcard policy in a route’s .yaml file does not work.
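One possible sequence, reusing the my-service route from the earlier example, is to export the route definition, adjust its wildcard policy, and then delete and recreate the route (the file name is illustrative):
$ oc get route/my-service -o yaml > my-service-route.yaml
$ # edit my-service-route.yaml and set spec.wildcardPolicy: Subdomain
$ oc delete route/my-service
$ oc create -f my-service-route.yaml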
$ oc adm router --replicas=0 ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true
$ oc scale dc/router --replicas=1
Learn how to configure the web console for wildcard routes.
Using a Secure Wildcard Edge Terminated Route
This example reflects TLS termination occurring on the router before traffic is proxied to the destination. Traffic sent to any hosts in the subdomain example.org
(*.example.org
) is proxied to the exposed service.
The secure edge terminated route specifies the TLS certificate and key information. The TLS certificate is served by the router front end for all hosts that match the subdomain (*.example.org
).
Start up a router instance:
$ oc adm router --replicas=0 --service-account=router
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true
$ oc scale dc/router --replicas=1
Create a private key, certificate signing request (CSR), and certificate for the edge secured route.
The instructions on how to do this are specific to your certificate authority and provider. For a simple self-signed certificate for a domain named *.example.test, see this example:
# sudo openssl genrsa -out example-test.key 2048
#
# sudo openssl req -new -key example-test.key -out example-test.csr \
    -subj "/C=US/ST=CA/L=Mountain View/O=OS3/OU=Eng/CN=*.example.test"
#
# sudo openssl x509 -req -days 366 -in example-test.csr \
    -signkey example-test.key -out example-test.crt
Generate a wildcard route using the above certificate and key:
$ cat > route.yaml <<REOF
apiVersion: v1
kind: Route
metadata:
  name: my-service
spec:
  host: www.example.test
  wildcardPolicy: Subdomain
  to:
    kind: Service
    name: my-service
  tls:
    termination: edge
    key: "$(perl -pe 's/\n/\\n/' example-test.key)"
    certificate: "$(perl -pe 's/\n/\\n/' example-test.crt)"
REOF
$ oc create -f route.yaml
Ensure your DNS entry for *.example.test points to your router instance(s) and the route to your domain is available.
This example uses curl with a local resolver to simulate the DNS lookup:
# routerip="4.1.1.1"  # replace with IP address of one of your router instances.
# curl -k --resolve www.example.test:443:$routerip https://www.example.test/
# curl -k --resolve abc.example.test:443:$routerip https://abc.example.test/
# curl -k --resolve anyname.example.test:443:$routerip https://anyname.example.test/
For routers that allow wildcard routes (ROUTER_ALLOW_WILDCARD_ROUTES
set to true
), there are some caveats to the ownership of a subdomain associated with a wildcard route.
Prior to wildcard routes, ownership was based on the claims made for a host name, with the namespace of the oldest route winning against any other claimants. For example, route r1 in namespace ns1 with a claim for one.example.test would win over another route r2 in namespace ns2 for the same host name one.example.test if route r1 was older than route r2.
In addition, routes in other namespaces were allowed to claim non-overlapping hostnames. For example, route rone
in namespace ns1
could claim www.example.test
and another route rtwo
in namespace d2
could claim c3po.example.test
.
This is still the case if there are no wildcard routes claiming that same subdomain (example.test
in the above example).
However, a wildcard route needs to claim all of the host names within a subdomain (host names of the form \*.example.test
). A wildcard route’s claim is allowed or denied based on whether or not the oldest route for that subdomain (example.test
) is in the same namespace as the wildcard route. The oldest route can be either a regular route or a wildcard route.
For example, if a route eldest already exists in the ns1 namespace and has claimed a host named owner.example.test, and a new wildcard route wildthing requesting routes in that subdomain (example.test) is added at a later point in time, the claim by the wildcard route will only be allowed if it is in the same namespace (ns1) as the owning route.
The following examples illustrate various scenarios in which claims for wildcard routes will succeed or fail.
In the example below, a router that allows wildcard routes will allow non-overlapping claims for hosts in the subdomain example.test
as long as a wildcard route has not claimed a subdomain.
$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true

$ oc project ns1
$ oc expose service myservice --hostname=owner.example.test
$ oc expose service myservice --hostname=aname.example.test
$ oc expose service myservice --hostname=bname.example.test

$ oc project ns2
$ oc expose service anotherservice --hostname=second.example.test
$ oc expose service anotherservice --hostname=cname.example.test

$ oc project otherns
$ oc expose service thirdservice --hostname=emmy.example.test
$ oc expose service thirdservice --hostname=webby.example.test
In the example below, a router that allows wildcard routes will not allow the claim for owner.example.test
or aname.example.test
to succeed since the owning namespace is ns1
.
$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true

$ oc project ns1
$ oc expose service myservice --hostname=owner.example.test
$ oc expose service myservice --hostname=aname.example.test

$ oc project ns2
$ oc expose service secondservice --hostname=bname.example.test
$ oc expose service secondservice --hostname=cname.example.test

$ # Router will not allow this claim with a different path name `/p1` as
$ # namespace `ns1` has an older route claiming host `aname.example.test`.
$ oc expose service secondservice --hostname=aname.example.test --path="/p1"

$ # Router will not allow this claim as namespace `ns1` has an older route
$ # claiming host name `owner.example.test`.
$ oc expose service secondservice --hostname=owner.example.test

$ oc project otherns

$ # Router will not allow this claim as namespace `ns1` has an older route
$ # claiming host name `aname.example.test`.
$ oc expose service thirdservice --hostname=aname.example.test
In the example below, a router that allows wildcard routes will allow the claim for *.example.test to succeed since the owning namespace is ns1 and the wildcard route belongs to that same namespace.
$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true

$ oc project ns1
$ oc expose service myservice --hostname=owner.example.test

$ # Reusing the route.yaml from the previous example.
$ # spec:
$ #   host: www.example.test
$ #   wildcardPolicy: Subdomain

$ oc create -f route.yaml  # router will allow this claim.
In the example below, a router that allows wildcard routes will not allow the claim for *.example.test to succeed since the owning namespace is ns1 and the wildcard route belongs to another namespace, cyclone.
$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true

$ oc project ns1
$ oc expose service myservice --hostname=owner.example.test

$ # Switch to a different namespace/project.
$ oc project cyclone

$ # Reusing the route.yaml from a prior example.
$ # spec:
$ #   host: www.example.test
$ #   wildcardPolicy: Subdomain

$ oc create -f route.yaml  # router will deny (_NOT_ allow) this claim.
Similarly, once a namespace with a wildcard route claims a subdomain, only routes within that namespace can claim any hosts in that same subdomain.
In the example below, once a route in namespace ns1
with a wildcard route claims subdomain example.test
, only routes in the namespace ns1
are allowed to claim any hosts in that same subdomain.
$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true

$ oc project ns1
$ oc expose service myservice --hostname=owner.example.test

$ oc project otherns

$ # namespace `otherns` is allowed to claim for other.example.test
$ oc expose service otherservice --hostname=other.example.test

$ oc project ns1

$ # Reusing the route.yaml from the previous example.
$ # spec:
$ #   host: www.example.test
$ #   wildcardPolicy: Subdomain

$ oc create -f route.yaml  # Router will allow this claim.

$ # In addition, route in namespace otherns will lose its claim to host
$ # `other.example.test` due to the wildcard route claiming the subdomain.

$ # namespace `ns1` is allowed to claim for deux.example.test
$ oc expose service mysecondservice --hostname=deux.example.test

$ # namespace `ns1` is allowed to claim for deux.example.test with path /p1
$ oc expose service mythirdservice --hostname=deux.example.test --path="/p1"

$ oc project otherns

$ # namespace `otherns` is not allowed to claim for deux.example.test
$ # with a different path '/otherpath'
$ oc expose service otherservice --hostname=deux.example.test --path="/otherpath"

$ # namespace `otherns` is not allowed to claim for owner.example.test
$ oc expose service yetanotherservice --hostname=owner.example.test

$ # namespace `otherns` is not allowed to claim for unclaimed.example.test
$ oc expose service yetanotherservice --hostname=unclaimed.example.test
In the example below, different scenarios are shown, in which the owner routes are deleted and ownership is passed within and across namespaces. While a route claiming host eldest.example.test
in the namespace ns1
exists, wildcard routes in that namespace can claim subdomain example.test
. When the route for host eldest.example.test
is deleted, the next oldest route senior.example.test
would become the oldest route and would not affect any other routes. Once the route for host senior.example.test
is deleted, the next oldest route junior.example.test
becomes the oldest route and blocks the wildcard route claimant.
$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true

$ oc project ns1
$ oc expose service myservice --hostname=eldest.example.test
$ oc expose service seniorservice --hostname=senior.example.test

$ oc project otherns

$ # namespace `otherns` is allowed to claim for junior.example.test
$ oc expose service juniorservice --hostname=junior.example.test

$ oc project ns1

$ # Reusing the route.yaml from the previous example.
$ # spec:
$ #   host: www.example.test
$ #   wildcardPolicy: Subdomain

$ oc create -f route.yaml  # Router will allow this claim.

$ # In addition, route in namespace otherns will lose its claim to host
$ # `junior.example.test` due to the wildcard route claiming the subdomain.

$ # namespace `ns1` is allowed to claim for dos.example.test
$ oc expose service mysecondservice --hostname=dos.example.test

$ # Delete route for host `eldest.example.test`, the next oldest route is
$ # the one claiming `senior.example.test`, so route claims are unaffected.
$ oc delete route myservice

$ # Delete route for host `senior.example.test`, the next oldest route is
$ # the one claiming `junior.example.test` in another namespace, so claims
$ # for a wildcard route would be affected. The route for the host
$ # `dos.example.test` would be unaffected as there are no other wildcard
$ # claimants blocking it.
$ oc delete route seniorservice
3.2.20. Using the Container Network Stack
The OpenShift Container Platform router runs inside a container and the default behavior is to use the network stack of the host (i.e., the node where the router container runs). This default behavior benefits performance because network traffic from remote clients does not need to take multiple hops through user space to reach the target service and container.
Additionally, this default behavior enables the router to get the actual source IP address of the remote connection rather than getting the node’s IP address. This is useful for defining ingress rules based on the originating IP, supporting sticky sessions, and monitoring traffic, among other uses.
This host network behavior is controlled by the --host-network
router command line option, and the default behavior is the equivalent of using --host-network=true
. If you wish to run the router with the container network stack, use the --host-network=false
option when creating the router. For example:
$ oc adm router --service-account=router --host-network=false
Internally, this means the router container must publish ports 80 and 443 in order for the external network to communicate with the router.
Running with the container network stack means that the router sees the source IP address of a connection to be the NATed IP address of the node, rather than the actual remote IP address.
On OpenShift Container Platform clusters using multi-tenant network isolation, routers on a non-default namespace with the --host-network=false
option will load all routes in the cluster, but routes across the namespaces will not be reachable due to network isolation. With the --host-network=true
option, the router bypasses the container network and can access any pod in the cluster. If isolation is needed in this case, do not add routes across namespaces.
3.2.21. Using the Dynamic Configuration Manager
You can configure the HAProxy router to support the dynamic configuration manager.
The dynamic configuration manager brings certain types of routes online without requiring HAProxy reload downtime. It handles route and endpoint life-cycle events such as the addition, deletion, and update of routes and endpoints.
Enable the dynamic configuration manager by setting the ROUTER_HAPROXY_CONFIG_MANAGER
environment variable to true
:
$ oc set env dc/<router_name> ROUTER_HAPROXY_CONFIG_MANAGER='true'
If the dynamic configuration manager cannot dynamically configure HAProxy, it rewrites the configuration and reloads the HAProxy process. This happens, for example, when a new route contains custom annotations, such as custom timeouts, or requires custom TLS configuration.
The dynamic configuration internally uses the HAProxy socket and configuration API with a pool of pre-allocated routes and back end servers. The pre-allocated pool of routes is created using route blueprints. The default set of blueprints supports unsecured routes, edge secured routes without any custom TLS configuration, and passthrough routes.
re-encrypt
routes require custom TLS configuration information, so extra configuration is needed in order to use them with the dynamic configuration manager.
Extend the blueprints that the dynamic configuration manager can use by setting the ROUTER_BLUEPRINT_ROUTE_NAMESPACE
and optionally the ROUTER_BLUEPRINT_ROUTE_LABELS
environment variables.
All routes, or the routes that match the route labels, in the blueprint route namespace are processed as custom blueprints similar to the default set of blueprints. This includes re-encrypt
routes or routes that use custom annotations or routes with custom TLS configuration.
The following procedure assumes you have created three route objects: reencrypt-blueprint
, annotated-edge-blueprint
, and annotated-unsecured-blueprint
. See Route Types for an example of the different route type objects.
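For example, one way to create a route like annotated-edge-blueprint is to expose an existing service as an edge route and then add a custom annotation to it. This is only a sketch; the service name and the timeout annotation value are illustrative:
$ oc create route edge annotated-edge-blueprint --service=myservice --port=8443
$ oc annotate route annotated-edge-blueprint haproxy.router.openshift.io/timeout=5500ms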
Procedure
Create a new project:
$ oc new-project namespace_name
Create a new route. This method exposes an existing service:
$ oc create route edge edge_route_name --key=/path/to/key.pem \ --cert=/path/to/cert.pem --service=<service> --port=8443
Label the route:
$ oc label route edge_route_name type=route_label_1
Create three different routes from route object definitions. All have the label
type=route_label_1
:$ oc create -f reencrypt-blueprint.yaml $ oc create -f annotated-edge-blueprint.yaml $ oc create -f annotated-unsecured-blueprint.json
You can also remove a label from a route, which prevents it from being used as a blueprint route. For example, to prevent the
annotated-unsecured-blueprint
from being used as a blueprint route:$ oc label route annotated-unsecured-blueprint type-
Create a new router to be used for the blueprint pool:
$ oc adm router
Set the environment variables for the new router:
$ oc set env dc/router ROUTER_HAPROXY_CONFIG_MANAGER=true \ ROUTER_BLUEPRINT_ROUTE_NAMESPACE=namespace_name \ ROUTER_BLUEPRINT_ROUTE_LABELS="type=route_label_1"
All routes in the namespace or project namespace_name with the label type=route_label_1 can be processed and used as custom blueprints.
Note that you can also add, update, or remove blueprints by managing the routes as you would normally in that namespace namespace_name. The dynamic configuration manager watches for changes to routes in the namespace namespace_name similar to how the router watches for routes and services.
The pool sizes of the pre-allocated routes and back end servers can be controlled with the ROUTER_BLUEPRINT_ROUTE_POOL_SIZE, which defaults to 10, and ROUTER_MAX_DYNAMIC_SERVERS, which defaults to 5, environment variables. You can also control how often changes made by the dynamic configuration manager are committed to disk, which is when the HAProxy configuration is re-written and the HAProxy process is reloaded. The default is one hour, or 3600 seconds, or when the dynamic configuration manager runs out of pool space. The COMMIT_INTERVAL environment variable controls this setting:
$ oc set env dc/router -c router ROUTER_BLUEPRINT_ROUTE_POOL_SIZE=20 \ ROUTER_MAX_DYNAMIC_SERVERS=3 COMMIT_INTERVAL=6h
The example increases the pool size for each blueprint route to 20, reduces the number of dynamic servers to 3, and increases the commit interval to 6 hours.
3.2.22. Exposing Router Metrics
The HAProxy router metrics are, by default, exposed or published in Prometheus format for consumption by external metrics collection and aggregation systems (e.g. Prometheus, statsd). Metrics are also available directly from the HAProxy router in its own HTML format for viewing in a browser or CSV download. These metrics include the HAProxy native metrics and some controller metrics.
When you create a router using the following command, OpenShift Container Platform makes metrics available in Prometheus format on the stats port, by default 1936.
$ oc adm router --service-account=router
To extract the raw statistics in Prometheus format, run the following command:
$ curl <user>:<password>@<router_IP>:<STATS_PORT>/metrics
For example:
$ curl admin:sLzdR6SgDJ@10.254.254.35:1936/metrics
You can get the information you need to access the metrics from the router service annotations:
$ oc edit service <router-name> apiVersion: v1 kind: Service metadata: annotations: prometheus.io/port: "1936" prometheus.io/scrape: "true" prometheus.openshift.io/password: IImoDqON02 prometheus.openshift.io/username: admin
The
prometheus.io/port
is the stats port, by default 1936. You might need to configure your firewall to permit access. Use the previous user name and password to access the metrics. The path is /metrics.$ curl <user>:<password>@<router_IP>:<STATS_PORT> for example: $ curl admin:sLzdR6SgDJ@10.254.254.35:1936/metrics ... # HELP haproxy_backend_connections_total Total number of connections. # TYPE haproxy_backend_connections_total gauge haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route"} 0 haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route-alt"} 0 haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route01"} 0 ... # HELP haproxy_exporter_server_threshold Number of servers tracked and the current threshold value. # TYPE haproxy_exporter_server_threshold gauge haproxy_exporter_server_threshold{type="current"} 11 haproxy_exporter_server_threshold{type="limit"} 500 ... # HELP haproxy_frontend_bytes_in_total Current total of incoming bytes. # TYPE haproxy_frontend_bytes_in_total gauge haproxy_frontend_bytes_in_total{frontend="fe_no_sni"} 0 haproxy_frontend_bytes_in_total{frontend="fe_sni"} 0 haproxy_frontend_bytes_in_total{frontend="public"} 119070 ... # HELP haproxy_server_bytes_in_total Current total of incoming bytes. # TYPE haproxy_server_bytes_in_total gauge haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_no_sni",service=""} 0 haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_sni",service=""} 0 haproxy_server_bytes_in_total{namespace="default",pod="docker-registry-5-nk5fz",route="docker-registry",server="10.130.0.89:5000",service="docker-registry"} 0 haproxy_server_bytes_in_total{namespace="default",pod="hello-rc-vkjqx",route="hello-route",server="10.130.0.90:8080",service="hello-svc-1"} 0 ...
To get metrics in a browser:
Delete the following environment variables from the router deployment configuration file:
$ oc edit dc router - name: ROUTER_LISTEN_ADDR value: 0.0.0.0:1936 - name: ROUTER_METRICS_TYPE value: haproxy
Patch the router readiness probe to use the same path as the liveness probe, since it is now served by the HAProxy router:
$ oc patch dc router -p '{"spec": {"template": {"spec": {"containers": [{"name": "router","readinessProbe": {"httpGet": {"path": "/healthz"}}}]}}}}'
Launch the stats window using the following URL in a browser, where the
STATS_PORT
value is 1936
by default:http://admin:<Password>@<router_IP>:<STATS_PORT>
You can get the stats in CSV format by adding
;csv
to the URL. For example:
http://admin:<Password>@<router_IP>:1936;csv
To get the router IP, admin name, and password:
$ oc describe pod <router_pod>
To suppress metrics collection:
$ oc adm router --service-account=router --stats-port=0
3.2.23. ARP Cache Tuning for Large-scale Clusters
In OpenShift Container Platform clusters with large numbers of routes (greater than the value of net.ipv4.neigh.default.gc_thresh3
, which is 65536
by default), you must increase the default values of sysctl variables on each node in the cluster running the router pod to allow more entries in the ARP cache.
When the problem is occurring, the kernel messages are similar to the following:
[ 1738.811139] net_ratelimit: 1045 callbacks suppressed [ 1743.823136] net_ratelimit: 293 callbacks suppressed
When this issue occurs, the oc
commands might start to fail with the following error:
Unable to connect to the server: dial tcp: lookup <hostname> on <ip>:<port>: write udp <ip>:<port>-><ip>:<port>: write: invalid argument
To verify the actual number of ARP entries for IPv4, run the following:
# ip -4 neigh show nud all | wc -l
If the number begins to approach the net.ipv4.neigh.default.gc_thresh3
threshold, increase the values. Get the current value by running:
# sysctl net.ipv4.neigh.default.gc_thresh1 net.ipv4.neigh.default.gc_thresh1 = 128 # sysctl net.ipv4.neigh.default.gc_thresh2 net.ipv4.neigh.default.gc_thresh2 = 512 # sysctl net.ipv4.neigh.default.gc_thresh3 net.ipv4.neigh.default.gc_thresh3 = 1024
The following sysctl commands set the variables to the current OpenShift Container Platform default values:
# sysctl net.ipv4.neigh.default.gc_thresh1=8192 # sysctl net.ipv4.neigh.default.gc_thresh2=32768 # sysctl net.ipv4.neigh.default.gc_thresh3=65536
To make these settings permanent, see this document.
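As a minimal sketch, one common way to persist these values is a sysctl drop-in file on each node that runs a router pod (the file name is illustrative):
# cat /etc/sysctl.d/99-router-arp.conf
net.ipv4.neigh.default.gc_thresh1 = 8192
net.ipv4.neigh.default.gc_thresh2 = 32768
net.ipv4.neigh.default.gc_thresh3 = 65536
# sysctl -p /etc/sysctl.d/99-router-arp.conf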
3.2.24. Protecting Against DDoS Attacks
Add timeout http-request to the default HAProxy router image to protect the deployment against distributed denial-of-service (DDoS) attacks (for example, slowloris):
# and the haproxy stats socket is available at /var/run/haproxy.stats
global
stats socket ./haproxy.stats level admin
defaults
option http-server-close
mode http
timeout http-request 5s 1
timeout connect 5s
timeout server 10s
timeout client 30s
- 1
- timeout http-request is set to 5 seconds. HAProxy gives a client 5 seconds to send its whole HTTP request. Otherwise, HAProxy shuts the connection with an error.
Also, when the environment variable ROUTER_SLOWLORIS_TIMEOUT
is set, it limits the amount of time a client has to send its whole HTTP request; if the request is not sent within that time, HAProxy shuts down the connection.
Setting the environment variable allows information to be captured as part of the router’s deployment configuration and does not require manual modification of the template, whereas manually adding the HAProxy setting requires you to rebuild the router pod and maintain your router template file.
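For example, assuming the default deployment configuration name of router, the timeout could be set as follows (the 5s value is illustrative):
$ oc set env dc/router ROUTER_SLOWLORIS_TIMEOUT=5s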
Using annotations implements basic DDoS protections in the HAProxy template router, including the ability to limit the:
- number of concurrent TCP connections
- rate at which a client can request TCP connections
- rate at which HTTP requests can be made
These are enabled on a per-route basis because applications can have extremely different traffic patterns. An example of setting these annotations on a route follows the table below.
Table 3.1. HAProxy Template Router Settings
Setting | Description |
---|---|
haproxy.router.openshift.io/rate-limit-connections | Enables the settings to be configured (when set to true, for example). |
haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp | The number of concurrent TCP connections that can be made by the same IP address on this route. |
haproxy.router.openshift.io/rate-limit-connections.rate-tcp | The number of TCP connections that can be opened by a client IP. |
haproxy.router.openshift.io/rate-limit-connections.rate-http | The number of HTTP requests that a client IP can make in a 3-second period. |
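For example, a sketch of enabling these protections on a single route with the annotations listed above (the route name and limit values are illustrative):
$ oc annotate route myroute \
    haproxy.router.openshift.io/rate-limit-connections=true \
    haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp=10 \
    haproxy.router.openshift.io/rate-limit-connections.rate-tcp=10 \
    haproxy.router.openshift.io/rate-limit-connections.rate-http=10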
3.2.25. Enable HAProxy Threading
Enable threading with the --threads
flag. This flag specifies the number of threads that the HAProxy router will use.
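For example, a sketch of deploying the router with four HAProxy threads (the thread count is illustrative):
$ oc adm router --service-account=router --threads=4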
3.3. Deploying a Customized HAProxy Router
3.3.1. Overview
The default HAProxy router is intended to satisfy the needs of most users. However, it does not expose all of the capability of HAProxy. Therefore, users may need to modify the router for their own needs.
You may need to implement new features within the application back-ends, or modify the current operation. The router plug-in provides all the facilities necessary to make this customization.
The router pod uses a template file to create the needed HAProxy configuration file. The template file is a golang template. When processing the template, the router has access to OpenShift Container Platform information, including the router’s deployment configuration, the set of admitted routes, and some helper functions.
When the router pod starts, and every time it reloads, it creates an HAProxy configuration file, and then it starts HAProxy. The HAProxy configuration manual describes all of the features of HAProxy and how to construct a valid configuration file.
A configMap can be used to add the new template to the router pod. With this approach, the router deployment configuration is modified to mount the configMap as a volume in the router pod. The TEMPLATE_FILE
environment variable is set to the full path name of the template file in the router pod.
It is not guaranteed that router template customizations will still work after you upgrade OpenShift Container Platform.
Also, router template customizations must be applied to the template version of the router that is running.
Alternatively, you can build a custom router image and use it when deploying some or all of your routers. There is no need for all routers to run the same image. To do this, modify the haproxy-config.template file, and rebuild the router image. The new image is pushed to the cluster’s Docker repository, and the router’s deployment configuration image: field is updated with the new name. When the cluster is updated, the image needs to be rebuilt and pushed.
In either case, the router pod starts with the template file.
3.3.2. Obtaining the Router Configuration Template
The HAProxy template file is fairly large and complex. For some changes, it may be easier to modify the existing template rather than writing a complete replacement. You can obtain a haproxy-config.template file from a running router by running the following on the master, referencing the router pod:
# oc get po NAME READY STATUS RESTARTS AGE router-2-40fc3 1/1 Running 0 11d # oc exec router-2-40fc3 cat haproxy-config.template > haproxy-config.template # oc exec router-2-40fc3 cat haproxy.config > haproxy.config
Alternatively, you can log onto the node that is running the router:
# docker run --rm --interactive=true --tty --entrypoint=cat \ registry.redhat.io/openshift3/ose-haproxy-router:v{product-version} haproxy-config.template
The image name is from container images.
Save this content to a file for use as the basis of your customized template. The saved haproxy.config shows what is actually running.
3.3.3. Modifying the Router Configuration Template
3.3.3.1. Background
The template is based on the golang template. It can reference any of the environment variables in the router’s deployment configuration, any configuration information that is described below, and router provided helper functions.
The structure of the template file mirrors the resulting HAProxy configuration file. As the template is processed, anything not surrounded by {{" something "}}
is directly copied to the configuration file. Passages that are surrounded by {{" something "}}
are evaluated. The resulting text, if any, is copied to the configuration file.
3.3.3.2. Go Template Actions
The define action names the file that will contain the processed template.
{{define "/var/lib/haproxy/conf/haproxy.config"}}pipeline{{end}}
Table 3.2. Template Router Functions
Function | Meaning |
---|---|
| Returns the list of valid endpoints. When action is "shuffle", the order of endpoints is randomized. |
| Tries to get the named environment variable from the pod. If the variable is not defined or is empty, the optional second argument is returned. If no second argument is given, an empty string is returned. |
| The first argument is a string that contains the regular expression, the second argument is the variable to test. Returns a Boolean value indicating whether the regular expression provided as the first argument matches the string provided as the second argument. |
| Determines if a given variable is an integer. |
| Compares a given string to a list of allowed strings. Returns first match scanning left to right through the list. |
| Compares a given string to a list of allowed strings. Returns "true" if the string is an allowed value, otherwise returns false. |
| Generates a regular expression matching the route hosts (and paths). The first argument is the host name, the second is the path, and the third is a wildcard Boolean. |
| Generates host name to use for serving/matching certificates. First argument is the host name and the second is the wildcard Boolean. |
| Determines if a given variable contains "true". |
These functions are provided by the HAProxy template router plug-in.
3.3.3.3. Router Provided Information
This section reviews the OpenShift Container Platform information that the router makes available to the template. The router configuration parameters are the set of data that the HAProxy router plug-in is given. The fields are accessed by (dot) .Fieldname
.
The tables below the Router Configuration Parameters expand on the definitions of the various fields. In particular, .State has the set of admitted routes.
Table 3.3. Router Configuration Parameters
Field | Type | Description |
---|---|---|
| string | The directory that files will be written to, defaults to /var/lib/containers/router |
|
| The routes. |
|
| The service lookup. |
| string | Full path name to the default certificate in pem format. |
|
| Peers. |
| string | User name to expose stats with (if the template supports it). |
| string | Password to expose stats with (if the template supports it). |
| int | Port to expose stats with (if the template supports it). |
| bool | Whether the router should bind the default ports. |
Table 3.4. Router ServiceAliasConfig (A Route)
Field | Type | Description |
---|---|---|
| string | The user-specified name of the route. |
| string | The namespace of the route. |
| string |
The host name. For example, |
| string |
Optional path. For example, |
|
| The termination policy for this back-end; drives the mapping files and router configuration. |
|
| Certificates used for securing this back-end. Keyed by the certificate ID. |
|
| Indicates the status of configuration that needs to be persisted. |
| string | Indicates the port the user wants to expose. If empty, a port will be selected for the service. |
|
|
Indicates desired behavior for insecure connections to an edge-terminated route: |
| string | Hash of the route + namespace name used to obscure the cookie ID. |
| bool | Indicates this service unit needing wildcard support. |
|
| Annotations attached to this route. |
|
| Collection of services that support this route, keyed by service name and valued on the weight attached to it with respect to other entries in the map. |
| int |
Count of the |
The ServiceAliasConfig
is a route for a service. Uniquely identified by host + path. The default template iterates over routes using {{range $cfgIdx, $cfg := .State }}
. Within such a {{range}}
block, the template can refer to any field of the current ServiceAliasConfig
using $cfg.Field
.
Table 3.5. Router ServiceUnit
Field | Type | Description |
---|---|---|
| string |
Name corresponds to a service name + namespace. Uniquely identifies the ServiceUnit.
|
| Endpoints that back the service. This translates into a final back-end implementation for routers. |
ServiceUnit
is an encapsulation of a service, the endpoints that back that service, and the routes that point to the service. This is the data that drives the creation of the router configuration files.
Table 3.6. Router Endpoint
Field | Type |
---|---|
| string |
| string |
| string |
| string |
| string |
| string |
| bool |
Endpoint
is an internal representation of a Kubernetes endpoint.
Table 3.7. Router Certificate, ServiceAliasConfigStatus
Field | Type | Description |
---|---|---|
| string |
Represents a public/private key pair. It is identified by an ID, which will become the file name. A CA certificate will not have a PrivateKey. |
| string | Indicates that the necessary files for this configuration have been persisted to disk. Valid values: "saved", "". |
Table 3.8. Router Certificate Type
Field | Type | Description |
---|---|---|
ID | string | |
Contents | string | The certificate. |
PrivateKey | string | The private key. |
Table 3.9. Router TLSTerminationType
Field | Type | Description |
---|---|---|
| string | Dictates where the secure communication will stop. |
| string | Indicates the desired behavior for insecure connections to a route. While each router may make its own decisions on which ports to expose, this is normally port 80. |
TLSTerminationType
and InsecureEdgeTerminationPolicyType
dictate where the secure communication will stop.
Table 3.10. Router TLSTerminationType Values
Constant | Value | Meaning |
---|---|---|
|
| Terminate encryption at the edge router. |
|
| Terminate encryption at the destination, the destination is responsible for decrypting traffic. |
|
| Terminate encryption at the edge router and re-encrypt it with a new certificate supplied by the destination. |
Table 3.11. Router InsecureEdgeTerminationPolicyType Values
Type | Meaning |
---|---|
| Traffic is sent to the server on the insecure port (default). |
| No traffic is allowed on the insecure port. |
| Clients are redirected to the secure port. |
None (""
) is the same as Disable
.
3.3.3.4. Annotations
Each route can have annotations attached. Each annotation is just a name and a value.
apiVersion: v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms [...]
The name can be anything that does not conflict with existing Annotations. The value is any string. The string can have multiple tokens separated by a space. For example, aa bb cc
. The template uses {{index}}
to extract the value of an annotation. For example:
{{$balanceAlgo := index $cfg.Annotations "haproxy.router.openshift.io/balance"}}
This is an example of how this could be used for mutual client authorization.
{{ with $cnList := index $cfg.Annotations "whiteListCertCommonName" }} {{ if ne $cnList "" }} acl test ssl_c_s_dn(CN) -m str {{ $cnList }} http-request deny if !test {{ end }} {{ end }}
Then, you can handle the white-listed CNs with this command.
$ oc annotate route <route-name> --overwrite whiteListCertCommonName="CN1 CN2 CN3"
See Route-specific Annotations for more information.
3.3.3.5. Environment Variables
The template can use any environment variables that exist in the router pod. The environment variables can be set in the deployment configuration. New environment variables can be added.
They are referenced by the env
function:
{{env "ROUTER_MAX_CONNECTIONS" "20000"}}
The first string is the variable, and the second string is the default when the variable is missing or nil
. When ROUTER_MAX_CONNECTIONS
is not set or is nil
, 20000 is used. Environment variables are a map where the key is the environment variable name and the content is the value of the variable.
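For example, ROUTER_MAX_CONNECTIONS, referenced above, could be set on the router deployment configuration as follows (the value is illustrative):
$ oc set env dc/router ROUTER_MAX_CONNECTIONS=20000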
See Route-specific Environment variables for more information.
3.3.3.6. Example Usage
Here is a simple template based on the HAProxy template file.
Start with a comment:
{{/* Here is a small example of how to work with templates taken from the HAProxy template file. */}}
The template can create any number of output files. Use a define construct to create an output file. The file name is specified as an argument to define, and everything inside the define block up to the matching end is written as the contents of that file.
{{ define "/var/lib/haproxy/conf/haproxy.config" }} global {{ end }}
The above will copy global
to the /var/lib/haproxy/conf/haproxy.config file, and then close the file.
Set up logging based on environment variables.
{{ with (env "ROUTER_SYSLOG_ADDRESS" "") }} log {{.}} {{env "ROUTER_LOG_FACILITY" "local1"}} {{env "ROUTER_LOG_LEVEL" "warning"}} {{ end }}
The env
function extracts the value for the environment variable. If the environment variable is not defined or nil
, the second argument is returned.
The with construct sets the value of "." (dot) within the with block to whatever value is provided as an argument to with. The with
action tests Dot for nil
. If not nil
, the clause is processed up to the end
. In the above, assume ROUTER_SYSLOG_ADDRESS
contains /var/log/msg, ROUTER_LOG_FACILITY
is not defined, and ROUTER_LOG_LEVEL
contains info
. The following will be copied to the output file:
log /var/log/msg local1 info
Each admitted route ends up generating lines in the configuration file. Use range
to go through the admitted routes:
{{ range $cfgIdx, $cfg := .State }} backend be_http_{{$cfgIdx}} {{end}}
.State
is a map of ServiceAliasConfig
, where the key is the route name. range
steps through the map and, for each pass, it sets $cfgIdx
with the key
, and sets $cfg
to point to the ServiceAliasConfig
that describes the route. If there are two routes named myroute
and hisroute
, the above will copy the following to the output file:
backend be_http_myroute backend be_http_hisroute
Route Annotations, $cfg.Annotations
, is also a map with the annotation name as the key and the content string as the value. The route can have as many annotations as desired and the use is defined by the template author. The user codes the annotation into the route and the template author customizes the HAProxy template to handle the annotation.
The common usage is to index the annotation to get the value.
{{$balanceAlgo := index $cfg.Annotations "haproxy.router.openshift.io/balance"}}
The index extracts the value for the given annotation, if any. Therefore, $balanceAlgo
will contain the string associated with the annotation or nil
. As above, you can test for a non-nil
string and act on it with the with
construct.
{{ with $balanceAlgo }} balance {{ $balanceAlgo }} {{ end }}
Here when $balanceAlgo
is not nil
, balance followed by the value of $balanceAlgo is copied to the output file.
In a second example, you want to set a server timeout based on a timeout value set in an annotation.
$value := index $cfg.Annotations "haproxy.router.openshift.io/timeout"
The $value
can now be evaluated to make sure it contains a properly constructed string. The matchPattern
function accepts a regular expression and returns true
if the argument satisfies the expression.
matchPattern "[1-9][0-9]*(us\|ms\|s\|m\|h\|d)?" $value
This would accept 5000ms
but not 7y
. The results can be used in a test.
{{if (matchPattern "[1-9][0-9]*(us|ms|s|m|h|d)?" $value) }} timeout server {{$value}} {{ end }}
It can also be used to match tokens:
matchPattern "roundrobin|leastconn|source" $balanceAlgo
Alternatively matchValues
can be used to match tokens:
matchValues $balanceAlgo "roundrobin" "leastconn" "source"
3.3.4. Using a ConfigMap to Replace the Router Configuration Template
You can use a ConfigMap to customize the router instance without rebuilding the router image. You can modify the haproxy-config.template, reload-haproxy, and other scripts, as well as create and modify router environment variables.
- Copy the haproxy-config.template that you want to modify as described above. Modify it as desired.
Create a ConfigMap:
$ oc create configmap customrouter --from-file=haproxy-config.template
The
customrouter
ConfigMap now contains a copy of the modified haproxy-config.template file. Modify the router deployment configuration to mount the ConfigMap as a file and point the
TEMPLATE_FILE
environment variable to it. This can be done viaoc set env
andoc set volume
commands, or alternatively by editing the router deployment configuration.- Using
oc
commands $ oc set volume dc/router --add --overwrite \ --name=config-volume \ --mount-path=/var/lib/haproxy/conf/custom \ --source='{"configMap": { "name": "customrouter"}}' $ oc set env dc/router \ TEMPLATE_FILE=/var/lib/haproxy/conf/custom/haproxy-config.template
- Editing the Router Deployment Configuration
Use
oc edit dc router
to edit the router deployment configuration with a text editor. ... - name: STATS_USERNAME value: admin - name: TEMPLATE_FILE 1 value: /var/lib/haproxy/conf/custom/haproxy-config.template image: openshift/origin-haproxy-router ... terminationMessagePath: /dev/termination-log volumeMounts: 2 - mountPath: /var/lib/haproxy/conf/custom name: config-volume dnsPolicy: ClusterFirst ... terminationGracePeriodSeconds: 30 volumes: 3 - configMap: name: customrouter name: config-volume ...
Save the changes and exit the editor. This restarts the router.
3.3.5. Using Stick Tables
The following example customization can be used in a highly-available routing setup to use stick-tables that synchronize between peers.
Adding a Peer Section
In order to synchronize stick-tables amongst peers, you must define a peers section in your HAProxy configuration. This section determines how HAProxy identifies and connects to peers. The plug-in provides data to the template under the .PeerEndpoints
variable to allow you to easily identify members of the router service. You may add a peer section to the haproxy-config.template file inside the router image by adding:
{{ if gt (len .PeerEndpoints) 0 }} peers openshift_peers {{ range $endpointID, $endpoint := .PeerEndpoints }} peer {{$endpoint.TargetName}} {{$endpoint.IP}}:1937 {{ end }} {{ end }}
Changing the Reload Script
When using stick-tables, you have the option of telling HAProxy what it should consider the name of the local host in the peer section. When creating endpoints, the plug-in attempts to set the TargetName
to the value of the endpoint’s TargetRef.Name
. If TargetRef
is not set, it will set the TargetName
to the IP address. The TargetRef.Name
corresponds with the Kubernetes host name, therefore you can add the -L
option to the reload-haproxy
script to identify the local host in the peer section.
peer_name=$HOSTNAME 1
if [ -n "$old_pid" ]; then
/usr/sbin/haproxy -f $config_file -p $pid_file -L $peer_name -sf $old_pid
else
/usr/sbin/haproxy -f $config_file -p $pid_file -L $peer_name
fi
- 1
- Must match an endpoint target name that is used in the peer section.
Modifying Back Ends
Finally, to use the stick-tables within back ends, you can modify the HAProxy configuration to use the stick-tables and peer set. The following is an example of changing the existing back end for TCP connections to use stick-tables:
{{ if eq $cfg.TLSTermination "passthrough" }} backend be_tcp_{{$cfgIdx}} balance leastconn timeout check 5000ms stick-table type ip size 1m expire 5m{{ if gt (len $.PeerEndpoints) 0 }} peers openshift_peers {{ end }} stick on src {{ range $endpointID, $endpoint := $serviceUnit.EndpointTable }} server {{$endpointID}} {{$endpoint.IP}}:{{$endpoint.Port}} check inter 5000ms {{ end }} {{ end }}
After this modification, you can rebuild your router.
3.3.6. Rebuilding Your Router
In order to rebuild the router, you need copies of several files that are present on a running router. Make a work directory and copy the files from the router:
# mkdir -p myrouter/conf # cd myrouter # oc get po NAME READY STATUS RESTARTS AGE router-2-40fc3 1/1 Running 0 11d # oc exec router-2-40fc3 cat haproxy-config.template > conf/haproxy-config.template # oc exec router-2-40fc3 cat error-page-503.http > conf/error-page-503.http # oc exec router-2-40fc3 cat default_pub_keys.pem > conf/default_pub_keys.pem # oc exec router-2-40fc3 cat ../Dockerfile > Dockerfile # oc exec router-2-40fc3 cat ../reload-haproxy > reload-haproxy
You can edit or replace any of these files. However, conf/haproxy-config.template and reload-haproxy are the most likely to be modified.
After updating the files:
# docker build -t openshift/origin-haproxy-router-myversion . # docker tag openshift/origin-haproxy-router-myversion 172.30.243.98:5000/openshift/haproxy-router-myversion 1 # docker push 172.30.243.98:5000/openshift/haproxy-router-myversion:latest 2
To use the new router, edit the router deployment configuration either by changing the image: string or by adding the --images=<repo>/<image>:<tag>
flag to the oc adm router
command.
When debugging the changes, it is helpful to set imagePullPolicy: Always
in the deployment configuration to force an image pull on each pod creation. When debugging is complete, you can change it back to imagePullPolicy: IfNotPresent
to avoid the pull on each pod start.
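For example, a sketch of switching the policy with oc patch, assuming the router container is named router:
$ oc patch dc/router -p '{"spec":{"template":{"spec":{"containers":[{"name":"router","imagePullPolicy":"Always"}]}}}}'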
3.4. Configuring the HAProxy Router to Use the PROXY Protocol
3.4.1. Overview
By default, the HAProxy router expects incoming connections to unsecure, edge, and re-encrypt routes to use HTTP. However, you can configure the router to expect incoming requests by using the PROXY protocol instead. This topic describes how to configure the HAProxy router and an external load balancer to use the PROXY protocol.
3.4.2. Why Use the PROXY Protocol?
When an intermediary service such as a proxy server or load balancer forwards an HTTP request, it appends the source address of the connection to the request’s "Forwarded" header in order to provide this information to subsequent intermediaries and to the back-end service to which the request is ultimately forwarded. However, if the connection is encrypted, intermediaries cannot modify the "Forwarded" header. In this case, the HTTP header will not accurately communicate the original source address when the request is forwarded.
To solve this problem, some load balancers encapsulate HTTP requests using the PROXY protocol as an alternative to simply forwarding HTTP. Encapsulation enables the load balancer to add information to the request without modifying the forwarded request itself. In particular, this means that the load balancer can communicate the source address even when forwarding an encrypted connection.
The HAProxy router can be configured to accept the PROXY protocol and decapsulate the HTTP request. Because the router terminates encryption for edge and re-encrypt routes, the router can then update the "Forwarded" HTTP header (and related HTTP headers) in the request, appending any source address that is communicated using the PROXY protocol.
The PROXY protocol and HTTP are incompatible and cannot be mixed. If you use a load balancer in front of the router, both must use either the PROXY protocol or HTTP. Configuring one to use one protocol and the other to use the other protocol will cause routing to fail.
3.4.3. Using the PROXY Protocol
By default, the HAProxy router does not use the PROXY protocol. The router can be configured using the ROUTER_USE_PROXY_PROTOCOL
environment variable to expect the PROXY protocol for incoming connections:
Enable the PROXY Protocol
$ oc set env dc/router ROUTER_USE_PROXY_PROTOCOL=true
Set the variable to any value other than true
or TRUE
to disable the PROXY protocol:
Disable the PROXY Protocol
$ oc set env dc/router ROUTER_USE_PROXY_PROTOCOL=false
If you enable the PROXY protocol in the router, you must configure your load balancer in front of the router to use the PROXY protocol as well. Following is an example of configuring Amazon’s Elastic Load Balancer (ELB) service to use the PROXY protocol. This example assumes that ELB is forwarding ports 80 (HTTP), 443 (HTTPS), and 5000 (for the image registry) to the router running on one or more EC2 instances.
Configure Amazon ELB to Use the PROXY Protocol
To simplify subsequent steps, first set some shell variables:
$ lb='infra-lb' 1 $ instances=( 'i-079b4096c654f563c' ) 2 $ secgroups=( 'sg-e1760186' ) 3 $ subnets=( 'subnet-cf57c596' ) 4
Next, create the ELB with the appropriate listeners, security groups, and subnets.
Note: You must configure all listeners to use the TCP protocol, not the HTTP protocol.
$ aws elb create-load-balancer --load-balancer-name "$lb" \ --listeners \ 'Protocol=TCP,LoadBalancerPort=80,InstanceProtocol=TCP,InstancePort=80' \ 'Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=443' \ 'Protocol=TCP,LoadBalancerPort=5000,InstanceProtocol=TCP,InstancePort=5000' \ --security-groups $secgroups \ --subnets $subnets { "DNSName": "infra-lb-2006263232.us-east-1.elb.amazonaws.com" }
Register your router instance or instances with the ELB:
$ aws elb register-instances-with-load-balancer --load-balancer-name "$lb" \ --instances $instances { "Instances": [ { "InstanceId": "i-079b4096c654f563c" } ] }
Configure the ELB’s health check:
$ aws elb configure-health-check --load-balancer-name "$lb" \ --health-check 'Target=HTTP:1936/healthz,Interval=30,UnhealthyThreshold=2,HealthyThreshold=2,Timeout=5' { "HealthCheck": { "HealthyThreshold": 2, "Interval": 30, "Target": "HTTP:1936/healthz", "Timeout": 5, "UnhealthyThreshold": 2 } }
Finally, create a load-balancer policy with the
ProxyProtocol
attribute enabled, and configure it on the ELB’s TCP ports 80 and 443:$ aws elb create-load-balancer-policy --load-balancer-name "$lb" \ --policy-name "${lb}-ProxyProtocol-policy" \ --policy-type-name 'ProxyProtocolPolicyType' \ --policy-attributes 'AttributeName=ProxyProtocol,AttributeValue=true' $ for port in 80 443 do aws elb set-load-balancer-policies-for-backend-server \ --load-balancer-name "$lb" \ --instance-port "$port" \ --policy-names "${lb}-ProxyProtocol-policy" done
Verify the Configuration
You can examine the load balancer as follows to verify that the configuration is correct:
$ aws elb describe-load-balancers --load-balancer-name "$lb" | jq '.LoadBalancerDescriptions| [.[]|.ListenerDescriptions]' [ [ { "Listener": { "InstancePort": 80, "LoadBalancerPort": 80, "Protocol": "TCP", "InstanceProtocol": "TCP" }, "PolicyNames": ["infra-lb-ProxyProtocol-policy"] 1 }, { "Listener": { "InstancePort": 443, "LoadBalancerPort": 443, "Protocol": "TCP", "InstanceProtocol": "TCP" }, "PolicyNames": ["infra-lb-ProxyProtocol-policy"] 2 }, { "Listener": { "InstancePort": 5000, "LoadBalancerPort": 5000, "Protocol": "TCP", "InstanceProtocol": "TCP" }, "PolicyNames": [] 3 } ] ]
Alternatively, if you already have an ELB configured, but it is not configured to use the PROXY protocol, you will need to change the existing listener for TCP port 80 to use the TCP protocol instead of HTTP (TCP port 443 should already be using the TCP protocol):
$ aws elb delete-load-balancer-listeners --load-balancer-name "$lb" \ --load-balancer-ports 80 $ aws elb create-load-balancer-listeners --load-balancer-name "$lb" \ --listeners 'Protocol=TCP,LoadBalancerPort=80,InstanceProtocol=TCP,InstancePort=80'
Verify the Protocol Updates
Verify that the protocol has been updated as follows:
$ aws elb describe-load-balancers --load-balancer-name "$lb" |
jq '[.LoadBalancerDescriptions[]|.ListenerDescriptions]'
[
[
{
"Listener": {
"InstancePort": 443,
"LoadBalancerPort": 443,
"Protocol": "TCP",
"InstanceProtocol": "TCP"
},
"PolicyNames": []
},
{
"Listener": {
"InstancePort": 5000,
"LoadBalancerPort": 5000,
"Protocol": "TCP",
"InstanceProtocol": "TCP"
},
"PolicyNames": []
},
{
"Listener": {
"InstancePort": 80,
"LoadBalancerPort": 80,
"Protocol": "TCP", 1
"InstanceProtocol": "TCP"
},
"PolicyNames": []
}
]
]
- 1
- All listeners, including the listener for TCP port 80, should be using the TCP protocol.
Then, create a load-balancer policy and add it to the ELB as described in Step 5 above.
Chapter 4. Deploying Red Hat CloudForms
4.1. Deploying Red Hat CloudForms on OpenShift Container Platform
4.1.1. Introduction
The OpenShift Container Platform installer includes the Ansible role openshift-management and playbooks for deploying Red Hat CloudForms 4.6 (CloudForms Management Engine 5.9, or CFME) on OpenShift Container Platform.
The current implementation is incompatible with the Technology Preview deployment process of Red Hat CloudForms 4.5 as described in OpenShift Container Platform 3.6 documentation.
When deploying Red Hat CloudForms on OpenShift Container Platform, there are two major decisions to make:
- Do you want an external or a containerized (also referred to as podified) PostgreSQL database?
- Which storage class will back your persistent volumes (PVs)?
For the first decision, you can deploy Red Hat CloudForms in one of two ways, depending on the location of the PostgreSQL database to be used by Red Hat CloudForms:
Deployment Variant | Description |
---|---|
Fully containerized | All application services and the PostgreSQL database are run as pods on OpenShift Container Platform. |
External database | The application utilizes an externally-hosted PostgreSQL database server, while all other services are run as pods on OpenShift Container Platform. |
For the second decision, the openshift-management role provides customization options for overriding many default deployment parameters. This includes the following storage class options to back your PVs:
Storage Class | Description |
---|---|
NFS (default) | Local, on cluster |
NFS External | NFS somewhere else, like a storage appliance |
Cloud Provider | Use automatic storage provisioning from your cloud provider (Google Compute Engine, Amazon Web Services, or Microsoft Azure) |
Preconfigured (advanced) | Assumes you created everything ahead of time |
Topics in this guide include the requirements for running Red Hat CloudForms on OpenShift Container Platform, descriptions of the available configuration variables, and instructions on running the installer either during your initial OpenShift Container Platform installation or after your cluster has been provisioned.
4.2. Requirements for Red Hat CloudForms on OpenShift Container Platform
The default requirements are listed in the table below. These can be overridden by customizing template parameters.
Application performance will suffer, or the deployment may even fail, if these requirements are not satisfied.
Table 4.1. Default Requirements
Item | Requirement | Description | Customization Parameter |
---|---|---|---|
Application Memory | ≥ 4.0 Gi | Minimum required memory for the application | APPLICATION_MEM_REQ |
Application Storage | ≥ 5.0 Gi | Minimum PV size required for the application | APPLICATION_VOLUME_CAPACITY |
PostgreSQL Memory | ≥ 6.0 Gi | Minimum required memory for the database | POSTGRESQL_MEM_REQ |
PostgreSQL Storage | ≥ 15.0 Gi | Minimum PV size required for the database | DATABASE_VOLUME_CAPACITY |
Cluster Hosts | ≥ 3 | Number of hosts in your cluster | N/A |
To sum up these requirements:
- You must have several cluster nodes.
- Your cluster nodes must have lots of memory available.
- You must have several GiB of storage available, either locally or on your cloud provider.
- PV sizes can be changed by providing override values to template parameters.
4.3. Configuring Role Variables
4.3.1. Overview
The following sections describe role variables that may be used in your Ansible inventory file, which is used to control the behavior of the Red Hat CloudForms installation when running the installer.
4.3.2. General Variables
Variable | Required | Default | Description |
---|---|---|---|
| No |
|
Boolean, set to |
| Yes |
|
The deployment variant of Red Hat CloudForms to install. Set |
| No |
| Namespace (project) for the Red Hat CloudForms installation. |
| No |
| Namespace (project) description. |
| No |
| Default management user name. Changing this value does not change the user name; only change this value if you have changed the name already and are running integration scripts (such as the script to add container providers). |
| No |
| Default management password. Changing this value does not change the password; only change this value if you have changed the password already and are running integration scripts (such as the script to add container providers). |
4.3.3. Customizing Template Parameters
You can use the openshift_management_template_parameters
Ansible role variable to specify any template parameters you want to override in the application or PV templates.
For example, if you wanted to reduce the memory requirement of the PostgreSQL pod, then you could set the following:
openshift_management_template_parameters={'POSTGRESQL_MEM_REQ': '1Gi'}
When the Red Hat CloudForms template is processed, 1Gi
will be used for the value of the POSTGRESQL_MEM_REQ
template parameter.
Not all template parameters are present in both template variants (containerized or external database). For example, while the podified database template has a POSTGRESQL_MEM_REQ
parameter, no such parameter is present in the external database template, because that template does not deploy a database pod.
Therefore, be very careful if you are overriding template parameters. Including parameters not defined in a template will cause errors. If you do receive an error during the Ensure the Management App is created
task, run the uninstall scripts first before running the installer again.
4.3.4. Database Variables
4.3.4.1. Containerized (Podified) Database
Any POSTGRES_*
or DATABASE_*
template parameters in the cfme-template.yaml file may be customized through the openshift_management_template_parameters
hash in your inventory file.
4.3.4.2. External Database
Any POSTGRES_*
or DATABASE_*
template parameters in the cfme-template-ext-db.yaml file may be customized through the openshift_management_template_parameters
hash in your inventory file.
External PostgreSQL databases require you to provide database connection parameters. You must set the required connection keys in the openshift_management_template_parameters
parameter in your inventory. The following keys are required:
-
DATABASE_USER
-
DATABASE_PASSWORD
-
DATABASE_IP
-
DATABASE_PORT
(Most PostgreSQL servers run on port 5432
) -
DATABASE_NAME
Ensure your external database is running PostgreSQL 9.5 or you may not be able to deploy the CloudForms application successfully.
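As a quick sanity check, you can query the server version from any host with the PostgreSQL client installed; the connection values here mirror the example inventory below and are illustrative:
$ psql -h 10.10.10.10 -p 5432 -U root -d cfme -c 'SHOW server_version;'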
Your inventory would contain a line similar to:
[OSEv3:vars]
openshift_management_app_template=cfme-template-ext-db 1
openshift_management_template_parameters={'DATABASE_USER': 'root', 'DATABASE_PASSWORD': 'mypassword', 'DATABASE_IP': '10.10.10.10', 'DATABASE_PORT': '5432', 'DATABASE_NAME': 'cfme'}
- 1
- Set the
openshift_management_app_template
parameter tocfme-template-ext-db
.
4.3.5. Storage Class Variables
Variable | Required | Default | Description |
---|---|---|---|
| No |
|
Storage type to use. Options are |
| No |
|
If you are using an external NFS server, such as a NetApp appliance, then you must set the host name here. Leave the value as |
| No |
| If you are using external NFS, then you can set the base path to the exports location here. For local NFS, you can also change this value if you want to change the default path used for local NFS exports. |
| No |
|
If you do not have an |
4.3.5.1. NFS (Default)
The NFS storage class is best suited for proof-of-concept and test deployments. It is also the default storage class for deployments. No additional configuration is required for this choice.
This storage class configures NFS on a cluster host (by default, the first master in the inventory file) to back the required PVs. The application requires a PV, and the database (which may be hosted externally) may require a second. PV minimum required sizes are 5GiB for the Red Hat CloudForms application, and 15GiB for the PostgreSQL database (20GiB minimum available space on a volume or partition if used specifically for NFS purposes).
Customization is provided through the following role variables:
-
openshift_management_storage_nfs_base_dir
-
openshift_management_storage_nfs_local_hostname
4.3.5.2. NFS External
External NFS relies on pre-configured NFS servers to provide exports for the required PVs. For external NFS you must have a cfme-app export and, optionally, a cfme-db export (for a containerized database).
Configuration is provided through the following role variables:
-
openshift_management_storage_nfs_external_hostname
-
openshift_management_storage_nfs_base_dir
The openshift_management_storage_nfs_external_hostname
parameter must be set to the host name or IP of your external NFS server.
If /exports is not the parent directory to your exports then you must set the base directory via the openshift_management_storage_nfs_base_dir
parameter.
For example, if your server export is /exports/hosted/prod/cfme-app, then you must set openshift_management_storage_nfs_base_dir=/exports/hosted/prod
.
4.3.5.3. Cloud Provider
If you are using OpenShift Container Platform cloud provider integration for your storage class, Red Hat CloudForms can also use the cloud provider storage to back its required PVs. For this functionality to work, you must have configured the openshift_cloudprovider_kind
variable (for AWS or GCE) and all associated parameters specific to your chosen cloud provider.
When the application is created using this storage class, the required PVs are automatically provisioned using the configured cloud provider storage integration.
There are no additional variables to configure the behavior of this storage class.
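As a sketch, an inventory that selects this storage class on AWS might contain lines like the following; the cloudprovider storage class value and the aws kind are assumptions based on the role defaults, and the cloud provider credential parameters are omitted:
[OSEv3:vars]
openshift_cloudprovider_kind=aws
openshift_management_app_template=cfme-template
openshift_management_storage_class=cloudprovider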
4.3.5.4. Preconfigured (Advanced)
The preconfigured
storage class implies that you know exactly what you are doing and that all storage requirements have been taken care of ahead of time. Typically this means that you have already created the correctly sized PVs. The installer will do nothing to modify any storage settings.
There are no additional variables to configure the behavior of this storage class.
4.4. Running the Installer
4.4.1. Deploying Red Hat CloudForms During or After OpenShift Container Platform Installation
You can choose to deploy Red Hat CloudForms either during initial OpenShift Container Platform installation or after the cluster has been provisioned:
Ensure that
openshift_management_install_management
is set totrue
in your inventory file under the[OSEv3:vars]
section:[OSEv3:vars] openshift_management_install_management=true
- Set any other Red Hat CloudForms role variables in your inventory file as described in Configuring Role Variables. Resources to assist in this are provided in Example Inventory Files.
Choose which playbook to run depending on whether OpenShift Container Platform is already provisioned:
- If you want to install Red Hat CloudForms at the same time you install your OpenShift Container Platform cluster, call the standard config.yml playbook as described in Running the Installation Playbooks to begin the OpenShift Container Platform cluster and Red Hat CloudForms installation.
If you want to install Red Hat CloudForms on an already provisioned OpenShift Container Platform cluster, change to the playbook directory and call the Red Hat CloudForms playbook directly to begin the installation:
$ cd /usr/share/ansible/openshift-ansible $ ansible-playbook -v [-i /path/to/inventory] \ playbooks/openshift-management/config.yml
4.4.2. Example Inventory Files
The following sections show example snippets of inventory files showing various configurations of Red Hat CloudForms on OpenShift Container Platform that can help you get started.
See Configuring Role Variables for complete variable descriptions.
4.4.2.1. All Defaults
This example is the simplest, using all of the default values and choices. This results in a fully-containerized (podified) Red Hat CloudForms installation. All application components, as well as the PostgreSQL database, are created as pods in OpenShift Container Platform:
[OSEv3:vars] openshift_management_app_template=cfme-template
4.4.2.2. External NFS Storage
This is as the previous example, except that instead of using local NFS services in the cluster, it uses an existing, external NFS server (such as a storage appliance). Note the two new parameters:
[OSEv3:vars] openshift_management_app_template=cfme-template openshift_management_storage_class=nfs_external 1 openshift_management_storage_nfs_external_hostname=nfs.example.com 2
If the external NFS host exports directories under a different parent directory, such as /exports/hosted/prod, add the following additional variable:
openshift_management_storage_nfs_base_dir=/exports/hosted/prod
4.4.2.3. Override PV Sizes
This example overrides the persistent volume (PV) sizes. PV sizes must be set via openshift_management_template_parameters
, which ensures that the application and database are able to make claims on created PVs without interfering with each other:
[OSEv3:vars] openshift_management_app_template=cfme-template openshift_management_template_parameters={'APPLICATION_VOLUME_CAPACITY': '10Gi', 'DATABASE_VOLUME_CAPACITY': '25Gi'}
4.4.2.4. Override Memory Requirements
In a test or proof-of-concept installation, you may need to reduce the application and database memory requirements to fit within your capacity. Note that reducing memory limits can result in reduced performance or a complete failure to initialize the application:
[OSEv3:vars] openshift_management_app_template=cfme-template openshift_management_template_parameters={'APPLICATION_MEM_REQ': '3000Mi', 'POSTGRESQL_MEM_REQ': '1Gi', 'ANSIBLE_MEM_REQ': '512Mi'}
This example instructs the installer to process the application template with the parameter APPLICATION_MEM_REQ
set to 3000Mi
, POSTGRESQL_MEM_REQ
set to 1Gi
, and ANSIBLE_MEM_REQ
set to 512Mi
.
These parameters can be combined with the parameters displayed in the previous example Override PV Sizes.
4.4.2.5. External PostgreSQL Database
To use an external database, you must change the openshift_management_app_template
parameter value to cfme-template-ext-db
.
Additionally, database connection information must be supplied using the openshift_management_template_parameters
variable. See Configuring Role Variables for more details.
[OSEv3:vars] openshift_management_app_template=cfme-template-ext-db openshift_management_template_parameters={'DATABASE_USER': 'root', 'DATABASE_PASSWORD': 'mypassword', 'DATABASE_IP': '10.10.10.10', 'DATABASE_PORT': '5432', 'DATABASE_NAME': 'cfme'}
Ensure you are running PostgreSQL 9.5 or you may not be able to deploy the application successfully.
4.5. Enabling Container Provider Integration
4.5.1. Adding a Single Container Provider
After deploying Red Hat CloudForms on OpenShift Container Platform as described in Running the Installer, there are two methods for enabling container provider integration. You can manually add OpenShift Container Platform as a container provider, or you can try the playbooks included with this role.
4.5.1.1. Adding Manually
See the following Red Hat CloudForms documentation for steps on manually adding your OpenShift Container Platform cluster as a container provider:
4.5.1.2. Adding Automatically
Automated container provider integration can be accomplished using the playbooks included with this role.
This playbook:
- Gathers the necessary authentication secrets.
- Finds the public routes to the Red Hat CloudForms application and the cluster API.
- Makes a REST call to add the OpenShift Container Platform cluster as a container provider.
Change to the playbook directory and run the container provider playbook:
$ cd /usr/share/ansible/openshift-ansible $ ansible-playbook -v [-i /path/to/inventory] \ openshift-management/add_container_provider.yml
4.5.2. Multiple Container Providers
As well as providing playbooks to integrate your current OpenShift Container Platform cluster into your Red Hat CloudForms deployment, this role includes a script which allows you to add multiple container platforms as container providers in any arbitrary Red Hat CloudForms server. The container platforms can be OpenShift Container Platform or OpenShift Origin.
Using the multiple provider script requires manual configuration and setting an EXTRA_VARS
parameter on the CLI when running the playbook.
4.5.2.1. Preparing the Script
To prepare the multiple provider script, complete the following manual configuration:
- Copy the /usr/share/ansible/openshift-ansible/roles/openshift_management/files/examples/container_providers.yml example somewhere, such as /tmp/cp.yml. You will be modifying this file.
-
If you changed your Red Hat CloudForms name or password, update the
hostname
,user
, andpassword
parameters in themanagement_server
key in the container_providers.yml file that you copied. Fill in an entry under the
container_providers
key for each container platform cluster you want to add as container providers.The following parameters must be configured:
- auth_key - This is the token of a service account that has cluster-admin privileges.
- hostname - This is the host name that points to the cluster API. Each container provider must have a unique host name.
- name - This is the name of the cluster to be displayed in the Red Hat CloudForms server container providers overview page. This must be unique.
Tip: To obtain the auth_key bearer token from your clusters:
$ oc serviceaccounts get-token -n management-infra management-admin
- The following parameters may be optionally configured:
  - port - Update this key if your container platform cluster runs the API on a port other than 8443.
  - endpoint - You may enable SSL verification (verify_ssl) or change the validation setting to ssl-with-validation. Support for custom trusted CA certificates is not currently available.
4.5.2.1.1. Example
As an example, consider the following scenario:
- You copied the container_providers.yml file to /tmp/cp.yml.
- You want to add two OpenShift Container Platform clusters.
- Your Red Hat CloudForms server runs on mgmt.example.com.
For this scenario, you would customize /tmp/cp.yml as follows:
container_providers:
  - connection_configurations:
      - authentication: {auth_key: "<token>", authtype: bearer, type: AuthToken} 1
        endpoint: {role: default, security_protocol: ssl-without-validation, verify_ssl: 0}
    hostname: "<provider_hostname1>"
    name: <display_name1>
    port: 8443
    type: "ManageIQ::Providers::Openshift::ContainerManager"
  - connection_configurations:
      - authentication: {auth_key: "<token>", authtype: bearer, type: AuthToken} 2
        endpoint: {role: default, security_protocol: ssl-without-validation, verify_ssl: 0}
    hostname: "<provider_hostname2>"
    name: <display_name2>
    port: 8443
    type: "ManageIQ::Providers::Openshift::ContainerManager"
management_server:
  hostname: "<hostname>"
  user: <user_name>
  password: <password>
4.5.2.2. Running the Playbook
To run the multiple-providers integration script, you must provide the path to the container providers configuration file as an EXTRA_VARS
parameter to the ansible-playbook
command. Use the -e
(or --extra-vars
) parameter to set container_providers_config
to the configuration file path. Change to the playbook directory and run the playbook:
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook -v [-i /path/to/inventory] \
    -e container_providers_config=/tmp/cp.yml \
    playbooks/openshift-management/add_many_container_providers.yml
After the playbook completes, you should find two new container providers in your Red Hat CloudForms service. Navigate to the Compute → Containers → Providers
page to see an overview.
4.5.3. Refreshing Providers
After adding either a single or multiple container providers, the new provider(s) must be refreshed in Red Hat CloudForms to get all the latest data about the container provider and the containers being managed. This involves navigating to each provider in the Red Hat CloudForms web console and clicking a refresh button for each.
See the following Red Hat CloudForms documentation for steps:
4.6. Uninstalling Red Hat CloudForms
4.6.1. Running the Uninstall Playbook
To uninstall and erase a deployed Red Hat CloudForms installation from OpenShift Container Platform, change to the playbook directory and run the uninstall.yml playbook:
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook -v [-i /path/to/inventory] \
    playbooks/openshift-management/uninstall.yml
NFS export definitions and data stored on NFS exports are not automatically removed. You are urged to manually erase any data from old application or database deployments before attempting to initialize a new deployment.
4.6.2. Troubleshooting
Failure to erase old PostgreSQL data can result in cascading errors, causing the postgresql pod to enter a crashloopbackoff
state. This blocks the cfme pod from ever starting. The crashloopbackoff is caused by incorrect file permissions on the database NFS export created during a previous deployment.
To continue, erase all data from the PostgreSQL export and delete the pod (not the deployer pod). For example, if you had the following pods:
$ oc get pods
NAME                  READY     STATUS             RESTARTS   AGE
httpd-1-cx7fk         1/1       Running            1          21h
cfme-0                0/1       Running            1          21h
memcached-1-vkc7p     1/1       Running            1          21h
postgresql-1-deploy   1/1       Running            1          21h
postgresql-1-6w2t4    0/1       CrashLoopBackOff   1          21h
Then you would:
- Erase the data from the database NFS export.
Run:
$ oc delete pod postgresql-1-6w2t4
The PostgreSQL deployer pod will try to scale up a new postgresql pod to replace the one you deleted. After the postgresql pod is running, the cfme pod will stop blocking and begin application initialization.
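Condensed into commands, a rough sketch of this recovery follows; run the first command as root on the NFS server and the second against the cluster. The export path /exports/cfme-pv01 is a placeholder for your actual database export:
# rm -rf /exports/cfme-pv01/*
$ oc delete pod postgresql-1-6w2t4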
Chapter 5. Prometheus Cluster Monitoring
5.1. Overview
OpenShift Container Platform ships with a pre-configured and self-updating monitoring stack that is based on the Prometheus open source project and its wider eco-system. It provides monitoring of cluster components and ships with a set of alerts to immediately notify the cluster administrator about any occurring problems and a set of Grafana dashboards.

Highlighted in the diagram above, at the heart of the monitoring stack sits the OpenShift Container Platform Cluster Monitoring Operator (CMO), which watches over the deployed monitoring components and resources, and ensures that they are always up to date.
The Prometheus Operator (PO) creates, configures, and manages Prometheus and Alertmanager instances. It also automatically generates monitoring target configurations based on familiar Kubernetes label queries.
In addition to Prometheus and Alertmanager, OpenShift Container Platform Monitoring also includes node-exporter and kube-state-metrics. Node-exporter is an agent deployed on every node to collect metrics about it. The kube-state-metrics exporter agent converts Kubernetes objects to metrics consumable by Prometheus.
The targets monitored as part of the cluster monitoring are:
- Prometheus itself
- Prometheus-Operator
- cluster-monitoring-operator
- Alertmanager cluster instances
- Kubernetes apiserver
- kubelets (the kubelet embeds cAdvisor for per container metrics)
- kube-controllers
- kube-state-metrics
- node-exporter
- etcd (if etcd monitoring is enabled)
All these components are automatically updated.
For more information about the OpenShift Container Platform Cluster Monitoring Operator, see the Cluster Monitoring Operator GitHub project.
In order to be able to deliver updates with guaranteed compatibility, configurability of the OpenShift Container Platform Monitoring stack is limited to the explicitly available options.
5.2. Configuring OpenShift Container Platform cluster monitoring
The OpenShift Container Platform Ansible openshift_cluster_monitoring_operator
role configures and deploys the Cluster Monitoring Operator using the variables from the inventory file.
Table 5.1. Ansible variables
Variable | Description
---|---|
openshift_cluster_monitoring_operator_install | Deploy the Cluster Monitoring Operator if true. Otherwise, undeploy it. This variable is set to true by default.
openshift_cluster_monitoring_operator_prometheus_storage_capacity | The persistent volume claim size for each of the Prometheus instances. This variable applies only if openshift_cluster_monitoring_operator_prometheus_storage_enabled is set to true. Defaults to 50Gi.
openshift_cluster_monitoring_operator_alertmanager_storage_capacity | The persistent volume claim size for each of the Alertmanager instances. This variable applies only if openshift_cluster_monitoring_operator_alertmanager_storage_enabled is set to true. Defaults to 2Gi.
openshift_cluster_monitoring_operator_node_selector | Set to the desired, existing node selector to ensure that pods are placed onto nodes with specific labels. Defaults to node-role.kubernetes.io/infra=true.
openshift_cluster_monitoring_operator_alertmanager_config | Configures Alertmanager.
openshift_cluster_monitoring_operator_prometheus_storage_enabled | Enable persistent storage of Prometheus' time-series data. This variable is set to false by default.
openshift_cluster_monitoring_operator_alertmanager_storage_enabled | Enable persistent storage of Alertmanager notifications and silences. This variable is set to false by default.
openshift_cluster_monitoring_operator_prometheus_storage_class_name | If you enabled the openshift_cluster_monitoring_operator_prometheus_storage_enabled option, set the storage class name to use for the Prometheus persistent volume claim. Defaults to "".
openshift_cluster_monitoring_operator_alertmanager_storage_class_name | If you enabled the openshift_cluster_monitoring_operator_alertmanager_storage_enabled option, set the storage class name to use for the Alertmanager persistent volume claim. Defaults to "".
5.2.1. Monitoring prerequisites
The monitoring stack imposes additional resource requirements. See computing resources recommendations for details.
5.2.2. Installing monitoring stack
The monitoring stack is installed with OpenShift Container Platform by default. To prevent it from being installed, set this variable to false in the Ansible inventory file:
openshift_cluster_monitoring_operator_install
For example, run:
$ ansible-playbook [-i </path/to/inventory>] <OPENSHIFT_ANSIBLE_DIR>/playbooks/openshift-monitoring/config.yml \
    -e openshift_cluster_monitoring_operator_install=False
A common path for the Ansible directory is /usr/share/ansible/openshift-ansible/
. In this case, the path to the configuration file is /usr/share/ansible/openshift-ansible/playbooks/openshift-monitoring/config.yml
.
5.2.3. Persistent storage
Running cluster monitoring with persistent storage means that your metrics are stored to a persistent volume and can survive a pod being restarted or recreated. This is ideal if you require your metrics or alerting data to be guarded from data loss. For production environments, it is highly recommended to configure persistent storage using block storage technology.
5.2.3.1. Enabling persistent storage
By default, persistent storage is disabled for both Prometheus time-series data and for Alertmanager notifications and silences. You can configure the cluster to persistently store any one of them or both.
To enable persistent storage of Prometheus time-series data, set this variable to
true
in the Ansible inventory file:openshift_cluster_monitoring_operator_prometheus_storage_enabled
To enable persistent storage of Alertmanager notifications and silences, set this variable to
true
in the Ansible inventory file:openshift_cluster_monitoring_operator_alertmanager_storage_enabled
5.2.3.2. Determining how much storage is necessary
How much storage you need depends on the number of pods. It is the administrator’s responsibility to dedicate sufficient storage to ensure that the disk does not become full. For information on system requirements for persistent storage, see Capacity Planning for Cluster Monitoring Operator.
5.2.3.3. Setting persistent storage size
To specify the size of the persistent volume claim for Prometheus and Alertmanager, change these Ansible variables:
-
openshift_cluster_monitoring_operator_prometheus_storage_capacity
(default: 50Gi) -
openshift_cluster_monitoring_operator_alertmanager_storage_capacity
(default: 2Gi)
Each of these variables applies only if its corresponding storage_enabled
variable is set to true
.
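For example, a minimal inventory sketch that enables persistent storage for both components and overrides the default claim sizes; the sizes shown are illustrative only:
[OSEv3:vars]
openshift_cluster_monitoring_operator_prometheus_storage_enabled=true
openshift_cluster_monitoring_operator_prometheus_storage_capacity=100Gi
openshift_cluster_monitoring_operator_alertmanager_storage_enabled=true
openshift_cluster_monitoring_operator_alertmanager_storage_capacity=5Gi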
5.2.3.4. Allocating enough persistent volumes
Unless you use dynamically-provisioned storage, you need to make sure you have a persistent volume (PV) ready to be claimed by the PVC, one PV for each replica. Prometheus has two replicas and Alertmanager has three replicas, which amounts to five PVs.
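A minimal sketch of one such statically-provisioned PV, assuming NFS-backed storage; the volume name, server, and export path are placeholders, and you would create one PV per replica with a capacity at least as large as the corresponding claim:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-pv-0
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com
    path: /exports/prometheus-pv-0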
5.2.3.5. Enabling dynamically-provisioned storage
Instead of statically-provisioned storage, you can use dynamically-provisioned storage. See Dynamic Volume Provisioning for details.
To enable dynamic storage for Prometheus and Alertmanager, set the following parameters to true
in the Ansible inventory file:
-
openshift_cluster_monitoring_operator_prometheus_storage_enabled
(Default: false) -
openshift_cluster_monitoring_operator_alertmanager_storage_enabled
(Default: false)
After you enable dynamic storage, you can also set the storageclass
for the persistent volume claim for each component in the following parameters in the Ansible inventory file:
-
openshift_cluster_monitoring_operator_prometheus_storage_class_name
(default: "") -
openshift_cluster_monitoring_operator_alertmanager_storage_class_name
(default: "")
Each of these variables applies only if its corresponding storage_enabled
variable is set to true
.
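For example, a brief inventory sketch; the class name glusterfs-storage is only a placeholder for a storage class that exists in your cluster:
[OSEv3:vars]
openshift_cluster_monitoring_operator_prometheus_storage_enabled=true
openshift_cluster_monitoring_operator_prometheus_storage_class_name=glusterfs-storage
openshift_cluster_monitoring_operator_alertmanager_storage_enabled=true
openshift_cluster_monitoring_operator_alertmanager_storage_class_name=glusterfs-storage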
5.2.4. Supported configuration
The supported way of configuring OpenShift Container Platform Monitoring is by using the options described in this guide. Beyond those explicit configuration options, it is possible to inject additional configuration into the stack, but doing so is unsupported, because configuration paradigms might change across Prometheus releases, and such cases can be handled gracefully only if all configuration possibilities are controlled.
Explicitly unsupported cases include:
- Creating additional ServiceMonitor objects in the openshift-monitoring namespace, thereby extending the targets that the cluster monitoring Prometheus instance scrapes. This can cause collisions and load differences that cannot be accounted for, so the Prometheus setup can become unstable.
- Creating additional ConfigMap objects that cause the cluster monitoring Prometheus instance to include additional alerting and recording rules. Note that this is known to cause breakage if applied, because Prometheus 2.0 ships with a new rule file syntax.
5.3. Configuring Alertmanager
The Alertmanager manages incoming alerts; this includes silencing, inhibition, aggregation, and sending out notifications through methods such as email, PagerDuty, and HipChat.
The default configuration of the OpenShift Container Platform Monitoring Alertmanager cluster is:
global:
  resolve_timeout: 5m
route:
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  receiver: default
  routes:
  - match:
      alertname: DeadMansSwitch
    repeat_interval: 5m
    receiver: deadmansswitch
receivers:
- name: default
- name: deadmansswitch
This configuration can be overwritten using the Ansible variable openshift_cluster_monitoring_operator_alertmanager_config
from the openshift_cluster_monitoring_operator
role.
The following example configures PagerDuty for notifications. See the PagerDuty documentation for Alertmanager to learn how to retrieve the service_key
.
openshift_cluster_monitoring_operator_alertmanager_config: |+
  global:
    resolve_timeout: 5m
  route:
    group_wait: 30s
    group_interval: 5m
    repeat_interval: 12h
    receiver: default
    routes:
    - match:
        alertname: DeadMansSwitch
      repeat_interval: 5m
      receiver: deadmansswitch
    - match:
        service: example-app
      routes:
      - match:
          severity: critical
        receiver: team-frontend-page
  receivers:
  - name: default
  - name: deadmansswitch
  - name: team-frontend-page
    pagerduty_configs:
    - service_key: "<key>"
The sub-route matches only on alerts that have a severity of critical
and sends them using the receiver called team-frontend-page
. As the name indicates, someone should be paged for alerts that are critical. See Alertmanager configuration for configuring alerting through different alert receivers.
5.3.1. Dead man’s switch
OpenShift Container Platform Monitoring ships with a dead man’s switch to ensure the availability of the monitoring infrastructure.
The dead man’s switch is a simple Prometheus alerting rule that always triggers. The Alertmanager continuously sends notifications for the dead man’s switch to the notification provider that supports this functionality. This also ensures that communication between the Alertmanager and the notification provider is working.
This mechanism is supported by PagerDuty to issue alerts when the monitoring system itself is down. For more information, see Dead man’s switch PagerDuty below.
5.3.2. Grouping alerts
After alerts are firing against the Alertmanager, it must be configured to know how to logically group them.
For this example, a new route is added to reflect alert routing of the frontend
team.
Procedure
Add new routes. Multiple routes may be added beneath the original route, typically to define the receiver for the notification. The following example uses a matcher to ensure that only alerts coming from the service
example-app
are used:
global:
  resolve_timeout: 5m
route:
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  receiver: default
  routes:
  - match:
      alertname: DeadMansSwitch
    repeat_interval: 5m
    receiver: deadmansswitch
  - match:
      service: example-app
    routes:
    - match:
        severity: critical
      receiver: team-frontend-page
receivers:
- name: default
- name: deadmansswitch
The sub-route matches only on alerts that have a severity of
critical
, and sends them using the receiver calledteam-frontend-page
. As the name indicates, someone should be paged for alerts that are critical.
5.3.3. Dead man’s switch PagerDuty
PagerDuty supports this mechanism through an integration called Dead Man’s Snitch. Simply add a PagerDuty
configuration to the default deadmansswitch
receiver. Use the process described above to add this configuration.
Configure Dead Man’s Snitch to page the operator if the Dead man’s switch alert is silent for 15 minutes. With the default Alertmanager configuration, the Dead man’s switch alert is repeated every five minutes. If Dead Man’s Snitch triggers after 15 minutes, it indicates that the notification has been unsuccessful at least twice.
Learn how to configure Dead Man’s Snitch for PagerDuty.
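As a rough sketch of what that receiver change could look like, the example below points Alertmanager's generic webhook receiver at a Dead Man's Snitch check-in URL; the URL is a placeholder, and the exact integration details depend on your Dead Man's Snitch and PagerDuty setup:
receivers:
- name: default
- name: deadmansswitch
  webhook_configs:
  - url: "https://nosnch.in/<snitch-token>"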
5.3.4. Alerting rules
OpenShift Container Platform Cluster Monitoring ships with the following alerting rules configured by default. Currently you cannot add custom alerting rules.
Some alerting rules have identical names. This is intentional. They are alerting about the same event with different thresholds, with different severity, or both. With the inhibition rules, the lower severity is inhibited when the higher severity is firing.
For more details on the alerting rules, see the configuration file.
Alert | Severity | Description |
---|---|---|
|
| Cluster Monitoring Operator is experiencing X% errors. |
|
| Alertmanager has disappeared from Prometheus target discovery. |
|
| ClusterMonitoringOperator has disappeared from Prometheus target discovery. |
|
| KubeAPI has disappeared from Prometheus target discovery. |
|
| KubeControllerManager has disappeared from Prometheus target discovery. |
|
| KubeScheduler has disappeared from Prometheus target discovery. |
|
| KubeStateMetrics has disappeared from Prometheus target discovery. |
|
| Kubelet has disappeared from Prometheus target discovery. |
|
| NodeExporter has disappeared from Prometheus target discovery. |
|
| Prometheus has disappeared from Prometheus target discovery. |
|
| PrometheusOperator has disappeared from Prometheus target discovery. |
|
| Namespace/Pod (Container) is restarting times / second |
|
| Namespace/Pod is not ready. |
|
| Deployment Namespace/Deployment generation mismatch |
|
| Deployment Namespace/Deployment replica mismatch |
|
| StatefulSet Namespace/StatefulSet replica mismatch |
|
| StatefulSet Namespace/StatefulSet generation mismatch |
|
| Only X% of desired pods scheduled and ready for daemon set Namespace/DaemonSet |
|
| A number of pods of daemonset Namespace/DaemonSet are not scheduled. |
|
| A number of pods of daemonset Namespace/DaemonSet are running where they are not supposed to run. |
|
| CronJob Namespace/CronJob is taking more than 1h to complete. |
|
| Job Namespaces/Job is taking more than 1h to complete. |
|
| Job Namespaces/Job failed to complete. |
|
| Overcommited CPU resource requests on Pods, cannot tolerate node failure. |
|
| Overcommited Memory resource requests on Pods, cannot tolerate node failure. |
|
| Overcommited CPU resource request quota on Namespaces. |
|
| Overcommited Memory resource request quota on Namespaces. |
|
| X% usage of Resource in namespace Namespace. |
|
| The persistent volume claimed by PersistentVolumeClaim in namespace Namespace has X% free. |
|
| Based on recent sampling, the persistent volume claimed by PersistentVolumeClaim in namespace Namespace is expected to fill up within four days. Currently X bytes are available. |
|
| Node has been unready for more than an hour |
|
| There are X different versions of Kubernetes components running. |
|
| Kubernetes API server client 'Job/Instance' is experiencing X% errors.' |
|
| Kubernetes API server client 'Job/Instance' is experiencing X errors / sec.' |
|
| Kubelet Instance is running X pods, close to the limit of 110. |
|
| The API server has a 99th percentile latency of X seconds for Verb Resource. |
|
| The API server has a 99th percentile latency of X seconds for Verb Resource. |
|
| API server is erroring for X% of requests. |
|
| API server is erroring for X% of requests. |
|
| Kubernetes API certificate is expiring in less than 7 days. |
|
| Kubernetes API certificate is expiring in less than 1 day. |
|
|
Summary: Configuration out of sync. Description: The configuration of the instances of the Alertmanager cluster |
|
| Summary: Alertmanager’s configuration reload failed. Description: Reloading Alertmanager’s configuration has failed for Namespace/Pod. |
|
| Summary: Targets are down. Description: X% of Job targets are down. |
|
| Summary: Alerting DeadMansSwitch. Description: This is a DeadMansSwitch meant to ensure that the entire Alerting pipeline is functional. |
|
| Device Device of node-exporter Namespace/Pod is running full within the next 24 hours. |
|
| Device Device of node-exporter Namespace/Pod is running full within the next 2 hours. |
|
| Summary: Reloading Prometheus' configuration failed. Description: Reloading Prometheus' configuration has failed for Namespace/Pod |
|
| Summary: Prometheus' alert notification queue is running full. Description: Prometheus' alert notification queue is running full for Namespace/Pod |
|
| Summary: Errors while sending alert from Prometheus. Description: Errors while sending alerts from Prometheus Namespace/Pod to Alertmanager Alertmanager |
|
| Summary: Errors while sending alerts from Prometheus. Description: Errors while sending alerts from Prometheus Namespace/Pod to Alertmanager Alertmanager |
|
| Summary: Prometheus is not connected to any Alertmanagers. Description: Prometheus Namespace/Pod is not connected to any Alertmanagers |
|
| Summary: Prometheus has issues reloading data blocks from disk. Description: Job at Instance had X reload failures over the last four hours. |
|
| Summary: Prometheus has issues compacting sample blocks. Description: Job at Instance had X compaction failures over the last four hours. |
|
| Summary: Prometheus write-ahead log is corrupted. Description: Job at Instance has a corrupted write-ahead log (WAL). |
|
| Summary: Prometheus isn’t ingesting samples. Description: Prometheus Namespace/Pod isn’t ingesting samples. |
|
| Summary: Prometheus has many samples rejected. Description: Namespace/Pod has many samples rejected due to duplicate timestamps but different values |
|
| Etcd cluster "Job": insufficient members (X). |
|
| Etcd cluster "Job": member Instance has no leader. |
|
| Etcd cluster "Job": instance Instance has seen X leader changes within the last hour. |
|
| Etcd cluster "Job": X% of requests for GRPC_Method failed on etcd instance Instance. |
|
| Etcd cluster "Job": X% of requests for GRPC_Method failed on etcd instance Instance. |
|
| Etcd cluster "Job": gRPC requests to GRPC_Method are taking X_s on etcd instance _Instance. |
|
| Etcd cluster "Job": member communication with To is taking X_s on etcd instance _Instance. |
|
| Etcd cluster "Job": X proposal failures within the last hour on etcd instance Instance. |
|
| Etcd cluster "Job": 99th percentile fync durations are X_s on etcd instance _Instance. |
|
| Etcd cluster "Job": 99th percentile commit durations X_s on etcd instance _Instance. |
|
| Job instance Instance will exhaust its file descriptors soon |
|
| Job instance Instance will exhaust its file descriptors soon |
5.4. Configuring etcd monitoring
If the etcd
service does not run correctly, successful operation of the whole OpenShift Container Platform cluster is at risk. Therefore, it is reasonable to configure monitoring of etcd
.
Follow these steps to configure etcd
monitoring:
Procedure
Verify that the monitoring stack is running:
$ oc -n openshift-monitoring get pods
NAME                                           READY     STATUS              RESTARTS   AGE
alertmanager-main-0                            3/3       Running             0          34m
alertmanager-main-1                            3/3       Running             0          33m
alertmanager-main-2                            3/3       Running             0          33m
cluster-monitoring-operator-67b8797d79-sphxj   1/1       Running             0          36m
grafana-c66997f-pxrf7                          2/2       Running             0          37s
kube-state-metrics-7449d589bc-rt4mq            3/3       Running             0          33m
node-exporter-5tt4f                            2/2       Running             0          33m
node-exporter-b2mrp                            2/2       Running             0          33m
node-exporter-fd52p                            2/2       Running             0          33m
node-exporter-hfqgv                            2/2       Running             0          33m
prometheus-k8s-0                               4/4       Running             1          35m
prometheus-k8s-1                               0/4       ContainerCreating   0          21s
prometheus-operator-6c9fddd47f-9jfgk           1/1       Running             0          36m
Open the configuration file for the cluster monitoring stack:
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
Under
config.yaml: |+
, add the etcd section.
If you run
etcd
in static pods on your master nodes, you can specify the etcd
nodes using the selector:
...
data:
  config.yaml: |+
    ...
    etcd:
      targets:
        selector:
          openshift.io/component: etcd
          openshift.io/control-plane: "true"
If you run
etcd
on separate hosts, you need to specify the nodes using IP addresses:
...
data:
  config.yaml: |+
    ...
    etcd:
      targets:
        ips:
        - "127.0.0.1"
        - "127.0.0.2"
        - "127.0.0.3"
If the IP addresses for
etcd
nodes change, you must update this list.
Verify that the
etcd
service monitor is now running:$ oc -n openshift-monitoring get servicemonitor NAME AGE alertmanager 35m etcd 1m 1 kube-apiserver 36m kube-controllers 36m kube-state-metrics 34m kubelet 36m node-exporter 34m prometheus 36m prometheus-operator 37m
1 - The etcd service monitor.
It might take up to a minute for the
etcd
service monitor to start.
Now you can navigate to the web interface to see more information about the status of
etcd
monitoring.To get the URL, run:
$ oc -n openshift-monitoring get routes
NAME             HOST/PORT                                                                         PATH      SERVICES         PORT      TERMINATION   WILDCARD
...
prometheus-k8s   prometheus-k8s-openshift-monitoring.apps.msvistun.origin-gce.dev.openshift.com              prometheus-k8s   web       reencrypt     None
-
Using
https
, navigate to the URL listed forprometheus-k8s
. Log in.
Ensure the user belongs to the
cluster-monitoring-view
role. This role provides access to viewing cluster monitoring UIs. For example, to add user
developer
to thecluster-monitoring-view
role, run:$ oc adm policy add-cluster-role-to-user cluster-monitoring-view developer
-
In the web interface, log in as the user belonging to the
cluster-monitoring-view
role. Click Status, then Targets. If you see an
etcd
entry, etcd is being monitored.
While etcd is now being monitored, Prometheus is not yet able to authenticate against etcd, and so cannot gather metrics.
To configure Prometheus authentication against etcd:
Copy the /etc/etcd/ca/ca.crt and /etc/etcd/ca/ca.key credential files from the master node to the local machine:
$ ssh -i gcp-dev/ssh-privatekey cloud-user@35.237.54.213
Create the
openssl.cnf
file with these contents:[ req ] req_extensions = v3_req distinguished_name = req_distinguished_name [ req_distinguished_name ] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, keyEncipherment, digitalSignature extendedKeyUsage=serverAuth, clientAuth
Generate the
etcd.key
private key file:$ openssl genrsa -out etcd.key 2048
Generate the
etcd.csr
certificate signing request file:$ openssl req -new -key etcd.key -out etcd.csr -subj "/CN=etcd" -config openssl.cnf
Generate the
etcd.crt
certificate file:$ openssl x509 -req -in etcd.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out etcd.crt -days 365 -extensions v3_req -extfile openssl.cnf
Put the credentials into the format used by OpenShift Container Platform:
$ cat <<-EOF > etcd-cert-secret.yaml
apiVersion: v1
data:
  etcd-client-ca.crt: "$(cat ca.crt | base64 --wrap=0)"
  etcd-client.crt: "$(cat etcd.crt | base64 --wrap=0)"
  etcd-client.key: "$(cat etcd.key | base64 --wrap=0)"
kind: Secret
metadata:
  name: kube-etcd-client-certs
  namespace: openshift-monitoring
type: Opaque
EOF
This creates the etcd-cert-secret.yaml file.
Apply the credentials file to the cluster:
$ oc apply -f etcd-cert-secret.yaml
Now that you have configured authentication, visit the Targets page of the web interface again. Verify that
etcd
is now being correctly monitored. It might take several minutes for changes to take effect.If you want
etcd
monitoring to be automatically updated when you update OpenShift Container Platform, set this variable in the Ansible inventory file to true:
openshift_cluster_monitoring_operator_etcd_enabled=true
If you run
etcd
on separate hosts, specify the nodes by IP addresses using this Ansible variable:openshift_cluster_monitoring_operator_etcd_hosts=[<address1>, <address2>, ...]
If the IP addresses of the
etcd
nodes change, you must update this list.
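Putting these together, a brief inventory sketch; the IP addresses are placeholders, and the host list is only needed when etcd runs on separate hosts:
[OSEv3:vars]
openshift_cluster_monitoring_operator_etcd_enabled=true
openshift_cluster_monitoring_operator_etcd_hosts=['10.0.1.10', '10.0.1.11', '10.0.1.12']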
5.5. Accessing Prometheus, Alertmanager, and Grafana
OpenShift Container Platform Monitoring ships with a Prometheus instance for cluster monitoring and a central Alertmanager cluster. In addition to Prometheus and Alertmanager, OpenShift Container Platform Monitoring also includes a Grafana instance as well as pre-built dashboards for cluster monitoring troubleshooting. The Grafana instance that is provided with the monitoring stack, along with its dashboards, is read-only.
To get the addresses for accessing Prometheus, Alertmanager, and Grafana web UIs:
Procedure
Run the following command:
$ oc -n openshift-monitoring get routes
NAME                HOST/PORT
alertmanager-main   alertmanager-main-openshift-monitoring.apps._url_.openshift.com
grafana             grafana-openshift-monitoring.apps._url_.openshift.com
prometheus-k8s      prometheus-k8s-openshift-monitoring.apps._url_.openshift.com
Make sure to prepend
https://
to these addresses. You cannot access web UIs using unencrypted connections.-
Authentication is performed against the OpenShift Container Platform identity and uses the same credentials or means of authentication as are used elsewhere in OpenShift Container Platform. You must use a role that has read access to all namespaces, such as the
cluster-monitoring-view
cluster role.
Chapter 6. Accessing and Configuring the Red Hat Registry
6.1. Authentication Enabled Red Hat Registry
All container images available through the Red Hat Container Catalog are hosted on an image registry, registry.access.redhat.com
. With OpenShift Container Platform 3.11, the Red Hat Container Catalog moved from registry.access.redhat.com
to registry.redhat.io
.
The new registry, registry.redhat.io
, requires authentication for access to images and hosted content on OpenShift Container Platform. Following the move to the new registry, the existing registry will be available for a period of time.
OpenShift Container Platform pulls images from registry.redhat.io
, so you must configure your cluster to use it.
The new registry uses standard OAuth mechanisms for authentication, with the following methods:
- Authentication token. Tokens, which are generated by administrators, are service accounts that give systems the ability to authenticate against the container image registry. Service accounts are not affected by changes in user accounts, so the token authentication method is reliable and resilient. This is the only supported authentication option for production clusters.
-
Web username and password. This is the standard set of credentials you use to log in to resources such as
access.redhat.com
. While it is possible to use this authentication method with OpenShift Container Platform, it is not supported for production deployments. Restrict this authentication method to stand-alone projects outside OpenShift Container Platform.
You can use docker login
with your credentials, either username and password or authentication token, to access content on the new registry.
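For example, a login against the new registry with a service account could look like the following; the user name and token are placeholders:
$ docker login -u='12345678|my-service-account' -p=<token> registry.redhat.io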
All image streams point to the new registry. Because the new registry requires authentication for access, there is a new secret in the OpenShift namespace called imagestreamsecret
.
You must place your credentials in two places:
- OpenShift namespace. Your credentials must exist in the OpenShift namespace so that the image streams in the OpenShift namespace can import the images.
- Your host. Your credentials must exist on your host because Kubernetes uses the credentials from your host when it goes to pull images.
To access the new registry:
-
Verify image import secret,
imagestreamsecret
, is in your OpenShift namespace. That secret has credentials that allow you to access the new registry. -
Verify all of your cluster nodes have a
/var/lib/origin/.docker/config.json
, copied from master, that allows you to access the Red Hat registry.
6.1.1. Creating User accounts
If you are a Red Hat customer with entitlements to Red Hat products, you have an account with applicable user credentials. These are the username and password that you use to log in to the Red Hat Customer Portal.
If you do not have an account, you can acquire one for free by registering for one of the following options:
- Red Hat Developer Program. This account gives you access to developer tools and programs.
- 30-day Trial Subscription. This account gives you a 30-day trial subscription with access to select Red Hat software products.
6.1.2. Creating Service Accounts and Authentication Tokens for the Red Hat Registry
You must create tokens if your organization manages shared accounts. Administrators can create, view, and delete all tokens associated with an organization.
Prerequisites
- User credentials
Procedure
To create a token in order to complete a docker login
:
-
Navigate to
registry.redhat.io
. - Log in with your Red Hat Network (RHN) username and password.
Accept terms when prompted.
- If you are not immediately prompted to accept terms, you will be prompted when proceeding with the following steps.
From the Registry Service Accounts page, click Create Service Account.
- Provide a name for the service account. It will be prepended with a random string.
- Enter a description.
- Click create.
- Navigate back to your Service Accounts.
- Click the Service Account you created.
- Copy the username, including the prepended string.
- Copy the token.
6.1.3. Managing Registry Credentials for Installation and Upgrade
You can also manage registry credentials during installation or upgrade using the Ansible installer.
This will set up the following:
-
imagestreamsecret
in your OpenShift namespace. - Credentials on all nodes.
The Ansible installer will require credentials when you are using the default value of registry.redhat.io
for either openshift_examples_registryurl
or oreg_url
.
Prerequisites
- User credentials
- Service account
- Service account token
Procedure
To manage registry credentials during installation or upgrade using the Ansible installer:
-
During installation or upgrade, specify the
oreg_auth_user
andoreg_auth_password
variables in your installer inventory.
If you have created a token, set oreg_auth_password
to the value of the token.
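For example, a brief inventory sketch; the registry service account name and token are placeholders:
[OSEv3:vars]
oreg_auth_user=12345678|my-service-account
oreg_auth_password=<token>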
Clusters that require access to additional authenticated registries can configure a list of registries by setting openshift_additional_registry_credentials
. Each registry requires a host and password value; you can specify a username by setting user. By default, the specified credentials are validated by attempting to inspect the image openshift3/ose-pod
on the specified registry.
To specify an alternate image, either:
-
Set
test_image
. -
Disable credential validation by setting
test_login
to False.
If the registry is insecure, set tls_verify
to False.
All credentials in this list will have an imagestreamsecret
created in the OpenShift namespace and credentials deployed to all nodes.
For example:
openshift_additional_registry_credentials=[{'host':'registry.example.com','user':'name','password':'pass1','test_login':'False'},{'host':'registry2.example.com','password':'token12345','tls_verify':'False','test_image':'mongodb/mongodb'}]
6.1.4. Using Service Accounts with the Red Hat Registry
Once you have created your service accounts and generated tokens for the Red Hat Registry, you can perform additional tasks.
This section provides the manual steps, which can be automatically performed during installation by providing the inventory variables outlined in the Managing Registry Credentials for Installation and Upgrade section.
Prerequisites
- User credentials
- Service account
- Service account token
Procedure
From your Registry Service Accounts page, click on your account name. From there, you can perform the following tasks:
- From the Token Information tab, you can view your username (the name you provided prepended with a random string) and password (token). From this tab, you can regenerate your token.
From the OpenShift Secret tab, you can:
- Download the secret by clicking the link in the tab.
Submit the secret to the cluster:
# oc create -f <account-name>-secret.yml --namespace=openshift
Update your Kubernetes configuration by adding a reference to the secret to your Kubernetes pod configuration with an
imagePullSecrets
field, for example:apiVersion: v1 kind: Pod metadata: name: somepod namespace: all spec: containers: - name: web image: registry.redhat.io/REPONAME imagePullSecrets: - name: <numerical-string-account-name>-pull-secret
From the Docker Login tab, you can run
docker login
. For example:# docker login -u='<numerical-string|account-name>' -p=<token>
After you successfully log in, copy
~/.docker/config.json
to/var/lib/origin/.docker/config.json
and restart the node.# cp -r ~/.docker /var/lib/origin/ systemctl restart atomic-openshift-node
From the Docker Configuration tab, you can:
- Download the credentials configuration by clicking the link in the tab.
Write the configuration to the disk by placing the file in the Docker configuration directory. This will overwrite existing credentials. For example:
# mv <account-name>-auth.json ~/.docker/config.json
Chapter 7. Master and Node Configuration
7.1. Customizing master and node configuration after installation
The openshift start
command (for master servers) and hyperkube
command (for node servers) take a limited set of arguments that are sufficient for launching servers in a development or experimental environment. However, these arguments are insufficient to describe and control the full set of configuration and security options that are necessary in a production environment.
You must provide these options in the master configuration file, at /etc/origin/master/master-config.yaml, and the node configuration maps. These files define options including overriding the default plug-ins, connecting to etcd, automatically creating service accounts, building image names, customizing project requests, configuring volume plug-ins, and much more.
This topic covers the available options for customizing your OpenShift Container Platform master and node hosts, and shows you how to make changes to the configuration after installation.
These files are fully specified with no default values. Therefore, an empty value indicates that you want to start up with an empty value for that parameter. This makes it easy to reason about exactly what your configuration is, but it also makes it difficult to remember all of the options to specify. To make this easier, the configuration files can be created with the --write-config
option and then used with the --config
option.
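A minimal sketch of that workflow for a master, assuming a scratch directory of /tmp/openshift-config; the paths are illustrative:
$ openshift start master --write-config=/tmp/openshift-config
$ openshift start master --config=/tmp/openshift-config/master-config.yaml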
7.2. Installation dependencies
Production environments should be installed using the standard cluster installation process. In production environments, it is a good idea to use multiple masters for the purposes of high availability (HA). A cluster architecture of three masters is recommended, and HAProxy is the recommended solution for this.
If etcd is installed on the master hosts, you must configure your cluster to use at least three masters, because a two-member etcd cluster cannot decide which member is authoritative. The only way to successfully run only two masters is if you install etcd on hosts other than the masters.
7.3. Configuring masters and nodes
The method you use to configure your master and node configuration files must match the method that was used to install your OpenShift Container Platform cluster. If you followed the standard cluster installation process, then make your configuration changes in the Ansible inventory file.
7.4. Making configuration changes using Ansible
For this section, familiarity with Ansible is assumed.
Only a portion of the available host configuration options are exposed to Ansible. After an OpenShift Container Platform install, Ansible creates an inventory file with some substituted values. Modifying this inventory file and re-running the Ansible installer playbook is how you customize your OpenShift Container Platform cluster.
While OpenShift Container Platform supports using Ansible for cluster installation by way of an Ansible playbook and inventory file, you can also use other management tools, such as Puppet, Chef, or Salt.
Use Case: Configuring the cluster to use HTPasswd authentication
- This use case assumes you have already set up SSH keys to all the nodes referenced in the playbook.
The
htpasswd
utility is in thehttpd-tools
package:# yum install httpd-tools
To modify the Ansible inventory and make configuration changes:
- Open the ./hosts inventory file.
Add the following new variables to the
[OSEv3:vars]
section of the file:# htpasswd auth openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}] # Defining htpasswd users #openshift_master_htpasswd_users={'<name>': '<hashed-password>', '<name>': '<hashed-password>'} # or #openshift_master_htpasswd_file=/etc/origin/master/htpasswd
For HTPasswd authentication the
openshift_master_identity_providers
variable enables the authentication type. You can configure three different authentication options that use HTPasswd:-
Specify only
openshift_master_identity_providers
if/etc/origin/master/htpasswd
is already configured and present on the host. -
Specify both
openshift_master_identity_providers
andopenshift_master_htpasswd_file
to copy a local htpasswd file to the host. -
Specify both
openshift_master_identity_providers
andopenshift_master_htpasswd_users
to generate a new htpasswd file on the host.
Because OpenShift Container Platform requires a hashed password to configure HTPasswd authentication, you can use the
htpasswd
command, as shown in the following section, to generate the hashed password(s) for your user(s) or to create the flat file with the users and associated hashed passwords.The following example changes the authentication method from the default
deny all
setting tohtpasswd
and uses the specified file to generate user IDs and passwords for thejsmith
andbloblaw
users.# htpasswd auth openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}] # Defining htpasswd users openshift_master_htpasswd_users={'jsmith': '$apr1$wIwXkFLI$bAygtKGmPOqaJftB', 'bloblaw': '7IRJ$2ODmeLoxf4I6sUEKfiA$2aDJqLJe'} # or #openshift_master_htpasswd_file=/etc/origin/master/htpasswd
-
Specify only
Re-run the ansible playbook for these modifications to take effect:
$ ansible-playbook -b -i ./hosts ~/src/openshift-ansible/playbooks/deploy_cluster.yml
The playbook updates the configuration, and restarts the OpenShift Container Platform master service to apply the changes.
You have now modified the master and node configuration files using Ansible, but this is just a simple use case. From here you can see which master and node configuration options are exposed to Ansible and customize your own Ansible inventory.
7.4.1. Using the htpasswd
command
To configure the OpenShift Container Platform cluster to use HTPasswd authentication, you need at least one user with a hashed password to include in the inventory file.
You can:
- Generate the username and password to add directly to the ./hosts inventory file.
- Create a flat file to pass the credentials to the ./hosts inventory file.
To create a user and hashed password:
Run the following command to add the specified user:
$ htpasswd -n <user_name>
Note: You can include the -b option to supply the password on the command line:
$ htpasswd -nb <user_name> <password>
Enter and confirm a clear-text password for the user.
For example:
$ htpasswd -n myuser
New password:
Re-type new password:
myuser:$apr1$vdW.cI3j$WSKIOzUPs6Q
The command generates a hashed version of the password.
You can then use the hashed password when configuring HTPasswd authentication. The hashed password is the string after the :
. In the above example, you would enter:
openshift_master_htpasswd_users={'myuser': '$apr1$vdW.cI3j$WSKIOzUPs6Q'}
To create a flat file with a user name and hashed password:
Execute the following command:
$ htpasswd -c /etc/origin/master/htpasswd <user_name>
Note: You can include the -b option to supply the password on the command line:
$ htpasswd -c -b /etc/origin/master/htpasswd <user_name> <password>
Enter and confirm a clear-text password for the user.
For example:
$ htpasswd -c /etc/origin/master/htpasswd user1
New password:
Re-type new password:
Adding password for user user1
The command generates a file that includes the user name and a hashed version of the user’s password.
You can then use the password file when configuring HTPasswd authentication.
For more information on the htpasswd
command, see HTPasswd Identity Provider.
7.5. Making manual configuration changes
Use Case: Configure the cluster to use HTPasswd authentication
To manually modify a configuration file:
- Open the configuration file you want to modify, which in this case is the /etc/origin/master/master-config.yaml file:
Add the following new variables to the
identityProviders
stanza of the file:oauthConfig: ... identityProviders: - name: my_htpasswd_provider challenge: true login: true mappingMethod: claim provider: apiVersion: v1 kind: HTPasswdPasswordIdentityProvider file: /etc/origin/master/htpasswd
- Save your changes and close the file.
Restart the master for the changes to take effect:
# master-restart api
# master-restart controllers
You have now manually modified the master and node configuration files, but this is just a simple use case. From here you can see all the master and node configuration options, and further customize your own cluster by making further modifications.
To modify a node in your cluster, update the node configuration maps as needed. Do not manually edit the node-config.yaml file.
7.6. Master Configuration Files
This section reviews parameters mentioned in the master-config.yaml file.
You can create a new master configuration file to see the valid options for your installed version of OpenShift Container Platform.
Whenever you modify the master-config.yaml file, you must restart the master for the changes to take effect. See Restarting OpenShift Container Platform services.
7.6.1. Admission Control Configuration
Table 7.1. Admission Control Configuration Parameters
Parameter Name | Description |
---|---|
| Contains the admission control plug-in configuration. OpenShift Container Platform has a configurable list of admission controller plug-ins that are triggered whenever API objects are created or modified. This option allows you to override the default list of plug-ins; for example, disabling some plug-ins, adding others, changing the ordering, and specifying configuration. Both the list of plug-ins and their configuration can be controlled from Ansible. |
|
Key-value pairs that will be passed directly to the Kube API server that match the API servers' command line arguments. These are not migrated, but if you reference a value that does not exist the server will not start. These values may override other settings in apiServerArguments: event-ttl: - "15m" |
|
Key-value pairs that will be passed directly to the Kube controller manager that match the controller manager’s command line arguments. These are not migrated, but if you reference a value that does not exist the server will not start. These values may override other settings in |
|
Used to enable or disable various admission plug-ins. When this type is present as the configuration object under |
| Allows specifying a configuration file per admission control plug-in. |
| A list of admission control plug-in names that will be installed on the master. Order is significant. If empty, a default list of plug-ins is used. |
|
Key-value pairs that will be passed directly to the Kube scheduler that match the scheduler’s command line arguments. These are not migrated, but if you reference a value that does not exist the server will not start. These values may override other settings in |
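To illustrate the shape of the admission control stanza described in this table, the following is a minimal master-config.yaml sketch that passes plug-in configuration through pluginConfig; the plug-in shown (PodNodeConstraints) and its settings are only one example of a configurable admission plug-in:
admissionConfig:
  pluginConfig:
    PodNodeConstraints:
      configuration:
        apiVersion: v1
        kind: PodNodeConstraintsConfig
        nodeSelectorLabelBlacklist:
        - kubernetes.io/hostname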
7.6.2. Asset Configuration
Table 7.2. Asset Configuration Parameters
Parameter Name | Description |
---|---|
| If present, then the asset server starts based on the defined parameters. For example: assetConfig: logoutURL: "" masterPublicURL: https://master.ose32.example.com:8443 publicURL: https://master.ose32.example.com:8443/console/ servingInfo: bindAddress: 0.0.0.0:8443 bindNetwork: tcp4 certFile: master.server.crt clientCA: "" keyFile: master.server.key maxRequestsInFlight: 0 requestTimeoutSeconds: 0 |
|
To access the API server from a web application using a different host name, you must whitelist that host name by specifying |
| A list of features that should not be started. You will likely want to set this as null. It is very unlikely that anyone will want to manually disable features and that is not encouraged. |
| Files to serve from the asset server file system under a subcontext. |
| When set to true, tells the asset server to reload extension scripts and stylesheets for every request rather than only at startup. It lets you develop extensions without having to restart the server for every change. |
|
Key- (string) and value- (string) pairs that will be injected into the console under the global variable |
| File paths on the asset server files to load as scripts when the web console loads. |
| File paths on the asset server files to load as style sheets when the web console loads. |
| The public endpoint for logging (optional). |
| An optional, absolute URL to redirect web browsers to after logging out of the web console. If not specified, the built-in logout page is shown. |
| How the web console can access the OpenShift Container Platform server. |
| The public endpoint for metrics (optional). |
| URL of the asset server. |
7.6.3. Authentication and Authorization Configuration
Table 7.3. Authentication and Authorization Parameters
Parameter Name | Description |
---|---|
| Holds authentication and authorization configuration options. |
| Indicates how many authentication results should be cached. If 0, the default cache size is used. |
| Indicates how long an authorization result should be cached. It takes a valid time duration string (e.g. "5m"). If empty, you get the default timeout. If zero (e.g. "0m"), caching is disabled. |
7.6.4. Controller Configuration
Table 7.4. Controller Configuration Parameters
Parameter Name | Description |
---|---|
|
List of the controllers that should be started. If set to none, no controllers will start automatically. The default value is * which will start all controllers. When using *, you may exclude controllers by prepending a |
|
Enables controller election, instructing the master to attempt to acquire a lease before controllers start and renewing it within a number of seconds defined by this value. Setting this value non-negative forces |
| Instructs the master to not automatically start controllers, but instead to wait until a notification to the server is received before launching them. |
7.6.5. etcd Configuration
Table 7.5. etcd Configuration Parameters
Parameter Name | Description |
---|---|
| The advertised host:port for client connections to etcd. |
| Contains information about how to connect to etcd. Specifies if etcd is run as embedded or non-embedded, and the hosts. The rest of the configuration is handled by the Ansible inventory. For example: etcdClientInfo: ca: ca.crt certFile: master.etcd-client.crt keyFile: master.etcd-client.key urls: - https://m1.aos.example.com:4001 |
| If present, then etcd starts based on the defined parameters. For example: etcdConfig: address: master.ose32.example.com:4001 peerAddress: master.ose32.example.com:7001 peerServingInfo: bindAddress: 0.0.0.0:7001 certFile: etcd.server.crt clientCA: ca.crt keyFile: etcd.server.key servingInfo: bindAddress: 0.0.0.0:4001 certFile: etcd.server.crt clientCA: ca.crt keyFile: etcd.server.key storageDirectory: /var/lib/origin/openshift.local.etcd |
| Contains information about how API resources are stored in etcd. These values are only relevant when etcd is the backing store for the cluster. |
| The path within etcd that the Kubernetes resources will be rooted under. This value, if changed, will mean existing objects in etcd will no longer be located. The default value is kubernetes.io. |
| The API version that Kubernetes resources in etcd should be serialized to. This value should not be advanced until all clients in the cluster that read from etcd have code that allows them to read the new version. |
| The path within etcd that the OpenShift Container Platform resources will be rooted under. This value, if changed, will mean existing objects in etcd will no longer be located. The default value is openshift.io. |
| API version that OS resources in etcd should be serialized to. This value should not be advanced until all clients in the cluster that read from etcd have code that allows them to read the new version. |
| The advertised host:port for peer connections to etcd. |
| Describes how to start serving the etcd peer. |
| Describes how to start serving. For example: servingInfo: bindAddress: 0.0.0.0:8443 bindNetwork: tcp4 certFile: master.server.crt clientCA: ca.crt keyFile: master.server.key maxRequestsInFlight: 500 requestTimeoutSeconds: 3600 |
| The path to the etcd storage directory. |
7.6.6. Grant Configuration
Table 7.6. Grant Configuration Parameters
Parameter Name | Description |
---|---|
| Describes how to handle grants. |
| Auto-approves client authorization grant requests. |
| Auto-denies client authorization grant requests. |
| Prompts the user to approve new client authorization grant requests. |
| Determines the default strategy to use when an OAuth client requests a grant. This method will be used only if the specific OAuth client does not provide a strategy of their own. Valid grant handling methods are:
|
7.6.7. Image Configuration
Table 7.7. Image Configuration Parameters
Parameter Name | Description |
---|---|
| The format of the name to be built for the system component. |
| Determines if the latest tag will be pulled from the registry. |
7.6.8. Image Policy Configuration
Table 7.8. Image Policy Configuration Parameters
Parameter Name | Description |
---|---|
| Allows scheduled background import of images to be disabled. |
| Controls the number of images that are imported when a user does a bulk import of a Docker repository. This number defaults to 5 to prevent users from importing large numbers of images accidentally. Set -1 for no limit. |
| The maximum number of scheduled image streams that will be imported in the background per minute. The default value is 60. |
| The minimum number of seconds that can elapse between when image streams scheduled for background import are checked against the upstream repository. The default value is 15 minutes. |
| Limits the docker registries that normal users may import images from. Set this list to the registries that you trust to contain valid Docker images and that you want applications to be able to import from. Users with permission to create Images or ImageStreamMappings via the API are not affected by this policy - typically only administrators or system integrations will have those permissions. |
| Specified a filepath to a PEM-encoded file listing additional certificate authorities that should be trusted during imagestream import. This file needs to be accessible to the API server process. Depending how your cluster is installed, this may require mounting the file into the API server pod. |
|
Sets the hostname for the default internal image registry. The value must be in |
|
ExternalRegistryHostname sets the hostname for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The value is used in |
7.6.9. Kubernetes Master Configuration
Table 7.9. Kubernetes Master Configuration Parameters
Parameter Name | Description |
---|---|
| A list of API levels that should be enabled on startup, v1 as examples. |
|
A map of groups to the versions (or |
| Contains information about how to connect to kubelets. |
| Contains information about how to connect to kubelet’s KubernetesMasterConfig. If present, then start the kubernetes master with this process. |
| The number of expected masters that should be running. This value defaults to 1 and may be set to a positive integer, or if set to -1, indicates this is part of a cluster. |
|
The public IP address of Kubernetes resources. If empty, the first result from |
| File name for the .kubeconfig file that describes how to connect this node to the master. |
|
Controls grace period for deleting pods on failed nodes. It takes valid time duration string. If empty, you get the default pod eviction timeout. The default is |
| Specifies the client cert/key to use when proxying to pods.For example: proxyClientInfo: certFile: master.proxy-client.crt keyFile: master.proxy-client.key |
| The range to use for assigning service public ports on a host. Default 30000-32767. |
| The subnet to use for assigning service IPs. |
| The list of nodes that are statically known. |
7.6.10. Network Configuration
Choose the CIDRs in the following parameters carefully, because the IPv4 address space is shared by all users of the nodes. OpenShift Container Platform reserves part of the IPv4 address space for its own internal use, and another part for addresses that are shared between external users and the cluster.
Table 7.10. Network Configuration Parameters
Parameter Name | Description |
---|---|
| The CIDR string to specify the global overlay network’s L3 space. This is reserved for the internal use of the cluster networking. |
| Controls what values are acceptable for the service external IP field. If empty, no externalIP may be set. |
| The number of bits to allocate to each host’s subnet. For example, 8 would mean a /24 network on the host. |
| Controls the range to assign ingress IPs from for services of type LoadBalancer on bare metal. It may contain a single CIDR that ingress IPs will be allocated from. By default, 172.29.0.0/16 is configured. |
| The number of bits to allocate to each host’s subnet. For example, 8 would mean a /24 network on the host. |
| To be passed to the compiled-in network plug-in. Many of the options here can be controlled in the Ansible inventory.
For example:
networkConfig:
  clusterNetworks:
  - cidr: 10.3.0.0/16
    hostSubnetLength: 8
  networkPluginName: example/openshift-ovs-subnet
  # serviceNetworkCIDR must match kubernetesMasterConfig.servicesSubnet
  serviceNetworkCIDR: 179.29.0.0/16 |
| The name of the network plug-in to use. |
| The CIDR string to specify the service networks. |
7.6.11. OAuth Authentication Configuration
Table 7.11. OAuth Configuration Parameters
Parameter Name | Description |
---|---|
| Forces the provider selection page to render even when there is only a single provider. |
| Used for building valid client redirect URLs for external access. |
| A path to a file containing a go template used to render error pages during the authentication or grant flow. If unspecified, the default error page is used. |
| Ordered list of ways for a user to identify themselves. |
| A path to a file containing a go template used to render the login page. If unspecified, the default login page is used. |
| CA for verifying the TLS connection back to the masterURL. |
| Used for building valid client redirect URLs for external access. |
| Used for making server-to-server calls to exchange authorization codes for access tokens. |
| If present, then the /oauth endpoint starts based on the defined parameters. For example:
oauthConfig:
  assetPublicURL: https://master.ose32.example.com:8443/console/
  grantConfig:
    method: auto
  identityProviders:
  - challenge: true
    login: true
    mappingMethod: claim
    name: htpasswd_all
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /etc/origin/openshift-passwd
  masterCA: ca.crt
  masterPublicURL: https://master.ose32.example.com:8443
  masterURL: https://master.ose32.example.com:8443
  sessionConfig:
    sessionMaxAgeSeconds: 3600
    sessionName: ssn
    sessionSecretsFile: /etc/origin/master/session-secrets.yaml
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400
    authorizeTokenMaxAgeSeconds: 500 |
| Allows for customization of pages like the login page. |
| A path to a file containing a go template used to render the provider selection page. If unspecified, the default provider selection page is used. |
| Holds information about configuring sessions. |
| Allows you to customize pages like the login page. |
| Contains options for authorization and access tokens. |
7.6.12. Project Configuration
Table 7.12. Project Configuration Parameters
Parameter Name | Description |
---|---|
| Holds default project node label selector. |
| Holds information about project creation and defaults. |
| The string presented to a user if they are unable to request a project via the project request API endpoint. |
| The template to use for creating projects in response to a projectrequest. It is in the format namespace/template and it is optional. If it is not specified, a default template is used. |
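A minimal projectConfig sketch that uses these parameters is shown below; the message text and template name are hypothetical examples, and the field names are assumptions based on the descriptions above.
projectConfig:
  defaultNodeSelector: ""
  projectRequestMessage: "To request a project, contact your system administrator."
  projectRequestTemplate: "default/project-request"   # namespace/template, optional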
7.6.13. Scheduler Configuration
Table 7.13. Scheduler Configuration Parameters
Parameter Name | Description |
---|---|
| Points to a file that describes how to set up the scheduler. If empty, you get the default scheduling rules. |
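If you use a custom scheduler policy file, the reference is typically made from the Kubernetes master configuration. A minimal sketch, assuming the scheduler policy lives at the hypothetical path /etc/origin/master/scheduler.json:
kubernetesMasterConfig:
  schedulerConfigFile: /etc/origin/master/scheduler.json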
7.6.14. Security Allocator Configuration
Table 7.14. Security Allocator Parameters
Parameter Name | Description |
---|---|
| Defines the range of MCS categories that will be assigned to namespaces. The format is <prefix>/<numberOfLabels>[,<maxCategory>]. The default is s0/2. |
| Controls the automatic allocation of UIDs and MCS labels to a project. If nil, allocation is disabled. |
| Defines the total set of Unix user IDs (UIDs) that will be allocated to projects automatically, and the size of the block that each namespace gets. For example, 1000-1999/10 will allocate ten UIDs per namespace, and will be able to allocate up to 100 blocks before running out of space. The default is to allocate from 1 billion to 2 billion in 10k blocks (which is the expected size of the ranges container images will use once user namespaces are started). |
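A securityAllocator sketch under projectConfig might look like the following. The values shown are illustrative and the field names are assumptions based on the descriptions above; confirm them against the securityAllocator block in your existing master-config.yaml.
projectConfig:
  securityAllocator:
    mcsAllocatorRange: "s0:/2"                       # illustrative MCS range
    mcsLabelsPerProject: 5
    uidAllocatorRange: "1000000000-1999999999/10000" # 1 billion to 2 billion in 10k blocks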
7.6.15. Service Account Configuration
Table 7.15. Service Account Configuration Parameters
Parameter Name | Description |
---|---|
| Controls whether or not to allow a service account to reference any secret in a namespace without explicitly referencing them. |
| A list of service account names that will be auto-created in every namespace. If no names are specified, the ServiceAccountsController will not be started. |
| The CA for verifying the TLS connection back to the master. The service account controller will automatically inject the contents of this file into pods so they can verify connections to the master. |
| A file containing a PEM-encoded private RSA key, used to sign service account tokens. If no private key is specified, the service account TokensController will not be started. |
| A list of files, each containing a PEM-encoded public RSA key. If any file contains a private key, the public portion of the key is used. The list of public keys is used to verify presented service account tokens. Each key is tried in order until the list is exhausted or verification succeeds. If no keys are specified, no service account authentication will be available. |
| Holds options related to service accounts. |
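For illustration, a serviceAccountConfig stanza using these parameters might look like the following sketch. The file names are placeholders and the field names are assumptions drawn from the descriptions above.
serviceAccountConfig:
  limitSecretReferences: false
  managedNames:
  - default
  - builder
  - deployer
  masterCA: ca-bundle.crt
  privateKeyFile: serviceaccounts.private.key
  publicKeyFiles:
  - serviceaccounts.public.key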
7.6.16. Serving Information Configuration
Table 7.16. Serving Information Configuration Parameters
Parameter Name | Description |
---|---|
| Allows the DNS server on the master to answer queries recursively. Note that open resolvers can be used for DNS amplification attacks and the master DNS should not be made accessible to public networks. |
| The ip:port to serve on. |
| Controls limits and behavior for importing images. |
| A file containing a PEM-encoded certificate. |
| TLS cert information for serving secure traffic. |
| The certificate bundle for all the signers that you recognize for incoming client certificates. |
| If present, then start the DNS server based on the defined parameters. For example:
dnsConfig:
  bindAddress: 0.0.0.0:8053
  bindNetwork: tcp4 |
| Holds the domain suffix. |
| Holds the IP. |
| A file containing a PEM-encoded private key for the certificate specified by certFile. |
| Provides overrides to the client connection used to connect to the master. This parameter is not supported. To set QPS and burst values, see Setting Node QPS and Burst Values. |
| The number of concurrent requests allowed to the server. If zero, no limit. |
| A list of certificates to use to secure requests to specific host names. |
| The number of seconds before requests are timed out. The default is 60 minutes. If -1, there is no limit on requests. |
| The HTTP serving information for the assets. |
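The following servingInfo sketch pulls several of these parameters together; host names and file names are placeholders, and the field names are assumptions based on the descriptions above.
servingInfo:
  bindAddress: 0.0.0.0:8443
  bindNetwork: tcp4
  certFile: master.server.crt
  keyFile: master.server.key
  clientCA: ca.crt
  maxRequestsInFlight: 500
  requestTimeoutSeconds: 3600
  namedCertificates:
  - certFile: custom.crt
    keyFile: custom.key
    names:
    - "master.example.com"   # hypothetical external host name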
7.6.17. Volume Configuration
Table 7.17. Volume Configuration Parameters
Parameter Name | Description |
---|---|
| A boolean to enable or disable dynamic provisioning. Default is true. |
| Enables local storage quotas on each node for each FSGroup. At present this is only implemented for emptyDir volumes, and only if the underlying volumeDirectory is on an XFS filesystem. |
| Contains options for configuring volume plug-ins in the master node. |
| Contains options for configuring volumes on the node. |
| Contains options for configuring volume plug-ins in the node. |
| The directory that volumes are stored under. |
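A hedged sketch of the volume configuration follows. The master-side stanza belongs in master-config.yaml and the node-side stanza in node-config.yaml; the quota value and directory are placeholders, and the field names are assumptions based on the descriptions above.
# master-config.yaml
volumeConfig:
  dynamicProvisioningEnabled: true

# node-config.yaml
volumeConfig:
  localQuota:
    perFSGroup: 512Mi        # applies to emptyDir volumes on an XFS-backed volumeDirectory
volumeDirectory: /var/lib/origin/openshift.local.volumes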
7.6.18. Basic Audit
Audit provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system.
Audit works at the API server level, logging all requests coming to the server. Each audit log contains two entries:
The request line containing:
- A unique ID allowing the response line to be matched (see #2)
- The source IP of the request
- The HTTP method being invoked
- The original user invoking the operation
- The impersonated user for the operation (self meaning himself)
- The impersonated group for the operation (lookup meaning the user's group)
- The namespace of the request or <none>
- The URI as requested
The response line containing:
- The unique ID from #1
- The response code
Example output for user admin asking for a list of pods:
AUDIT: id="5c3b8227-4af9-4322-8a71-542231c3887b" ip="127.0.0.1" method="GET" user="admin" as="<self>" asgroups="<lookup>" namespace="default" uri="/api/v1/namespaces/default/pods"
AUDIT: id="5c3b8227-4af9-4322-8a71-542231c3887b" response="200"
7.6.18.1. Enable Basic Auditing
The following procedure enables basic auditing post installation.
Advanced Audit must be enabled during installation.
Edit the /etc/origin/master/master-config.yaml file on all master nodes as shown in the following example:
auditConfig:
  auditFilePath: "/var/log/origin/audit-ocp.log"
  enabled: true
  maximumFileRetentionDays: 14
  maximumFileSizeMegabytes: 500
  maximumRetainedFiles: 15
Restart the API pods in your cluster.
# /usr/local/bin/master-restart api
To enable basic auditing during installation, add the following variable declaration to your inventory file. Adjust values as appropriate.
openshift_master_audit_config={"enabled": true, "auditFilePath": "/var/lib/origin/openpaas-oscp-audit.log", "maximumFileRetentionDays": 14, "maximumFileSizeMegabytes": 500, "maximumRetainedFiles": 5}
The audit configuration takes the following parameters:
Table 7.18. Audit Configuration Parameters
Parameter Name | Description |
---|---|
| A boolean to enable or disable audit logs. Default is false. |
| File path where the requests should be logged to. If not set, logs are printed to master logs. |
| Specifies maximum number of days to retain old audit log files based on the time stamp encoded in their filename. |
| Specifies the maximum number of old audit log files to retain. |
| Specifies maximum size in megabytes of the log file before it gets rotated. Defaults to 100MB. |
Example Audit Configuration
auditConfig:
  auditFilePath: "/var/log/origin/audit-ocp.log"
  enabled: true
  maximumFileRetentionDays: 14
  maximumFileSizeMegabytes: 500
  maximumRetainedFiles: 15
When you define the auditFilePath parameter, the directory is created if it does not exist.
7.6.19. Advanced Audit
The advanced audit feature provides several improvements over the basic audit functionality, including fine-grained events filtering and multiple output back ends.
To enable the advanced audit feature, you create an audit policy file and specify the following values in the openshift_master_audit_config and openshift_master_audit_policyfile parameters:
openshift_master_audit_config={"enabled": true, "auditFilePath": "/var/log/origin/audit-ocp.log", "maximumFileRetentionDays": 14, "maximumFileSizeMegabytes": 500, "maximumRetainedFiles": 5, "policyFile": "/etc/origin/master/adv-audit.yaml", "logFormat":"json"}
openshift_master_audit_policyfile="/<path>/adv-audit.yaml"
You must create the adv-audit.yaml file before you install the cluster and specify its location in the cluster inventory file.
The following table contains additional options you can use.
Table 7.19. Advanced Audit Configuration Parameters
Parameter Name | Description |
---|---|
| Path to the file that defines the audit policy configuration. |
| An embedded audit policy configuration. |
| Specifies the format of the saved audit logs. Allowed values are legacy (the format used in basic audit) and json. |
| Path to a .kubeconfig-formatted file that defines the audit webhook configuration. |
| Specifies the strategy for sending audit events. Allowed values are block (blocks processing another event until the previous has fully processed) and batch (buffers events and delivers them in batches). |
To enable the advanced audit feature, you must provide either policyFile or policyConfiguration describing the audit policy rules:
Sample Audit Policy Configuration
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # Do not log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None 1
    users: ["system:kube-proxy"] 2
    verbs: ["watch"] 3
    resources: 4
    - group: ""
      resources: ["endpoints", "services"]

  # Do not log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"] 5
    nonResourceURLs: 6
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"] 7

  # Log configmap and secret changes in all other namespaces at the metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata 8

  # Log login failures from the web console or CLI. Review the logs and refine your policies.
  - level: Metadata
    nonResourceURLs:
    - /login* 9
    - /oauth* 10
- 1 8: There are four possible levels every event can be logged at:
  - None - Do not log events that match this rule.
  - Metadata - Log request metadata (requesting user, time stamp, resource, verb, etc.), but not request or response body. This is the same level as the one used in basic audit.
  - Request - Log event metadata and request body, but not response body.
  - RequestResponse - Log event metadata, request, and response bodies.
- 2: A list of users the rule applies to. An empty list implies every user.
- 3: A list of verbs this rule applies to. An empty list implies every verb. This is the Kubernetes verb associated with API requests (including get, list, watch, create, update, patch, delete, deletecollection, and proxy).
- 4: A list of resources the rule applies to. An empty list implies every resource. Each resource is specified as the group it is assigned to (for example, an empty string for the Kubernetes core API group, batch, build.openshift.io, etc.), and a resource list from that group.
- 5: A list of groups the rule applies to. An empty list implies every group.
- 6: A list of non-resource URLs the rule applies to.
- 7: A list of namespaces the rule applies to. An empty list implies every namespace.
- 9: Endpoint used by the web console.
- 10: Endpoint used by the CLI.
For more information on advanced audit, see the Kubernetes documentation.
7.6.20. Specifying TLS ciphers for etcd
You can specify the supported TLS ciphers to use in communication between the master and etcd servers.
On each etcd node, upgrade etcd:
# yum update etcd iptables-services
Confirm that your etcd version is 3.2.22 or later:
# etcd --version
etcd Version: 3.2.22
On each master host, specify the ciphers to enable in the