Chapter 6. Automatically building Dockerfiles with Build workers

Red Hat Quay supports building Dockerfiles using a set of worker nodes on OpenShift Container Platform or Kubernetes. Build triggers, such as GitHub webhooks, can be configured to automatically build new versions of your repositories when new code is committed.

This document shows you how to enable Builds with your Red Hat Quay installation, and how to set up one or more OpenShift Container Platform or Kubernetes clusters to accept Builds from Red Hat Quay.

6.1. Setting up Red Hat Quay Builders with OpenShift Container Platform

You must pre-configure Red Hat Quay Builders before using them with OpenShift Container Platform.

6.1.1. Configuring the OpenShift Container Platform TLS component

The tls component allows you to control TLS configuration.

Note

Red Hat Quay does not support Builders when the TLS component is managed by the Red Hat Quay Operator.

If you set tls to unmanaged, you supply your own ssl.cert and ssl.key files. In this instance, if you want your cluster to support Builders, you must add both the Quay route and the Builder route name to the SAN list in the certificate; alternatively, you can use a wildcard certificate.
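
For reference, the following is a minimal QuayRegistry excerpt that sets the tls component to unmanaged. The registry name example-registry and the quay-enterprise namespace are placeholder values for illustration only:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: tls
      managed: false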

To add the builder route, use the following format:

[quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name]
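
For example, with a hypothetical QuayRegistry named example-registry deployed in the quay-enterprise namespace on a cluster whose domain is apps.cluster.example.com, the Builder route name would be:

example-registry-quay-builder-quay-enterprise.apps.cluster.example.com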

6.1.2. Preparing OpenShift Container Platform for Red Hat Quay Builders

Prepare OpenShift Container Platform for Red Hat Quay Builders by using the following procedure.

Prerequisites

  • You have configured the OpenShift Container Platform TLS component.

Procedure

  1. Enter the following command to create a project where Builds will be run, for example, builder:

    $ oc new-project builder
  2. Create a new ServiceAccount in the builder namespace by entering the following command:

    $ oc create sa -n builder quay-builder
  3. Enter the following command to grant the quay-builder service account the edit role within the builder namespace:

    $ oc policy add-role-to-user -n builder edit system:serviceaccount:builder:quay-builder
  4. Enter the following command to retrieve a token associated with the quay-builder service account in the builder namespace. This token is used to authenticate and interact with the OpenShift Container Platform cluster’s API server.

    $ oc sa get-token -n builder quay-builder
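
    If the oc sa get-token command is unavailable or does not return a token on your version of the OpenShift CLI, you can typically request a bound token instead, for example:

    $ oc create token quay-builder -n builder

    Tokens requested this way expire after a default duration, so confirm an appropriate token lifetime with your cluster administrator.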
  5. Identify the URL for the OpenShift Container Platform cluster’s API server. This can be found in the OpenShift Container Platform Web Console.
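
    Alternatively, you can print the API server URL from the command line by entering the following command:

    $ oc whoami --show-server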
  6. Identify a worker node label to be used when scheduling Build jobs. Because Build pods must run on bare metal worker nodes, these nodes are typically identified with specific labels.

    Check with your cluster administrator to determine exactly which node label should be used.
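
    For example, you can review the labels that are currently set on the worker nodes by entering the following command:

    $ oc get nodes -l node-role.kubernetes.io/worker --show-labels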

  7. Optional. If the cluster is using a self-signed certificate, you must obtain the Kubernetes API server’s certificate authority (CA) and add it to Red Hat Quay’s extra certificates.

    1. Enter the following command to obtain the name of the secret containing the CA:

      $ oc get sa openshift-apiserver-sa --namespace=openshift-apiserver -o json | jq '.secrets[] | select(.name | contains("openshift-apiserver-sa-token")).name'
    2. Obtain the ca.crt key value from the secret in the OpenShift Container Platform Web Console. The value begins with "-----BEGIN CERTIFICATE-----".
    3. Import the CA to Red Hat Quay. Ensure that the name of this file matches K8S_API_TLS_CA.
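
      Alternatively, assuming the secret name returned in the first sub-step, you can extract the CA from the command line and save it with a file name that matches your K8S_API_TLS_CA setting, for example:

      $ oc get secret -n openshift-apiserver <secret_name> -o jsonpath='{.data.ca\.crt}' | base64 -d > build_cluster.crt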
  8. Create the following SecurityContextConstraints resource for the ServiceAccount:

    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: quay-builder
    priority: null
    readOnlyRootFilesystem: false
    requiredDropCapabilities: null
    runAsUser:
      type: RunAsAny
    seLinuxContext:
      type: RunAsAny
    seccompProfiles:
    - '*'
    supplementalGroups:
      type: RunAsAny
    volumes:
    - '*'
    allowHostDirVolumePlugin: true
    allowHostIPC: true
    allowHostNetwork: true
    allowHostPID: true
    allowHostPorts: true
    allowPrivilegeEscalation: true
    allowPrivilegedContainer: true
    allowedCapabilities:
    - '*'
    allowedUnsafeSysctls:
    - '*'
    defaultAddCapabilities: null
    fsGroup:
      type: RunAsAny
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: quay-builder-scc
      namespace: builder
    rules:
    - apiGroups:
      - security.openshift.io
      resourceNames:
      - quay-builder
      resources:
      - securitycontextconstraints
      verbs:
      - use
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: quay-builder-scc
      namespace: builder
    subjects:
    - kind: ServiceAccount
      name: quay-builder
      namespace: builder
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: quay-builder-scc
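
    Save these manifests to a file, for example, quay-builder-scc.yaml, and then apply them to the cluster by entering the following command:

    $ oc apply -f quay-builder-scc.yaml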

6.1.3. Configuring Red Hat Quay Builders

Use the following procedure to enable Red Hat Quay Builders.

Procedure

  1. Ensure that your Red Hat Quay config.yaml file has Builds enabled, for example:

    FEATURE_BUILD_SUPPORT: True
  2. Add the following information to your Red Hat Quay config.yaml file, replacing each value with information that is relevant to your specific installation:

    BUILD_MANAGER:
    - ephemeral
    - ALLOWED_WORKER_COUNT: 1
      ORCHESTRATOR_PREFIX: buildman/production/
      ORCHESTRATOR:
        REDIS_HOST: quay-redis-host
        REDIS_PASSWORD: quay-redis-password
        REDIS_SSL: true
        REDIS_SKIP_KEYSPACE_EVENT_SETUP: false
      EXECUTORS:
      - EXECUTOR: kubernetes
        BUILDER_NAMESPACE: builder
        K8S_API_SERVER: api.openshift.somehost.org:6443
        K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build_cluster.crt
        VOLUME_SIZE: 8G
        KUBERNETES_DISTRIBUTION: openshift
        CONTAINER_MEMORY_LIMITS: 5120Mi
        CONTAINER_CPU_LIMITS: 1000m
        CONTAINER_MEMORY_REQUEST: 3968Mi
        CONTAINER_CPU_REQUEST: 500m
        NODE_SELECTOR_LABEL_KEY: beta.kubernetes.io/instance-type
        NODE_SELECTOR_LABEL_VALUE: n1-standard-4
        CONTAINER_RUNTIME: podman
        SERVICE_ACCOUNT_NAME: <service_account_name>
        SERVICE_ACCOUNT_TOKEN: <service_account_token>
        QUAY_USERNAME: quay-username
        QUAY_PASSWORD: quay-password
        WORKER_IMAGE: <registry>/quay-quay-builder
        WORKER_TAG: some_tag
        BUILDER_VM_CONTAINER_IMAGE: <registry>/quay-quay-builder-qemu-rhcos:v3.4.0
        SETUP_TIME: 180
        MINIMUM_RETRY_THRESHOLD: 0
        SSH_AUTHORIZED_KEYS:
        - ssh-rsa 12345 someuser@email.com
        - ssh-rsa 67890 someuser2@email.com

    For more information about each configuration field, see the Red Hat Quay configuration documentation.

6.2. OpenShift Container Platform Routes limitations

The following limitations apply when you are using the Red Hat Quay Operator on OpenShift Container Platform with a managed route component:

  • Currently, OpenShift Container Platform Routes are only able to serve traffic to a single port. Additional steps are required to set up Red Hat Quay Builds.
  • Ensure that your kubectl or oc CLI tool is configured to work with the cluster where the Red Hat Quay Operator is installed and that your QuayRegistry exists; the QuayRegistry does not have to be on the same bare metal cluster where Builders run.
  • Ensure that HTTP/2 ingress is enabled on the OpenShift Container Platform cluster, for example, by annotating the default Ingress Controller as shown in the example after this list.
  • The Red Hat Quay Operator creates a Route resource that directs gRPC traffic to the Build manager server running inside of the existing Quay pod, or pods. If you want to use a custom hostname, or a subdomain like <builder-registry.example.com>, ensure that you create a CNAME record with your DNS provider that points to the status.ingress[0].host of the created Route resource. For example:

    $ kubectl get -n <namespace> route <quayregistry-name>-quay-builder -o jsonpath={.status.ingress[0].host}
  • Using the OpenShift Container Platform UI or CLI, update the Secret referenced by spec.configBundleSecret of the QuayRegistry with the Build cluster CA certificate, naming the key extra_ca_cert_build_cluster.cert; a command-line example follows this list. Then, update the config.yaml file entry with the correct values referenced in the Builder configuration that you created when you configured Red Hat Quay Builders, and add the BUILDMAN_HOSTNAME configuration field:

    BUILDMAN_HOSTNAME: <build-manager-hostname> 1
    BUILD_MANAGER:
    - ephemeral
    - ALLOWED_WORKER_COUNT: 1
      ORCHESTRATOR_PREFIX: buildman/production/
      JOB_REGISTRATION_TIMEOUT: 600
      ORCHESTRATOR:
        REDIS_HOST: <quay_redis_host>
        REDIS_PASSWORD: <quay_redis_password>
        REDIS_SSL: true
        REDIS_SKIP_KEYSPACE_EVENT_SETUP: false
      EXECUTORS:
      - EXECUTOR: kubernetes
        BUILDER_NAMESPACE: builder
        ...
    1
    The externally accessible server hostname which the build jobs use to communicate back to the Build manager. Default is the same as SERVER_HOSTNAME. For OpenShift Route, it is either status.ingress[0].host or the CNAME entry if using a custom hostname. BUILDMAN_HOSTNAME must include the port number, for example, somehost:443 for an OpenShift Container Platform Route, as the gRPC client used to communicate with the build manager does not infer any port if omitted.
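
As referenced in the list above, one way to enable HTTP/2 ingress is to annotate the default Ingress Controller. The exact procedure can vary by OpenShift Container Platform version, so verify it against the documentation for your cluster. For example:

    $ oc -n openshift-ingress-operator annotate ingresscontrollers/default ingress.operator.openshift.io/default-enable-http2=true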
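
Also referenced in the list above, the Build cluster CA certificate can be added to the configuration bundle Secret from the command line. The following sketch assumes GNU base64, a CA file named build_cluster.crt, and placeholder names for the namespace and the Secret referenced by spec.configBundleSecret:

    $ oc -n <namespace> patch secret <config_bundle_secret> --type merge \
        -p "{\"data\":{\"extra_ca_cert_build_cluster.cert\":\"$(base64 -w0 build_cluster.crt)\"}}"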

6.3. Troubleshooting Builds

The Builder instances started by the Build manager are ephemeral. This means that they are either shut down by Red Hat Quay on timeouts or failure, or garbage collected by the control plane (EC2/Kubernetes). To obtain the Build logs, you must retrieve them while the Builds are running.

6.3.1. DEBUG config flag

The DEBUG flag can be set to true in order to prevent the Builder instances from getting cleaned up after completion or failure. For example:

  EXECUTORS:
    - EXECUTOR: ec2
      DEBUG: true
      ...
    - EXECUTOR: kubernetes
      DEBUG: true
      ...

When set to true, the debug feature prevents the Build nodes from shutting down after the quay-builder service is done or fails. It also prevents the Build manager from cleaning up the instances by terminating EC2 instances or deleting Kubernetes jobs. This allows debugging Builder node issues.

Debugging should not be enabled in a production environment. The lifetime service still exists; for example, the instance still shuts down after approximately two hours. When this happens, EC2 instances are terminated, and Kubernetes jobs are completed.

Enabling debug also affects the ALLOWED_WORKER_COUNT, because the unterminated instances and jobs still count toward the total number of running workers. As a result, the existing Builder workers must be manually deleted if ALLOWED_WORKER_COUNT is reached to be able to schedule new Builds.

6.3.2. Troubleshooting OpenShift Container Platform and Kubernetes Builds

Use the following procedure to troubleshoot OpenShift Container Platform and Kubernetes Builds.

Procedure

  1. Create a port forwarding tunnel between your local machine and a Build pod running on either an OpenShift Container Platform cluster or a Kubernetes cluster by entering the following command, where the <builder_pod> name can be found as shown in the example after this procedure:

    $ oc port-forward <builder_pod> 9999:2222
  2. Establish an SSH connection to the remote host using a specified SSH key and port, for example:

    $ ssh -i /path/to/ssh/key/set/in/ssh_authorized_keys -p 9999 core@localhost
  3. Obtain the quay-builder service logs by entering the following commands:

    $ systemctl status quay-builder
    $ journalctl -f -u quay-builder
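
To identify the <builder_pod> name used in the first step, you can list the pods in the Build namespace, assuming the builder namespace that was created earlier, for example:

    $ oc get pods -n builder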

6.4. Setting up GitHub Builds

If your organization plans to have Builds triggered by pushes to GitHub or GitHub Enterprise, continue with Creating an OAuth application in GitHub.