Chapter 15. Red Hat Quay as a proxy cache for upstream registries

With the growing popularity of container development, customers increasingly rely on container images from upstream registries such as Docker Hub or Google Cloud Platform to get services up and running. Today, registries impose rate limitations and throttling on the number of times that users can pull from them.

With this feature, Red Hat Quay acts as a proxy cache to mitigate pull-rate limitations from upstream registries. A cache also accelerates pull performance, because images are pulled from the cache rather than from the upstream registry. Cached images are only updated when the upstream image digest differs from the cached image, reducing rate limitations and potential throttling.

With the Red Hat Quay cache proxy Technology Preview, the following features are available:

  • Specific organizations can be defined as a cache for upstream registries.
  • Configuration of a Quay organization that acts as a cache for a specific upstream registry. This cache organization can be defined by using the Quay UI, and offers the following configurations:

    • Upstream registry credentials for private repositories or increased rate limiting.
    • Expiration timer to avoid surpassing cache organization size.

      Note

      Because cache proxy is still marked as Technology Preview, there is no storage quota support yet. When this feature goes General Availability in a future release of Red Hat Quay, the expiration timer will be supplemented by another timer that protects against intermittent upstream registry issues.

  • Global on/off configurable via the configuration application.
  • Caching of entire upstream registries or just a single namespace, for example, all of docker.io or just docker.io/library.
  • Logging of all cache pulls.
  • Cached images scannability by Clair.

15.1. Proxy cache architecture

The following image shows the expected design flow and architecture of the proxy cache feature.

Proxy cache overview

When a user pulls an image, for example, postgres:14, from an upstream repository through Red Hat Quay, the repository checks whether the image is present. If the image does not exist, a fresh pull is initiated. After being pulled, the image layers are saved to the cache and served to the user in parallel. The following image depicts an architectural overview of this scenario:

Pulled image overview

If the image in the cache exists, users can rely on Quay’s cache to stay up-to-date with the upstream source so that newer images from the cache are automatically pulled. This happens when tags of the original image have been overwritten in the upstream registry. The following image depicts an architectural overview of what happens when the upstream image and cached version of the image are different:

Updating opposing layers overview

If the upstream image and cached version are the same, no layers are pulled and the cached image is delivered to the user.

In some cases, users initiate pulls when the upstream registry is down. If this happens within the configured staleness period, the image stored in the cache is delivered. If the pull happens after the configured staleness period, the error is propagated to the user. The following image depicts an architectural overview of a pull that happens after the configured staleness period:

Staleness pull overview

Quay administrators can leverage the configurable size limit of an organization to limit cache size so that backend storage consumption remains predictable. This is achieved by discarding images from the cache according to how frequently they are used. The following image depicts an architectural overview of this scenario:

15.2. Proxy cache limitations

Proxy caching with Red Hat Quay has the following limitations:

  • Your proxy cache must have a size limit greater than or equal to the size of the image that you want to cache. For example, if your proxy cache organization has a maximum size of 500 MB and the image a user wants to pull is 700 MB, the image is cached and overflows beyond the configured limit.
  • Cached images must satisfy the same requirements as images stored in a standard Quay repository.

15.3. Using Red Hat Quay to proxy a remote registry

The following procedure describes how you can use Red Hat Quay to proxy a remote registry. This procedure is set up to proxy quay.io, which allows users to use podman to pull any public image from any namespace on quay.io.

Prerequisites

Procedure

  1. In your Quay organization on the UI, for example, cache-quayio, click Organization Settings in the left-hand pane.
  2. Optional: Click Add Storage Quota to configure quota management for your organization. For more information about quota management, see Quota Management.

    Note

    In some cases, pulling images with Podman might return the following error when the quota limit is reached during a pull: unable to pull image: Error parsing image configuration: Error fetching blob: invalid status code from registry 403 (Forbidden). Error 403 is inaccurate, and occurs because Podman hides the correct API error: Quota has been exceeded on namespace. This known issue will be fixed in a future Podman update.

  3. In the Remote Registry field, enter the name of the remote registry to be cached, for example, quay.io, and click Save.

    Note

    By adding a namespace to the Remote Registry, for example, quay.io/<namespace>, users in your organization will only be able to proxy from that namespace.

  4. Optional: Add a Remote Registry Username and Remote Registry Password.

    Note

    If you do not set a Remote Registry Username and Remote Registry Password, you cannot add one without removing the proxy cache and creating a new registry.

  5. Optional: Set a time in the Expiration field.

    Note
    • The default tag Expiration field for cached images in a proxy organization is set to 86400 seconds (one day). In a proxy organization, the tag expiration is refreshed to the value set in the UI’s Expiration field every time the tag is pulled. This behavior is different from Quay’s default individual tag expiration feature. In a proxy organization, the proxy setting can override individual tag expiration: when this happens, the individual tag’s expiration is reset according to the Expiration field of the proxy organization.
    • Expired images will disappear after the allotted time, but are still stored in Quay. The time in which an image is completely deleted, or collected, depends on the Time Machine setting of your organization. The default time for garbage collection is 14 days unless otherwise specified.
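As a rough sketch of these defaults, the total time before a cached tag's data is actually removed is the tag expiration plus the Time Machine window, assuming the tag is never pulled again (each pull refreshes the expiration). The values below are the documented defaults, not measurements:

```shell
# Documented defaults from the note above (assuming the tag is never re-pulled)
TAG_EXPIRATION_SECONDS=86400   # proxy organization tag expiration (1 day)
TIME_MACHINE_DAYS=14           # default garbage collection window

EXPIRATION_DAYS=$((TAG_EXPIRATION_SECONDS / 86400))
TOTAL_DAYS=$((EXPIRATION_DAYS + TIME_MACHINE_DAYS))
echo "Tag expires after ~${EXPIRATION_DAYS} day(s); data removed after ~${TOTAL_DAYS} day(s)"
```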
  6. Click Save.
  7. On the CLI, pull a public image from the registry, for example, quay.io, acting as a proxy cache:

    $ podman pull <registry_url>/<organization_name>/<quayio_namespace>/<image_name>
    Important

    If your organization is set up to pull from a single namespace in the remote registry, the remote registry namespace must be omitted from the URL. For example, podman pull <registry_url>/<organization_name>/<image_name>.
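The two URL forms can be sketched as follows. The registry host, organization, and image names are illustrative values only, and the commands are echoed rather than executed so the sketch does not require a live registry:

```shell
# Illustrative values only (not from a real deployment)
REGISTRY_URL="quay-server.example.com"
ORG="cache-quayio"

# Organization proxies a whole registry (e.g. quay.io):
# include the upstream namespace in the pull URL
echo "podman pull ${REGISTRY_URL}/${ORG}/projectquay/quay:3.7.9"

# Organization proxies a single namespace (e.g. quay.io/projectquay):
# omit the upstream namespace from the pull URL
echo "podman pull ${REGISTRY_URL}/${ORG}/quay:3.7.9"
```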

15.3.1. Leveraging storage quota limits in proxy organizations

With Red Hat Quay 3.8, the proxy cache feature has been enhanced with auto-pruning of tagged images. Auto-pruning of image tags is only available when a proxied namespace has quota limitations configured. Previously, if an image was larger than the quota for an organization, the image was skipped from being uploaded until an administrator created the necessary space. Now, when an image is pushed that exceeds the allotted space, the auto-pruning enhancement marks the least recently used tags for deletion. As a result, the new image tag is stored, while the least used image tag is marked for deletion.

Important
  • As part of the auto-pruning feature, the tags that are marked for deletion are eventually garbage collected by the garbage collector (gc) worker process. As a result, the quota size restriction is not fully enforced during this period.
  • Currently, the namespace quota size computation does not take into account the size of child manifests. This is a known issue and will be fixed in a future version of Red Hat Quay.

15.3.1.1. Testing the storage quota limits feature in proxy organizations

Use the following procedure to test the auto-pruning feature of an organization with proxy cache and storage quota limitations enabled.

Prerequisites

  • Your organization is configured to serve as a proxy organization. The following example proxies from quay.io.
  • FEATURE_PROXY_CACHE is set to true in your config.yaml file.
  • FEATURE_QUOTA_MANAGEMENT is set to true in your config.yaml file.
  • Your organization is configured with a quota limit, for example, 150 MB.
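The two feature flags above can be sketched in config.yaml as follows. The file location is an assumption; merge these keys into your existing Quay configuration rather than overwriting it:

```shell
# Sketch: append the proxy cache and quota management flags to config.yaml
# (file path is an assumption; merge into your real Quay configuration)
cat >> config.yaml <<'EOF'
FEATURE_PROXY_CACHE: true
FEATURE_QUOTA_MANAGEMENT: true
EOF
grep '^FEATURE_' config.yaml
```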

Procedure

  1. Pull an image to your repository from your proxy organization, for example:

    $ podman pull quay-server.example.com/proxytest/projectquay/quay:3.7.9
  2. Depending on the space left in your repository, you might need to pull additional images from your proxy organization, for example:

    $ podman pull quay-server.example.com/proxytest/projectquay/quay:3.6.2
  3. In the Red Hat Quay registry UI, click the name of your repository.

    • Click Tags in the navigation pane and ensure that quay:3.7.9 and quay:3.6.2 are tagged.
  4. Pull a final image that results in your repository exceeding the allotted quota, for example:

    $ podman pull quay-server.example.com/proxytest/projectquay/quay:3.5.1
  5. Refresh the Tags page of your Red Hat Quay registry. The first image that you pulled, for example, quay:3.7.9, should have been auto-pruned. The Tags page should now show quay:3.6.2 and quay:3.5.1.
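The pull sequence from this procedure can be scripted as a sketch. The registry and organization names are the example values used above, and the commands are echoed rather than executed so the sketch does not require a live registry:

```shell
# Example values from the procedure above
REGISTRY="quay-server.example.com"
ORG="proxytest"

# Pull in order; with a 150 MB quota, the oldest tag (3.7.9) is expected
# to be auto-pruned once 3.5.1 pushes the repository over the limit
for TAG in 3.7.9 3.6.2 3.5.1; do
  echo "podman pull ${REGISTRY}/${ORG}/projectquay/quay:${TAG}"
done
```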