Operators

OpenShift Container Platform 4.6

Working with Operators in OpenShift Container Platform

Red Hat OpenShift Documentation Team

Abstract

This document provides information for working with Operators in OpenShift Container Platform. This includes instructions for cluster administrators on how to install and manage Operators, as well as information for developers on how to create applications from installed Operators. This also contains guidance on building your own Operator using the Operator SDK.

Chapter 1. Understanding Operators

1.1. What are Operators?

Conceptually, Operators take human operational knowledge and encode it into software that is more easily shared with consumers.

Operators are pieces of software that ease the operational complexity of running another piece of software. They act like an extension of the software vendor’s engineering team, watching over a Kubernetes environment (such as OpenShift Container Platform) and using its current state to make decisions in real time. Advanced Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, like skipping a software backup process to save time.

More technically, Operators are a method of packaging, deploying, and managing a Kubernetes application.

A Kubernetes application is an app that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl or oc tooling. To be able to make the most of Kubernetes, you require a set of cohesive APIs to extend in order to service and manage your apps that run on Kubernetes. Think of Operators as the runtime that manages this type of app on Kubernetes.

1.1.1. Why use Operators?

Operators provide:

  • Repeatability of installation and upgrade.
  • Constant health checks of every system component.
  • Over-the-air (OTA) updates for OpenShift components and ISV content.
  • A place to encapsulate knowledge from field engineers and spread it to all users, not just one or two.
Why deploy on Kubernetes?
Kubernetes (and by extension, OpenShift Container Platform) contains all of the primitives needed to build complex distributed systems – secret handling, load balancing, service discovery, autoscaling – that work across on-premise and cloud providers.
Why manage your app with Kubernetes APIs and kubectl tooling?
These APIs are feature rich, have clients for all platforms, and plug into the cluster’s access control and auditing. An Operator uses the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom object, for example MongoDB, looks and acts just like the built-in, native Kubernetes objects, as sketched in the example after this list.
How do Operators compare with Service Brokers?
A Service Broker is a step towards programmatic discovery and deployment of an app. However, because it is not a long running process, it cannot execute Day 2 operations like upgrade, failover, or scaling. Customizations and parameterization of tunables are provided at install time, versus an Operator that is constantly watching your cluster’s current state. Off-cluster services continue to be a good match for a Service Broker, although Operators exist for these as well.
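
The following is a minimal sketch of the CRD extension mechanism mentioned above, using a hypothetical MongoDB custom resource; the group, kind, and fields are illustrative and not taken from any shipped Operator:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mongodbs.example.com    # hypothetical CRD registered by an Operator
spec:
  group: example.com
  names:
    kind: MongoDB
    plural: mongodbs
  scope: Namespaced
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
---
# The custom object can then be created and inspected with oc or kubectl like any native resource
apiVersion: example.com/v1alpha1
kind: MongoDB
metadata:
  name: example-db
spec:
  replicas: 3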

1.1.2. Operator Framework

The Operator Framework is a family of tools and capabilities to deliver on the customer experience described above. It is not just about writing code; testing, delivering, and updating Operators is just as important. The Operator Framework components consist of open source tools to tackle these problems:

Operator SDK
The Operator SDK assists Operator authors in bootstrapping, building, testing, and packaging their own Operator based on their expertise without requiring knowledge of Kubernetes API complexities.
Operator Lifecycle Manager
Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. Deployed by default in OpenShift Container Platform 4.6.
Operator Registry
The Operator Registry stores ClusterServiceVersions (CSVs) and custom resource definitions (CRDs) for creation in a cluster and stores Operator metadata about packages and channels. It runs in a Kubernetes or OpenShift cluster to provide this Operator catalog data to OLM.
OperatorHub
OperatorHub is a web console for cluster administrators to discover and select Operators to install on their cluster. It is deployed by default in OpenShift Container Platform.
Operator Metering
Operator Metering collects operational metrics about Operators on the cluster for Day 2 management and aggregating usage metrics.

These tools are designed to be composable, so you can use any that are useful to you.

1.1.3. Operator maturity model

The level of sophistication of the management logic encapsulated within an Operator can vary. This logic is also in general highly dependent on the type of the service represented by the Operator.

One can, however, generalize the scale of the maturity of an Operator’s encapsulated operations for a certain set of capabilities that most Operators can include. To this end, the following Operator maturity model defines five phases of maturity for generic day two operations of an Operator:

Figure 1.1. Operator maturity model

The above model also shows how these capabilities can best be developed through the Operator SDK’s Helm, Go, and Ansible capabilities.

1.2. Operator Framework glossary of common terms

This topic provides a glossary of common terms related to the Operator Framework, including Operator Lifecycle Manager (OLM) and the Operator SDK, for both packaging formats: Package Manifest Format and Bundle Format.

1.2.1. Common Operator Framework terms

1.2.1.1. Bundle

In the Bundle Format, a bundle is a collection of an Operator CSV, manifests, and metadata. Together, they form a unique version of an Operator that can be installed onto the cluster.

1.2.1.2. Bundle image

In the Bundle Format, a bundle image is a container image that is built from Operator manifests and that contains one bundle. Bundle images are stored and distributed by Open Container Initiative (OCI) spec container registries, such as Quay.io or DockerHub.

1.2.1.3. CatalogSource

A CatalogSource is a repository of CSVs, CRDs, and packages that define an application.

1.2.1.4. Catalog image

In the Package Manifest Format, a catalog image is a containerized datastore that describes a set of Operator metadata and update metadata that can be installed onto a cluster using OLM.

1.2.1.5. Channel

A channel defines a stream of updates for an Operator and is used to roll out updates for subscribers. The head points to the latest version of that channel. For example, a stable channel would have all stable versions of an Operator arranged from the earliest to the latest.

An Operator can have several channels, and a Subscription binding to a certain channel would only look for updates in that channel.

1.2.1.6. Channel head

A channel head refers to the latest known update in a particular channel.

1.2.1.7. ClusterServiceVersion

A ClusterServiceVersion (CSV) is a YAML manifest created from Operator metadata that assists OLM in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which custom resources (CRs) it manages or depends on.

1.2.1.8. Dependency

An Operator may have a dependency on another Operator being present in the cluster. For example, the Vault Operator has a dependency on the etcd Operator for its data persistence layer.

OLM resolves dependencies by ensuring that all specified versions of Operators and CRDs are installed on the cluster during the installation phase. This dependency is resolved by finding and installing an Operator in a Catalog that satisfies the required CRD API, and is not related to packages or bundles.

1.2.1.9. Index image

In the Bundle Format, an index image refers to an image of a database (a database snapshot) that contains information about Operator bundles including CSVs and CRDs of all versions. This index can host a history of Operators on a cluster and be maintained by adding or removing Operators using the opm CLI tool.

1.2.1.10. InstallPlan

An InstallPlan is a calculated list of resources to be created to automatically install or upgrade a CSV.

1.2.1.11. OperatorGroup

An OperatorGroup configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their CR in a list of namespaces or cluster-wide.

1.2.1.12. Package

In the Bundle Format, a package is a directory that encloses the full release history of an Operator. Each released version of the Operator is described in a ClusterServiceVersion (CSV) manifest alongside its CustomResourceDefinitions (CRDs).

1.2.1.13. Registry

A registry is a database that stores bundle images of Operators, each with all of its latest and historical versions in all channels.

1.2.1.14. Subscription

A Subscription keeps CSVs up to date by tracking a channel in a package.

1.2.1.15. Update graph

An update graph links versions of CSVs together, similar to the update graph of any other packaged software. Operators can be installed sequentially, or certain versions can be skipped. The update graph is expected to grow only at the head with newer versions being added.

1.3. Operator Framework packaging formats

This guide outlines the packaging formats for Operators supported by Operator Lifecycle Manager (OLM) in OpenShift Container Platform.

1.3.1. Bundle Format

The Bundle Format for Operators is a new packaging format introduced by the Operator Framework. To improve scalability and to better enable upstream users hosting their own catalogs, the Bundle Format specification simplifies the distribution of Operator metadata.

An Operator bundle represents a single version of an Operator. On-disk bundle manifests are containerized and shipped as a bundle image, which is a non-runnable container image that stores the Kubernetes manifests and Operator metadata. Storage and distribution of the bundle image is then managed using existing container tools like podman and docker and container registries such as Quay.

Operator metadata can include:

  • Information that identifies the Operator, for example its name and version.
  • Additional information that drives the UI, for example its icon and some example custom resources (CRs).
  • Required and provided APIs.
  • Related images.

When loading manifests into the Operator Registry database, the following requirements are validated:

  • The bundle must have at least one channel defined in the annotations.
  • Every bundle has exactly one CSV.
  • If a CSV owns a CRD, that CRD must exist in the bundle.

1.3.1.1. Manifests

Bundle manifests refer to a set of Kubernetes manifests that define the deployment and RBAC model of the Operator.

A bundle includes one ClusterServiceVersion (CSV) per directory and typically the CRDs that define the owned APIs of the CSV in its /manifests directory.

Example Bundle Format layout

etcd
├── manifests
│   ├── etcdcluster.crd.yaml
│   ├── etcdoperator.clusterserviceversion.yaml
│   ├── secret.yaml
│   └── configmap.yaml
└── metadata
    ├── annotations.yaml
    └── dependencies.yaml

Additionally supported objects

The following objects can also be optionally included in the /manifests directory of a bundle:

Supported optional objects

  • Secrets
  • ConfigMaps
  • Services
  • PodDisruptionBudget
  • PriorityClass
  • VerticalPodAutoscaler

When these optional objects are included in a bundle, Operator Lifecycle Manager (OLM) can create them from the bundle and manage their lifecycle along with the CSV:

Lifecycle for optional objects

  • When the CSV is deleted, OLM deletes the optional object.
  • When the CSV is upgraded:

    • If the name of the optional object is the same, OLM updates it in place.
    • If the name of the optional object has changed between versions, OLM deletes and recreates it.

1.3.1.2. Annotations

A bundle also includes an annotations.yaml file in its /metadata directory. This file defines higher level aggregate data that helps describe the format and package information about how the bundle should be added into an index of bundles:

Example annotations.yaml

annotations:
  operators.operatorframework.io.bundle.mediatype.v1: "registry+v1" 1
  operators.operatorframework.io.bundle.manifests.v1: "manifests/" 2
  operators.operatorframework.io.bundle.metadata.v1: "metadata/" 3
  operators.operatorframework.io.bundle.package.v1: "test-operator" 4
  operators.operatorframework.io.bundle.channels.v1: "beta,stable" 5
  operators.operatorframework.io.bundle.channel.default.v1: "stable" 6

1
The media type or format of the Operator bundle. The registry+v1 format means it contains a CSV and its associated Kubernetes objects.
2
The path in the image to the directory that contains the Operator manifests. This label is reserved for future use and currently defaults to manifests/. The value manifests.v1 implies that the bundle contains Operator manifests.
3
The path in the image to the directory that contains metadata files about the bundle. This label is reserved for future use and currently defaults to metadata/. The value metadata.v1 implies that this bundle has operator metadata.
4
The package name of the bundle.
5
The list of channels the bundle is subscribing to when added into an Operator Registry.
6
The default channel an Operator should be subscribed to when installed from a registry.
Note

In case of a mismatch, the annotations.yaml file is authoritative because the on-cluster Operator Registry that relies on these annotations only has access to this file.

1.3.1.3. Dependencies file

The dependencies of an Operator are listed in a dependencies.yaml file in the metadata/ folder of a bundle. This file is optional and currently only used to specify explicit Operator-version dependencies.

The dependency list contains a type field for each item to specify what kind of dependency this is. There are two supported types of Operator dependencies:

  • olm.package: A package type means this is a dependency for a specific Operator version. The dependency information must include the package name and the version of the package in semver format. For example, you can specify an exact version such as 0.5.2 or a range of versions such as >0.5.1.
  • olm.gvk: With a GVK type, the author can specify a dependency with GVK information, similar to existing CRD and API-based usage in a CSV. This is a path to enable Operator authors to consolidate all dependencies, API or explicit versions, to be in the same place.

In the following example, dependencies are specified for a Prometheus Operator and etcd CRDs:

Example dependencies.yaml file

dependencies:
  - type: olm.package
    value:
      packageName: prometheus
      version: ">0.27.0"
  - type: olm.gvk
    value:
      group: etcd.database.coreos.com
      kind: EtcdCluster
      version: v1beta2

1.3.1.4. About opm

The opm CLI tool is provided by the Operator Framework for use with the Operator Bundle Format. This tool allows you to create and maintain catalogs of Operators from a list of bundles, called an index, that is similar to a software repository. The result is a container image, called an index image, which can be stored in a container registry and then installed on a cluster.

An index contains a database of pointers to Operator manifest content that can be queried through an included API that is served when the container image is run. On OpenShift Container Platform, Operator Lifecycle Manager (OLM) can use the index image as a catalog by referencing it in a CatalogSource object, which polls the image at regular intervals to enable frequent updates to installed Operators on the cluster.

  • See CLI tools for steps on installing the opm CLI.

1.3.2. Package Manifest Format

The Package Manifest Format for Operators is the legacy packaging format introduced by the Operator Framework. While this format is deprecated in OpenShift Container Platform 4.5, it is still supported and Operators provided by Red Hat are currently shipped using this method.

In this format, a version of an Operator is represented by a single ClusterServiceVersion (CSV) and typically the CustomResourceDefinitions (CRDs) that define the owned APIs of the CSV, though additional objects may be included. All versions of the Operator are nested in a single directory:

Example Package Manifest Format layout

etcd
├── 0.6.1
│   ├── etcdcluster.crd.yaml
│   └── etcdoperator.clusterserviceversion.yaml
├── 0.9.0
│   ├── etcdbackup.crd.yaml
│   ├── etcdcluster.crd.yaml
│   ├── etcdoperator.v0.9.0.clusterserviceversion.yaml
│   └── etcdrestore.crd.yaml
├── 0.9.2
│   ├── etcdbackup.crd.yaml
│   ├── etcdcluster.crd.yaml
│   ├── etcdoperator.v0.9.2.clusterserviceversion.yaml
│   └── etcdrestore.crd.yaml
└── etcd.package.yaml

It also includes a <name>.package.yaml file, which is the package manifest that defines the package name and channel details:

Example package manifest

packageName: etcd
channels:
- name: alpha
  currentCSV: etcdoperator.v0.9.2
- name: beta
  currentCSV: etcdoperator.v0.9.0
- name: stable
  currentCSV: etcdoperator.v0.9.2
defaultChannel: alpha

When loading package manifests into the Operator Registry database, the following requirements are validated:

  • Every package has at least one channel.
  • Every CSV pointed to by a channel in a package exists.
  • Every version of an Operator has exactly one CSV.
  • If a CSV owns a CRD, that CRD must exist in the Operator version’s directory.
  • If a CSV replaces another, both the old and the new must exist in the package.

1.4. Operator Lifecycle Manager (OLM)

1.4.1. Operator Lifecycle Manager concepts

This guide provides an overview of the concepts that drive Operator Lifecycle Manager (OLM) in OpenShift Container Platform.

1.4.1.1. What is Operator Lifecycle Manager?

Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of all Operators and their associated services running across their OpenShift Container Platform clusters. It is part of the Operator Framework, an open source toolkit designed to manage Kubernetes native applications (Operators) in an effective, automated, and scalable way.

Figure 1.2. Operator Lifecycle Manager workflow

OLM runs by default in OpenShift Container Platform 4.6, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster.

For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data services without having to be subject matter experts, because the Operator has that knowledge baked into it.

1.4.1.2. OLM resources

The following Custom Resource Definitions (CRDs) are defined and managed by Operator Lifecycle Manager (OLM):

Table 1.1. CRDs managed by OLM and Catalog Operators

Resource                Short name   Description

ClusterServiceVersion   csv          Application metadata: name, version, icon, required resources, installation, and so on.

CatalogSource           catsrc       A repository of CSVs, CRDs, and packages that define an application.

Subscription            sub          Keeps CSVs up to date by tracking a channel in a package.

InstallPlan             ip           Calculated list of resources to be created to automatically install or upgrade a CSV.

OperatorGroup           og           Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide.

1.4.1.2.1. ClusterServiceVersion

A ClusterServiceVersion (CSV) represents a specific version of a running Operator on an OpenShift Container Platform cluster. It is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in the cluster.

OLM requires this metadata about an Operator to ensure that it can be kept running safely on a cluster, and to provide information about how updates should be applied as new versions of the Operator are published. This is similar to packaging software for a traditional operating system; think of the packaging step for OLM as the stage at which you make your rpm, dep, or apk bundle.

A CSV includes the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its name, version, description, labels, repository link, and logo.

A CSV is also a source of technical information required to run the Operator, such as which Custom Resources (CRs) it manages or depends on, RBAC rules, cluster requirements, and install strategies. This information tells OLM how to create required resources and set up the Operator as a Deployment.
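
The following is a heavily trimmed sketch showing where this information lives in a CSV; real CSVs are much longer and are typically generated from Operator project metadata, and the names shown here are illustrative:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v0.1.0
spec:
  displayName: Example Operator
  version: 0.1.0
  customresourcedefinitions:
    owned:
    - name: examples.example.com    # CRs this Operator manages
      kind: Example
      version: v1alpha1
  install:
    strategy: deployment
    spec:
      permissions: []               # namespaced RBAC rules required by the Operator
      clusterPermissions: []        # cluster-scoped RBAC rules required by the Operator
      deployments:
      - name: example-operator
        spec: {}                    # standard Deployment spec goes here (trimmed for brevity)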

1.4.1.2.2. CatalogSource

A CatalogSource represents a store of metadata that OLM can query to discover and install Operators and their dependencies. The spec of a CatalogSource indicates how to construct a pod or how to communicate with a service that serves the Operator Registry gRPC API.

There are three primary sourceTypes for a CatalogSource:

  • grpc with an image reference: OLM pulls the image and runs the pod, which is expected to serve a compliant API.
  • grpc with an address field: OLM attempts to contact the gRPC API at the given address. This should not be used in most cases.
  • internal or configmap: OLM parses the ConfigMap data and runs a pod that can serve the gRPC API over it.
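
For the address-based variant listed above, a minimal sketch might look like the following; the address value is a placeholder for an already-running service that serves the Operator Registry gRPC API:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-remote-catalog
  namespace: olm
spec:
  sourceType: grpc
  address: my-registry.example.svc:50051   # existing gRPC registry endpoint (illustrative)
  displayName: My Remote Catalog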

Example CatalogSource

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: operatorhubio-catalog
  namespace: olm
spec:
  sourceType: grpc
  image: quay.io/operatorhubio/catalog:latest 1
  priority: -400
  displayName: Community Operators
  publisher: OperatorHub.io
  updateStrategy:
    registryPoll: 2
      interval: 30m

1
Specify catalog image.
2
Automatically check for new versions at a given interval to keep up to date.

This example defines a CatalogSource for OperatorHub.io content. The name of the CatalogSource is used as input to a Subscription, which instructs OLM where to look to find a requested Operator:

Example Subscription referencing CatalogSource

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: olm
spec:
  channel: stable
  name: my-operator
  source: operatorhubio-catalog

1.4.1.2.3. Subscription

A Subscription represents an intention to install an Operator. It is the custom resource that relates an Operator to a CatalogSource. Subscriptions describe which channel of an Operator package to subscribe to, and whether to perform updates automatically or manually. If set to automatic, the Subscription ensures OLM manages and upgrades the Operator to ensure that the latest version is always running in the cluster.

Example Subscription

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: operators
spec:
  channel: stable
  name: my-operator
  source: my-catalog
  sourceNamespace: operators

This Subscription object defines the name and namespace of the Operator, as well as the catalog from which the Operator data can be found. The channel, such as alpha, beta, or stable, helps determine which Operator stream should be installed from the CatalogSource.

In addition to being easily visible from the OpenShift Container Platform web console, it is possible to identify when there is a newer version of an Operator available by inspecting the status of the Operator’s Subscription. The value associated with the currentCSV field is the newest version that is known to OLM, and installedCSV is the version that is installed on the cluster.
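
For example, the status of a Subscription with an update available might look similar to the following sketch (names and versions are illustrative):

status:
  installedCSV: my-operator.v1.0.0   # version currently installed on the cluster
  currentCSV: my-operator.v1.0.1     # newest version known to OLM in the subscribed channel
  state: UpgradePending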

1.4.1.2.4. InstallPlan

An InstallPlan defines a set of resources to be created to install or upgrade to a specific version of an Operator, as defined by a CSV.
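
Although InstallPlans are normally generated by OLM rather than written by hand, a minimal sketch looks similar to the following (names are illustrative):

apiVersion: operators.coreos.com/v1alpha1
kind: InstallPlan
metadata:
  name: install-abcde
  namespace: operators
spec:
  clusterServiceVersionNames:   # CSVs that this plan installs or upgrades to
  - my-operator.v1.0.1
  approval: Manual              # Automatic or Manual
  approved: false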

1.4.1.2.5. OperatorGroups

An OperatorGroup is an OLM resource that provides multitenant configuration to OLM-installed Operators. An OperatorGroup selects target namespaces in which to generate required RBAC access for its member Operators.

The set of target namespaces is provided by a comma-delimited string stored in the olm.targetNamespaces annotation of a ClusterServiceVersion (CSV). This annotation is applied to the CSVs of member Operators and is projected into their deployments.

For more information, see the OperatorGroups guide.

1.4.2. Operator Lifecycle Manager architecture

This guide outlines the component architecture of Operator Lifecycle Manager (OLM) in OpenShift Container Platform.

1.4.2.1. Component responsibilities

Operator Lifecycle Manager (OLM) is composed of two Operators: the OLM Operator and the Catalog Operator.

Each of these Operators is responsible for managing the Custom Resource Definitions (CRDs) that are the basis for the OLM framework:

Table 1.2. CRDs managed by OLM and Catalog Operators

Resource                Short name   Owner     Description

ClusterServiceVersion   csv          OLM       Application metadata: name, version, icon, required resources, installation, and so on.

InstallPlan             ip           Catalog   Calculated list of resources to be created to automatically install or upgrade a CSV.

CatalogSource           catsrc       Catalog   A repository of CSVs, CRDs, and packages that define an application.

Subscription            sub          Catalog   Used to keep CSVs up to date by tracking a channel in a package.

OperatorGroup           og           OLM       Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide.

Each of these Operators is also responsible for creating resources:

Table 1.3. Resources created by OLM and Catalog Operators

Resource                             Owner

Deployments                          OLM
ServiceAccounts                      OLM
(Cluster)Roles                       OLM
(Cluster)RoleBindings                OLM
Custom Resource Definitions (CRDs)   Catalog
ClusterServiceVersions (CSVs)        Catalog

1.4.2.2. OLM Operator

The OLM Operator is responsible for deploying applications defined by CSV resources after the required resources specified in the CSV are present in the cluster.

The OLM Operator is not concerned with the creation of the required resources; you can choose to manually create these resources using the CLI or using the Catalog Operator. This separation of concern allows users incremental buy-in in terms of how much of the OLM framework they choose to leverage for their application.

The OLM Operator uses the following workflow:

  1. Watch for ClusterServiceVersions (CSVs) in a namespace and check that requirements are met.
  2. If requirements are met, run the install strategy for the CSV.

    Note

    A CSV must be an active member of an OperatorGroup for the install strategy to run.

1.4.2.3. Catalog Operator

The Catalog Operator is responsible for resolving and installing CSVs and the required resources they specify. It is also responsible for watching CatalogSources for updates to packages in channels and upgrading them, automatically if desired, to the latest available versions.

To track a package in a channel, you can create a Subscription resource configuring the desired package, channel, and the CatalogSource you want to pull updates from. When updates are found, an appropriate InstallPlan is written into the namespace on behalf of the user.

The Catalog Operator uses the following workflow:

  1. Connect to each CatalogSource in the cluster.
  2. Watch for unresolved InstallPlans created by a user, and if found:

    1. Find the CSV matching the name requested and add the CSV as a resolved resource.
    2. For each managed or required CRD, add the CRD as a resolved resource.
    3. For each required CRD, find the CSV that manages it.
  3. Watch for resolved InstallPlans and create all of the discovered resources for it, if approved by a user or automatically.
  4. Watch for CatalogSources and Subscriptions and create InstallPlans based on them.

1.4.2.4. Catalog Registry

The Catalog Registry stores CSVs and CRDs for creation in a cluster and stores metadata about packages and channels.

A package manifest is an entry in the Catalog Registry that associates a package identity with sets of CSVs. Within a package, channels point to a particular CSV. Because CSVs explicitly reference the CSV that they replace, a package manifest provides the Catalog Operator with all of the information that is required to update a CSV to the latest version in a channel, stepping through each intermediate version.

1.4.3. Operator Lifecycle Manager workflow

This guide outlines the workflow of Operator Lifecycle Manager (OLM) in OpenShift Container Platform.

1.4.3.1. Operator installation and upgrade workflow in OLM

In the Operator Lifecycle Manager (OLM) ecosystem, the following resources are used to resolve Operator installations and upgrades:

  • ClusterServiceVersion (CSV)
  • CatalogSource
  • Subscription

Operator metadata, defined in CSVs, can be stored in a collection called a CatalogSource. OLM uses CatalogSources, which use the Operator Registry API, to query for available Operators as well as upgrades for installed Operators.

Figure 1.3. CatalogSource overview

Within a CatalogSource, Operators are organized into packages and streams of updates called channels, which should be a familiar update pattern from OpenShift Container Platform or other software on a continuous release cycle like web browsers.

Figure 1.4. Packages and channels in a CatalogSource

A user indicates a particular package and channel in a particular CatalogSource in a Subscription, for example an etcd package and its alpha channel. If a Subscription is made to a package that has not yet been installed in the namespace, the latest Operator for that package is installed.

Note

OLM deliberately avoids version comparisons, so the "latest" or "newest" Operator available from a given catalog → channel → package path does not necessarily need to be the highest version number. It should be thought of more as the head reference of a channel, similar to a Git repository.

Each CSV has a replaces parameter that indicates which Operator it replaces. This builds a graph of CSVs that can be queried by OLM, and updates can be shared between channels. Channels can be thought of as entry points into the graph of updates:

Figure 1.5. OLM’s graph of available channel updates

Example channels in a package

packageName: example
channels:
- name: alpha
  currentCSV: example.v0.1.2
- name: beta
  currentCSV: example.v0.1.3
defaultChannel: alpha

For OLM to successfully query for updates, given a CatalogSource, package, channel, and CSV, a catalog must be able to return, unambiguously and deterministically, a single CSV that replaces the input CSV.

1.4.3.1.1. Example upgrade path

For an example upgrade scenario, consider an installed Operator corresponding to CSV version 0.1.1. OLM queries the CatalogSource and detects an upgrade in the subscribed channel with new CSV version 0.1.3 that replaces an older but not-installed CSV version 0.1.2, which in turn replaces the older and installed CSV version 0.1.1.

OLM walks back from the channel head to previous versions via the replaces field specified in the CSVs to determine the upgrade path 0.1.3 → 0.1.2 → 0.1.1; the direction of the arrow indicates that the former replaces the latter. OLM upgrades the Operator one version at a time until it reaches the channel head.

For this given scenario, OLM installs Operator version 0.1.2 to replace the existing Operator version 0.1.1. Then, it installs Operator version 0.1.3 to replace the previously installed Operator version 0.1.2. At this point, the installed Operator version 0.1.3 matches the channel head and the upgrade is completed.
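
Expressed in CSV metadata, the replaces chain for this scenario could look like the following sketch (package and CSV names are illustrative):

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example.v0.1.3       # channel head
spec:
  replaces: example.v0.1.2
---
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example.v0.1.2       # intermediate version
spec:
  replaces: example.v0.1.1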

1.4.3.1.2. Skipping upgrades

OLM’s basic path for upgrades is:

  • A CatalogSource is updated with one or more updates to an Operator.
  • OLM traverses every version of the Operator until reaching the latest version the CatalogSource contains.

However, sometimes this is not a safe operation to perform. There are cases where a published version of an Operator should never be installed on a cluster if it has not been already, for example because the version introduces a serious vulnerability.

In those cases, OLM must consider two cluster states and provide an update graph that supports both:

  • The "bad" intermediate Operator has been seen by the cluster and installed.
  • The "bad" intermediate Operator has not yet been installed onto the cluster.

By shipping a new catalog and adding a skipped release, OLM can always determine a single unique update regardless of the cluster state and whether it has seen the bad update yet.

Example CSV with skipped release

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: etcdoperator.v0.9.2
  namespace: placeholder
  annotations:
spec:
  displayName: etcd
  description: Etcd Operator
  replaces: etcdoperator.v0.9.0
  skips:
  - etcdoperator.v0.9.1

Consider the following example Old CatalogSource and New CatalogSource:

Figure 1.6. Skipping updates

This graph maintains that:

  • Any Operator found in Old CatalogSource has a single replacement in New CatalogSource.
  • Any Operator found in New CatalogSource has a single replacement in New CatalogSource.
  • If the bad update has not yet been installed, it will never be.

1.4.3.1.3. Replacing multiple Operators

Creating the New CatalogSource as described requires publishing CSVs that replace one Operator, but can skip several. This can be accomplished using the skipRange annotation:

olm.skipRange: <semver_range>

where <semver_range> has the version range format supported by the semver library.

When searching catalogs for updates, if the head of a channel has a skipRange annotation and the currently installed Operator has a version field that falls in the range, OLM updates to the latest entry in the channel.

The order of precedence is:

  1. Channel head in the source specified by sourceName on the Subscription, if the other criteria for skipping are met.
  2. The next Operator that replaces the current one, in the source specified by sourceName.
  3. Channel head in another source that is visible to the Subscription, if the other criteria for skipping are met.
  4. The next Operator that replaces the current one in any source visible to the Subscription.

Example CSV with skipRange

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: elasticsearch-operator.v4.1.2
  namespace: <namespace>
  annotations:
    olm.skipRange: '>=4.1.0 <4.1.2'

1.4.3.1.4. Z-stream support

A z-stream, or patch release, must replace all previous z-stream releases for the same minor version. OLM does not care about major, minor, or patch versions; it just needs to build the correct graph in a catalog.

In other words, OLM must be able to take a graph as in Old CatalogSource and, similar to before, generate a graph as in New CatalogSource:

Figure 1.7. Replacing several Operators

This graph maintains that:

  • Any Operator found in Old CatalogSource has a single replacement in New CatalogSource.
  • Any Operator found in New CatalogSource has a single replacement in New CatalogSource.
  • Any z-stream release in Old CatalogSource will update to the latest z-stream release in New CatalogSource.
  • Unavailable releases can be considered "virtual" graph nodes; their content does not need to exist, and the registry just needs to respond as if the graph looks like this.

1.4.4. Operator Lifecycle Manager dependency resolution

This guide outlines dependency resolution and custom resource definition (CRD) upgrade lifecycles with Operator Lifecycle Manager (OLM) in OpenShift Container Platform.

1.4.4.1. About dependency resolution

OLM manages the dependency resolution and upgrade lifecycle of running Operators. In many ways, the problems OLM faces are similar to other operating system package managers like yum and rpm.

However, there is one constraint that similar systems do not generally have that OLM does: because Operators are always running, OLM attempts to ensure that you are never left with a set of Operators that do not work with each other.

This means that OLM must never do the following:

  • Install a set of Operators that require APIs that cannot be provided.
  • Update an Operator in a way that breaks another that depends upon it.

1.4.4.2. Dependencies file

The dependencies of an Operator are listed in a dependencies.yaml file in the metadata/ folder of a bundle. This file is optional and currently only used to specify explicit Operator-version dependencies.

The dependency list contains a type field for each item to specify what kind of dependency this is. There are two supported types of Operator dependencies:

  • olm.package: A package type means this is a dependency for a specific Operator version. The dependency information must include the package name and the version of the package in semver format. For example, you can specify an exact version such as 0.5.2 or a range of versions such as >0.5.1.
  • olm.gvk: With a GVK type, the author can specify a dependency with GVK information, similar to existing CRD and API-based usage in a CSV. This is a path to enable Operator authors to consolidate all dependencies, API or explicit versions, to be in the same place.

In the following example, dependencies are specified for a Prometheus Operator and etcd CRDs:

Example dependencies.yaml file

dependencies:
  - type: olm.package
    value:
      packageName: prometheus
      version: ">0.27.0"
  - type: olm.gvk
    value:
      group: etcd.database.coreos.com
      kind: EtcdCluster
      version: v1beta2

1.4.4.3. Dependency preferences

There can be many options that equally satisfy a dependency of an Operator. The dependency resolver in Operator Lifecycle Manager (OLM) determines which option best fits the requirements of the requested Operator. As an Operator author or user, it can be important to understand how these choices are made so that dependency resolution is clear.

1.4.4.3.1. Catalog priority

On an OpenShift Container Platform cluster, OLM reads CatalogSources to know which Operators are available for installation.

CatalogSource object

apiVersion: "operators.coreos.com/v1alpha1"
kind: "CatalogSource"
metadata:
  name: "my-operators"
  namespace: "operators"
spec:
  sourceType: grpc
  image: example.com/my/operator-index:v1
  displayName: "My Operators"
  priority: 100

CatalogSource has a priority field, which is used by the resolver to know how to prefer options for a dependency.

There are two rules that govern catalog preference:

  • Options in higher-priority catalogs are preferred to options in lower-priority catalogs.
  • Options in the same catalog as the dependent are preferred to any other catalogs.
1.4.4.3.2. Channel ordering

An Operator package in a catalog is a collection of update channels that a user can subscribe to in an OpenShift Container Platform cluster. Channels can be used to provide a particular stream of updates for a minor release (1.2, 1.3) or a simple release frequency (stable, fast).

It is likely that a dependency might be satisfied by Operators in the same package, but different channels. For example, version 1.2 of an Operator might exist in both the stable and fast channels.

Each package has a default channel, which is always preferred to non-default channels. If no option in the default channel can satisfy a dependency, options are considered from the remaining channels in lexicographic order of the channel name.

1.4.4.3.3. Order within a channel

There are almost always multiple options to satisfy a dependency within a single channel. For example, Operators in one package and channel provide the same set of APIs.

When a user creates a Subscription, they indicate which channel to receive updates from. This immediately reduces the search to just that one channel. But within the channel, it is likely that many Operators satisfy a dependency.

Within a channel, newer Operators that are higher up in the update graph are preferred. If the head of a channel satisfies a dependency, it will be tried first.

1.4.4.3.4. Other constraints

In addition to the constraints supplied by package dependencies, OLM includes additional constraints to represent the desired user state and enforce resolution invariants.

1.4.4.3.4.1. Subscription constraint

A Subscription constraint filters the set of Operators that can satisfy a subscription. Subscriptions are user-supplied constraints for the dependency resolver. They declare the intent to either install a new Operator if it is not already on the cluster, or to keep an existing Operator updated.

1.4.4.3.4.2. Package constraint

Within a namespace, no two Operators may come from the same package.

1.4.4.4. CRD upgrades

OLM upgrades a custom resource definition (CRD) immediately if it is owned by a singular ClusterServiceVersion (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions:

  • All existing serving versions in the current CRD are present in the new CRD.
  • All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD.
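
As a sketch of the first condition, consider a CRD owned by multiple CSVs where the new CSV ships an updated CRD; the upgrade can proceed because the existing serving version remains present (group and kind are hypothetical):

# CRD currently on the cluster
spec:
  group: example.com
  names:
    kind: Example
    plural: examples
  versions:
  - name: v1alpha1
    served: true
    storage: true

# CRD shipped with the new CSV: v1alpha1 is still served, and a new version is added
spec:
  group: example.com
  names:
    kind: Example
    plural: examples
  versions:
  - name: v1alpha1
    served: true
    storage: false
  - name: v1beta1
    served: true
    storage: true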

1.4.4.5. Dependency best practices

When specifying dependencies, there are best practices you should consider.

Depend on APIs or a specific version range of Operators
Operators can add or remove APIs at any time; always specify an olm.gvk dependency on any APIs your Operator requires. The exception to this is if you are specifying olm.package constraints instead.
Set a minimum version
The Kubernetes documentation on API changes describes what changes are allowed for Kubernetes-style Operators. These versioning conventions allow an Operator to update an API without bumping the API version, as long as the API is backwards-compatible.

For Operator dependencies, this means that knowing the API version of a dependency might not be enough to ensure the dependent Operator works as intended.

For example:

  • TestOperator v1.0.0 provides v1alpha1 API version of the MyObject resource.
  • TestOperator v1.0.1 adds a new field spec.newfield to MyObject, but still at v1alpha1.

Your Operator might require the ability to write spec.newfield into MyObject. An olm.gvk constraint alone is not enough for OLM to determine that you need TestOperator v1.0.1 and not TestOperator v1.0.0.

Whenever possible, if a specific Operator that provides an API is known ahead of time, specify an additional olm.package constraint to set a minimum.
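
Continuing the TestOperator example, a dependent Operator could combine both constraints in its dependencies.yaml; the group shown for MyObject and the package name are hypothetical:

dependencies:
  - type: olm.gvk
    value:
      group: example.com        # hypothetical group for MyObject
      kind: MyObject
      version: v1alpha1
  - type: olm.package
    value:
      packageName: test-operator
      version: ">=1.0.1"        # minimum version that provides spec.newfield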

Omit a maximum version or allow a very wide range
Because Operators provide cluster-scoped resources such as API services and CRDs, an Operator that specifies a small window for a dependency might unnecessarily constrain updates for other consumers of that dependency.

Whenever possible, do not set a maximum version. Alternatively, set a very wide semantic range to prevent conflicts with other Operators. For example, >1.0.0 <2.0.0.

Unlike with conventional package managers, Operator authors explicitly encode that updates are safe through channels in OLM. If an update is available for an existing subscription, it is assumed that the Operator author is indicating that it can update from the previous version. Setting a maximum version for a dependency overrides the update stream of the author by unnecessarily truncating it at a particular upper bound.

Note

Cluster administrators cannot override dependencies set by an Operator author.

However, maximum versions can and should be set if there are known incompatibilities that must be avoided. Specific versions can be omitted with the version range syntax, for example > 1.0.0 !1.2.1.

1.4.4.6. Dependency caveats

When specifying dependencies, there are caveats you should consider.

No compound constraints (AND)
There is currently no method for specifying an AND relationship between constraints. In other words, there is no way to specify that one Operator depends on another Operator that both provides a given API and has version >1.1.0.

This means that when specifying a dependency such as:

dependencies:
- type: olm.package
  value:
    packageName: etcd
    version: ">3.1.0"
- type: olm.gvk
  value:
    group: etcd.database.coreos.com
    kind: EtcdCluster
    version: v1beta2

It would be possible for OLM to satisfy this with two Operators: one that provides EtcdCluster and one that has version >3.1.0. Whether that happens, or whether an Operator is selected that satisfies both constraints, depends on the ordering that potential options are visited. Dependency preferences and ordering options are well-defined and can be reasoned about, but to exercise caution, Operators should stick to one mechanism or the other.

Cross-namespace compatibility
OLM performs dependency resolution at the namespace scope. It is possible to get into an update deadlock if updating an Operator in one namespace would be an issue for an Operator in another namespace, and vice-versa.

1.4.4.7. Example dependency resolution scenarios

In the following examples, a provider is an Operator which "owns" a CRD or APIService.

Example: Deprecating dependent APIs

A and B are APIs (e.g., CRDs):

  • A’s provider depends on B.
  • B’s provider has a Subscription.
  • B’s provider updates to provide C but deprecates B.

This results in:

  • B no longer has a provider.
  • A no longer works.

This is a case OLM prevents with its upgrade strategy.

Example: Version deadlock

A and B are APIs:

  • A’s provider requires B.
  • B’s provider requires A.
  • A’s provider updates to (provide A2, require B2) and deprecate A.
  • B’s provider updates to (provide B2, require A2) and deprecate B.

If OLM attempts to update A without simultaneously updating B, or vice-versa, it is unable to progress to new versions of the Operators, even though a new compatible set can be found.

This is another case OLM prevents with its upgrade strategy.

1.4.5. OperatorGroups

This guide outlines the use of OperatorGroups with Operator Lifecycle Manager (OLM) in OpenShift Container Platform.

1.4.5.1. About OperatorGroups

An OperatorGroup is an OLM resource that provides multitenant configuration to OLM-installed Operators. An OperatorGroup selects target namespaces in which to generate required RBAC access for its member Operators.

The set of target namespaces is provided by a comma-delimited string stored in the olm.targetNamespaces annotation of a ClusterServiceVersion (CSV). This annotation is applied to the CSVs of member Operators and is projected into their deployments.

1.4.5.2. OperatorGroup membership

An Operator is considered a member of an OperatorGroup if the following conditions are true:

  • The Operator’s CSV exists in the same namespace as the OperatorGroup.
  • The Operator’s CSV’s InstallModes support the set of namespaces targeted by the OperatorGroup.

An InstallMode consists of an InstallModeType field and a boolean Supported field. A CSV’s spec can contain a set of InstallModes of four distinct InstallModeTypes:

Table 1.4. InstallModes and supported OperatorGroups

InstallModeType   Description

OwnNamespace      The Operator can be a member of an OperatorGroup that selects its own namespace.

SingleNamespace   The Operator can be a member of an OperatorGroup that selects one namespace.

MultiNamespace    The Operator can be a member of an OperatorGroup that selects more than one namespace.

AllNamespaces     The Operator can be a member of an OperatorGroup that selects all namespaces (target namespace set is the empty string "").

Note

If a CSV’s spec omits an entry of InstallModeType, then that type is considered unsupported unless support can be inferred by an existing entry that implicitly supports it.

1.4.5.3. Target namespace selection

You can explicitly name the target namespace for an OperatorGroup using the spec.targetNamespaces parameter:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-group
  namespace: my-namespace
spec:
  targetNamespaces:
  - my-namespace

You can alternatively specify a namespace using a label selector with the spec.selector parameter:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-group
  namespace: my-namespace
spec:
  selector:
    cool.io/prod: "true"

Important

Listing multiple namespaces via spec.targetNamespaces or use of a label selector via spec.selector is not recommended, as the support for more than one target namespace in an OperatorGroup will likely be removed in a future release.

If both spec.targetNamespaces and spec.selector are defined, spec.selector is ignored. Alternatively, you can omit both spec.selector and spec.targetNamespaces to specify a global OperatorGroup, which selects all namespaces:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-group
  namespace: my-namespace

The resolved set of selected namespaces is shown in an OperatorGroup’s status.namespaces parameter. A global OperatorGroup’s status.namespaces contains the empty string (""), which signals to a consuming Operator that it should watch all namespaces.

1.4.5.4. OperatorGroup CSV annotations

Member CSVs of an OperatorGroup have the following annotations:

Annotation                                     Description

olm.operatorGroup=<group_name>                 Contains the name of the OperatorGroup.

olm.operatorGroupNamespace=<group_namespace>   Contains the namespace of the OperatorGroup.

olm.targetNamespaces=<target_namespaces>       Contains a comma-delimited string that lists the OperatorGroup’s target namespace selection.
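
On a member CSV, these annotations might appear as in the following sketch (names are illustrative):

metadata:
  annotations:
    olm.operatorGroup: my-group
    olm.operatorGroupNamespace: my-namespace
    olm.targetNamespaces: my-namespace-1,my-namespace-2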

Note

All annotations except olm.targetNamespaces are included with copied CSVs. Omitting the olm.targetNamespaces annotation on copied CSVs prevents the duplication of target namespaces between tenants.

1.4.5.5. Provided APIs annotation

Information about what GroupVersionKinds (GVKs) are provided by an OperatorGroup is shown in an olm.providedAPIs annotation. The annotation’s value is a string consisting of <kind>.<version>.<group> delimited with commas. The GVKs of CRDs and APIServices provided by all active member CSVs of an OperatorGroup are included.

Review the following example of an OperatorGroup with a single active member CSV that provides the PackageManifest resource:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  annotations:
    olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com
  name: olm-operators
  namespace: local
  ...
spec:
  selector: {}
  serviceAccount:
    metadata:
      creationTimestamp: null
  targetNamespaces:
  - local
status:
  lastUpdated: 2019-02-19T16:18:28Z
  namespaces:
  - local

1.4.5.6. Role-based access control

When an OperatorGroup is created, three ClusterRoles are generated. Each contains a single AggregationRule with a ClusterRoleSelector set to match a label, as shown below:

ClusterRole                  Label to match

<operatorgroup_name>-admin   olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name>

<operatorgroup_name>-edit    olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name>

<operatorgroup_name>-view    olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name>

The following RBAC resources are generated when a CSV becomes an active member of an OperatorGroup, as long as the CSV is watching all namespaces with the AllNamespaces InstallMode and is not in a failed state with reason InterOperatorGroupOwnerConflict.

  • ClusterRoles for each API resource from a CRD
  • ClusterRoles for each API resource from an APIService
  • Additional Roles and RoleBindings

Table 1.5. ClusterRoles generated for each API resource from a CRD

ClusterRole    Settings

<kind>.<group>-<version>-admin

Verbs on <kind>:

  • *

Aggregation labels:

  • rbac.authorization.k8s.io/aggregate-to-admin: true
  • olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name>

<kind>.<group>-<version>-edit

Verbs on <kind>:

  • create
  • update
  • patch
  • delete

Aggregation labels:

  • rbac.authorization.k8s.io/aggregate-to-edit: true
  • olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name>

<kind>.<group>-<version>-view

Verbs on <kind>:

  • get
  • list
  • watch

Aggregation labels:

  • rbac.authorization.k8s.io/aggregate-to-view: true
  • olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name>

<kind>.<group>-<version>-view-crdview

Verbs on apiextensions.k8s.io customresourcedefinitions <crd-name>:

  • get

Aggregation labels:

  • rbac.authorization.k8s.io/aggregate-to-view: true
  • olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name>

Table 1.6. ClusterRoles generated for each API resource from an APIService

ClusterRole    Settings

<kind>.<group>-<version>-admin

Verbs on <kind>:

  • *

Aggregation labels:

  • rbac.authorization.k8s.io/aggregate-to-admin: true
  • olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name>

<kind>.<group>-<version>-edit

Verbs on <kind>:

  • create
  • update
  • patch
  • delete

Aggregation labels:

  • rbac.authorization.k8s.io/aggregate-to-edit: true
  • olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name>

<kind>.<group>-<version>-view

Verbs on <kind>:

  • get
  • list
  • watch

Aggregation labels:

  • rbac.authorization.k8s.io/aggregate-to-view: true
  • olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name>

Additional Roles and RoleBindings

  • If the CSV defines exactly one target namespace that contains *, then a ClusterRole and corresponding ClusterRoleBinding are generated for each permission defined in the CSV’s permissions field. All resources generated are given the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels.
  • If the CSV does not define exactly one target namespace that contains *, then all Roles and RoleBindings in the Operator namespace with the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels are copied into the target namespace.

1.4.5.7. Copied CSVs

OLM creates copies of all active member CSVs of an OperatorGroup in each of that OperatorGroup’s target namespaces. The purpose of a copied CSV is to tell users of a target namespace that a specific Operator is configured to watch resources created there. Copied CSVs have a status reason Copied and are updated to match the status of their source CSV. The olm.targetNamespaces annotation is stripped from copied CSVs before they are created on the cluster. Omitting the target namespace selection avoids the duplication of target namespaces between tenants. Copied CSVs are deleted when their source CSV no longer exists or the OperatorGroup that their source CSV belongs to no longer targets the copied CSV’s namespace.

1.4.5.8. Static OperatorGroups

An OperatorGroup is static if its spec.staticProvidedAPIs field is set to true. As a result, OLM does not modify the OperatorGroup’s olm.providedAPIs annotation, which means that it can be set in advance. This is useful when a user wants to use an OperatorGroup to prevent resource contention in a set of namespaces but does not have active member CSVs that provide the APIs for those resources.

Below is an example of an OperatorGroup that protects Prometheus resources in all namespaces with the something.cool.io/cluster-monitoring: "true" label:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-monitoring
  namespace: cluster-monitoring
  annotations:
    olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com
spec:
  staticProvidedAPIs: true
  selector:
    matchLabels:
      something.cool.io/cluster-monitoring: "true"

1.4.5.9. OperatorGroup intersection

Two OperatorGroups are said to have intersecting provided APIs if the intersection of their target namespace sets is not an empty set and the intersection of their provided API sets, defined by olm.providedAPIs annotations, is not an empty set.

A potential issue is that OperatorGroups with intersecting provided APIs can compete for the same resources in the set of intersecting namespaces.

Note

When checking intersection rules, an OperatorGroup’s namespace is always included as part of its selected target namespaces.

Rules for intersection

Each time an active member CSV synchronizes, OLM queries the cluster for the set of intersecting provided APIs between the CSV’s OperatorGroup and all others. OLM then checks if that set is an empty set:

  • If true and the CSV’s provided APIs are a subset of the OperatorGroup’s:

    • Continue transitioning.
  • If true and the CSV’s provided APIs are not a subset of the OperatorGroup’s:

    • If the OperatorGroup is static:

      • Clean up any deployments that belong to the CSV.
      • Transition the CSV to a failed state with status reason CannotModifyStaticOperatorGroupProvidedAPIs.
    • If the OperatorGroup is not static:

      • Replace the OperatorGroup’s olm.providedAPIs annotation with the union of itself and the CSV’s provided APIs.
  • If false and the CSV’s provided APIs are not a subset of the OperatorGroup’s:

    • Clean up any deployments that belong to the CSV.
    • Transition the CSV to a failed state with status reason InterOperatorGroupOwnerConflict.
  • If false and the CSV’s provided APIs are a subset of the OperatorGroup’s:

    • If the OperatorGroup is static:

      • Clean up any deployments that belong to the CSV.
      • Transition the CSV to a failed state with status reason CannotModifyStaticOperatorGroupProvidedAPIs.
    • If the OperatorGroup is not static:

      • Replace the OperatorGroup’s olm.providedAPIs annotation with the difference between itself and the CSV’s provided APIs.
Note

Failure states caused by OperatorGroups are non-terminal.

The following actions are performed each time an OperatorGroup synchronizes:

  • The set of provided APIs from active member CSVs is calculated from the cluster. Note that copied CSVs are ignored.
  • The cluster set is compared to olm.providedAPIs, and if olm.providedAPIs contains any extra APIs, then those APIs are pruned.
  • All CSVs that provide the same APIs across all namespaces are requeued. This notifies conflicting CSVs in intersecting groups that their conflict has possibly been resolved, either through resizing or through deletion of the conflicting CSV.
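
When diagnosing possible intersections, it can help to compare the olm.providedAPIs annotations across all OperatorGroups. The following is a minimal sketch, assuming an account with permission to list OperatorGroups cluster-wide:

$ oc get operatorgroups --all-namespaces \
    -o 'custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PROVIDED_APIS:.metadata.annotations.olm\.providedAPIs'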

1.4.5.10. Troubleshooting OperatorGroups

Membership
  • If more than one OperatorGroup exists in a single namespace, any CSV created in that namespace will transition to a failure state with the reason TooManyOperatorGroups. CSVs in a failed state for this reason will transition to pending once the number of OperatorGroups in their namespaces reaches one.
  • If a CSV’s InstallModes do not support the target namespace selection of the OperatorGroup in its namespace, the CSV will transition to a failure state with the reason UnsupportedOperatorGroup. CSVs in a failed state for this reason will transition to pending once either the OperatorGroup’s target namespace selection changes to a supported configuration, or the CSV’s InstallModes are modified to support the OperatorGroup’s target namespace selection.
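
To see which of these reasons applies to a failing CSV, you can inspect its status directly. This is a minimal sketch; replace the placeholders with your CSV name and namespace:

$ oc get csv <csv_name> -n <namespace> \
    -o jsonpath='{.status.phase}{"  "}{.status.reason}{"\n"}'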

1.4.6. Operator Lifecycle Manager metrics

1.4.6.1. Exposed metrics

Operator Lifecycle Manager (OLM) exposes certain OLM-specific resources for use by the Prometheus-based OpenShift Container Platform cluster monitoring stack.

Table 1.7. Metrics exposed by OLM

Name | Description

catalog_source_count

Number of CatalogSources.

csv_abnormal

When reconciling a ClusterServiceVersion (CSV), present whenever a CSV version is in any state other than Succeeded, for example when it is not installed. Includes the name, namespace, phase, reason, and version labels. A Prometheus alert is created when this metric is present.

csv_count

Number of CSVs successfully registered.

csv_succeeded

When reconciling a CSV, represents whether a CSV version is in a Succeeded state (value 1) or not (value 0). Includes the name, namespace, and version labels.

csv_upgrade_count

Monotonic count of CSV upgrades.

install_plan_count

Number of InstallPlans.

subscription_count

Number of Subscriptions.

subscription_sync_total

Monotonic count of Subscription syncs. Includes the channel, installed CSV, and Subscription name labels.
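
These metrics can be queried from the cluster monitoring stack like any other Prometheus metric. For example, the following query is a sketch of how you might surface CSVs that have entered a Failed phase; the phase and reason label values you see depend on your cluster:

csv_abnormal{phase="Failed"}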

1.4.7. Webhook management in Operator Lifecycle Manager

Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator.

See Generating a ClusterServiceVersion (CSV) for details on how an Operator developer can define webhooks for their Operator, as well as considerations when running on OLM.

1.4.7.1. Additional resources

1.5. Understanding OperatorHub

This guide outlines the architecture of OperatorHub.

1.5.1. About OperatorHub

OperatorHub is the web console interface in OpenShift Container Platform that cluster administrators use to discover and install Operators. With one click, an Operator can be pulled from its off-cluster source, installed and subscribed on the cluster, and made ready for engineering teams to self-service manage the product across deployment environments using the Operator Lifecycle Manager (OLM).

Cluster administrators can choose from catalogs grouped into the following categories:

Category | Description

Red Hat Operators

Red Hat products packaged and shipped by Red Hat. Supported by Red Hat.

Certified Operators

Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV.

Red Hat Marketplace

Certified software that can be purchased from Red Hat Marketplace.

Community Operators

Optionally-visible software maintained by relevant representatives in the operator-framework/community-operators GitHub repository. No official support.

Custom Operators

Operators you add to the cluster yourself. If you have not added any Custom Operators, the Custom category does not appear in the web console on your OperatorHub.

Note

OperatorHub content automatically refreshes every 60 minutes.

Operators on OperatorHub are packaged to run on OLM. This includes a YAML file called a ClusterServiceVersion (CSV) containing all of the CRDs, RBAC rules, Deployments, and container images required to install and securely run the Operator. It also contains user-visible information like a description of its features and supported Kubernetes versions.

The Operator SDK can be used to assist developers packaging their Operators for use on OLM and OperatorHub. If you have a commercial application that you want to make accessible to your customers, get it included using the certification workflow provided by Red Hat’s ISV partner portal at connect.redhat.com.

1.5.2. OperatorHub architecture

The OperatorHub UI component is driven by the Marketplace Operator, which runs by default on OpenShift Container Platform in the openshift-marketplace namespace.

1.5.2.1. OperatorHub custom resource

The Marketplace Operator manages an OperatorHub custom resource (CR) named cluster that manages the default CatalogSource objects provided with OperatorHub. You can modify this resource to enable or disable the default catalogs, which is useful when configuring OpenShift Container Platform in restricted network environments.

Example OperatorHub custom resource

apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  disableAllDefaultSources: true 1
  sources: [ 2
    {
      name: "community-operators",
      disabled: false
    }
  ]

1
disableAllDefaultSources is an override that controls availability of all default catalogs that are configured by default during an OpenShift Container Platform installation.
2
Disable default catalogs individually by changing the disabled parameter value per source.
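
For example, one way to disable all of the default catalogs from the CLI is to patch the cluster OperatorHub resource directly. This is a sketch; adjust the patch to match the sources you want to enable or disable:

$ oc patch operatorhub cluster --type merge \
    -p '{"spec": {"disableAllDefaultSources": true}}'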

1.5.3. Additional resources

1.6. CRDs

1.6.1. Extending the Kubernetes API with Custom Resource Definitions

This guide describes how cluster administrators can extend their OpenShift Container Platform cluster by creating and managing Custom Resource Definitions (CRDs).

1.6.1.1. Custom Resource Definitions

In the Kubernetes API, a resource is an endpoint that stores a collection of API objects of a certain kind. For example, the built-in Pods resource contains a collection of Pod objects.

A Custom Resource Definition (CRD) object defines a new, unique object Kind in the cluster and lets the Kubernetes API server handle its entire lifecycle.

Custom Resource (CR) objects are created from CRDs that have been added to the cluster by a cluster administrator, allowing all cluster users to add the new resource type into projects.

When a cluster administrator adds a new CRD to the cluster, the Kubernetes API server reacts by creating a new RESTful resource path that can be accessed by the entire cluster or a single project (namespace) and begins serving the specified CR.

Cluster administrators that want to grant access to the CRD to other users can use cluster role aggregation to grant access to users with the admin, edit, or view default cluster roles. Cluster role aggregation allows the insertion of custom policy rules into these cluster roles. This behavior integrates the new resource into the cluster’s RBAC policy as if it was a built-in resource.

Operators in particular make use of CRDs by packaging them with any required RBAC policy and other software-specific logic. Cluster administrators can also add CRDs manually to the cluster outside of an Operator’s lifecycle, making them available to all users.

Note

While only cluster administrators can create CRDs, developers can create the CR from an existing CRD if they have read and write permission to it.

1.6.1.2. Creating a Custom Resource Definition

To create Custom Resource (CR) objects, cluster administrators must first create a Custom Resource Definition (CRD).

Prerequisites

  • Access to an OpenShift Container Platform cluster with cluster-admin user privileges.

Procedure

To create a CRD:

  1. Create a YAML file that contains the following field types:

    Example YAML file for a CRD

    apiVersion: apiextensions.k8s.io/v1 1
    kind: CustomResourceDefinition
    metadata:
      name: crontabs.stable.example.com 2
    spec:
      group: stable.example.com 3
      versions:
      - name: v1 4
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true
      scope: Namespaced 5
      names:
        plural: crontabs 6
        singular: crontab 7
        kind: CronTab 8
        shortNames:
        - ct 9

    1
    Use the apiextensions.k8s.io/v1 API.
    2
    Specify a name for the definition. This must be in the <plural-name>.<group> format using the values from the group and plural fields.
    3
    Specify a group name for the API. An API group is a collection of objects that are logically related. For example, all batch objects like Job or ScheduledJob could be in the batch API Group (such as batch.api.example.com). A good practice is to use a fully-qualified-domain name of your organization.
    4
    Specify a version name to be used in the URL. Each API Group can exist in multiple versions, for example: v1alpha, v1beta, v1. With the apiextensions.k8s.io/v1 API, each entry under versions must also declare whether it is served, whether it is the storage version, and a schema for validation.
    5
    Specify whether the custom objects are available to a project (Namespaced) or all projects in the cluster (Cluster).
    6
    Specify the plural name to use in the URL. The plural field is the same as a resource in an API URL.
    7
    Specify a singular name to use as an alias on the CLI and for display.
    8
    Specify the kind of objects that can be created. The type can be in CamelCase.
    9
    Specify a shorter string to match your resource on the CLI.
    Note

    By default, a CRD is cluster-scoped and available to all projects.

  2. Create the CRD object:

    $ oc create -f <file_name>.yaml

    A new RESTful API endpoint is created at:

    /apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/...

    For example, using the example file, the following endpoint is created:

    /apis/stable.example.com/v1/namespaces/*/crontabs/...

    You can now use this endpoint URL to create and manage CRs. The object Kind is based on the spec.kind field of the CRD object you created.
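
    You can optionally confirm that the CRD was registered. For example:

    $ oc get crd crontabs.stable.example.com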

1.6.1.3. Creating cluster roles for Custom Resource Definitions

Cluster administrators can grant permissions to existing cluster-scoped Custom Resource Definitions (CRDs). If you use the admin, edit, and view default cluster roles, you can take advantage of cluster role aggregation for their rules.

Important

You must explicitly assign permissions to each of these roles. Roles with more permissions do not inherit rules from roles with fewer permissions. If you assign a verb to a role, you must also assign it to roles that have more permissions. For example, if you grant the get crontabs permission to the view role, you must also grant it to the edit and admin roles. The admin or edit role is usually assigned to the user that created a project through the project template.

Prerequisites

  • Create a CRD.

Procedure

  1. Create a cluster role definition file for the CRD. The cluster role definition is a YAML file that contains the rules that apply to each cluster role. The OpenShift Container Platform controller adds the rules that you specify to the default cluster roles.

    Example YAML file for a cluster role definition

    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1 1
    metadata:
      name: aggregate-cron-tabs-admin-edit 2
      labels:
        rbac.authorization.k8s.io/aggregate-to-admin: "true" 3
        rbac.authorization.k8s.io/aggregate-to-edit: "true" 4
    rules:
    - apiGroups: ["stable.example.com"] 5
      resources: ["crontabs"] 6
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete", "deletecollection"] 7
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: aggregate-cron-tabs-view 8
      labels:
        # Add these permissions to the "view" default role.
        rbac.authorization.k8s.io/aggregate-to-view: "true" 9
        rbac.authorization.k8s.io/aggregate-to-cluster-reader: "true" 10
    rules:
    - apiGroups: ["stable.example.com"] 11
      resources: ["crontabs"] 12
      verbs: ["get", "list", "watch"] 13

    1
    Use the rbac.authorization.k8s.io/v1 API.
    2 8
    Specify a name for the definition.
    3
    Specify this label to grant permissions to the admin default role.
    4
    Specify this label to grant permissions to the edit default role.
    5 11
    Specify the group name of the CRD.
    6 12
    Specify the plural name of the CRD that these rules apply to.
    7 13
    Specify the verbs that represent the permissions that are granted to the role. For example, apply read and write permissions to the admin and edit roles and only read permission to the view role.
    9
    Specify this label to grant permissions to the view default role.
    10
    Specify this label to grant permissions to the cluster-reader default role.
  2. Create the cluster role:

    $ oc create -f <file_name>.yaml
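
    You can optionally verify that the rules were aggregated into the default cluster roles. This is a sketch; aggregation can take a moment to reconcile:

    $ oc describe clusterrole view | grep crontabs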

1.6.1.4. Creating Custom Resources from a file

After a Custom Resource Definition (CRD) has been added to the cluster, Custom Resources (CRs) can be created with the CLI from a file using the CR specification.

Prerequisites

  • CRD added to the cluster by a cluster administrator.

Procedure

  1. Create a YAML file for the CR. In the following example definition, the cronSpec and image custom fields are set in a CR of Kind: CronTab. The Kind comes from the spec.kind field of the CRD object.

    Example YAML file for a CR

    apiVersion: "stable.example.com/v1" 1
    kind: CronTab 2
    metadata:
      name: my-new-cron-object 3
      finalizers: 4
      - finalizer.stable.example.com
    spec: 5
      cronSpec: "* * * * /5"
      image: my-awesome-cron-image

    1
    Specify the group name and API version (name/version) from the Custom Resource Definition.
    2
    Specify the type in the CRD.
    3
    Specify a name for the object.
    4
    Specify the finalizers for the object, if any. Finalizers allow controllers to implement conditions that must be completed before the object can be deleted.
    5
    Specify conditions specific to the type of object.
  2. After you create the file, create the object:

    $ oc create -f <file_name>.yaml

1.6.1.5. Inspecting custom resources

You can inspect custom resource (CR) objects that exist in your cluster using the CLI.

Prerequisites

  • A CR object exists in a namespace to which you have access.

Procedure

  1. To get information on a specific Kind of a CR, run:

    $ oc get <kind>

    For example:

    $ oc get crontab

    Example output

    NAME                 KIND
    my-new-cron-object   CronTab.v1.stable.example.com

    Resource names are not case-sensitive, and you can use either the singular or plural forms defined in the CRD, as well as any short name. For example:

    $ oc get crontabs
    $ oc get crontab
    $ oc get ct
  2. You can also view the raw YAML data for a CR:

    $ oc get <kind> -o yaml

    For example:

    $ oc get ct -o yaml

    Example output

    apiVersion: v1
    items:
    - apiVersion: stable.example.com/v1
      kind: CronTab
      metadata:
        clusterName: ""
        creationTimestamp: 2017-05-31T12:56:35Z
        deletionGracePeriodSeconds: null
        deletionTimestamp: null
        name: my-new-cron-object
        namespace: default
        resourceVersion: "285"
        selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object
        uid: 9423255b-4600-11e7-af6a-28d2447dc82b
      spec:
        cronSpec: '* * * * /5' 1
        image: my-awesome-cron-image 2

    1 2
    Custom data from the YAML that you used to create the object displays.

1.6.2. Managing resources from Custom Resource Definitions

This guide describes how developers can manage Custom Resources (CRs) that come from Custom Resource Definitions (CRDs).

1.6.2.1. Custom Resource Definitions

In the Kubernetes API, a resource is an endpoint that stores a collection of API objects of a certain kind. For example, the built-in Pods resource contains a collection of Pod objects.

A Custom Resource Definition (CRD) object defines a new, unique object Kind in the cluster and lets the Kubernetes API server handle its entire lifecycle.

Custom Resource (CR) objects are created from CRDs that have been added to the cluster by a cluster administrator, allowing all cluster users to add the new resource type into projects.

Operators in particular make use of CRDs by packaging them with any required RBAC policy and other software-specific logic. Cluster administrators can also add CRDs manually to the cluster outside of an Operator’s lifecycle, making them available to all users.

Note

While only cluster administrators can create CRDs, developers can create the CR from an existing CRD if they have read and write permission to it.

1.6.2.2. Creating Custom Resources from a file

After a Custom Resource Definition (CRD) has been added to the cluster, Custom Resources (CRs) can be created with the CLI from a file using the CR specification.

Prerequisites

  • CRD added to the cluster by a cluster administrator.

Procedure

  1. Create a YAML file for the CR. In the following example definition, the cronSpec and image custom fields are set in a CR of Kind: CronTab. The Kind comes from the spec.kind field of the CRD object.

    Example YAML file for a CR

    apiVersion: "stable.example.com/v1" 1
    kind: CronTab 2
    metadata:
      name: my-new-cron-object 3
      finalizers: 4
      - finalizer.stable.example.com
    spec: 5
      cronSpec: "* * * * /5"
      image: my-awesome-cron-image

    1
    Specify the group name and API version (name/version) from the Custom Resource Definition.
    2
    Specify the type in the CRD.
    3
    Specify a name for the object.
    4
    Specify the finalizers for the object, if any. Finalizers allow controllers to implement conditions that must be completed before the object can be deleted.
    5
    Specify conditions specific to the type of object.
  2. After you create the file, create the object:

    $ oc create -f <file_name>.yaml

1.6.2.3. Inspecting custom resources

You can inspect custom resource (CR) objects that exist in your cluster using the CLI.

Prerequisites

  • A CR object exists in a namespace to which you have access.

Procedure

  1. To get information on a specific Kind of a CR, run:

    $ oc get <kind>

    For example:

    $ oc get crontab

    Example output

    NAME                 KIND
    my-new-cron-object   CronTab.v1.stable.example.com

    Resource names are not case-sensitive, and you can use either the singular or plural forms defined in the CRD, as well as any short name. For example:

    $ oc get crontabs
    $ oc get crontab
    $ oc get ct
  2. You can also view the raw YAML data for a CR:

    $ oc get <kind> -o yaml

    For example:

    $ oc get ct -o yaml

    Example output

    apiVersion: v1
    items:
    - apiVersion: stable.example.com/v1
      kind: CronTab
      metadata:
        clusterName: ""
        creationTimestamp: 2017-05-31T12:56:35Z
        deletionGracePeriodSeconds: null
        deletionTimestamp: null
        name: my-new-cron-object
        namespace: default
        resourceVersion: "285"
        selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object
        uid: 9423255b-4600-11e7-af6a-28d2447dc82b
      spec:
        cronSpec: '* * * * /5' 1
        image: my-awesome-cron-image 2

    1 2
    Custom data from the YAML that you used to create the object displays.

Chapter 2. User tasks

2.1. Creating applications from installed Operators

This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console.

2.1.1. Creating an etcd cluster using an Operator

This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM).

Prerequisites

  • Access to an OpenShift Container Platform 4.6 cluster.
  • The etcd Operator already installed cluster-wide by an administrator.

Procedure

  1. Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd.
  2. Navigate to the Operators → Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of ClusterServiceVersions (CSVs). CSVs are used to launch and manage the software provided by the Operator.

    Tip

    You can get this list from the CLI using:

    $ oc get csv
  3. On the Installed Operators page, click Copied, and then click the etcd Operator to view more details and available actions.

    As shown under Provided APIs, this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similarly to built-in, native Kubernetes objects, such as Deployments or ReplicaSets, but contain logic specific to managing etcd.

  4. Create a new etcd cluster:

    1. In the etcd Cluster API box, click Create New.
    2. The next screen allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, Services, and other components of the new etcd cluster.
  5. Click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator.

    Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project.

  6. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command:

    $ oc policy add-role-to-user edit <user> -n <target_project>

You now have an etcd cluster that will react to failures and rebalance data as Pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications.
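
For reference, the minimal starting template mentioned in step 4 typically resembles the following sketch, based on the community etcd Operator's EtcdCluster API. The apiVersion, object name, and field values here are illustrative and can differ by Operator version:

apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  # Example name and project from this procedure; adjust as needed
  name: example
  namespace: my-etcd
spec:
  size: 3
  version: 3.2.13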

2.2. Installing Operators in your namespace

If a cluster administrator has delegated Operator installation permissions to your account, you can install and subscribe an Operator to your namespace in a self-service manner.

2.2.1. Prerequisites

A cluster administrator must add certain permissions to your OpenShift Container Platform user account to allow self-service Operator installation to a namespace. See Allowing non-cluster administrators to install Operators for details.

2.2.2. Installing Operators from OperatorHub

As a user with the proper permissions, you can install an Operator from OperatorHub using the OpenShift Container Platform web console or CLI.

During installation, you must determine the following initial settings for the Operator:

Installation Mode
Choose a specific namespace in which to install the Operator.
Update Channel
If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.
Approval Strategy
You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.

2.2.2.1. Installing from OperatorHub using the web console

You can install and subscribe to an Operator from OperatorHub using the OpenShift Container Platform web console.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with Operator installation permissions.

Procedure

  1. Navigate in the web console to the Operators → OperatorHub page.
  2. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator.

    You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.

  3. Select the Operator to display additional information.

    Note

    Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing.

  4. Read the information about the Operator and click Install.
  5. On the Install Operator page:

    1. Choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace.
    2. Select an Update Channel (if more than one is available).
    3. Select Automatic or Manual approval strategy, as described earlier.
  6. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster.

    1. If you selected a Manual approval strategy, the upgrade status of the Subscription remains Upgrading until you review and approve its Install Plan.

      After approving on the Install Plan page, the Subscription upgrade status moves to Up to date.

    2. If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.
  7. After the upgrade status of the Subscription is Up to date, select Operators → Installed Operators to verify that the ClusterServiceVersion (CSV) of the installed Operator eventually shows up. Its Status should ultimately resolve to InstallSucceeded in the relevant namespace.

    Note

    For the All namespaces…​ Installation Mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces.

    If it does not:

    1. Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace…​ Installation Mode was selected) on the Workloads → Pods page that are reporting issues to troubleshoot further.

2.2.2.2. Installing from OperatorHub using the CLI

Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. Use the oc command to create or update a Subscription object.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with Operator installation permissions.
  • Install the oc command to your local system.

Procedure

  1. View the list of Operators available to the cluster from OperatorHub:

    $ oc get packagemanifests -n openshift-marketplace

    Example output

    NAME                               CATALOG               AGE
    3scale-operator                    Red Hat Operators     91m
    advanced-cluster-management        Red Hat Operators     91m
    amq7-cert-manager                  Red Hat Operators     91m
    ...
    couchbase-enterprise-certified     Certified Operators   91m
    crunchy-postgres-operator          Certified Operators   91m
    mongodb-enterprise                 Certified Operators   91m
    ...
    etcd                               Community Operators   91m
    jaeger                             Community Operators   91m
    kubefed                            Community Operators   91m
    ...

    Note the CatalogSource for your desired Operator.

  2. Inspect your desired Operator to verify its supported InstallModes and available Channels:

    $ oc describe packagemanifests <operator_name> -n openshift-marketplace
  3. An OperatorGroup is an OLM resource that selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the OperatorGroup.

    The namespace to which you subscribe the Operator must have an OperatorGroup that matches the InstallMode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, then the openshift-operators namespace already has an appropriate OperatorGroup in place.

    However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate OperatorGroup in place, you must create one.

    Note

    The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode.

    1. Create an OperatorGroup object YAML file, for example operatorgroup.yaml:

      Example OperatorGroup

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: <operatorgroup_name>
        namespace: <namespace>
      spec:
        targetNamespaces:
        - <namespace>

    2. Create the OperatorGroup object:

      $ oc apply -f operatorgroup.yaml
  4. Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml:

    Example Subscription

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: <operator_name>
      namespace: openshift-operators 1
    spec:
      channel: alpha
      name: <operator_name> 2
      source: redhat-operators 3
      sourceNamespace: openshift-marketplace 4

    1
    For AllNamespaces InstallMode usage, specify the openshift-operators namespace. Otherwise, specify the relevant single namespace for SingleNamespace InstallMode usage.
    2
    Name of the Operator to subscribe to.
    3
    Name of the CatalogSource that provides the Operator.
    4
    Namespace of the CatalogSource. Use openshift-marketplace for the default OperatorHub CatalogSources.
  5. Create the Subscription object:

    $ oc apply -f sub.yaml

    At this point, OLM is now aware of the selected Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
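
    For example, you can verify that the CSV was created and check its phase. The CSV name depends on the Operator and channel you subscribed to:

    $ oc get csv -n <namespace>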

Additional resources

Chapter 3. Administrator tasks

3.1. Adding Operators to a cluster

This guide walks cluster administrators through installing Operators to an OpenShift Container Platform cluster and subscribing Operators to namespaces.

3.1.1. Installing Operators from OperatorHub

As a user with the proper permissions, you can install an Operator from OperatorHub using the OpenShift Container Platform web console or CLI.

During installation, you must determine the following initial settings for the Operator:

Installation Mode
Choose a specific namespace in which to install the Operator.
Update Channel
If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.
Approval Strategy
You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.

3.1.1.1. Installing from OperatorHub using the web console

You can install and subscribe to an Operator from OperatorHub using the OpenShift Container Platform web console.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
  • Access to an OpenShift Container Platform cluster using an account with Operator installation permissions.

Procedure

  1. Navigate in the web console to the Operators → OperatorHub page.
  2. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator.

    You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.

  3. Select the Operator to display additional information.

    Note

    Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing.

  4. Read the information about the Operator and click Install.
  5. On the Install Operator page:

    1. Select one of the following:

      • All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available.
      • A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace.
    2. If you selected A specific namespace on the cluster, choose the namespace in which to install the Operator.
    3. Select an Update Channel (if more than one is available).
    4. Select Automatic or Manual approval strategy, as described earlier.
  6. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster.

    1. If you selected a Manual approval strategy, the upgrade status of the Subscription remains Upgrading until you review and approve its Install Plan.

      After approving on the Install Plan page, the Subscription upgrade status moves to Up to date.

    2. If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.
  7. After the upgrade status of the Subscription is Up to date, select Operators → Installed Operators to verify that the ClusterServiceVersion (CSV) of the installed Operator eventually shows up. Its Status should ultimately resolve to InstallSucceeded in the relevant namespace.

    Note

    For the All namespaces…​ Installation Mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces.

    If it does not:

    1. Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace…​ Installation Mode was selected) on the Workloads → Pods page that are reporting issues to troubleshoot further.

3.1.1.2. Installing from OperatorHub using the CLI

Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. Use the oc command to create or update a Subscription object.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with Operator installation permissions.
  • Install the oc command to your local system.

Procedure

  1. View the list of Operators available to the cluster from OperatorHub:

    $ oc get packagemanifests -n openshift-marketplace

    Example output

    NAME                               CATALOG               AGE
    3scale-operator                    Red Hat Operators     91m
    advanced-cluster-management        Red Hat Operators     91m
    amq7-cert-manager                  Red Hat Operators     91m
    ...
    couchbase-enterprise-certified     Certified Operators   91m
    crunchy-postgres-operator          Certified Operators   91m
    mongodb-enterprise                 Certified Operators   91m
    ...
    etcd                               Community Operators   91m
    jaeger                             Community Operators   91m
    kubefed                            Community Operators   91m
    ...

    Note the CatalogSource for your desired Operator.

  2. Inspect your desired Operator to verify its supported InstallModes and available Channels:

    $ oc describe packagemanifests <operator_name> -n openshift-marketplace
  3. An OperatorGroup is an OLM resource that selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the OperatorGroup.

    The namespace to which you subscribe the Operator must have an OperatorGroup that matches the InstallMode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, then the openshift-operators namespace already has an appropriate OperatorGroup in place.

    However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate OperatorGroup in place, you must create one.

    Note

    The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode.

    1. Create an OperatorGroup object YAML file, for example operatorgroup.yaml:

      Example OperatorGroup

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: <operatorgroup_name>
        namespace: <namespace>
      spec:
        targetNamespaces:
        - <namespace>

    2. Create the OperatorGroup object:

      $ oc apply -f operatorgroup.yaml
  4. Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml:

    Example Subscription

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: <operator_name>
      namespace: openshift-operators 1
    spec:
      channel: alpha
      name: <operator_name> 2
      source: redhat-operators 3
      sourceNamespace: openshift-marketplace 4

    1
    For AllNamespaces InstallMode usage, specify the openshift-operators namespace. Otherwise, specify the relevant single namespace for SingleNamespace InstallMode usage.
    2
    Name of the Operator to subscribe to.
    3
    Name of the CatalogSource that provides the Operator.
    4
    Namespace of the CatalogSource. Use openshift-marketplace for the default OperatorHub CatalogSources.
  5. Create the Subscription object:

    $ oc apply -f sub.yaml

    At this point, OLM is now aware of the selected Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.

Additional resources

3.2. Upgrading installed Operators

As a cluster administrator, you can upgrade Operators that have been previously installed using Operator Lifecycle Manager (OLM) on your OpenShift Container Platform cluster.

3.2.1. Changing the update channel for an Operator

The Subscription of an installed Operator specifies an update channel, which is used to track and receive updates for the Operator. To upgrade the Operator to start tracking and receiving updates from a newer channel, you can change the update channel in the Subscription.

The names of update channels in a Subscription can differ between Operators, but the naming scheme should follow a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator (1.2, 1.3) or a simple release frequency (stable, fast).

Note

Installed Operators cannot change to a channel that is older than the current channel.

If the approval strategy in the Subscription is set to Automatic, the upgrade process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending upgrades.

Prerequisites

  • An Operator previously installed using Operator Lifecycle Manager (OLM).

Procedure

  1. In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
  2. Click the name of the Operator you want to change the update channel for.
  3. Click the Subscription tab.
  4. Click the name of the update channel under Channel.
  5. Click the newer update channel that you want to change to, then click Save.
  6. For Subscriptions with an Automatic approval strategy, the upgrade begins automatically. Navigate back to the Operators → Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date.

    For Subscriptions with a Manual approval strategy, you can manually approve the upgrade from the Subscription tab.
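
If you prefer the CLI, the same change can be made by updating the channel field in the Subscription. This is a minimal sketch; the Subscription name, namespace, and channel name (stable here) are placeholders:

$ oc patch subscription <subscription_name> -n <namespace> \
    --type merge -p '{"spec": {"channel": "stable"}}'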

3.2.2. Manually approving a pending Operator upgrade

If an installed Operator has the approval strategy in its Subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin.

Prerequisites

  • An Operator previously installed using Operator Lifecycle Manager (OLM).

Procedure

  1. In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
  2. Operators that have a pending upgrade display a status with Upgrade available. Click the name of the Operator you want to upgrade.
  3. Click the Subscription tab. Any upgrades requiring approval are displayed next to Upgrade Status. For example, it might display 1 requires approval.
  4. Click 1 requires approval, then click Preview Install Plan.
  5. Review the resources that are listed as available for upgrade. When satisfied, click Approve.
  6. Navigate back to the Operators → Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date.
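
Pending upgrades can also be approved from the CLI by setting the approved field on the InstallPlan. This is a sketch; replace the placeholders with the InstallPlan name and namespace reported by oc get installplan:

$ oc patch installplan <installplan_name> -n <namespace> \
    --type merge -p '{"spec": {"approved": true}}'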

3.3. Deleting Operators from a cluster

The following describes how to delete Operators from a cluster using either the web console or the CLI.

3.3.1. Deleting Operators from a cluster using the web console

Cluster administrators can delete installed Operators from a selected namespace by using the web console.

Prerequisites

  • Access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions.

Procedure

  1. From the Operators → Installed Operators page, scroll or type a keyword into the Filter by name field to find the Operator you want. Then, click it.
  2. On the right-hand side of the Operator Details page, select Uninstall Operator from the Actions drop-down menu.

    An Uninstall Operator? dialog box is displayed, reminding you that:

    Removing the Operator will not remove any of its custom resource definitions or managed resources. If your Operator has deployed applications on the cluster or configured off-cluster resources, these will continue to run and need to be cleaned up manually.

    The Operator, any Operator deployments, and pods are removed by this action. Any resources managed by the Operator, including CRDs and CRs, are not removed. The web console enables dashboards and navigation items for some Operators. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.

  3. Select Uninstall. This Operator stops running and no longer receives updates.

3.3.2. Deleting Operators from a cluster using the CLI

Cluster administrators can delete installed Operators from a selected namespace by using the CLI.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
  • oc command installed on workstation.

Procedure

  1. Check the current version of the subscribed Operator (for example, jaeger) in the currentCSV field:

    $ oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV

    Example output

      currentCSV: jaeger-operator.v1.8.2

  2. Delete the Operator’s Subscription (for example, jaeger):

    $ oc delete subscription jaeger -n openshift-operators

    Example output

    subscription.operators.coreos.com "jaeger" deleted

  3. Delete the CSV for the Operator in the target namespace using the currentCSV value from the previous step:

    $ oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators

    Example output

    clusterserviceversion.operators.coreos.com "jaeger-operator.v1.8.2" deleted

3.4. Configuring proxy support in Operator Lifecycle Manager

If a global proxy is configured on the OpenShift Container Platform cluster, Operator Lifecycle Manager automatically configures Operators that it manages with the cluster-wide proxy. However, you can also configure installed Operators to override the global proxy or inject a custom CA certificate.

Additional resources

3.4.1. Overriding an Operator’s proxy settings

If a cluster-wide egress proxy is configured, applications created from Operators using Operator Lifecycle Manager (OLM) inherit the cluster-wide proxy settings on their Deployments and pods. Cluster administrators can also override these proxy settings by configuring the Operator’s Subscription.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.

Procedure

  1. Navigate in the web console to the Operators → OperatorHub page.
  2. Select the Operator and click Install.
  3. On the Install Operator page, modify the Subscription object’s YAML to include one or more of the following environment variables in the spec section:

    • HTTP_PROXY
    • HTTPS_PROXY
    • NO_PROXY

    For example:

    Subscription object with proxy setting overrides

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: etcd-config-test
      namespace: openshift-operators
    spec:
      config:
        env:
        - name: HTTP_PROXY
          value: test_http
        - name: HTTPS_PROXY
          value: test_https
        - name: NO_PROXY
          value: test
      channel: clusterwide-alpha
      installPlanApproval: Automatic
      name: etcd
      source: community-operators
      sourceNamespace: openshift-marketplace
      startingCSV: etcdoperator.v0.9.4-clusterwide

    Note

    These environment variables can also be unset using an empty value to remove any previously set cluster-wide or custom proxy settings.

    OLM handles these environment variables as a unit; if at least one of them is set, all three are considered overridden and the cluster-wide defaults are not used for the subscribed Operator’s Deployments.

  4. Click Install to make the Operator available to the selected namespaces.
  5. After the Operator’s CSV appears in the relevant namespace, you can verify that custom proxy environment variables are set in the Deployment. For example, using the CLI:

    $ oc get deployment -n openshift-operators \
        etcd-operator -o yaml \
        | grep -i "PROXY" -A 2

    Example output

            - name: HTTP_PROXY
              value: test_http
            - name: HTTPS_PROXY
              value: test_https
            - name: NO_PROXY
              value: test
            image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c
    ...

3.4.2. Injecting a custom CA certificate

When a cluster administrator adds a custom CA certificate to a cluster using a ConfigMap, the Cluster Network Operator merges the user-provided certificates and system CA certificates into a single bundle. You can inject this merged bundle into your Operator running on Operator Lifecycle Manager (OLM), which is useful if you have a man-in-the-middle HTTPS proxy.

Prerequisites

  • Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
  • Custom CA certificate added to the cluster using a ConfigMap.
  • Desired Operator installed and running on OLM.

Procedure

  1. Create an empty ConfigMap in the namespace where your Operator’s Subscription exists and include the following label:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: trusted-ca 1
      labels:
        config.openshift.io/inject-trusted-cabundle: "true" 2
    1
    Name of the ConfigMap.
    2
    Requests the Cluster Network Operator to inject the merged bundle.

    After creating this ConfigMap, the ConfigMap is immediately populated with the certificate contents of the merged bundle.

  2. Update your Operator’s Subscription object to include a spec.config section that mounts the trusted-ca ConfigMap as a volume to each container within a Pod that requires a custom CA:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: my-operator
    spec:
      name: etcd
      channel: alpha
      config: 1
        selector:
          matchLabels:
            <labels_for_pods> 2
        volumes: 3
        - name: trusted-ca
          configMap:
            name: trusted-ca
            items:
              - key: ca-bundle.crt 4
                path: tls-ca-bundle.pem 5
        volumeMounts: 6
        - name: trusted-ca
          mountPath: /etc/pki/ca-trust/extracted/pem
          readOnly: true
    1
    Add a config section if it does not exist.
    2
    Specify labels to match pods that are owned by the Operator.
    3
    Create a trusted-ca volume.
    4
    ca-bundle.crt is required as the ConfigMap key.
    5
    tls-ca-bundle.pem is required as the ConfigMap path.
    6
    Create a trusted-ca volume mount.
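
You can confirm that the merged bundle was injected by checking the ca-bundle.crt key of the ConfigMap. A minimal sketch, assuming the ConfigMap name used above:

$ oc get configmap trusted-ca -n <namespace> \
    -o jsonpath='{.data.ca-bundle\.crt}' | head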

3.5. Viewing Operator status

Understanding the state of the system in Operator Lifecycle Manager (OLM) is important for making decisions about and debugging problems with installed Operators. OLM provides insight into Subscriptions and related Catalog Source resources regarding their state and actions performed. This helps users better understand the health of their Operators.

3.5.1. Operator Subscription condition types

Subscriptions can report the following condition types:

Table 3.1. Subscription condition types

Condition | Description

CatalogSourcesUnhealthy

Some or all of the Catalog Sources to be used in resolution are unhealthy.

InstallPlanMissing

A Subscription’s InstallPlan is missing.

InstallPlanPending

A Subscription’s InstallPlan is pending installation.

InstallPlanFailed

A Subscription’s InstallPlan has failed.

Note

Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.

3.5.2. Viewing Operator Subscription status using the CLI

You can view Operator Subscription status using the CLI.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. List Operator Subscriptions:

    $ oc get subs -n <operator_namespace>
  2. Use the oc describe command to inspect a Subscription resource:

    $ oc describe sub <subscription_name> -n <operator_namespace>
  3. In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy:

    Example output

    Conditions:
       Last Transition Time:  2019-07-29T13:42:57Z
       Message:               all available catalogsources are healthy
       Reason:                AllCatalogSourcesHealthy
       Status:                False
       Type:                  CatalogSourcesUnhealthy

Note

Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.

3.6. Allowing non-cluster administrators to install Operators

Operators can require wide privileges to run, and the required privileges can change between versions. Operator Lifecycle Manager (OLM) runs with cluster-admin privileges. By default, Operator authors can specify any set of permissions in the ClusterServiceVersion (CSV) and OLM will consequently grant it to the Operator.

Cluster administrators should take measures to ensure that an Operator cannot achieve cluster-scoped privileges and that users cannot escalate privileges using OLM. One method for locking this down is for cluster administrators to audit Operators before they are added to the cluster. Cluster administrators are also provided tools for determining and constraining which actions are allowed during an Operator installation or upgrade by using service accounts.

By associating an OperatorGroup with a service account that has a set of privileges granted to it, cluster administrators can set policy on Operators to ensure they operate only within predetermined boundaries using RBAC rules. The Operator is unable to do anything that is not explicitly permitted by those rules.

This self-sufficient, limited scope installation of Operators by non-cluster administrators means that more of the Operator Framework tools can safely be made available to more users, providing a richer experience for building applications with Operators.

3.6.1. Understanding Operator installation policy

Using OLM, cluster administrators can choose to specify a service account for an OperatorGroup so that all Operators associated with the OperatorGroup are deployed and run against the privileges granted to the service account.

APIService and CustomResourceDefinition resources are always created by OLM using the cluster-admin role. A service account associated with an OperatorGroup should never be granted privileges to write these resources.

If the specified service account does not have adequate permissions for an Operator that is being installed or upgraded, useful and contextual information is added to the status of the respective resource(s) so that it is easy for the cluster administrator to troubleshoot and resolve the issue.

Any Operator tied to this OperatorGroup is now confined to the permissions granted to the specified service account. If the Operator asks for permissions that are outside the scope of the service account, the install fails with appropriate errors.
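
The association itself is made through the spec.serviceAccountName field of the OperatorGroup. The following is a minimal sketch with placeholder names; a complete worked example follows in the next sections:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: scoped-operatorgroup
  namespace: scoped
spec:
  # Operators in this group are installed and run with the privileges of this service account
  serviceAccountName: scoped
  targetNamespaces:
  - scoped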

3.6.1.1. Installation scenarios

When determining whether an Operator can be installed or upgraded on a cluster, OLM considers the following scenarios:

  • A cluster administrator creates a new OperatorGroup and specifies a service account. All Operator(s) associated with this OperatorGroup are installed and run against the privileges granted to the service account.
  • A cluster administrator creates a new OperatorGroup and does not specify any service account. OpenShift Container Platform maintains backward compatibility, so the default behavior remains and Operator installs and upgrades are permitted.
  • For existing OperatorGroups that do not specify a service account, the default behavior remains and Operator installs and upgrades are permitted.
  • A cluster administrator updates an existing OperatorGroup and specifies a service account. OLM allows the existing Operator to continue to run with its current privileges. When such an existing Operator goes through an upgrade, it is reinstalled and run against the privileges granted to the service account, like any new Operator.
  • A service account specified by an OperatorGroup changes by adding or removing permissions, or the existing service account is swapped with a new one. When existing Operators go through an upgrade, they are reinstalled and run against the privileges granted to the updated service account, like any new Operator.
  • A cluster administrator removes the service account from an OperatorGroup. The default behavior remains and Operator installs and upgrades are permitted.

3.6.1.2. Installation workflow

When an OperatorGroup is tied to a service account and an Operator is installed or upgraded, OLM uses the following workflow:

  1. The given Subscription object is picked up by OLM.
  2. OLM fetches the OperatorGroup tied to this Subscription.
  3. OLM determines that the OperatorGroup has a service account specified.
  4. OLM creates a client scoped to the service account and uses the scoped client to install the Operator. This ensures that any permission requested by the Operator is always confined to that of the service account in the OperatorGroup.
  5. OLM creates a new service account with the set of permissions specified in the CSV and assigns it to the Operator. The Operator runs as the assigned service account.

3.6.2. Scoping Operator installations

To provide scoping rules to Operator installations and upgrades on OLM, associate a service account with an OperatorGroup.

The following example shows how a cluster administrator can confine a set of Operators to a designated namespace.

Procedure

  1. Create a new namespace:

    $ cat <<EOF | oc create -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: scoped
    EOF
  2. Allocate permissions that you want the Operator(s) to be confined to. This involves creating a new service account, relevant Role(s), and RoleBinding(s).

    $ cat <<EOF | oc create -f -
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: scoped
      namespace: scoped
    EOF

    The following example grants the service account permissions to do anything in the designated namespace for simplicity. In a production environment, you should create a more fine-grained set of permissions:

    $ cat <<EOF | oc create -f -
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: scoped
      namespace: scoped
    rules:
    - apiGroups: ["*"]
      resources: ["*"]
      verbs: ["*"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: scoped-bindings
      namespace: scoped
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: scoped
    subjects:
    - kind: ServiceAccount
      name: scoped
      namespace: scoped
    EOF
  3. Create an OperatorGroup in the designated namespace. This OperatorGroup targets the designated namespace to ensure that its tenancy is confined to it. In addition, OperatorGroups allow a user to specify a service account. Specify the ServiceAccount created in the previous step:

    $ cat <<EOF | oc create -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: scoped
      namespace: scoped
    spec:
      serviceAccountName: scoped
      targetNamespaces:
      - scoped
    EOF

    Any Operator installed in the designated namespace is tied to this OperatorGroup and therefore to the service account specified.

  4. Create a Subscription in the designated namespace to install an Operator:

    $ cat <<EOF | oc create -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: etcd
      namespace: scoped
    spec:
      channel: singlenamespace-alpha
      name: etcd
      source: <catalog_source_name> 1
      sourceNamespace: <catalog_source_namespace> 2
    EOF
    1
    Specify a CatalogSource that already exists in the designated namespace or one that is in the global catalog namespace.
    2
    Specify a CatalogSourceNamespace where the CatalogSource was created.

    Any Operator tied to this OperatorGroup is confined to the permissions granted to the specified service account. If the Operator requests permissions that are outside the scope of the service account, the installation fails with appropriate errors.
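
As a quick verification sketch, assuming the etcd example above resolved successfully, you can inspect the Subscription, the resulting ClusterServiceVersion, and the Operator pods in the designated namespace:

    $ oc get subscription etcd -n scoped

    $ oc get csv -n scoped

    $ oc get pods -n scoped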

3.6.2.1. Fine-grained permissions

OLM uses the service account specified in the OperatorGroup to create or update the following resources related to the Operator being installed:

  • ClusterServiceVersion
  • Subscription
  • Secret
  • ServiceAccount
  • Service
  • ClusterRole and ClusterRoleBinding
  • Role and RoleBinding

To confine Operators to a designated namespace, cluster administrators can start by granting the following permissions to the service account:

Note

The following role is a generic example and additional rules might be required based on the specific Operator.

kind: Role
rules:
- apiGroups: ["operators.coreos.com"]
  resources: ["subscriptions", "clusterserviceversions"]
  verbs: ["get", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "serviceaccounts"]
  verbs: ["get", "create", "update", "patch"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "rolebindings"]
  verbs: ["get", "create", "update", "patch"]
- apiGroups: ["apps"] 1
  resources: ["deployments"]
  verbs: ["list", "watch", "get", "create", "update", "patch", "delete"]
- apiGroups: [""] 2
  resources: ["pods"]
  verbs: ["list", "watch", "get", "create", "update", "patch", "delete"]
1 2
Add permissions to create other resources, such as the deployments and pods shown here.

In addition, if any Operator specifies a pull secret, the following permissions must also be added:

kind: ClusterRole 1
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
kind: Role
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "update", "patch"]
1
Required to get the secret from the OLM namespace.
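
These rules must then be bound to the service account specified in the OperatorGroup. The following is a hedged sketch that assumes the scoped service account and namespace from the previous section; the Role and RoleBinding names are placeholders:

    $ cat <<EOF | oc create -f -
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: <role_binding_name>
      namespace: scoped
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: <role_name>
    subjects:
    - kind: ServiceAccount
      name: scoped
      namespace: scoped
    EOF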

3.6.3. Troubleshooting permission failures

If an Operator installation fails due to lack of permissions, identify the errors using the following procedure.

Procedure

  1. Review the Subscription object. Its status has an object reference installPlanRef that points to the InstallPlan object that attempted to create the necessary [Cluster]Role[Binding](s) for the Operator:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: etcd
      namespace: scoped
    status:
      installPlanRef:
        apiVersion: operators.coreos.com/v1alpha1
        kind: InstallPlan
        name: install-4plp8
        namespace: scoped
        resourceVersion: "117359"
        uid: 2c1df80e-afea-11e9-bce3-5254009c9c23
  2. Check the status of the InstallPlan object for any errors:

    apiVersion: operators.coreos.com/v1alpha1
    kind: InstallPlan
    status:
      conditions:
      - lastTransitionTime: "2019-07-26T21:13:10Z"
        lastUpdateTime: "2019-07-26T21:13:10Z"
        message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io
          is forbidden: User "system:serviceaccount:scoped:scoped" cannot create resource
          "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope'
        reason: InstallComponentFailed
        status: "False"
        type: Installed
      phase: Failed

    The error message tells you:

    • The type of resource it failed to create, including the API group of the resource. In this case, it was clusterroles in the rbac.authorization.k8s.io group.
    • The name of the resource.
    • The type of error: is forbidden tells you that the user does not have enough permission to do the operation.
    • The name of the user who attempted to create or update the resource. In this case, it refers to the service account specified in the OperatorGroup.
    • The scope of the operation: cluster scope or not.

      The user can add the missing permission to the service account and then iterate; see the sketch following this procedure.

      Note

      OLM does not currently provide the complete list of errors on the first try, but this capability might be added in a future release.
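
For example, for the clusterroles error shown above, a hedged sketch of granting the missing permission to the scoped service account might look like the following. The ClusterRole and ClusterRoleBinding names are placeholders, and the rules should be narrowed to what the Operator actually requires:

    $ cat <<EOF | oc create -f -
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: <cluster_role_name>
    rules:
    - apiGroups: ["rbac.authorization.k8s.io"]
      resources: ["clusterroles", "clusterrolebindings"]
      verbs: ["get", "create", "update", "patch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: <cluster_role_binding_name>
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: <cluster_role_name>
    subjects:
    - kind: ServiceAccount
      name: scoped
      namespace: scoped
    EOF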

3.7. Managing custom catalogs

This guide describes how to work with custom catalogs for Operators packaged using either the Bundle Format or the legacy Package Manifest Format on Operator Lifecycle Manager (OLM) in OpenShift Container Platform.

3.7.1. Understanding Operator catalogs

An Operator catalog is a repository of metadata that Operator Lifecycle Manager (OLM) can query to discover and install Operators and their dependencies on a cluster. OLM always installs Operators from the latest version of a catalog. As of OpenShift Container Platform 4.6, Red Hat-provided catalogs are distributed using index images.

An index image, based on the Operator Bundle Format, is a containerized snapshot of a catalog. It is an immutable artifact that contains the database of pointers to a set of Operator manifest content. A catalog can reference an index image to source its content for OLM on the cluster.

Note

Starting in OpenShift Container Platform 4.6, index images provided by Red Hat replace the App Registry catalog images, based on the deprecated Package Manifest Format, that are distributed for previous versions of OpenShift Container Platform 4. While App Registry catalog images are not distributed by Red Hat for OpenShift Container Platform 4.6 and later, custom catalog images based on the Package Manifest Format are still supported.

The following catalogs are distributed by Red Hat:

Table 3.2. Red Hat-provided Operator catalogs

CatalogIndex imageDescription

redhat-operators

registry.redhat.io/redhat/redhat-operator-index:v4.6

Red Hat products packaged and shipped by Red Hat. Supported by Red Hat.

certified-operators

registry.redhat.io/redhat/certified-operator-index:v4.6

Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV.

redhat-marketplace

registry.redhat.io/redhat/redhat-marketplace-index:v4.6

Certified software that can be purchased from Red Hat Marketplace.

community-operators

registry.redhat.io/redhat/community-operator-index:latest

Software maintained by relevant representatives in the operator-framework/community-operators GitHub repository. No official support.

As catalogs are updated, the latest versions of Operators change, and older versions may be removed or altered. In addition, when OLM runs on an OpenShift Container Platform cluster in a restricted network environment, it is unable to access the catalogs directly from the Internet to pull the latest content.

As a cluster administrator, you can create your own custom index image, either based on a Red Hat-provided catalog or from scratch, which can be used to source the catalog content on the cluster. Creating and updating your own index image provides a method for customizing the set of Operators available on the cluster, while also avoiding the aforementioned restricted network environment issues.

Important

When creating custom catalog images, previous versions of OpenShift Container Platform 4 required using the oc adm catalog build command, which has been deprecated for several releases. With the availability of Red Hat-provided index images starting in OpenShift Container Platform 4.6, catalog builders should start switching to using the opm index command to manage index images before the oc adm catalog build command is removed in a future release.

3.7.2. Custom catalogs using the Bundle Format

3.7.2.1. Prerequisites

3.7.2.2. Creating an index image

You can create an index image using the opm CLI.

Prerequisites

  • opm version 1.12.3+
  • podman version 1.4.4+
  • A bundle image built and pushed to a registry that supports Docker v2-2

Procedure

  1. Start a new index:

    $ opm index add \
        --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \1
        --tag <registry>/<namespace>/<index_image_name>:<tag> \2
        [--binary-image <registry_base_image>] 3
    1
    Comma-separated list of bundle images to add to the index.
    2
    The image tag that you want the index image to have.
    3
    Optional: An alternative registry base image to use for serving the catalog.
  2. Push the index image to a registry.

    1. If required, authenticate with your target registry:

      $ podman login <registry>
    2. Push the index image:

      $ podman push <registry>/<namespace>/test-catalog:latest
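
For illustration, the following is a hedged end-to-end example that ties the placeholders from step 1 to the test-catalog:latest image pushed in step 2; the etcd-bundle image name is hypothetical:

    $ opm index add \
        --bundles <registry>/<namespace>/etcd-bundle:v0.9.4 \
        --tag <registry>/<namespace>/test-catalog:latest

    $ podman push <registry>/<namespace>/test-catalog:latest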

3.7.2.3. Creating a catalog from an index image

You can create an Operator catalog from an index image and apply it to an OpenShift Container Platform cluster for use with Operator Lifecycle Manager (OLM).

Prerequisites

  • An index image built and pushed to a registry.

Procedure

  1. Create a CatalogSource object that references your index image.

    1. Modify the following to your specifications and save it as a catalogsource.yaml file:

      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: my-operator-catalog
        namespace: openshift-marketplace
      spec:
        sourceType: grpc
        image: <mirror_registry>:<port>/<namespace>/redhat-operator-index:v4.6 1
        displayName: My Operator Catalog
        publisher: <publisher_name> 2
        updateStrategy:
          registryPoll: 3
            interval: 30m
      1
      Specify your index image.
      2
      Specify your name or an organization name publishing the catalog.
      3
      CatalogSources can automatically check for new versions to keep up to date.
    2. Use the file to create the CatalogSource object:

      $ oc create -f catalogsource.yaml
  2. Verify the following resources are created successfully.

    1. Check the pods:

      $ oc get pods -n openshift-marketplace

      Example output

      NAME                                    READY   STATUS    RESTARTS  AGE
      my-operator-catalog-6njx6               1/1     Running   0         28s
      marketplace-operator-d9f549946-96sgr    1/1     Running   0         26h

    2. Check the CatalogSource:

      $ oc get catalogsource -n openshift-marketplace

      Example output

      NAME                  DISPLAY               TYPE PUBLISHER  AGE
      my-operator-catalog   My Operator Catalog   grpc            5s

    3. Check the PackageManifest:

      $ oc get packagemanifest -n openshift-marketplace

      Example output

      NAME                          CATALOG               AGE
      jaeger-product                My Operator Catalog   93s

You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console.
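
Alternatively, you can install an Operator from the new catalog by creating a Subscription object from the CLI. The following is a hedged sketch for the jaeger-product package shown above; the stable channel name and the openshift-operators target namespace are assumptions that you should adjust for your environment:

    $ cat <<EOF | oc create -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: jaeger-product
      namespace: openshift-operators
    spec:
      channel: stable
      name: jaeger-product
      source: my-operator-catalog
      sourceNamespace: openshift-marketplace
    EOF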

3.7.2.4. Updating an index image

After configuring OperatorHub to use a CatalogSource that references a custom index image, cluster administrators can keep the available Operators on their cluster up to date by adding bundle images to the index image.

You can update an existing index image using the opm index add command.

Prerequisites

  • opm version 1.12.3+
  • podman version 1.4.4+
  • An index image built and pushed to a registry.
  • An existing CatalogSource referencing the index image.

Procedure

  1. Update the existing index by adding bundle images:

    $ opm index add \
        --bundles <registry>/<namespace>/<new_bundle_image>:<tag> \1
        --from-index <registry>/<namespace>/<existing_index_image>:<tag> \2
        --tag <registry>/<namespace>/<existing_index_image>:<tag> 3
    1
    A comma-separated list of additional bundle images to add to the index.
    2
    The existing index that was previously pushed.
    3
    The image tag that you want the updated index image to have.
  2. Push the updated index image:

    $ podman push <registry>/<namespace>/<existing_index_image>:<tag>
  3. After Operator Lifecycle Manager (OLM) automatically polls the index image referenced in the CatalogSource at its regular interval, verify that the new packages are successfully added:

    $ oc get packagemanifests -n openshift-marketplace

3.7.2.5. Pruning an index image

An index image, based on the Operator Bundle Format, is a containerized snapshot of an Operator catalog. You can prune an index of all but a specified list of packages, creating a copy of the source index containing only the Operators that you want.

Prerequisites

  • podman version 1.4.4+
  • grpcurl
  • opm version 1.12.3+
  • Access to a registry that supports Docker v2-2

Procedure

  1. Authenticate with your target registry:

    $ podman login <target_registry>
  2. Determine the list of packages you want to include in your pruned index.

    1. Run the source index image that you want to prune in a container. For example:

      $ podman run -p50051:50051 \
          -it registry.redhat.io/redhat/redhat-operator-index:v4.6

      Example output

      Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.6...
      Getting image source signatures
      Copying blob ae8a0c23f5b1 done
      ...
      INFO[0000] serving registry                              database=/database/index.db port=50051

    2. In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index:

      $ grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out
    3. Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example:

      Example snippets of packages list

      ...
      {
        "name": "advanced-cluster-management"
      }
      ...
      {
        "name": "jaeger-product"
      }
      ...
      {
        "name": "quay-operator"
      }
      ...

    4. In the terminal session where you executed the podman run command, press Ctrl and C to stop the container process.
  3. Run the following command to prune the source index of all but the specified packages:

    $ opm index prune \
        -f registry.redhat.io/redhat/redhat-operator-index:v4.6 \1
        -p advanced-cluster-management,jaeger-product,quay-operator \2
        -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.6 3
    1
    Index to prune.
    2
    Comma-separated list of packages to keep.
    3
    Custom tag for new index image being built.
  4. Run the following command to push the new index image to your target registry:

    $ podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.6

    where <namespace> is any existing namespace on the registry.
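
To confirm that the pruned index serves only the packages you kept, you can reuse the commands from step 2 against the new image, as a quick verification sketch:

    $ podman run -p50051:50051 \
        -it <target_registry>:<port>/<namespace>/redhat-operator-index:v4.6

    $ grpcurl -plaintext localhost:50051 api.Registry/ListPackages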

3.7.3. Custom catalogs using the Package Manifest Format

3.7.3.1. Building a Package Manifest Format catalog image

Cluster administrators can build a custom Operator catalog image based on the Package Manifest Format to be used by Operator Lifecycle Manager (OLM). The catalog image can be pushed to a container image registry that supports Docker v2-2. For a cluster on a restricted network, this registry can be a registry that the cluster has network access to, such as a mirror registry created during a restricted network cluster installation.

For this example, the procedure assumes use of a mirror registry that has access to both your network and the Internet.

Prerequisites

  • Workstation with unrestricted network access
  • oc version 4.3.5+
  • podman version 1.4.4+
  • Access to a mirror registry that supports Docker v2-2
  • If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI:

    $ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json
  • If you are working with private namespaces that your quay.io account has access to, you must set a Quay authentication token. Set the AUTH_TOKEN environment variable for use with the --auth-token flag by making a request against the login API using your quay.io credentials:

    $ AUTH_TOKEN=$(curl -sH "Content-Type: application/json" \
        -XPOST https://quay.io/cnr/api/v1/users/login -d '
        {
            "user": {
                "username": "'"<quay_username>"'",
                "password": "'"<quay_password>"'"
            }
        }' | jq -r '.token')

Procedure

  1. On the workstation with unrestricted network access, authenticate with the target mirror registry:

    $ podman login <registry_host_name>
  2. Authenticate with registry.redhat.io so that the base image can be pulled during the build:

    $ podman login registry.redhat.io
  3. Build a catalog image based on the redhat-operators catalog from Quay.io, tagging and pushing it to your mirror registry:

    $ oc adm catalog build \
        --appregistry-org redhat-operators \1
        --from=registry.redhat.io/openshift4/ose-operator-registry:v4.6 \2
        --filter-by-os="linux/amd64" \3
        --to=<registry_host_name>:<port>/olm/redhat-operators:v1 \4
        [-a ${REG_CREDS}] \5
        [--insecure] \6
        [--auth-token "${AUTH_TOKEN}"] 7
    1
    Organization (namespace) to pull from an App Registry instance.
    2
    Set --from to the Operator Registry base image using the tag that matches the target OpenShift Container Platform cluster major and minor version.
    3
    Set --filter-by-os to the operating system and architecture to use for the base image, which must match the target OpenShift Container Platform cluster. Valid values are linux/amd64, linux/ppc64le, and linux/s390x.
    4
    Name your catalog image and include a tag, for example, v1.
    5
    Optional: If required, specify the location of your registry credentials file.
    6
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    7
    Optional: If other application registry catalogs are used that are not public, specify a Quay authentication token.

    Example output

    INFO[0013] loading Bundles                               dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605
    ...
    Pushed sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 to example_registry:5000/olm/redhat-operators:v1

    Sometimes invalid manifests are accidentally introduced into Red Hat’s catalogs; when this happens, you might see some errors:

    Example output with errors

    ...
    INFO[0014] directory                                     dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605 file=4.2 load=package
    W1114 19:42:37.876180   34665 builder.go:141] error building database: error loading package into db: fuse-camel-k-operator.v7.5.0 specifies replacement that couldn't be found
    Uploading ... 244.9kB/s

    These errors are usually non-fatal, and if the Operator package mentioned does not contain an Operator you plan to install or a dependency of one, then they can be ignored.

3.7.3.2. Mirroring a Package Manifest Format catalog image

Cluster administrators can mirror a custom Operator catalog image based on the Package Manifest Format into a registry and use a CatalogSource to load the content onto their cluster. For this example, the procedure uses a custom redhat-operators catalog image previously built and pushed to a supported registry.

Prerequisites

  • Workstation with unrestricted network access
  • A custom Operator catalog image based on the Package Manifest Format pushed to a supported registry
  • oc version 4.3.5+
  • podman version 1.4.4+
  • Access to a mirror registry that supports Docker v2-2
  • If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI:

    $ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json

Procedure

  1. The oc adm catalog mirror command extracts the contents of your custom Operator catalog image to generate the manifests required for mirroring. You can choose to either:

    • Allow the default behavior of the command to automatically mirror all of the image content to your mirror registry after generating manifests, or
    • Add the --manifests-only flag to only generate the manifests required for mirroring, but do not actually mirror the image content to a registry yet. This can be useful for reviewing what will be mirrored, and it allows you to make any changes to the mapping list if you only require a subset of the content. You can then use that file with the oc image mirror command to mirror the modified list of images in a later step.

    On your workstation with unrestricted network access, run the following command:

    $ oc adm catalog mirror \
        <registry_host_name>:<port>/olm/redhat-operators:v1 \1
        <registry_host_name>:<port> \
        [-a ${REG_CREDS}] \2
        [--insecure] \3
        [--filter-by-os="<os>/<arch>"] \4
        [--manifests-only] 5
    1
    Specify your Operator catalog image.
    2
    Optional: If required, specify the location of your registry credentials file.
    3
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    4
    Optional: Because the catalog might reference images that support multiple architectures and operating systems, you can filter by architecture and operating system to mirror only the images that match. Valid values are linux/amd64, linux/ppc64le, and linux/s390x.
    5
    Optional: Only generate the manifests required for mirroring and do not actually mirror the image content to a registry.

    Example output

    using database path mapping: /:/tmp/190214037
    wrote database to /tmp/190214037
    using database at: /tmp/190214037/bundles.db 1
    ...

    1
    Temporary database generated by the command.

    After running the command, a <image_name>-manifests/ directory is created in the current directory, containing the following files:

    • The imageContentSourcePolicy.yaml file defines an ImageContentSourcePolicy object that can configure nodes to translate between the image references stored in Operator manifests and the mirrored registry.
    • The mapping.txt file contains all of the source images and where to map them in the target registry. This file is compatible with the oc image mirror command and can be used to further customize the mirroring configuration.
  2. If you used the --manifests-only flag in the previous step and want to mirror only a subset of the content:

    1. Modify the list of images in your mapping.txt file to your specifications. If you are unsure of the exact names and versions of the subset of images you want to mirror, use the following steps to find them:

      1. Run the sqlite3 tool against the temporary database that was generated by the oc adm catalog mirror command to retrieve a list of images matching a general search query. The output helps inform how you will later edit your mapping.txt file.

        For example, to retrieve a list of images that are similar to the string clusterlogging.4.3:

        $ echo "select * from related_image \
            where operatorbundle_name like 'clusterlogging.4.3%';" \
            | sqlite3 -line /tmp/190214037/bundles.db 1
        1
        Refer to the previous output of the oc adm catalog mirror command to find the path of the database file.

        Example output

        image = registry.redhat.io/openshift4/ose-logging-kibana5@sha256:aa4a8b2a00836d0e28aa6497ad90a3c116f135f382d8211e3c55f34fb36dfe61
        operatorbundle_name = clusterlogging.4.3.33-202008111029.p0
        
        image = registry.redhat.io/openshift4/ose-oauth-proxy@sha256:6b4db07f6e6c962fc96473d86c44532c93b146bbefe311d0c348117bf759c506
        operatorbundle_name = clusterlogging.4.3.33-202008111029.p0
        ...

      2. Use the results from the previous step to edit the mapping.txt file to only include the subset of images you want to mirror.

        For example, you can use the image values from the previous example output to find that the following matching lines exist in your mapping.txt file:

        Matching image mappings in mapping.txt

        registry.redhat.io/openshift4/ose-logging-kibana5@sha256:aa4a8b2a00836d0e28aa6497ad90a3c116f135f382d8211e3c55f34fb36dfe61=<registry_host_name>:<port>/openshift4-ose-logging-kibana5:a767c8f0
        registry.redhat.io/openshift4/ose-oauth-proxy@sha256:6b4db07f6e6c962fc96473d86c44532c93b146bbefe311d0c348117bf759c506=<registry_host_name>:<port>/openshift4-ose-oauth-proxy:3754ea2b

        In this example, if you only want to mirror these images, you would then remove all other entries in the mapping.txt file and leave only the above two lines.

    2. Still on your workstation with unrestricted network access, use your modified mapping.txt file to mirror the images to your registry using the oc image mirror command:

      $ oc image mirror \
          [-a ${REG_CREDS}] \
          -f ./redhat-operators-manifests/mapping.txt
  3. Apply the ImageContentSourcePolicy:

    $ oc apply -f ./redhat-operators-manifests/imageContentSourcePolicy.yaml

You can now create a CatalogSource to reference your mirrored content.
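
For example, a hedged CatalogSource sketch that references the mirrored catalog image built earlier might look like the following; adjust the names and namespace for your environment:

    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: my-operator-catalog
      namespace: openshift-marketplace
    spec:
      sourceType: grpc
      image: <registry_host_name>:<port>/olm/redhat-operators:v1
      displayName: My Operator Catalog
      publisher: <publisher_name>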

3.7.3.3. Updating a Package Manifest Format catalog image

After a cluster administrator has configured OperatorHub to use custom Operator catalog images, administrators can keep their OpenShift Container Platform cluster up to date with the latest Operators by capturing updates made to Red Hat’s App Registry catalogs. This is done by building and pushing a new Operator catalog image, then replacing the existing CatalogSource’s spec.image parameter with the new image.

For this example, the procedure assumes a custom redhat-operators catalog image is already configured for use with OperatorHub.

Prerequisites

  • Workstation with unrestricted network access
  • oc version 4.3.5+
  • podman version 1.4.4+
  • Access to a mirror registry that supports Docker v2-2
  • OperatorHub configured to use custom catalog images
  • If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI:

    $ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json
  • If you are working with private namespaces that your quay.io account has access to, you must set a Quay authentication token. Set the AUTH_TOKEN environment variable for use with the --auth-token flag by making a request against the login API using your quay.io credentials:

    $ AUTH_TOKEN=$(curl -sH "Content-Type: application/json" \
        -XPOST https://quay.io/cnr/api/v1/users/login -d '
        {
            "user": {
                "username": "'"<quay_username>"'",
                "password": "'"<quay_password>"'"
            }
        }' | jq -r '.token')

Procedure

  1. On the workstation with unrestricted network access, authenticate with the target mirror registry:

    $ podman login <registry_host_name>
  2. Authenticate with registry.redhat.io so that the base image can be pulled during the build:

    $ podman login registry.redhat.io
  3. Build a new catalog image based on the redhat-operators catalog from Quay.io, tagging and pushing it to your mirror registry:

    $ oc adm catalog build \
        --appregistry-org redhat-operators \1
        --from=registry.redhat.io/openshift4/ose-operator-registry:v4.6 \2
        --filter-by-os="linux/amd64" \3
        --to=<registry_host_name>:<port>/olm/redhat-operators:v2 \4
        [-a ${REG_CREDS}] \5
        [--insecure] \6
        [--auth-token "${AUTH_TOKEN}"] 7
    1
    Organization (namespace) to pull from an App Registry instance.
    2
    Set --from to the Operator Registry base image using the tag that matches the target OpenShift Container Platform cluster major and minor version.
    3
    Set --filter-by-os to the operating system and architecture to use for the base image, which must match the target OpenShift Container Platform cluster. Valid values are linux/amd64, linux/ppc64le, and linux/s390x.
    4
    Name your catalog image and include a tag, for example, v2 because it is the updated catalog.
    5
    Optional: If required, specify the location of your registry credentials file.
    6
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    7
    Optional: If other application registry catalogs are used that are not public, specify a Quay authentication token.

    Example output

    INFO[0013] loading Bundles                               dir=/var/folders/st/9cskxqs53ll3wdn434vw4cd80000gn/T/300666084/manifests-829192605
    ...
    Pushed sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3 to example_registry:5000/olm/redhat-operators:v2

  4. Mirror the contents of your catalog to your target registry. The following oc adm catalog mirror command extracts the contents of your custom Operator catalog image to generate the manifests required for mirroring and mirrors the images to your registry:

    $ oc adm catalog mirror \
        <registry_host_name>:<port>/olm/redhat-operators:v2 \1
        <registry_host_name>:<port> \
        [-a ${REG_CREDS}] \2
        [--insecure] \3
        [--filter-by-os="<os>/<arch>"] 4
    1
    Specify your new Operator catalog image.
    2
    Optional: If required, specify the location of your registry credentials file.
    3
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    4
    Optional: Because the catalog might reference images that support multiple architectures and operating systems, you can filter by architecture and operating system to mirror only the images that match. Valid values are linux/amd64, linux/ppc64le, and linux/s390x.
  5. Apply the newly generated manifests:

    $ oc apply -f ./redhat-operators-manifests
    Important

    It is possible that you do not need to apply the imageContentSourcePolicy.yaml manifest. Complete a diff of the files to determine if changes are necessary.

  6. Update your CatalogSource object that references your catalog image.

    1. If you have your original catalogsource.yaml file for this CatalogSource:

      1. Edit your catalogsource.yaml file to reference your new catalog image in the spec.image field:

        apiVersion: operators.coreos.com/v1alpha1
        kind: CatalogSource
        metadata:
          name: my-operator-catalog
          namespace: openshift-marketplace
        spec:
          sourceType: grpc
          image: <registry_host_name>:<port>/olm/redhat-operators:v2 1
          displayName: My Operator Catalog
          publisher: <publisher_name>
        1
        Specify your new Operator catalog image.
      2. Use the updated file to replace the CatalogSource object:

        $ oc replace -f catalogsource.yaml
    2. Alternatively, edit the CatalogSource using the following command and reference your new catalog image in the spec.image parameter:

      $ oc edit catalogsource <catalog_source_name> -n openshift-marketplace

Updated Operators should now be available from the OperatorHub page on your OpenShift Container Platform cluster.

3.7.3.4. Testing a Package Manifest Format catalog image

You can validate Operator catalog image content by running it as a container and querying its gRPC API. To further test the image, you can then resolve a Subscription in Operator Lifecycle Manager (OLM) by referencing the image in a CatalogSource. For this example, the procedure uses a custom redhat-operators catalog image previously built and pushed to a supported registry.

Prerequisites

  • A custom Package Manifest Format catalog image pushed to a supported registry
  • podman version 1.4.4+
  • oc version 4.3.5+
  • Access to a mirror registry that supports Docker v2-2
  • grpcurl

Procedure

  1. Pull the Operator catalog image:

    $ podman pull <registry_host_name>:<port>/olm/redhat-operators:v1
  2. Run the image:

    $ podman run -p 50051:50051 \
        -it <registry_host_name>:<port>/olm/redhat-operators:v1
  3. Query the running image for available packages using grpcurl:

    $ grpcurl -plaintext localhost:50051 api.Registry/ListPackages

    Example output

    {
      "name": "3scale-operator"
    }
    {
      "name": "amq-broker"
    }
    {
      "name": "amq-online"
    }

  4. Get the latest Operator bundle in a channel:

    $ grpcurl -plaintext -d '{"pkgName":"kiali-ossm","channelName":"stable"}' localhost:50051 api.Registry/GetBundleForChannel

    Example output

    {
      "csvName": "kiali-operator.v1.0.7",
      "packageName": "kiali-ossm",
      "channelName": "stable",
    ...

  5. Get the digest of the image:

    $ podman inspect \
        --format='{{index .RepoDigests 0}}' \
        <registry_host_name>:<port>/olm/redhat-operators:v1

    Example output

    example_registry:5000/olm/redhat-operators@sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3

  6. Assuming an OperatorGroup exists in namespace my-ns that supports your Operator and its dependencies, create a CatalogSource object using the image digest. For example:

    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: custom-redhat-operators
      namespace: my-ns
    spec:
      sourceType: grpc
      image: example_registry:5000/olm/redhat-operators@sha256:f73d42950021f9240389f99ddc5b0c7f1b533c054ba344654ff1edaf6bf827e3
      displayName: Red Hat Operators
  7. Create a Subscription that resolves the latest available servicemeshoperator and its dependencies from your catalog image:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: servicemeshoperator
      namespace: my-ns
    spec:
      source: custom-redhat-operators
      sourceNamespace: my-ns
      name: servicemeshoperator
      channel: "1.0"

3.8. Using Operator Lifecycle Manager on restricted networks

For OpenShift Container Platform clusters that are installed on restricted networks, also known as disconnected clusters, Operator Lifecycle Manager (OLM) by default cannot access the Red Hat-provided OperatorHub sources hosted remotely on Quay.io because those remote sources require full Internet connectivity.

However, as a cluster administrator you can still enable your cluster to use OLM in a restricted network if you have a workstation that has full Internet access. The workstation is used to prepare local mirrors of the remote OperatorHub sources, and requires full Internet access to pull the remote content.

This guide describes the following process that is required to enable OLM in restricted networks:

  • Disable the default remote OperatorHub sources for OLM.
  • Use a workstation with full Internet access to create local mirrors of the OperatorHub content.
  • Configure OLM to install and manage Operators from the local sources instead of the default remote sources.

After enabling OLM in a restricted network, you can continue to use your unrestricted workstation to keep your local OperatorHub sources updated as newer versions of Operators are released.

Important

While OLM can manage Operators from local sources, the ability for a given Operator to run successfully in a restricted network still depends on the Operator itself. The Operator must:

  • List any related images, or other container images that the Operator might require to perform their functions, in the relatedImages parameter of its ClusterServiceVersion (CSV) object.
  • Reference all specified images by a digest (SHA) and not by a tag.

See the following Red Hat Knowledgebase Article for a list of Red Hat Operators that support running in disconnected mode:

https://access.redhat.com/articles/4740011

3.8.1. Understanding Operator catalogs

An Operator catalog is a repository of metadata that Operator Lifecycle Manager (OLM) can query to discover and install Operators and their dependencies on a cluster. OLM always installs Operators from the latest version of a catalog. As of OpenShift Container Platform 4.6, Red Hat-provided catalogs are distributed using index images.

An index image, based on the Operator Bundle Format, is a containerized snapshot of a catalog. It is an immutable artifact that contains the database of pointers to a set of Operator manifest content. A catalog can reference an index image to source its content for OLM on the cluster.

Note

Starting in OpenShift Container Platform 4.6, index images provided by Red Hat replace the App Registry catalog images, based on the deprecated Package Manifest Format, that are distributed for previous versions of OpenShift Container Platform 4. While App Registry catalog images are not distributed by Red Hat for OpenShift Container Platform 4.6 and later, custom catalog images based on the Package Manifest Format are still supported.

The following catalogs are distributed by Red Hat:

Table 3.3. Red Hat-provided Operator catalogs

CatalogIndex imageDescription

redhat-operators

registry.redhat.io/redhat/redhat-operator-index:v4.6

Red Hat products packaged and shipped by Red Hat. Supported by Red Hat.

certified-operators

registry.redhat.io/redhat/certified-operator-index:v4.6

Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV.

redhat-marketplace

registry.redhat.io/redhat/redhat-marketplace-index:v4.6

Certified software that can be purchased from Red Hat Marketplace.

community-operators

registry.redhat.io/redhat/community-operator-index:latest

Software maintained by relevant representatives in the operator-framework/community-operators GitHub repository. No official support.

As catalogs are updated, the latest versions of Operators change, and older versions may be removed or altered. In addition, when OLM runs on an OpenShift Container Platform cluster in a restricted network environment, it is unable to access the catalogs directly from the Internet to pull the latest content.

As a cluster administrator, you can create your own custom index image, either based on a Red Hat-provided catalog or from scratch, which can be used to source the catalog content on the cluster. Creating and updating your own index image provides a method for customizing the set of Operators available on the cluster, while also avoiding the aforementioned restricted network environment issues.

Important

When creating custom catalog images, previous versions of OpenShift Container Platform 4 required using the oc adm catalog build command, which has been deprecated for several releases. With the availability of Red Hat-provided index images starting in OpenShift Container Platform 4.6, catalog builders should start switching to using the opm index command to manage index images before the oc adm catalog build command is removed in a future release.

3.8.2. Prerequisites

  • If you want to prune the default catalog and selectively mirror only a subset of Operators, install the opm CLI.

3.8.3. Disabling the default OperatorHub sources

Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. Before configuring OperatorHub to instead use local catalog sources in a restricted network environment, you must disable the default catalogs.

Procedure

  • Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub spec:

    $ oc patch OperatorHub cluster --type json \
        -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'

3.8.4. Pruning an index image

An index image, based on the Operator Bundle Format, is a containerized snapshot of an Operator catalog. You can prune an index of all but a specified list of packages, creating a copy of the source index containing only the Operators that you want.

When configuring Operator Lifecycle Manager (OLM) to use mirrored content on restricted network OpenShift Container Platform clusters, use this pruning method if you want to only mirror a subset of Operators from the default catalogs.

For the steps in this procedure, the target registry is an existing mirror registry that is accessible by both your cluster and a workstation with unrestricted network access. This example also shows pruning the index image for the default redhat-operators catalog, but the process is the same for all index images.

Prerequisites

  • Workstation with unrestricted network access
  • podman version 1.4.4+
  • grpcurl
  • opm version 1.12.3+
  • Access to a registry that supports Docker v2-2

Procedure

  1. Authenticate with registry.redhat.io:

    $ podman login registry.redhat.io
  2. Authenticate with your target registry:

    $ podman login <target_registry>
  3. Determine the list of packages you want to include in your pruned index.

    1. Run the source index image that you want to prune in a container. For example:

      $ podman run -p50051:50051 \
          -it registry.redhat.io/redhat/redhat-operator-index:v4.6

      Example output

      Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.6...
      Getting image source signatures
      Copying blob ae8a0c23f5b1 done
      ...
      INFO[0000] serving registry                              database=/database/index.db port=50051

    2. In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index:

      $ grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out
    3. Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example:

      Example snippets of packages list

      ...
      {
        "name": "advanced-cluster-management"
      }
      ...
      {
        "name": "jaeger-product"
      }
      ...
      {
        "name": "quay-operator"
      }
      ...

    4. In the terminal session where you executed the podman run command, press Ctrl and C to stop the container process.
  4. Run the following command to prune the source index of all but the specified packages:

    $ opm index prune \
        -f registry.redhat.io/redhat/redhat-operator-index:v4.6 \1
        -p advanced-cluster-management,jaeger-product,quay-operator \2
        -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.6 3
    1
    Index to prune.
    2
    Comma-separated list of packages to keep.
    3
    Custom tag for new index image being built.
  5. Run the following command to push the new index image to your target registry:

    $ podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.6

    where <namespace> is any existing namespace on the registry. For example, you might create an olm-mirror namespace to push all mirrored content to.

3.8.5. Mirroring an Operator catalog

You can mirror the Operator content of a Red Hat-provided catalog, or a custom catalog, into a container image registry using the oc adm catalog mirror command. The target registry must support Docker v2-2. For a cluster on a restricted network, this registry can be a registry that the cluster has network access to, such as a mirror registry created during a restricted network cluster installation.

You must also mirror the Red Hat-provided index image, or push your own custom-built index image, to the target registry by using the oc image mirror command. You can then use the mirrored index image to create a CatalogSource that allows Operator Lifecycle Manager (OLM) to load the mirrored catalog onto your OpenShift Container Platform cluster.

For the steps in this procedure, the target registry is an existing mirror registry that is accessible by both your cluster and a workstation with unrestricted network access. This example also shows mirroring the default redhat-operators catalog, but the process is the same for all catalogs.

Prerequisites

  • Workstation with unrestricted network access
  • podman version 1.4.4+
  • Access to a mirror registry that supports Docker v2-2
  • If you are working with private registries, set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI:

    $ REG_CREDS=${XDG_RUNTIME_DIR}/containers/auth.json

Procedure

  1. On your workstation with unrestricted network access, use the podman login command to authenticate with your target mirror registry:

    $ podman login <mirror_registry>
  2. Authenticate with registry.redhat.io:

    $ podman login registry.redhat.io
  3. The oc adm catalog mirror command extracts the contents of an index image to generate the manifests required for mirroring. You can choose either of the following:

    • Allow the default behavior of the command to automatically mirror all of the image content from the index image to your mirror registry after generating manifests.
    • Add the --manifests-only flag to only generate the manifests required for mirroring, but do not actually mirror the image content to the registry yet. This can be useful for reviewing what will be mirrored, and it allows you to make any changes to the mapping list if you only require a subset of packages. You can then use that file with the oc image mirror command to mirror the modified list of images in a later step.

      Note

      The --manifests-only flag is intended for advanced selective mirroring of content from the catalog. The opm index prune command, if you used it previously to prune the index image, is suitable for most use cases.

    On your workstation with unrestricted network access, run the following command:

    $ oc adm catalog mirror \
        <index_image> \1
        <mirror_registry>:<port> \2
        [-a ${REG_CREDS}] \3
        [--insecure] \4
        [--filter-by-os="<os>/<arch>"] \5
        [--manifests-only] 6
    1
    Specify the index image for the catalog you want to mirror. For example, this might be a pruned index image that you created previously, or one of the source index images for the default catalogs, such as registry.redhat.io/redhat/redhat-operator-index:v4.6.
    2
    Specify the target registry to mirror the Operator content to.
    3
    Optional: If required, specify the location of your registry credentials file.
    4
    Optional: If you do not want to configure trust for the target registry, add the --insecure flag.
    5
    Optional: Because the catalog might reference images that support multiple architectures and operating systems, you can filter by architecture and operating system to mirror only the images that match. Valid values are linux/amd64, linux/ppc64le, and linux/s390x.
    6
    Optional: Only generate the manifests required for mirroring and do not actually mirror the image content to a registry.

    Example output

    src image has index label for database path: /database/index.db
    using database path mapping: /database/index.db:/tmp/153048078
    wrote database to /tmp/153048078 1
    ...
    wrote mirroring manifests to redhat-operator-index-manifests

    1
    Directory for the temporary index.db database generated by the command.

    After running the command, a <image_name>-manifests/ directory is created in the current directory, containing the following files:

    • The imageContentSourcePolicy.yaml file defines an ImageContentSourcePolicy object that can configure nodes to translate between the image references stored in Operator manifests and the mirrored registry.
    • The mapping.txt file contains all of the source images and where to map them in the target registry. This file is compatible with the oc image mirror command and can be used to further customize the mirroring configuration.
  4. If you used the --manifests-only flag in the previous step and want to further trim the subset of packages to be mirrored:

    1. Modify the list of images in your mapping.txt file to your specifications. If you are unsure of the exact names and versions of the subset of images you want to mirror, use the following steps to find them:

      1. Run the sqlite3 tool against the temporary database that was generated by the oc adm catalog mirror command to retrieve a list of images matching a general search query. The output helps inform how you will later edit your mapping.txt file.

        For example, to retrieve a list of images that are similar to the string jaeger:

        $ echo "select * from related_image \
            where operatorbundle_name like '%jaeger%';" \
            | sqlite3 -line /tmp/153048078/index.db 1
        1
        Refer to the previous output of the oc adm catalog mirror command to find the path of the database file.

        Example output

        ...
        image = registry.redhat.io/distributed-tracing/jaeger-all-in-one-rhel7@sha256:41f769c2c32f3f050aa42d86f084b739914ff9ba2f0aed2d9b0b69357b48459d
        operatorbundle_name = jaeger-operator.v1.17.6
        
        image = registry.redhat.io/distributed-tracing/jaeger-es-index-cleaner-rhel7@sha256:c64ac461d96523516a199bd132ad4d7148317e503a735028f0d8f7ba063a61cb
        operatorbundle_name = jaeger-operator.v1.17.6
        
        image = registry.redhat.io/distributed-tracing/jaeger-rhel7-operator:1.13.2
        operatorbundle_name = jaeger-operator.v1.13.2-1

      2. Use the results from the previous step to help you edit the mapping.txt file to only include the subset of images you want to mirror.

        For example, you can use the image values from the previous example output to find that the following matching lines exist in your mapping.txt file:

        Matching image mappings in mapping.txt

        ...
        registry.redhat.io/distributed-tracing/jaeger-all-in-one-rhel7@sha256:41f769c2c32f3f050aa42d86f084b739914ff9ba2f0aed2d9b0b69357b48459d=quay.io/adellape/distributed-tracing-jaeger-all-in-one-rhel7:5cf7a033
        ...
        registry.redhat.io/distributed-tracing/jaeger-es-index-cleaner-rhel7@sha256:c64ac461d96523516a199bd132ad4d7148317e503a735028f0d8f7ba063a61cb=quay.io/adellape/distributed-tracing-jaeger-es-index-cleaner-rhel7:ecfd2ca7
        ...
        registry.redhat.io/distributed-tracing/jaeger-rhel7-operator:1.13.2=quay.io/adellape/distributed-tracing-jaeger-rhel7-operator:1.13.2
        ...

        In this example, if you only want to mirror these images, you would then remove all other entries in the mapping.txt file and leave only the above matching image mapping lines.

    2. Still on your workstation with unrestricted network access, use your modified mapping.txt file to mirror the images to your registry using the oc image mirror command:

      $ oc image mirror \
          [-a ${REG_CREDS}] \
          -f ./redhat-operator-index-manifests/mapping.txt
  5. Apply the ImageContentSourcePolicy:

    $ oc apply -f ./redhat-operator-index-manifests/imageContentSourcePolicy.yaml
  6. If you are not using a custom, pruned version of an index image, push the Red Hat-provided index image to your registry:

    $ oc image mirror \
        [-a ${REG_CREDS}] \
        registry.redhat.io/redhat/redhat-operator-index:v4.6 \1
        <mirror_registry>:<port>/<namespace>/redhat-operator-index:v4.6 2
    1
    Specify the index image for the catalog whose content you mirrored in the previous step.
    2
    Specify where to mirror the index image.

You can now create a CatalogSource to reference your mirrored index image and Operator content.

3.8.6. Creating a catalog from an index image

You can create an Operator catalog from an index image and apply it to an OpenShift Container Platform cluster for use with Operator Lifecycle Manager (OLM).

Prerequisites

  • An index image built and pushed to a registry.

Procedure

  1. Create a CatalogSource object that references your index image.

    1. Modify the following to your specifications and save it as a catalogsource.yaml file:

      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: my-operator-catalog
        namespace: openshift-marketplace
      spec:
        sourceType: grpc
        image: <mirror_registry>:<port>/<namespace>/redhat-operator-index:v4.6 1
        displayName: My Operator Catalog
        publisher: <publisher_name> 2
        updateStrategy:
          registryPoll: 3
            interval: 30m
      1
      Specify your index image.
      2
      Specify your name or an organization name publishing the catalog.
      3
      CatalogSources can automatically check for new versions to keep up to date.
    2. Use the file to create the CatalogSource object:

      $ oc create -f catalogsource.yaml
  2. Verify the following resources are created successfully.

    1. Check the pods:

      $ oc get pods -n openshift-marketplace

      Example output

      NAME                                    READY   STATUS    RESTARTS  AGE
      my-operator-catalog-6njx6               1/1     Running   0         28s
      marketplace-operator-d9f549946-96sgr    1/1     Running   0         26h

    2. Check the CatalogSource:

      $ oc get catalogsource -n openshift-marketplace

      Example output

      NAME                  DISPLAY               TYPE PUBLISHER  AGE
      my-operator-catalog   My Operator Catalog   grpc            5s

    3. Check the PackageManifest:

      $ oc get packagemanifest -n openshift-marketplace

      Example output

      NAME                          CATALOG               AGE
      jaeger-product                My Operator Catalog   93s

You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console.

3.8.7. Updating an index image

After configuring OperatorHub to use a CatalogSource that references a custom index image, cluster administrators can keep the available Operators on their cluster up to date by adding bundle images to the index image.

You can update an existing index image using the opm index add command.

Prerequisites

  • opm version 1.12.3+
  • podman version 1.4.4+
  • An index image built and pushed to a registry.
  • An existing CatalogSource referencing the index image.

Procedure

  1. Update the existing index by adding bundle images:

    $ opm index add \
        --bundles <registry>/<namespace>/<new_bundle_image>:<tag> \1
        --from-index <registry>/<namespace>/<existing_index_image>:<tag> \2
        --tag <registry>/<namespace>/<existing_index_image>:<tag> 3
    1
    A comma-separated list of additional bundle images to add to the index.
    2
    The existing index that was previously pushed.
    3
    The image tag that you want the updated index image to have.
  2. Push the updated index image:

    $ podman push <registry>/<namespace>/<existing_index_image>:<tag>
  3. After Operator Lifecycle Manager (OLM) automatically polls the index image referenced in the CatalogSource at its regular interval, verify that the new packages are successfully added:

    $ oc get packagemanifests -n openshift-marketplace

Chapter 4. Developing Operators

4.1. Getting started with the Operator SDK

This guide outlines the basics of the Operator SDK and walks Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) through an example of building a simple Go-based Memcached Operator and managing its lifecycle from installation to upgrade.

This is accomplished using two centerpieces of the Operator Framework: Operator SDK (the operator-sdk CLI tool and controller-runtime library API) and Operator Lifecycle Manager (OLM).

Note

OpenShift Container Platform 4.6 supports Operator SDK v0.19.4.

4.1.1. Architecture of the Operator SDK

The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. Operators take advantage of Kubernetes' extensibility to deliver the automation advantages of cloud services like provisioning, scaling, and backup and restore, while being able to run anywhere that Kubernetes can run.

Operators make it easy to manage complex, stateful applications on top of Kubernetes. However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate, and a lack of modularity, which leads to duplication.

The Operator SDK is a framework designed to make writing Operators easier by providing:

  • High-level APIs and abstractions to write the operational logic more intuitively
  • Tools for scaffolding and code generation to quickly bootstrap a new project
  • Extensions to cover common Operator use cases

4.1.1.1. Workflow

The Operator SDK provides the following workflow to develop a new Operator:

  1. Create a new Operator project using the Operator SDK command line interface (CLI).
  2. Define new resource APIs by adding Custom Resource Definitions (CRDs).
  3. Specify resources to watch using the Operator SDK API.
  4. Define the Operator reconciling logic in a designated handler and use the Operator SDK API to interact with resources.
  5. Use the Operator SDK CLI to build and generate the Operator deployment manifests.

Figure 4.1. Operator SDK workflow


At a high level, an Operator using the Operator SDK processes events for watched resources in an Operator author-defined handler and takes actions to reconcile the state of the application.

4.1.1.2. Manager file

The main program for the Operator is the manager file at cmd/manager/main.go. The manager automatically registers the scheme for all Custom Resources (CRs) defined under pkg/apis/ and runs all controllers under pkg/controller/.

The manager can restrict the namespace that all controllers watch for resources:

mgr, err := manager.New(cfg, manager.Options{Namespace: namespace})

By default, this is the namespace that the Operator is running in. To watch all namespaces, you can leave the namespace option empty:

mgr, err := manager.New(cfg, manager.Options{Namespace: ""})

4.1.1.3. Prometheus Operator support

Prometheus is an open-source systems monitoring and alerting toolkit. The Prometheus Operator creates, configures, and manages Prometheus clusters running on Kubernetes-based clusters, such as OpenShift Container Platform.

Helper functions exist in the Operator SDK by default to automatically set up metrics in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed.

4.1.2. Installing the Operator SDK CLI

The Operator SDK has a CLI tool that assists developers in creating, building, and deploying a new Operator project. You can install the SDK CLI on your workstation so you are prepared to start authoring your own Operators.

Note

This guide uses minikube v0.25.0+ as the local Kubernetes cluster and Quay.io for the public registry.

4.1.2.1. Installing from GitHub release

You can download and install a pre-built release binary of the SDK CLI from the project on GitHub.

Prerequisites

  • Go v1.13+
  • docker v17.03+, podman v1.2.0+, or buildah v1.7+
  • OpenShift CLI (oc) v4.6+ installed
  • Access to a cluster based on Kubernetes v1.12.0+
  • Access to a container registry

Procedure

  1. Set the release version variable:

    $ RELEASE_VERSION=v0.19.4
  2. Download the release binary.

    • For Linux:

      $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    • For macOS:

      $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
  3. Verify the downloaded release binary.

    1. Download the provided ASC file.

      • For Linux:

        $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
      • For macOS:

        $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
    2. Place the binary and corresponding ASC file into the same directory and run the following command to verify the binary:

      • For Linux:

        $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
      • For macOS:

        $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc

      If you do not have the public key of the maintainer on your workstation, you will get the following error:

      Example output with error

      $ gpg: assuming signed data in 'operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin'
      $ gpg: Signature made Fri Apr  5 20:03:22 2019 CEST
      $ gpg:                using RSA key <key_id> 1
      $ gpg: Can't check signature: No public key

      1
      RSA key string.

      To download the key, run the following command, replacing <key_id> with the RSA key string provided in the output of the previous command:

      $ gpg [--keyserver keys.gnupg.net] --recv-key "<key_id>" 1
      1
      If you do not have a key server configured, specify one with the --keyserver option.
  4. Install the release binary in your PATH:

    • For Linux:

      $ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
      $ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu /usr/local/bin/operator-sdk
      $ rm operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    • For macOS:

      $ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
      $ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin /usr/local/bin/operator-sdk
      $ rm operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
  5. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

4.1.2.2. Installing from Homebrew

You can install the SDK CLI using Homebrew.

Prerequisites

  • Homebrew
  • docker v17.03+, podman v1.2.0+, or buildah v1.7+
  • OpenShift CLI (oc) v4.6+ installed
  • Access to a cluster based on Kubernetes v1.12.0+
  • Access to a container registry

Procedure

  1. Install the SDK CLI using the brew command:

    $ brew install operator-sdk
  2. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

4.1.2.3. Compiling and installing from source

You can obtain the Operator SDK source code to compile and install the SDK CLI.

Prerequisites

  • Git
  • Go v1.13+
  • docker v17.03+, podman v1.2.0+, or buildah v1.7+
  • OpenShift CLI (oc) v4.6+ installed
  • Access to a cluster based on Kubernetes v1.12.0+
  • Access to a container registry

Procedure

  1. Clone the operator-sdk repository:

    $ mkdir -p $GOPATH/src/github.com/operator-framework
    $ cd $GOPATH/src/github.com/operator-framework
    $ git clone https://github.com/operator-framework/operator-sdk
    $ cd operator-sdk
  2. Check out the desired release branch:

    $ git checkout master
  3. Compile and install the SDK CLI:

    $ make dep
    $ make install

    This installs the CLI binary operator-sdk at $GOPATH/bin.

  4. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

4.1.3. Building a Go-based Operator using the Operator SDK

The Operator SDK makes it easier to build Kubernetes native applications, a process that can require deep, application-specific operational knowledge. The SDK not only lowers that barrier, but it also helps reduce the amount of boilerplate code needed for many common management capabilities, such as metering or monitoring.

This procedure walks through an example of building a simple Memcached Operator using tools and libraries provided by the SDK.

Prerequisites

  • Operator SDK CLI installed on the development workstation
  • Operator Lifecycle Manager (OLM) installed on a Kubernetes-based cluster (v1.8 or above to support the apps/v1beta2 API group), for example OpenShift Container Platform 4.6
  • Access to the cluster using an account with cluster-admin permissions
  • OpenShift CLI (oc) v4.6+ installed

Procedure

  1. Create a new project.

    Use the CLI to create a new memcached-operator project:

    $ mkdir -p $GOPATH/src/github.com/example-inc/
    $ cd $GOPATH/src/github.com/example-inc/
    $ operator-sdk new memcached-operator
    $ cd memcached-operator
  2. Add a new Custom Resource Definition (CRD).

    1. Use the CLI to add a new CRD API called Memcached, with APIVersion set to cache.example.com/v1alpha1 and Kind set to Memcached:

      $ operator-sdk add api \
          --api-version=cache.example.com/v1alpha1 \
          --kind=Memcached

      This scaffolds the Memcached resource API under pkg/apis/cache/v1alpha1/.

    2. Modify the spec and status of the Memcached Custom Resource (CR) at the pkg/apis/cache/v1alpha1/memcached_types.go file:

      type MemcachedSpec struct {
      	// Size is the size of the memcached deployment
      	Size int32 `json:"size"`
      }
      type MemcachedStatus struct {
      	// Nodes are the names of the memcached pods
      	Nodes []string `json:"nodes"`
      }
    3. After modifying the *_types.go file, always run the following command to update the generated code for that resource type:

      $ operator-sdk generate k8s
  3. Optional: Add custom validation to your CRD.

    OpenAPI v3.0 schemas are added to CRD manifests in the spec.validation block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached CR when it is created or updated.

    Additionally, a pkg/apis/<group>/<version>/zz_generated.openapi.go file is generated. This file contains the Go representation of this validation block if the +k8s:openapi-gen=true annotation is present above the Kind type declaration, which is present by default. This auto-generated code is your Go Kind type’s OpenAPI model, from which you can create a full OpenAPI Specification and generate a client.

    As an Operator author, you can use Kubebuilder markers (annotations) to configure custom validations for your API. These markers must always have a +kubebuilder:validation prefix. For example, adding an enum-type specification can be done by adding the following marker:

    // +kubebuilder:validation:Enum=Lion;Wolf;Dragon
    type Alias string

    Usage of markers in API code is discussed in the Kubebuilder Generating CRDs and Markers for Config/Code Generation documentation. A full list of OpenAPIv3 validation markers is also available in the Kubebuilder CRD Validation documentation.

    If you add any custom validations, run the following command to update the OpenAPI validation section in the CRD’s deploy/crds/cache.example.com_memcacheds_crd.yaml file:

    $ operator-sdk generate crds

    Example generated YAML

    spec:
      validation:
        openAPIV3Schema:
          properties:
            spec:
              properties:
                size:
                  format: int32
                  type: integer

  4. Add a new Controller.

    1. Add a new Controller to the project to watch and reconcile the Memcached resource:

      $ operator-sdk add controller \
          --api-version=cache.example.com/v1alpha1 \
          --kind=Memcached

      This scaffolds a new Controller implementation under pkg/controller/memcached/.

    2. For this example, replace the generated controller file pkg/controller/memcached/memcached_controller.go with the example implementation.

      The example controller executes the following reconciliation logic for each Memcached CR:

      • Create a Memcached Deployment if it does not exist.
      • Ensure that the Deployment size is the same as specified by the Memcached CR spec.
      • Update the Memcached CR status with the names of the Memcached pods.

      The next two sub-steps inspect how the Controller watches resources and how the reconcile loop is triggered. You can skip these steps to go directly to building and running the Operator.

    3. Inspect the Controller implementation at the pkg/controller/memcached/memcached_controller.go file to see how the Controller watches resources.

      The first watch is for the Memcached type as the primary resource. For each Add, Update, or Delete event, the reconcile loop is sent a reconcile Request (a <namespace>:<name> key) for that Memcached object:

      err := c.Watch(
        &source.Kind{Type: &cachev1alpha1.Memcached{}}, &handler.EnqueueRequestForObject{})

      The next watch is for Deployments, but the event handler maps each event to a reconcile Request for the owner of the Deployment. In this case, this is the Memcached object for which the Deployment was created. This allows the controller to watch Deployments as a secondary resource:

      err := c.Watch(&source.Kind{Type: &appsv1.Deployment{}}, &handler.EnqueueRequestForOwner{
      		IsController: true,
      		OwnerType:    &cachev1alpha1.Memcached{},
      	})
    4. Every Controller has a Reconciler object with a Reconcile() method that implements the reconcile loop. The reconcile loop is passed the Request argument, which is a <namespace>:<name> key used to look up the primary resource object, Memcached, from the cache:

      func (r *ReconcileMemcached) Reconcile(request reconcile.Request) (reconcile.Result, error) {
        // Lookup the Memcached instance for this reconcile request
        memcached := &cachev1alpha1.Memcached{}
        err := r.client.Get(context.TODO(), request.NamespacedName, memcached)
        ...
      }

      Based on the return value of Reconcile() the reconcile Request may be requeued and the loop may be triggered again:

      // Reconcile successful - don't requeue
      return reconcile.Result{}, nil
      // Reconcile failed due to error - requeue
      return reconcile.Result{}, err
      // Requeue for any reason other than error
      return reconcile.Result{Requeue: true}, nil
  5. Build and run the Operator.

    1. Before running the Operator, the CRD must be registered with the Kubernetes API server:

      $ oc create \
          -f deploy/crds/cache_v1alpha1_memcached_crd.yaml
    2. After registering the CRD, there are two options for running the Operator:

      • As a Deployment inside a Kubernetes cluster
      • As a Go program outside a cluster

      Choose one of the following methods.

      1. Option A: Running as a Deployment inside the cluster.

        1. Build the memcached-operator image and push it to a registry:

          $ operator-sdk build quay.io/example/memcached-operator:v0.0.1
        2. The Deployment manifest is generated at deploy/operator.yaml. Update the Deployment image as follows since the default is just a placeholder:

          $ sed -i 's|REPLACE_IMAGE|quay.io/example/memcached-operator:v0.0.1|g' deploy/operator.yaml
        3. Ensure you have an account on Quay.io for the next step, or substitute your preferred container registry. On the registry, create a new public image repository named memcached-operator.
        4. Push the image to the registry:

          $ podman push quay.io/example/memcached-operator:v0.0.1
        5. Set up RBAC and deploy the memcached-operator:

          $ oc create -f deploy/role.yaml
          $ oc create -f deploy/role_binding.yaml
          $ oc create -f deploy/service_account.yaml
          $ oc create -f deploy/operator.yaml
        6. Verify that memcached-operator is up and running:

          $ oc get deployment

          Example output

          NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
          memcached-operator       1         1         1            1           1m

      2. Option B: Running locally outside the cluster.

        This method is preferred during the development cycle because it speeds up deployment and testing.

        Run the Operator locally with the default Kubernetes configuration file present at $HOME/.kube/config:

        $ operator-sdk run --local --namespace=default

        You can use a specific kubeconfig file by specifying the --kubeconfig=<path/to/kubeconfig> flag.

  6. Verify that the Operator can deploy a Memcached application by creating a Memcached CR.

    1. Use the example Memcached CR that was generated at deploy/crds/cache_v1alpha1_memcached_cr.yaml.
    2. View the file:

      $ cat deploy/crds/cache_v1alpha1_memcached_cr.yaml

      Example output

      apiVersion: "cache.example.com/v1alpha1"
      kind: "Memcached"
      metadata:
        name: "example-memcached"
      spec:
        size: 3

    3. Create the object:

      $ oc apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    4. Ensure that memcached-operator creates the Deployment for the CR:

      $ oc get deployment

      Example output

      NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      memcached-operator       1         1         1            1           2m
      example-memcached        3         3         3            3           1m

    5. Check the pods and CR status to confirm the status is updated with the memcached pod names:

      $ oc get pods

      Example output

      NAME                                  READY     STATUS    RESTARTS   AGE
      example-memcached-6fd7c98d8-7dqdr     1/1       Running   0          1m
      example-memcached-6fd7c98d8-g5k7v     1/1       Running   0          1m
      example-memcached-6fd7c98d8-m7vn7     1/1       Running   0          1m
      memcached-operator-7cc7cfdf86-vvjqk   1/1       Running   0          2m

      $ oc get memcached/example-memcached -o yaml

      Example output

      apiVersion: cache.example.com/v1alpha1
      kind: Memcached
      metadata:
        clusterName: ""
        creationTimestamp: 2018-03-31T22:51:08Z
        generation: 0
        name: example-memcached
        namespace: default
        resourceVersion: "245453"
        selfLink: /apis/cache.example.com/v1alpha1/namespaces/default/memcacheds/example-memcached
        uid: 0026cc97-3536-11e8-bd83-0800274106a1
      spec:
        size: 3
      status:
        nodes:
        - example-memcached-6fd7c98d8-7dqdr
        - example-memcached-6fd7c98d8-g5k7v
        - example-memcached-6fd7c98d8-m7vn7

  7. Verify that the Operator can manage a deployed Memcached application by updating the size of the deployment.

    1. Change the spec.size field in the memcached CR from 3 to 4:

      $ cat deploy/crds/cache_v1alpha1_memcached_cr.yaml

      Example output

      apiVersion: "cache.example.com/v1alpha1"
      kind: "Memcached"
      metadata:
        name: "example-memcached"
      spec:
        size: 4

    2. Apply the change:

      $ oc apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    3. Confirm that the Operator changes the Deployment size:

      $ oc get deployment

      Example output

      NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      example-memcached    4         4         4            4           5m

  8. Clean up the resources:

    $ oc delete -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    $ oc delete -f deploy/crds/cache_v1alpha1_memcached_crd.yaml
    $ oc delete -f deploy/operator.yaml
    $ oc delete -f deploy/role.yaml
    $ oc delete -f deploy/role_binding.yaml
    $ oc delete -f deploy/service_account.yaml

Additional resources

4.1.4. Managing a Go-based Operator using Operator Lifecycle Manager

The previous section covered manually running an Operator. The next sections explore using Operator Lifecycle Manager (OLM), which enables a more robust deployment model for Operators running in production environments.

OLM helps you to install, update, and generally manage the lifecycle of all of the Operators (and their associated services) on a Kubernetes cluster. It runs as a Kubernetes extension and lets you use oc for all the lifecycle management functions without any additional tools.

Prerequisites

  • OLM installed on a Kubernetes-based cluster (v1.8 or above to support the apps/v1beta2 API group), for example OpenShift Container Platform 4.6
  • Memcached Operator built

Procedure

  1. Generate an Operator manifest.

    An Operator manifest describes how to display, create, and manage the application, in this case Memcached, as a whole. It is defined by a ClusterServiceVersion (CSV) object and is required for OLM to function.

    From the memcached-operator/ directory that was created when you built the Memcached Operator, generate the CSV manifest:

    $ operator-sdk generate csv --csv-version 0.0.1
    Note

    See Building a CSV for the Operator Framework for more information on manually defining a manifest file.

  2. Create an OperatorGroup that specifies the namespaces that the Operator will target. Create the following OperatorGroup in the namespace where you will create the CSV. In this example, the default namespace is used:

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: memcached-operator-group
      namespace: default
    spec:
      targetNamespaces:
      - default
  3. Deploy the Operator. Use the files that were generated into the deploy/ directory by the Operator SDK when you built the Memcached Operator.

    1. Apply the Operator’s CSV manifest to the specified namespace in the cluster:

      $ oc apply -f deploy/olm-catalog/memcached-operator/0.0.1/memcached-operator.v0.0.1.clusterserviceversion.yaml

      When you apply this manifest, the cluster does not immediately update because it does not yet meet the requirements specified in the manifest.

    2. Create the role, role binding, and service account to grant resource permissions to the Operator, and the Custom Resource Definition (CRD) to create the Memcached type that the Operator manages:

      $ oc create -f deploy/crds/cache.example.com_memcacheds_crd.yaml
      $ oc create -f deploy/service_account.yaml
      $ oc create -f deploy/role.yaml
      $ oc create -f deploy/role_binding.yaml

      Because OLM creates Operators in a particular namespace when a manifest is applied, administrators can leverage the native Kubernetes RBAC permission model to restrict which users are allowed to install Operators.

  4. Create an application instance.

    The Memcached Operator is now running in the default namespace. Users interact with Operators via instances of CustomResources; in this case, the resource has the kind Memcached. Native Kubernetes RBAC also applies to CustomResources, providing administrators control over who can interact with each Operator.

    Creating instances of Memcached in this namespace will now trigger the Memcached Operator to instantiate pods running the memcached server that are managed by the Operator. The more CustomResources you create, the more unique instances of Memcached are managed by the Memcached Operator running in this namespace.

    $ cat <<EOF | oc apply -f -
    apiVersion: "cache.example.com/v1alpha1"
    kind: "Memcached"
    metadata:
      name: "memcached-for-wordpress"
    spec:
      size: 1
    EOF
    $ cat <<EOF | oc apply -f -
    apiVersion: "cache.example.com/v1alpha1"
    kind: "Memcached"
    metadata:
      name: "memcached-for-drupal"
    spec:
      size: 1
    EOF
    $ oc get Memcached

    Example output

    NAME                      AGE
    memcached-for-drupal      22s
    memcached-for-wordpress   27s

    $ oc get pods

    Example output

    NAME                                       READY     STATUS    RESTARTS   AGE
    memcached-app-operator-66b5777b79-pnsfj    1/1       Running   0          14m
    memcached-for-drupal-5476487c46-qbd66      1/1       Running   0          3s
    memcached-for-wordpress-65b75fd8c9-7b9x7   1/1       Running   0          8s

4.1.5. Additional resources

4.2. Creating Ansible-based Operators

This guide outlines Ansible support in the Operator SDK and walks Operator authors through examples of building and running Ansible-based Operators that use Ansible playbooks and modules with the operator-sdk CLI tool.

4.2.1. Ansible support in the Operator SDK

The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. This framework includes the Operator SDK, which assists developers in bootstrapping and building an Operator based on their expertise without requiring knowledge of Kubernetes API complexities.

One of the Operator SDK’s options for generating an Operator project includes leveraging existing Ansible playbooks and modules to deploy Kubernetes resources as a unified application, without having to write any Go code.

4.2.1.1. Custom Resource files

Operators use the Kubernetes' extension mechanism, Custom Resource Definitions (CRDs), so your Custom Resource (CR) looks and acts just like the built-in, native Kubernetes objects.

The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:

Table 4.1. Custom Resource fields

Field | Description

apiVersion

Version of the CR to be created.

kind

Kind of the CR to be created.

metadata

Kubernetes-specific metadata to be created.

spec (optional)

Key-value list of variables which are passed to Ansible. This field is empty by default.

status

Summarizes the current state of the object. For Ansible-based Operators, the status subresource is enabled for CRDs and managed by the operator_sdk.util.k8s_status Ansible module by default, which includes condition information in the CR’s status.

annotations

Kubernetes-specific annotations to be appended to the CR.

The following list of CR annotations modify the behavior of the Operator:

Table 4.2. Ansible-based Operator annotations

Annotation | Description

ansible.operator-sdk/reconcile-period

Specifies the reconciliation interval for the CR. This value is parsed using the standard Golang package time. Specifically, ParseDuration is used which applies the default suffix of s, giving the value in seconds.

Example Ansible-based Operator annotation

apiVersion: "foo.example.com/v1alpha1"
kind: "Foo"
metadata:
  name: "example"
annotations:
  ansible.operator-sdk/reconcile-period: "30s"

4.2.1.2. Watches file

The Watches file contains a list of mappings from Custom Resources (CRs), identified by their Group, Version, and Kind, to an Ansible role or playbook. The Operator expects this mapping file in a predefined location, /opt/ansible/watches.yaml.

Table 4.3. Watches file mappings

Field | Description

group

Group of CR to watch.

version

Version of CR to watch.

kind

Kind of CR to watch.

role (default)

Path to the Ansible role added to the container. For example, if your roles directory is at /opt/ansible/roles/ and your role is named busybox, this value would be /opt/ansible/roles/busybox. This field is mutually exclusive with the playbook field.

playbook

Path to the Ansible playbook added to the container. This playbook is expected to be simply a way to call roles. This field is mutually exclusive with the role field.

reconcilePeriod (optional)

The reconciliation interval, how often the role or playbook is run, for a given CR.

manageStatus (optional)

When set to true (default), the Operator manages the status of the CR generically. When set to false, the status of the CR is managed elsewhere, by the specified role or playbook or in a separate controller.

Example Watches file

- version: v1alpha1 1
  group: foo.example.com
  kind: Foo
  role: /opt/ansible/roles/Foo

- version: v1alpha1 2
  group: bar.example.com
  kind: Bar
  playbook: /opt/ansible/playbook.yml

- version: v1alpha1 3
  group: baz.example.com
  kind: Baz
  playbook: /opt/ansible/baz.yml
  reconcilePeriod: 0
  manageStatus: false

1
Simple example mapping Foo to the Foo role.
2
Simple example mapping Bar to a playbook.
3
More complex example for the Baz kind. Disables re-queuing and managing the CR status in the playbook.
4.2.1.2.1. Advanced options

Advanced features can be enabled by adding them to your Watches file per GVK (group, version, and kind). They go below the group, version, and kind fields and the playbook or role field.

Some features can be overridden per resource using an annotation on that Custom Resource (CR). The options that can be overridden have the annotation specified below.

Table 4.4. Advanced Watches file options

Reconcile period
  YAML key: reconcilePeriod
  Description: Time between reconcile runs for a particular CR.
  Annotation for override: ansible.operator-sdk/reconcile-period
  Default value: 1m

Manage status
  YAML key: manageStatus
  Description: Allows the Operator to manage the conditions section of each CR’s status section.
  Annotation for override: None
  Default value: true

Watch dependent resources
  YAML key: watchDependentResources
  Description: Allows the Operator to dynamically watch resources that are created by Ansible.
  Annotation for override: None
  Default value: true

Watch cluster-scoped resources
  YAML key: watchClusterScopedResources
  Description: Allows the Operator to watch cluster-scoped resources that are created by Ansible.
  Annotation for override: None
  Default value: false

Max runner artifacts
  YAML key: maxRunnerArtifacts
  Description: Manages the number of artifact directories that Ansible Runner keeps in the Operator container for each individual resource.
  Annotation for override: ansible.operator-sdk/max-runner-artifacts
  Default value: 20

Example Watches file with advanced options

- version: v1alpha1
  group: app.example.com
  kind: AppService
  playbook: /opt/ansible/playbook.yml
  maxRunnerArtifacts: 30
  reconcilePeriod: 5s
  manageStatus: False
  watchDependentResources: False

4.2.1.3. Extra variables sent to Ansible

Extra variables can be sent to Ansible, which are then managed by the Operator. The spec section of the Custom Resource (CR) passes along the key-value pairs as extra variables. This is equivalent to extra variables passed in to the ansible-playbook command.

The Operator also passes along additional variables under the meta field for the name of the CR and the namespace of the CR.

For the following CR example:

apiVersion: "app.example.com/v1alpha1"
kind: "Database"
metadata:
  name: "example"
spec:
  message:"Hello world 2"
  newParameter: "newParam"

The structure passed to Ansible as extra variables is:

{ "meta": {
        "name": "<cr_name>",
        "namespace": "<cr_namespace>",
  },
  "message": "Hello world 2",
  "new_parameter": "newParam",
  "_app_example_com_database": {
     <full_crd>
   },
}

The message and newParameter fields are set in the top level as extra variables, and meta provides the relevant metadata for the CR as defined in the Operator. The meta fields can be accessed using dot notation in Ansible, for example:

- debug:
    msg: "name: {{ meta.name }}, {{ meta.namespace }}"

4.2.1.4. Ansible Runner directory

Ansible Runner keeps information about Ansible runs in the container. This is located at /tmp/ansible-operator/runner/<group>/<version>/<kind>/<namespace>/<name>.

Additional resources

4.2.2. Installing the Operator SDK CLI

The Operator SDK has a CLI tool that assists developers in creating, building, and deploying a new Operator project. You can install the SDK CLI on your workstation so you are prepared to start authoring your own Operators.

Note

This guide uses minikube v0.25.0+ as the local Kubernetes cluster and Quay.io for the public registry.

4.2.2.1. Installing from GitHub release

You can download and install a pre-built release binary of the SDK CLI from the project on GitHub.

Prerequisites

  • Go v1.13+
  • docker v17.03+, podman v1.2.0+, or buildah v1.7+
  • OpenShift CLI (oc) v4.6+ installed
  • Access to a cluster based on Kubernetes v1.12.0+
  • Access to a container registry

Procedure

  1. Set the release version variable:

    $ RELEASE_VERSION=v0.19.4
  2. Download the release binary.

    • For Linux:

      $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    • For macOS:

      $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
  3. Verify the downloaded release binary.

    1. Download the provided ASC file.

      • For Linux:

        $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
      • For macOS:

        $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
    2. Place the binary and corresponding ASC file into the same directory and run the following command to verify the binary:

      • For Linux:

        $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
      • For macOS:

        $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc

      If you do not have the public key of the maintainer on your workstation, you will get the following error:

      Example output with error

      $ gpg: assuming signed data in 'operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin'
      $ gpg: Signature made Fri Apr  5 20:03:22 2019 CEST
      $ gpg:                using RSA key <key_id> 1
      $ gpg: Can't check signature: No public key

      1
      RSA key string.

      To download the key, run the following command, replacing <key_id> with the RSA key string provided in the output of the previous command:

      $ gpg [--keyserver keys.gnupg.net] --recv-key "<key_id>" 1
      1
      If you do not have a key server configured, specify one with the --keyserver option.
  4. Install the release binary in your PATH:

    • For Linux:

      $ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
      $ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu /usr/local/bin/operator-sdk
      $ rm operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    • For macOS:

      $ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
      $ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin /usr/local/bin/operator-sdk
      $ rm operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
  5. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

4.2.2.2. Installing from Homebrew

You can install the SDK CLI using Homebrew.

Prerequisites

  • Homebrew
  • docker v17.03+, podman v1.2.0+, or buildah v1.7+
  • OpenShift CLI (oc) v4.6+ installed
  • Access to a cluster based on Kubernetes v1.12.0+
  • Access to a container registry

Procedure

  1. Install the SDK CLI using the brew command:

    $ brew install operator-sdk
  2. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

4.2.2.3. Compiling and installing from source

You can obtain the Operator SDK source code to compile and install the SDK CLI.

Prerequisites

  • Git
  • Go v1.13+
  • docker v17.03+, podman v1.2.0+, or buildah v1.7+
  • OpenShift CLI (oc) v4.6+ installed
  • Access to a cluster based on Kubernetes v1.12.0+
  • Access to a container registry

Procedure

  1. Clone the operator-sdk repository:

    $ mkdir -p $GOPATH/src/github.com/operator-framework
    $ cd $GOPATH/src/github.com/operator-framework
    $ git clone https://github.com/operator-framework/operator-sdk
    $ cd operator-sdk
  2. Check out the desired release branch:

    $ git checkout master
  3. Compile and install the SDK CLI:

    $ make dep
    $ make install

    This installs the CLI binary operator-sdk at $GOPATH/bin.

  4. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

4.2.3. Building an Ansible-based Operator using the Operator SDK

This procedure walks through an example of building a simple Memcached Operator powered by Ansible playbooks and modules using tools and libraries provided by the Operator SDK.

Prerequisites

  • Operator SDK CLI installed on the development workstation
  • Access to a Kubernetes-based cluster v1.11.3+ (for example OpenShift Container Platform 4.6) using an account with cluster-admin permissions
  • OpenShift CLI (oc) v4.6+ installed
  • ansible v2.9.0+
  • ansible-runner v1.1.0+
  • ansible-runner-http v1.0.0+

Procedure

  1. Create a new Operator project. A namespace-scoped Operator watches and manages resources in a single namespace. Namespace-scoped Operators are preferred because of their flexibility. They enable decoupled upgrades, namespace isolation for failures and monitoring, and differing API definitions.

    To create a new Ansible-based, namespace-scoped memcached-operator project and change to its directory, use the following commands:

    $ operator-sdk new memcached-operator \
        --api-version=cache.example.com/v1alpha1 \
        --kind=Memcached \
        --type=ansible
    $ cd memcached-operator

    This creates the memcached-operator project specifically for watching the Memcached resource with APIVersion cache.example.com/v1alpha1 and Kind Memcached.

  2. Customize the Operator logic.

    For this example, the memcached-operator executes the following reconciliation logic for each Memcached Custom Resource (CR):

    • Create a memcached Deployment if it does not exist.
    • Ensure that the Deployment size is the same as specified by the Memcached CR.

    By default, the memcached-operator watches Memcached resource events as shown in the watches.yaml file and executes the Ansible role Memcached:

    - version: v1alpha1
      group: cache.example.com
      kind: Memcached

    You can optionally customize the following logic in the watches.yaml file:

    1. Specifying a role option configures the Operator to use this specified path when launching ansible-runner with an Ansible role. By default, the new command fills in an absolute path to where your role should go:

      - version: v1alpha1
        group: cache.example.com
        kind: Memcached
        role: /opt/ansible/roles/memcached
    2. Specifying a playbook option in the watches.yaml file configures the Operator to use this specified path when launching ansible-runner with an Ansible playbook:

      - version: v1alpha1
        group: cache.example.com
        kind: Memcached
        playbook: /opt/ansible/playbook.yaml
  3. Build the Memcached Ansible role.

    Modify the generated Ansible role under the roles/memcached/ directory. This Ansible role controls the logic that is executed when a resource is modified.

    1. Define the Memcached spec.

      Defining the spec for an Ansible-based Operator can be done entirely in Ansible. The Ansible Operator passes all key-value pairs listed in the CR spec field along to Ansible as variables. The names of all variables in the spec field are converted to snake case (lowercase with an underscore) by the Operator before running Ansible. For example, serviceAccount in the spec becomes service_account in Ansible.

      Tip

      You should perform some type validation in Ansible on the variables to ensure that your application is receiving expected input.
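
      For example, a minimal sketch of such a check (the task name and failure message are illustrative) could assert that size is a positive integer before it is used:

      - name: Validate the size variable
        assert:
          that:
            - size is defined
            - size | int > 0
          fail_msg: "spec.size must be a positive integer" # assumes the Memcached spec used in this example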

      In case the user does not set the spec field, set a default by modifying the roles/memcached/defaults/main.yml file:

      size: 1
    2. Define the Memcached Deployment.

      With the Memcached spec now defined, you can define the Ansible logic that is executed on resource changes. Because this is an Ansible role, the default behavior executes the tasks in the roles/memcached/tasks/main.yml file.

      The goal is for Ansible to create a Deployment if it does not exist, which runs the memcached:1.4.36-alpine image. Ansible 2.7+ supports the k8s Ansible module, which this example leverages to control the Deployment definition.

      Modify the roles/memcached/tasks/main.yml to match the following:

      - name: start memcached
        k8s:
          definition:
            kind: Deployment
            apiVersion: apps/v1
            metadata:
              name: '{{ meta.name }}-memcached'
              namespace: '{{ meta.namespace }}'
            spec:
              replicas: "{{size}}"
              selector:
                matchLabels:
                  app: memcached
              template:
                metadata:
                  labels:
                    app: memcached
                spec:
                  containers:
                  - name: memcached
                    command:
                    - memcached
                    - -m=64
                    - -o
                    - modern
                    - -v
                    image: "docker.io/memcached:1.4.36-alpine"
                    ports:
                      - containerPort: 11211
      Note

      This example used the size variable to control the number of replicas of the Memcached Deployment. This example sets the default to 1, but any user can create a CR that overwrites the default.

  4. Deploy the CRD.

    Before running the Operator, Kubernetes needs to know about the new Custom Resource Definition (CRD) the Operator will be watching. Deploy the Memcached CRD:

    $ oc create -f deploy/crds/cache.example.com_memcacheds_crd.yaml
  5. Build and run the Operator.

    There are two ways to build and run the Operator:

    • As a Pod inside a Kubernetes cluster.
    • As a Go program outside the cluster using the operator-sdk run --local command.

    Choose one of the following methods:

    1. Run as a Pod inside a Kubernetes cluster. This is the preferred method for production use.

      1. Build the memcached-operator image and push it to a registry:

        $ operator-sdk build quay.io/example/memcached-operator:v0.0.1
        $ podman push quay.io/example/memcached-operator:v0.0.1
      2. Deployment manifests are generated in the deploy/operator.yaml file. The deployment image in this file needs to be modified from the placeholder REPLACE_IMAGE to the previously built image. To do this, run:

        $ sed -i 's|REPLACE_IMAGE|quay.io/example/memcached-operator:v0.0.1|g' deploy/operator.yaml
      3. Deploy the memcached-operator:

        $ oc create -f deploy/service_account.yaml
        $ oc create -f deploy/role.yaml
        $ oc create -f deploy/role_binding.yaml
        $ oc create -f deploy/operator.yaml
      4. Verify that the memcached-operator is up and running:

        $ oc get deployment
        NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
        memcached-operator       1         1         1            1           1m
    2. Run outside the cluster. This method is preferred during the development cycle to speed up deployment and testing.

      Ensure that Ansible Runner and the Ansible Runner HTTP Plug-in are installed; otherwise, you will see unexpected errors from Ansible Runner when a CR is created.

      It is also important that the role path referenced in the watches.yaml file exists on your machine. Because a container image normally places the role on disk, you must manually copy the role to the configured Ansible roles path (for example, /etc/ansible/roles).

      1. To run the Operator locally with the default Kubernetes configuration file present at $HOME/.kube/config:

        $ operator-sdk run --local

        To run the Operator locally with a provided Kubernetes configuration file:

        $ operator-sdk run --local --kubeconfig=config
  6. Create a Memcached CR.

    1. Modify the deploy/crds/cache_v1alpha1_memcached_cr.yaml file as shown and create a Memcached CR:

      $ cat deploy/crds/cache_v1alpha1_memcached_cr.yaml

      Example output

      apiVersion: "cache.example.com/v1alpha1"
      kind: "Memcached"
      metadata:
        name: "example-memcached"
      spec:
        size: 3

      $ oc apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    2. Ensure that the memcached-operator creates the Deployment for the CR:

      $ oc get deployment

      Example output

      NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      memcached-operator       1         1         1            1           2m
      example-memcached        3         3         3            3           1m

    3. Check the pods to confirm three replicas were created:

      $ oc get pods
      NAME                                  READY     STATUS    RESTARTS   AGE
      example-memcached-6fd7c98d8-7dqdr     1/1       Running   0          1m
      example-memcached-6fd7c98d8-g5k7v     1/1       Running   0          1m
      example-memcached-6fd7c98d8-m7vn7     1/1       Running   0          1m
      memcached-operator-7cc7cfdf86-vvjqk   1/1       Running   0          2m
  7. Update the size.

    1. Change the spec.size field in the memcached CR from 3 to 4 and apply the change:

      $ cat deploy/crds/cache_v1alpha1_memcached_cr.yaml

      Example output

      apiVersion: "cache.example.com/v1alpha1"
      kind: "Memcached"
      metadata:
        name: "example-memcached"
      spec:
        size: 4

      $ oc apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    2. Confirm that the Operator changes the Deployment size:

      $ oc get deployment

      Example output

      NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      example-memcached    4         4         4            4           5m

  8. Clean up the resources:

    $ oc delete -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    $ oc delete -f deploy/operator.yaml
    $ oc delete -f deploy/role_binding.yaml
    $ oc delete -f deploy/role.yaml
    $ oc delete -f deploy/service_account.yaml
    $ oc delete -f deploy/crds/cache_v1alpha1_memcached_crd.yaml

4.2.4. Managing application lifecycle using the k8s Ansible module

To manage the lifecycle of your application on Kubernetes using Ansible, you can use the k8s Ansible module. This Ansible module allows a developer to either leverage their existing Kubernetes resource files (written in YAML) or express the lifecycle management in native Ansible.

One of the biggest benefits of using Ansible in conjunction with existing Kubernetes resource files is the ability to use Jinja templating so that you can customize resources with the simplicity of a few variables in Ansible.
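
For example, the following task is a minimal sketch of this approach. It assumes a Jinja template named deployment.yml.j2 exists alongside the playbook; the template is rendered with whatever Ansible variables are in scope and the result is passed to the k8s module:

- name: Create a Deployment from a templated resource file
  k8s:
    state: present
    definition: "{{ lookup('template', 'deployment.yml.j2') | from_yaml }}" # deployment.yml.j2 is a hypothetical template file

Any variable defined in Ansible, such as a size value taken from a CR spec, can be referenced inside the template just as it would be in a task.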

This section goes into detail on usage of the k8s Ansible module. To get started, install the module on your local workstation and test it using a playbook before moving on to using it within an Operator.

4.2.4.1. Installing the k8s Ansible module

To install the k8s Ansible module on your local workstation:

Procedure

  1. Install Ansible 2.9+:

    $ sudo yum install ansible
  2. Install the OpenShift and Kubernetes Python client packages using pip:

    $ sudo pip install openshift
    $ sudo pip install kubernetes

4.2.4.2. Testing the k8s Ansible module locally

Sometimes, it is beneficial for a developer to run the Ansible code from their local machine as opposed to running and rebuilding the Operator each time.

Procedure

  1. Install the community.kubernetes collection:

    $ ansible-galaxy collection install community.kubernetes
  2. Initialize a new Ansible-based Operator project:

    $ operator-sdk new --type ansible \
        --kind Foo \
        --api-version foo.example.com/v1alpha1 foo-operator

    Example output

    Create foo-operator/tmp/init/galaxy-init.sh
    Create foo-operator/tmp/build/Dockerfile
    Create foo-operator/tmp/build/test-framework/Dockerfile
    Create foo-operator/tmp/build/go-test.sh
    Rendering Ansible Galaxy role [foo-operator/roles/foo]...
    Cleaning up foo-operator/tmp/init
    Create foo-operator/watches.yaml
    Create foo-operator/deploy/rbac.yaml
    Create foo-operator/deploy/crd.yaml
    Create foo-operator/deploy/cr.yaml
    Create foo-operator/deploy/operator.yaml
    Run git init ...
    Initialized empty Git repository in /home/dymurray/go/src/github.com/dymurray/opsdk/foo-operator/.git/
    Run git init done

    $ cd foo-operator
  3. Modify the roles/foo/tasks/main.yml file with the desired Ansible logic. This example creates and deletes a namespace with the switch of a variable.

    - name: set test namespace to "{{ state }}"
      community.kubernetes.k8s:
        api_version: v1
        kind: Namespace
        state: "{{ state }}"
        name: test
      ignore_errors: true 1
    1
    Setting ignore_errors: true ensures that deleting a nonexistent project does not fail.
  4. Modify the roles/foo/defaults/main.yml file to set state to present by default.

    state: present
  5. Create an Ansible playbook playbook.yml in the top-level directory, which includes the foo role:

    - hosts: localhost
      roles:
        - foo
  6. Run the playbook:

    $ ansible-playbook playbook.yml

    Example output

     [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
    
    PLAY [localhost] ***************************************************************************
    
    TASK [Gathering Facts] *********************************************************************
    ok: [localhost]
    
    TASK [foo : set test namespace to present]
    changed: [localhost]
    
    PLAY RECAP *********************************************************************************
    localhost                  : ok=2    changed=1    unreachable=0    failed=0

  7. Check that the namespace was created:

    $ oc get namespace

    Example output

    NAME          STATUS    AGE
    default       Active    28d
    kube-public   Active    28d
    kube-system   Active    28d
    test          Active    3s

  8. Rerun the playbook setting state to absent:

    $ ansible-playbook playbook.yml --extra-vars state=absent

    Example output

     [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
    
    PLAY [localhost] ***************************************************************************
    
    TASK [Gathering Facts] *********************************************************************
    ok: [localhost]
    
    TASK [foo : set test namespace to absent]
    changed: [localhost]
    
    PLAY RECAP *********************************************************************************
    localhost                  : ok=2    changed=1    unreachable=0    failed=0

  9. Check that the namespace was deleted:

    $ oc get namespace

    Example output

    NAME          STATUS    AGE
    default       Active    28d
    kube-public   Active    28d
    kube-system   Active    28d

4.2.4.3. Testing the k8s Ansible module inside an Operator

After you are familiar with using the k8s Ansible module locally, you can trigger the same Ansible logic inside of an Operator when a Custom Resource (CR) changes. This example maps an Ansible role to a specific Kubernetes resource that the Operator watches. This mapping is done in the Watches file.

4.2.4.3.1. Testing an Ansible-based Operator locally

After getting comfortable testing Ansible workflows locally, you can test the logic inside of an Ansible-based Operator running locally.

To do so, use the operator-sdk run --local command from the top-level directory of your Operator project. This command reads from the ./watches.yaml file and uses the ~/.kube/config file to communicate with a Kubernetes cluster just as the k8s Ansible module does.

Procedure

  1. Because the run --local command reads from the ./watches.yaml file, there are options available to the Operator author. If the role field is left unchanged (by default, /opt/ansible/roles/<name>), you must copy the role directly to the /opt/ansible/roles/ directory.

    This is cumbersome because changes are not reflected from the current directory. Instead, change the role field to point to the current directory and comment out the existing line:

    - version: v1alpha1
      group: foo.example.com
      kind: Foo
      #  role: /opt/ansible/roles/Foo
      role: /home/user/foo-operator/Foo
  2. Create a Custom Resource Definition (CRD) and proper role-based access control (RBAC) definitions for the Custom Resource (CR) Foo. The operator-sdk command autogenerates these files inside of the deploy/ directory:

    $ oc create -f deploy/crds/foo_v1alpha1_foo_crd.yaml
    $ oc create -f deploy/service_account.yaml
    $ oc create -f deploy/role.yaml
    $ oc create -f deploy/role_binding.yaml
  3. Run the run --local command:

    $ operator-sdk run --local

    Example output

    [...]
    INFO[0000] Starting to serve on 127.0.0.1:8888
    INFO[0000] Watching foo.example.com/v1alpha1, Foo, default

  4. Now that the Operator is watching the resource Foo for events, the creation of a CR triggers your Ansible role to execute. View the deploy/cr.yaml file:

    apiVersion: "foo.example.com/v1alpha1"
    kind: "Foo"
    metadata:
      name: "example"

    Because the spec field is not set, Ansible is invoked with no extra variables. The next section covers how extra variables are passed from a CR to Ansible. This is why it is important to set sane defaults for the Operator.

  5. Create a CR instance of Foo with the default variable state set to present:

    $ oc create -f deploy/cr.yaml
  6. Check that the namespace test was created:

    $ oc get namespace

    Example output

    NAME          STATUS    AGE
    default       Active    28d
    kube-public   Active    28d
    kube-system   Active    28d
    test          Active    3s

  7. Modify the deploy/cr.yaml file to set the state field to absent:

    apiVersion: "foo.example.com/v1alpha1"
    kind: "Foo"
    metadata:
      name: "example"
    spec:
      state: "absent"
  8. Apply the changes and confirm that the namespace is deleted:

    $ oc apply -f deploy/cr.yaml
    $ oc get namespace

    Example output

    NAME          STATUS    AGE
    default       Active    28d
    kube-public   Active    28d
    kube-system   Active    28d
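
The steps above rely on the Foo role providing a sensible default for the state variable. The following is a minimal sketch of what the role defaults file might contain; the file path and contents follow standard Ansible role layout and are assumptions for illustration, not the literal scaffolded file:

Example role defaults (hypothetical)

# Foo/defaults/main.yml
# Default applied when the CR spec does not set state,
# so creating a bare CR still behaves predictably.
state: present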

4.2.4.3.2. Testing an Ansible-based Operator on a cluster

After getting familiar running Ansible logic inside of an Ansible-based Operator locally, you can test the Operator inside of a Pod on a Kubernetes cluster, such as OpenShift Container Platform. Running as a Pod on a cluster is preferred for production use.

Procedure

  1. Build the foo-operator image and push it to a registry:

    $ operator-sdk build quay.io/example/foo-operator:v0.0.1
    $ podman push quay.io/example/foo-operator:v0.0.1
  2. Deployment manifests are generated in the deploy/operator.yaml file. The Deployment image in this file must be modified from the placeholder REPLACE_IMAGE to the previously-built image. To do so, run the following command:

    $ sed -i 's|REPLACE_IMAGE|quay.io/example/foo-operator:v0.0.1|g' deploy/operator.yaml

    If you are performing these steps on macOS, use the following command instead:

    $ sed -i "" 's|REPLACE_IMAGE|quay.io/example/foo-operator:v0.0.1|g' deploy/operator.yaml
  3. Deploy the foo-operator:

    $ oc create -f deploy/crds/foo_v1alpha1_foo_crd.yaml 1
    1
    Only required if the CRD does not exist already.
    $ oc create -f deploy/service_account.yaml
    $ oc create -f deploy/role.yaml
    $ oc create -f deploy/role_binding.yaml
    $ oc create -f deploy/operator.yaml
  4. Verify that the foo-operator is up and running:

    $ oc get deployment

    Example output

    NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    foo-operator       1         1         1            1           1m

  5. You can now view the Ansible logs for the foo-operator:

    $ oc logs deployment/foo-operator

4.2.5. Managing Custom Resource status using the operator_sdk.util Ansible collection

Ansible-based Operators automatically update Custom Resource (CR) status subresources with generic information about the previous Ansible run. This includes the number of successful and failed tasks and relevant error messages as shown:

status:
  conditions:
  - ansibleResult:
      changed: 3
      completion: 2018-12-03T13:45:57.13329
      failures: 1
      ok: 6
      skipped: 0
    lastTransitionTime: 2018-12-03T13:45:57Z
    message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno
      113] No route to host>'
    reason: Failed
    status: "True"
    type: Failure
  - lastTransitionTime: 2018-12-03T13:46:13Z
    message: Running reconciliation
    reason: Running
    status: "True"
    type: Running

Ansible-based Operators also allow Operator authors to supply custom status values with the k8s_status Ansible module, which is included in the operator_sdk.util collection. This allows the author to update the status from within Ansible with any key-value pair as desired.

By default, Ansible-based Operators always include the generic Ansible run output as shown above. If you would prefer your application did not update the status with Ansible output, you can track the status manually from your application.

Procedure

  1. To track CR status manually from your application, update the Watches file with a manageStatus field set to false:

    - version: v1
      group: api.example.com
      kind: Foo
      role: Foo
      manageStatus: false
  2. Use the operator_sdk.util.k8s_status Ansible module to update the subresource. For example, to update with key foo and value bar, operator_sdk.util can be used as shown:

    - operator_sdk.util.k8s_status:
        api_version: app.example.com/v1
        kind: Foo
        name: "{{ meta.name }}"
        namespace: "{{ meta.namespace }}"
        status:
          foo: bar

    Collections can also be declared in the role's meta/main.yml file, which is included for new scaffolded Ansible-based Operators.

    collections:
      - operator_sdk.util

    Declaring collections in the role meta allows you to invoke the k8s_status module directly:

    k8s_status:
      <snip>
      status:
        foo: bar

4.2.6. Additional resources

4.3. Creating Helm-based Operators

This guide outlines Helm chart support in the Operator SDK and walks Operator authors through an example of building and running an Nginx Operator with the operator-sdk CLI tool that uses an existing Helm chart.

4.3.1. Helm chart support in the Operator SDK

The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. This framework includes the Operator SDK, which assists developers in bootstrapping and building an Operator based on their expertise without requiring knowledge of Kubernetes API complexities.

One of the Operator SDK’s options for generating an Operator project includes leveraging an existing Helm chart to deploy Kubernetes resources as a unified application, without having to write any Go code. Such Helm-based Operators are designed to excel at stateless applications that require very little logic when rolled out, because changes should be applied to the Kubernetes objects that are generated as part of the chart. This may sound limiting, but can be sufficient for a surprising amount of use cases, as shown by the proliferation of Helm charts built by the Kubernetes community.

The main function of an Operator is to read from a custom object that represents your application instance and have its desired state match what is running. In the case of a Helm-based Operator, the object’s spec field is a list of configuration options that are typically described in Helm’s values.yaml file. Instead of setting these values with flags using the Helm CLI (for example, helm install -f values.yaml), you can express them within a Custom Resource (CR), which, as a native Kubernetes object, provides the benefits of RBAC and an audit trail.

Consider the following example of a simple CR called Tomcat:

apiVersion: apache.org/v1alpha1
kind: Tomcat
metadata:
  name: example-app
spec:
  replicaCount: 2

The replicaCount value, 2 in this case, is propagated into the chart’s templates where the following is used:

{{ .Values.replicaCount }}
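
For illustration, a deployment template in the chart might consume this value as shown in the following hypothetical excerpt; it is a sketch, not the exact template of any particular chart:

# templates/deployment.yaml (hypothetical excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  # Rendered from the CR spec above, so replicaCount: 2 becomes replicas: 2
  replicas: {{ .Values.replicaCount }}

Any value that the CR spec does not override falls back to the default defined in the chart’s values.yaml file.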

After an Operator is built and deployed, you can deploy a new instance of an app by creating a new instance of a CR, or list the different instances running in all environments using the oc command:

$ oc get Tomcats --all-namespaces

There is no requirement to use the Helm CLI or install Tiller; Helm-based Operators import code from the Helm project. All you have to do is have an instance of the Operator running and register the CR with a Custom Resource Definition (CRD). And because it obeys RBAC, you can more easily prevent production changes.

4.3.2. Installing the Operator SDK CLI

The Operator SDK has a CLI tool that assists developers in creating, building, and deploying a new Operator project. You can install the SDK CLI on your workstation so you are prepared to start authoring your own Operators.

Note

This guide uses minikube v0.25.0+ as the local Kubernetes cluster and Quay.io for the public registry.

4.3.2.1. Installing from GitHub release

You can download and install a pre-built release binary of the SDK CLI from the project on GitHub.

Prerequisites

  • Go v1.13+
  • docker v17.03+, podman v1.2.0+, or buildah v1.7+
  • OpenShift CLI (oc) v4.6+ installed
  • Access to a cluster based on Kubernetes v1.12.0+
  • Access to a container registry

Procedure

  1. Set the release version variable:

    $ RELEASE_VERSION=v0.19.4
  2. Download the release binary.

    • For Linux:

      $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    • For macOS:

      $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
  3. Verify the downloaded release binary.

    1. Download the provided ASC file.

      • For Linux:

        $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
      • For macOS:

        $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
    2. Place the binary and corresponding ASC file into the same directory and run the following command to verify the binary:

      • For Linux:

        $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
      • For macOS:

        $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc

      If you do not have the public key of the maintainer on your workstation, you will get the following error:

      Example output with error

      gpg: assuming signed data in 'operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin'
      gpg: Signature made Fri Apr  5 20:03:22 2019 CEST
      gpg:                using RSA key <key_id> 1
      gpg: Can't check signature: No public key

      1
      RSA key string.

      To download the key, run the following command, replacing <key_id> with the RSA key string provided in the output of the previous command:

      $ gpg [--keyserver keys.gnupg.net] --recv-key "<key_id>" 1
      1
      If you do not have a key server configured, specify one with the --keyserver option.
  4. Install the release binary in your PATH:

    • For Linux:

      $ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
      $ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu /usr/local/bin/operator-sdk
      $ rm operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    • For macOS:

      $ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
      $ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin /usr/local/bin/operator-sdk
      $ rm operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
  5. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

4.3.2.2. Installing from Homebrew

You can install the SDK CLI using Homebrew.

Prerequisites

  • Homebrew
  • docker v17.03+, podman v1.2.0+, or buildah v1.7+
  • OpenShift CLI (oc) v4.6+ installed
  • Access to a cluster based on Kubernetes v1.12.0+
  • Access to a container registry

Procedure

  1. Install the SDK CLI using the brew command:

    $ brew install operator-sdk
  2. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

4.3.2.3. Compiling and installing from source

You can obtain the Operator SDK source code to compile and install the SDK CLI.

Prerequisites

  • Git
  • Go v1.13+
  • docker v17.03+, podman v1.2.0+, or buildah v1.7+
  • OpenShift CLI (oc) v4.6+ installed
  • Access to a cluster based on Kubernetes v1.12.0+
  • Access to a container registry

Procedure

  1. Clone the operator-sdk repository:

    $ mkdir -p $GOPATH/src/github.com/operator-framework
    $ cd $GOPATH/src/github.com/operator-framework
    $ git clone https://github.com/operator-framework/operator-sdk
    $ cd operator-sdk
  2. Check out the desired release branch:

    $ git checkout master
  3. Compile and install the SDK CLI:

    $ make dep
    $ make install

    This installs the CLI binary operator-sdk at $GOPATH/bin.

  4. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

4.3.3. Building a Helm-based Operator using the Operator SDK

This procedure walks through an example of building a simple Nginx Operator powered by a Helm chart using tools and libraries provided by the Operator SDK.

Tip

It is best practice to build a new Operator for each chart. This can allow for more native-behaving Kubernetes APIs (for example, oc get Nginx) and flexibility if you ever want to write a fully-fledged Operator in Go, migrating away from a Helm-based Operator.

Prerequisites

  • Operator SDK CLI installed on the development workstation
  • Access to a Kubernetes-based cluster v1.11.3+ (for example OpenShift Container Platform 4.6) using an account with cluster-admin permissions
  • OpenShift CLI (oc) v4.6+ installed

Procedure

  1. Create a new Operator project. A namespace-scoped Operator watches and manages resources in a single namespace. Namespace-scoped Operators are preferred because of their flexibility. They enable decoupled upgrades, namespace isolation for failures and monitoring, and differing API definitions.

    To create a new Helm-based, namespace-scoped nginx-operator project, use the following command:

    $ operator-sdk new nginx-operator \
      --api-version=example.com/v1alpha1 \
      --kind=Nginx \
      --type=helm
    $ cd nginx-operator

    This creates the nginx-operator project specifically for watching the Nginx resource with APIVersion example.com/v1alpha1 and Kind Nginx.

  2. Customize the Operator logic.

    For this example, the nginx-operator executes the following reconciliation logic for each Nginx Custom Resource (CR):

    • Create a Nginx Deployment if it does not exist.
    • Create a Nginx Service if it does not exist.
    • Create a Nginx Ingress if it is enabled and does not exist.
    • Ensure that the Deployment, Service, and optional Ingress match the desired configuration (for example, replica count, image, service type) as specified by the Nginx CR.

    By default, the nginx-operator watches Nginx resource events as shown in the watches.yaml file and executes Helm releases using the specified chart:

    - version: v1alpha1
      group: example.com
      kind: Nginx
      chart: /opt/helm/helm-charts/nginx
    1. Review the Nginx Helm chart.

      When a Helm Operator project is created, the Operator SDK creates an example Helm chart that contains a set of templates for a simple Nginx release.

      For this example, templates are available for Deployment, Service, and Ingress resources, along with a NOTES.txt template, which Helm chart developers use to convey helpful information about a release.

      If you are not already familiar with Helm Charts, take a moment to review the Helm Chart developer documentation.

    2. Understand the Nginx CR spec.

      Helm uses a concept called values to provide customizations to a Helm chart’s defaults, which are defined in the Helm chart’s values.yaml file.

      Override these defaults by setting the desired values in the CR spec. You can use the number of replicas as an example:

      1. First, inspect the helm-charts/nginx/values.yaml file to find that the chart has a value called replicaCount and it is set to 1 by default. To have 2 Nginx instances in your deployment, your CR spec must contain replicaCount: 2.

        Update the deploy/crds/example.com_v1alpha1_nginx_cr.yaml file to look like the following:

        apiVersion: example.com/v1alpha1
        kind: Nginx
        metadata:
          name: example-nginx
        spec:
          replicaCount: 2
      2. Similarly, the default service port is set to 80. To instead use 8080, update the deploy/crds/example.com_v1alpha1_nginx_cr.yaml file again by adding the service port override:

        apiVersion: example.com/v1alpha1
        kind: Nginx
        metadata:
          name: example-nginx
        spec:
          replicaCount: 2
          service:
            port: 8080

        The Helm Operator applies the entire spec as if it was the contents of a values file, just like the helm install -f ./overrides.yaml command works.

  3. Deploy the CRD.

    Before running the Operator, Kubernetes needs to know about the new custom resource definition (CRD) that the Operator will be watching. Deploy the following CRD:

    $ oc create -f deploy/crds/example_v1alpha1_nginx_crd.yaml
  4. Build and run the Operator.

    There are two ways to build and run the Operator:

    • As a Pod inside a Kubernetes cluster.
    • As a Go program outside the cluster using the operator-sdk run --local command.

    Choose one of the following methods:

    1. Run as a Pod inside a Kubernetes cluster. This is the preferred method for production use.

      1. Build the nginx-operator image and push it to a registry:

        $ operator-sdk build quay.io/example/nginx-operator:v0.0.1
        $ podman push quay.io/example/nginx-operator:v0.0.1
      2. Deployment manifests are generated in the deploy/operator.yaml file. The deployment image in this file needs to be modified from the placeholder REPLACE_IMAGE to the previously built image. To do this, run:

        $ sed -i 's|REPLACE_IMAGE|quay.io/example/nginx-operator:v0.0.1|g' deploy/operator.yaml
      3. Deploy the nginx-operator:

        $ oc create -f deploy/service_account.yaml
        $ oc create -f deploy/role.yaml
        $ oc create -f deploy/role_binding.yaml
        $ oc create -f deploy/operator.yaml
      4. Verify that the nginx-operator is up and running:

        $ oc get deployment

        Example output

        NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
        nginx-operator       1         1         1            1           1m

    2. Run outside the cluster. This method is preferred during the development cycle to speed up deployment and testing.

      It is important that the chart path referenced in the watches.yaml file exists on your machine. By default, the watches.yaml file is scaffolded to work with an Operator image built with the operator-sdk build command. When developing and testing your Operator with the operator-sdk run --local command, the SDK looks in your local file system for this path.

      1. Create a symlink at this location to point to your Helm chart’s path:

        $ sudo mkdir -p /opt/helm/helm-charts
        $ sudo ln -s $PWD/helm-charts/nginx /opt/helm/helm-charts/nginx
      2. To run the Operator locally with the default Kubernetes configuration file present at $HOME/.kube/config:

        $ operator-sdk run --local

        To run the Operator locally with a provided Kubernetes configuration file:

        $ operator-sdk run --local --kubeconfig=<path_to_config>
  5. Deploy the Nginx CR.

    Apply the Nginx CR that you modified earlier:

    $ oc apply -f deploy/crds/example.com_v1alpha1_nginx_cr.yaml

    Ensure that the nginx-operator creates the Deployment for the CR:

    $ oc get deployment

    Example output

    NAME                                           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    example-nginx-b9phnoz9spckcrua7ihrbkrt1        2         2         2            2           1m

    Check the pods to confirm two replicas were created:

    $ oc get pods

    Example output

    NAME                                                      READY     STATUS    RESTARTS   AGE
    example-nginx-b9phnoz9spckcrua7ihrbkrt1-f8f9c875d-fjcr9   1/1       Running   0          1m
    example-nginx-b9phnoz9spckcrua7ihrbkrt1-f8f9c875d-ljbzl   1/1       Running   0          1m

    Check that the Service port is set to 8080:

    $ oc get service

    Example output

    NAME                                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
    example-nginx-b9phnoz9spckcrua7ihrbkrt1   ClusterIP   10.96.26.3   <none>        8080/TCP   1m

  6. Update the replicaCount and remove the port.

    Change the spec.replicaCount field from 2 to 3, remove the spec.service field, and apply the change:

    $ cat deploy/crds/example.com_v1alpha1_nginx_cr.yaml

    Example output

    apiVersion: "example.com/v1alpha1"
    kind: "Nginx"
    metadata:
      name: "example-nginx"
    spec:
      replicaCount: 3

    $ oc apply -f deploy/crds/example.com_v1alpha1_nginx_cr.yaml

    Confirm that the Operator changes the Deployment size:

    $ oc get deployment

    Example output

    NAME                                           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    example-nginx-b9phnoz9spckcrua7ihrbkrt1        3         3         3            3           1m

    Check that the Service port is set to the default 80:

    $ oc get service

    Example output

    NAME                                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)  AGE
    example-nginx-b9phnoz9spckcrua7ihrbkrt1   ClusterIP   10.96.26.3   <none>        80/TCP   1m

  7. Clean up the resources:

    $ oc delete -f deploy/crds/example.com_v1alpha1_nginx_cr.yaml
    $ oc delete -f deploy/operator.yaml
    $ oc delete -f deploy/role_binding.yaml
    $ oc delete -f deploy/role.yaml
    $ oc delete -f deploy/service_account.yaml
    $ oc delete -f deploy/crds/example_v1alpha1_nginx_crd.yaml

4.3.4. Additional resources

4.4. Generating a ClusterServiceVersion (CSV)

A ClusterServiceVersion (CSV) is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which Custom Resources (CRs) it manages or depends on.

The Operator SDK includes the generate csv subcommand to generate a ClusterServiceVersion (CSV) for the current Operator project customized using information contained in manually-defined YAML manifests and Operator source files.

A CSV-generating command removes the need for Operator authors to have in-depth OLM knowledge in order for their Operator to interact with OLM or publish metadata to the Catalog Registry. Further, because the CSV spec will likely change over time as new Kubernetes and OLM features are implemented, the Operator SDK is designed to easily extend its update system to handle new CSV features going forward.

The CSV version is the same as the Operator’s, and a new CSV is generated when upgrading Operator versions. Operator authors can use the --csv-version flag to have their Operators' state encapsulated in a CSV with the supplied semantic version:

$ operator-sdk generate csv --csv-version <version>

This action is idempotent and only updates the CSV file when a new version is supplied, or a YAML manifest or source file is changed. Operator authors should not have to directly modify most fields in a CSV manifest. Those that require modification are defined in this guide. For example, the CSV version must be included in metadata.name.
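
For example, a CSV for version 0.1.1 of a hypothetical app-operator carries the version in both metadata.name and spec.version, as in the following sketch with illustrative values:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: app-operator.v0.1.1   # CSV version included in metadata.name
spec:
  version: 0.1.1              # semantic version supplied with --csv-version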

4.4.1. How CSV generation works

An Operator project’s deploy/ directory is the standard location for all manifests required to deploy an Operator. The Operator SDK can use data from manifests in deploy/ to write a CSV. The following command:

$ operator-sdk generate csv --csv-version <version>

writes a CSV YAML file to the deploy/olm-catalog/ directory by default.

Exactly three types of manifests are required to generate a CSV:

  • operator.yaml
  • *_{crd,cr}.yaml
  • RBAC role files, for example role.yaml

Operator authors may have different versioning requirements for these files and can configure which specific files are included in the deploy/olm-catalog/csv-config.yaml file.

Workflow

Depending on whether an existing CSV is detected, and assuming all configuration defaults are used, the generate csv subcommand either:

  • Creates a new CSV, with the same location and naming convention as exists currently, using available data in YAML manifests and source files.

    1. The update mechanism checks for an existing CSV in deploy/. When one is not found, it creates a ClusterServiceVersion object, referred to here as a cache, and populates fields easily derived from Operator metadata, such as Kubernetes API ObjectMeta.
    2. The update mechanism searches deploy/ for manifests that contain data a CSV uses, such as a Deployment resource, and sets the appropriate CSV fields in the cache with this data.
    3. After the search completes, every cache field populated is written back to a CSV YAML file.

or:

  • Updates an existing CSV at the currently pre-defined location, using available data in YAML manifests and source files.

    1. The update mechanism checks for an existing CSV in deploy/. When one is found, the CSV YAML file contents are marshaled into a ClusterServiceVersion cache.
    2. The update mechanism searches deploy/ for manifests that contain data a CSV uses, such as a Deployment resource, and sets the appropriate CSV fields in the cache with this data.
    3. After the search completes, every cache field populated is written back to a CSV YAML file.
Note

Individual YAML fields are overwritten and not the entire file, as descriptions and other non-generated parts of a CSV should be preserved.

4.4.2. CSV composition configuration

Operator authors can configure CSV composition by populating several fields in the deploy/olm-catalog/csv-config.yaml file:

Field | Description

operator-path (string)

The Operator resource manifest file path. Defaults to deploy/operator.yaml.

crd-cr-path-list (string(, string)*)

A list of CRD and CR manifest file paths. Defaults to [deploy/crds/*_{crd,cr}.yaml].

rbac-path-list (string(, string)*)

A list of RBAC role manifest file paths. Defaults to [deploy/role.yaml].
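
As an illustration, a deploy/olm-catalog/csv-config.yaml file that sets all three fields might look like the following sketch. The manifest paths are hypothetical, and YAML list syntax is assumed for the list-valued fields:

# deploy/olm-catalog/csv-config.yaml (hypothetical)
operator-path: deploy/operator.yaml
crd-cr-path-list:
  - deploy/crds/app_v1alpha1_app_crd.yaml
  - deploy/crds/app_v1alpha1_app_cr.yaml
rbac-path-list:
  - deploy/role.yaml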

4.4.3. Manually-defined CSV fields

Many CSV fields cannot be populated using generated, non-SDK-specific manifests. These fields are mostly human-written, English metadata about the Operator and various Custom Resource Definitions (CRDs).

Operator authors must directly modify their CSV YAML file, adding personalized data to the following required fields. The Operator SDK gives a warning during CSV generation when a lack of data in any of the required fields is detected.

Table 4.5. Required

Field | Description

metadata.name

A unique name for this CSV. Operator version should be included in the name to ensure uniqueness, for example app-operator.v0.1.1.

metadata.capabilities

The Operator’s capability level according to the Operator maturity model. Options include Basic Install, Seamless Upgrades, Full Lifecycle, Deep Insights, and Auto Pilot.

spec.displayName

A public name to identify the Operator.

spec.description

A short description of the Operator’s functionality.

spec.keywords

Keywords describing the Operator.

spec.maintainers

Human or organizational entities maintaining the Operator, with a name and email.

spec.provider

The Operator’s provider (usually an organization), with a name.

spec.labels

Key-value pairs to be used by Operator internals.

spec.version

Semantic version of the Operator, for example 0.1.1.

spec.customresourcedefinitions

Any CRDs the Operator uses. This field is populated automatically by the Operator SDK if any CRD YAML files are present in deploy/. However, several fields not in the CRD manifest spec require user input:

  • description: description of the CRD.
  • resources: any Kubernetes resources leveraged by the CRD, for example pods and StatefulSets.
  • specDescriptors: UI hints for inputs and outputs of the Operator.

Table 4.6. Optional

Field | Description

spec.replaces

The name of the CSV being replaced by this CSV.

spec.links

URLs (for example, websites and documentation) pertaining to the Operator or application being managed, each with a name and url.

spec.selector

Selectors by which the Operator can pair resources in a cluster.

spec.icon

A base64-encoded icon unique to the Operator, set in a base64data field with a mediatype.

spec.maturity

The level of maturity the software has achieved at this version. Options include planning, pre-alpha, alpha, beta, stable, mature, inactive, and deprecated.

Further details on what data each field above should hold are found in the CSV spec.
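
Pulling several of the required fields together, the hand-edited portion of a CSV might resemble the following sketch. All values are illustrative, and field placement follows the tables above:

metadata:
  name: app-operator.v0.1.1
  capabilities: Basic Install
spec:
  displayName: App Operator
  description: Manages instances of the example App workload.
  keywords:
    - app
  maintainers:
    - name: Example Maintainer
      email: maintainer@example.com
  provider:
    name: Example Organization
  version: 0.1.1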

Note

Several YAML fields currently requiring user intervention can potentially be parsed from Operator code; such Operator SDK functionality will be addressed in a future design document.

4.4.4. Generating a CSV

Prerequisites

  • An Operator project generated using the Operator SDK

Procedure

  1. In your Operator project, configure your CSV composition by modifying the deploy/olm-catalog/csv-config.yaml file, if desired.
  2. Generate the CSV:

    $ operator-sdk generate csv --csv-version <version>
  3. In the new CSV generated in the deploy/olm-catalog/ directory, ensure all required, manually-defined fields are set appropriately.

4.4.5. Enabling your Operator for restricted network environments

As an Operator author, your Operator must meet additional requirements to run properly in a restricted network, or disconnected, environment.

Operator requirements for supporting disconnected mode

  • In the ClusterServiceVersion (CSV) of your Operator:

    • List any related images, or other container images that your Operator might require to perform its functions.
    • Reference all specified images by a digest (SHA) and not by a tag.
  • All dependencies of your Operator must also support running in a disconnected mode.
  • Your Operator must not require any off-cluster resources.

For the CSV requirements, you can make the following changes as the Operator author.

Prerequisites

  • An Operator project with a CSV.

Procedure

  1. Use SHA references to related images in two places in the CSV for your Operator:

    1. Update spec.relatedImages:

      ...
      spec:
        relatedImages: 1
          - name: etcd-operator 2
            image: quay.io/etcd-operator/operator@sha256:d134a9865524c29fcf75bbc4469013bc38d8a15cb5f41acfddb6b9e492f556e4 3
          - name: etcd-image
            image: quay.io/etcd-operator/etcd@sha256:13348c15263bd8838ec1d5fc4550ede9860fcbb0f843e48cbccec07810eebb68
      ...
      1
      Create a relatedImages section and set the list of related images.
      2
      Specify a unique identifier for the image.
      3
      Specify each image by a digest (SHA), not by an image tag.
    2. Update the env section of the Operator’s Deployment when declaring environment variables that inject the image that the Operator should use:

      spec:
        install:
          spec:
            deployments:
            - name: etcd-operator-v3.1.1
              spec:
                replicas: 1
                selector:
                  matchLabels:
                    name: etcd-operator
                strategy:
                  type: Recreate
                template:
                  metadata:
                    labels:
                      name: etcd-operator
                  spec:
                    containers:
                    - args:
                      - /opt/etcd/bin/etcd_operator_run.sh
                      env:
                      - name: WATCH_NAMESPACE
                        valueFrom:
                          fieldRef:
                            fieldPath: metadata.annotations['olm.targetNamespaces']
                      - name: ETCD_OPERATOR_DEFAULT_ETCD_IMAGE 1
                        value: quay.io/etcd-operator/etcd@sha256:13348c15263bd8838ec1d5fc4550ede9860fcbb0f843e48cbccec07810eebb68 2
                      - name: ETCD_LOG_LEVEL
                        value: INFO
                      image: quay.io/etcd-operator/operator@sha256:d134a9865524c29fcf75bbc4469013bc38d8a15cb5f41acfddb6b9e492f556e4 3
                      imagePullPolicy: IfNotPresent
                      livenessProbe:
                        httpGet:
                          path: /healthy
                          port: 8080
                        initialDelaySeconds: 10
                        periodSeconds: 30
                      name: etcd-operator
                      readinessProbe:
                        httpGet:
                          path: /ready
                          port: 8080
                        initialDelaySeconds: 10
                        periodSeconds: 30
                      resources: {}
                    serviceAccountName: etcd-operator
          strategy: deployment
      1
      Inject the images referenced by the Operator via environment variables.
      2
      Specify each image by a digest (SHA), not by an image tag.
      3
      Also reference the Operator container image by a digest (SHA), not by an image tag.
  2. Add the Disconnected annotation, which indicates that the Operator works in a disconnected environment:

    metadata:
      annotations:
        operators.openshift.io/infrastructure-features: '["Disconnected"]'

    Operators can be filtered in OperatorHub by this infrastructure feature.

4.4.6. Enabling your Operator for multiple architectures and operating systems

Operator Lifecycle Manager (OLM) assumes that all Operators run on Linux hosts. However, as an Operator author, you can specify whether your Operator supports managing workloads on other architectures, if worker nodes are available in the OpenShift Container Platform cluster.

If your Operator supports variants other than AMD64 and Linux, you can add labels to the CSV that provides the Operator in order to list the supported variants. Labels indicating supported architectures and operating systems are defined by the following:

labels:
    operatorframework.io/arch.<arch>: supported 1
    operatorframework.io/os.<os>: supported 2
1
Set <arch> to a supported string.
2
Set <os> to a supported string.
Note

Only the labels on the channel head of the default channel are considered for filtering PackageManifests by label. This means, for example, that providing an additional architecture for an Operator in the non-default channel is possible, but that architecture is not available for filtering in the PackageManifest API.

If a CSV does not include an os label, it is treated as if it has the following Linux support label by default:

labels:
    operatorframework.io/os.linux: supported

If a CSV does not include an arch label, it is treated as if it has the following AMD64 support label by default:

labels:
    operatorframework.io/arch.amd64: supported

If an Operator supports multiple node architectures or operating systems, you can add multiple labels, as well.

Prerequisites

  • An Operator project with a CSV.
  • To support listing multiple architectures and operating systems, your Operator image referenced in the CSV must be a manifest list image.
  • For the Operator to work properly in restricted network, or disconnected, environments, the image referenced must also be specified using a digest (SHA) and not by a tag.

Procedure

  • Add a label in your CSV’s metadata.labels for each supported architecture and operating system that your Operator supports:

    labels:
      operatorframework.io/arch.s390x: supported
      operatorframework.io/os.zos: supported
      operatorframework.io/os.linux: supported 1
      operatorframework.io/arch.amd64: supported 2
    1 2
    After you add a new architecture or operating system, you must also now include the default os.linux and arch.amd64 variants explicitly.

4.4.6.1. Architecture and operating system support for Operators

The following strings are supported in Operator Lifecycle Manager (OLM) on OpenShift Container Platform when labeling or filtering Operators that support multiple architectures and operating systems:

Table 4.7. Architectures supported on OpenShift Container Platform

Architecture                   String

AMD64                          amd64
64-bit PowerPC little-endian   ppc64le
IBM Z                          s390x

Table 4.8. Operating systems supported on OpenShift Container Platform

Operating system   String

Linux              linux
z/OS               zos

Note

Different versions of OpenShift Container Platform and other Kubernetes-based distributions might support a different set of architectures and operating systems.

4.4.7. Setting a suggested namespace

Some Operators must be deployed in a specific namespace, or with ancillary resources in specific namespaces, in order to work properly. If resolved from a Subscription, OLM defaults the namespaced resources of an Operator to the namespace of its Subscription.

As an Operator author, you can instead express a desired target namespace as part of your CSV to maintain control over the final namespaces of the resources installed for your Operator. When adding the Operator to a cluster using OperatorHub, this enables the web console to autopopulate the suggested namespace for the cluster administrator during the installation process.

Procedure

  • In your CSV, set the operatorframework.io/suggested-namespace annotation to your suggested namespace:

    metadata:
      annotations:
        operatorframework.io/suggested-namespace: <namespace> 1
    1
    Set your suggested namespace.

4.4.8. Defining webhooks

Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator.

The ClusterServiceVersion (CSV) resource of an Operator can include a webhookdefinitions section to define the following types of webhooks:

  • Admission webhooks (validating and mutating)
  • Conversion webhooks

Procedure

  • Add a webhookdefinitions section to the spec section of the CSV of your Operator and include any webhook definitions using a type of ValidatingAdmissionWebhook, MutatingAdmissionWebhook, or ConversionWebhook. The following example contains all three types of webhooks:

    CSV containing webhooks

      apiVersion: operators.coreos.com/v1alpha1
      kind: ClusterServiceVersion
      metadata:
        name: webhook-operator.v0.0.1
      spec:
        customresourcedefinitions:
          owned:
          - kind: WebhookTest
            name: webhooktests.webhook.operators.coreos.io 1
            version: v1
        install:
          spec:
            deployments:
            - name: webhook-operator-webhook
              ...
              ...
              ...
          strategy: deployment
        installModes:
        - supported: false
          type: OwnNamespace
        - supported: false
          type: SingleNamespace
        - supported: false
          type: MultiNamespace
        - supported: true
          type: AllNamespaces
        webhookdefinitions:
        - type: ValidatingAdmissionWebhook 2
          admissionReviewVersions:
          - v1beta1
          - v1
          containerPort: 443
          targetPort: 4343
          deploymentName: webhook-operator-webhook
          failurePolicy: Fail
          generateName: vwebhooktest.kb.io
          rules:
          - apiGroups:
            - webhook.operators.coreos.io
            apiVersions:
            - v1
            operations:
            - CREATE
            - UPDATE
            resources:
            - webhooktests
          sideEffects: None
          webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest
        - type: MutatingAdmissionWebhook 3
          admissionReviewVersions:
          - v1beta1
          - v1
          containerPort: 443
          targetPort: 4343
          deploymentName: webhook-operator-webhook
          failurePolicy: Fail
          generateName: mwebhooktest.kb.io
          rules:
          - apiGroups:
            - webhook.operators.coreos.io
            apiVersions:
            - v1
            operations:
            - CREATE
            - UPDATE
            resources:
            - webhooktests
          sideEffects: None
          webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest
        - type: ConversionWebhook 4
          admissionReviewVersions:
          - v1beta1
          - v1
          containerPort: 443
          targetPort: 4343
          deploymentName: webhook-operator-webhook
          generateName: cwebhooktest.kb.io
          sideEffects: None
          webhookPath: /convert
          conversionCRDs:
          - webhooktests.webhook.operators.coreos.io 5
    ...

    1
    The CRDs targeted by the conversion webhook must exist here.
    2
    A validating admission webhook.
    3
    A mutating admission webhook.
    4
    A conversion webhook.
    5
    The spec.preserveUnknownFields property of each CRD must be set to false or nil.

4.4.8.1. Webhook considerations for OLM

When deploying an Operator with webhooks using Operator Lifecycle Manager (OLM), you must define the following:

  • The type field must be set to either ValidatingAdmissionWebhook, MutatingAdmissionWebhook, or ConversionWebhook, or the CSV will be placed in a failed phase.
  • The CSV must contain a Deployment whose name is equivalent to the value supplied in the deploymentName field of the webhookdefinition.

When the webhook is created, OLM ensures that the webhook only acts upon namespaces that match the OperatorGroup that the Operator is deployed in.

Certificate authority constraints

OLM is configured to provide each Deployment with a single certificate authority (CA). The logic that generates and mounts the CA into the Deployment was originally used by the API Service lifecycle logic. As a result:

  • The TLS certificate file is mounted to the Deployment at /apiserver.local.config/certificates/apiserver.crt.
  • The TLS key file is mounted to the Deployment at /apiserver.local.config/certificates/apiserver.key.
Admission webhook rules constraints

To prevent an Operator from configuring the cluster into an unrecoverable state, OLM places the CSV in the failed phase if the rules defined in an admission webhook intercept any of the following requests:

  • Requests that target all groups
  • Requests that target the operators.coreos.com group
  • Requests that target the ValidatingWebhookConfigurations or MutatingWebhookConfigurations resources
Conversion webhook constraints

OLM places the CSV in the failed phase if a conversion webhook definition does not adhere to the following constraints:

  • CSVs featuring a conversion webhook can only support the AllNamespaces InstallMode.
  • The CRD targeted by the conversion webhook must have its spec.preserveUnknownFields field set to false or nil.
  • The conversion webhook defined in the CSV must target an owned CRD.
  • There can only be one conversion webhook on the entire cluster for a given CRD.

4.4.9. Understanding your Custom Resource Definitions (CRDs)

There are two types of Custom Resource Definitions (CRDs) that your Operator may use: ones that are owned by it and ones that it depends on, which are required.

4.4.9.1. Owned CRDs

The CRDs owned by your Operator are the most important part of your CSV. This establishes the link between your Operator and the required RBAC rules, dependency management, and other Kubernetes concepts.

It is common for your Operator to use multiple CRDs to link together concepts, such as top-level database configuration in one object and a representation of ReplicaSets in another. Each one should be listed out in the CSV file.

Table 4.9. Owned CRD fields

Field | Description | Required/Optional

Name

The full name of your CRD.

Required

Version

The version of that object API.

Required

Kind

The machine readable name of your CRD.

Required

DisplayName

A human readable version of your CRD name, for example MongoDB Standalone.

Required

Description

A short description of how this CRD is used by the Operator or a description of the functionality provided by the CRD.

Required

Group

The API group that this CRD belongs to, for example database.example.com.

Optional

Resources

Your CRDs own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the Service or Ingress rule that exposes a database.

It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, ConfigMaps that store internal state that should not be modified by a user should not appear here.

Optional

SpecDescriptors, StatusDescriptors, and ActionDescriptors

These Descriptors are a way to hint UIs with certain inputs or outputs of your Operator that are most important to an end user. If your CRD contains the name of a Secret or ConfigMap that the user must provide, you can specify that here. These items are linked and highlighted in compatible UIs.

There are three types of descriptors:

  • SpecDescriptors: A reference to fields in the spec block of an object.
  • StatusDescriptors: A reference to fields in the status block of an object.
  • ActionDescriptors: A reference to actions that can be performed on an object.

All Descriptors accept the following fields:

  • DisplayName: A human readable name for the Spec, Status, or Action.
  • Description: A short description of the Spec, Status, or Action and how it is used by the Operator.
  • Path: A dot-delimited path of the field on the object that this descriptor describes.
  • X-Descriptors: Used to determine which "capabilities" this descriptor has and which UI component to use. See the openshift/console project for a canonical list of React UI X-Descriptors for OpenShift Container Platform.

Also see the openshift/console project for more information on Descriptors in general.

Optional

The following example depicts a MongoDB Standalone CRD that requires some user input in the form of a Secret and ConfigMap, and orchestrates Services, StatefulSets, pods and ConfigMaps:

Example owned CRD

      - displayName: MongoDB Standalone
        group: mongodb.com
        kind: MongoDbStandalone
        name: mongodbstandalones.mongodb.com
        resources:
          - kind: Service
            name: ''
            version: v1
          - kind: StatefulSet
            name: ''
            version: v1beta2
          - kind: Pod
            name: ''
            version: v1
          - kind: ConfigMap
            name: ''
            version: v1
        specDescriptors:
          - description: Credentials for Ops Manager or Cloud Manager.
            displayName: Credentials
            path: credentials
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret'
          - description: Project this deployment belongs to.
            displayName: Project
            path: project
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap'
          - description: MongoDB version to be installed.
            displayName: Version
            path: version
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:label'
        statusDescriptors:
          - description: The status of each of the pods for the MongoDB cluster.
            displayName: Pod Status
            path: pods
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:podStatuses'
        version: v1
        description: >-
          MongoDB Deployment consisting of only one host. No replication of
          data.

4.4.9.2. Required CRDs

Relying on other required CRDs is completely optional and only exists to reduce the scope of individual Operators and provide a way to compose multiple Operators together to solve an end-to-end use case.

An example of this is an Operator that might set up an application and install an etcd cluster (from an etcd Operator) to use for distributed locking and a Postgres database (from a Postgres Operator) for data storage.

Operator Lifecycle Manager (OLM) checks against the available CRDs and Operators in the cluster to fulfill these requirements. If suitable versions are found, the Operators are started within the desired namespace and a Service Account is created for each Operator to create, watch, and modify the Kubernetes resources required.

Table 4.10. Required CRD fields

Field | Description | Required/Optional

Name

The full name of the CRD you require.

Required

Version

The version of that object API.

Required

Kind

The Kubernetes object kind.

Required

DisplayName

A human readable version of the CRD.

Required

Description

A summary of how the component fits in your larger architecture.

Required

Example required CRD

    required:
    - name: etcdclusters.etcd.database.coreos.com
      version: v1beta2
      kind: EtcdCluster
      displayName: etcd Cluster
      description: Represents a cluster of etcd nodes.

4.4.9.3. CRD upgrades

OLM upgrades a custom resource definition (CRD) immediately if it is owned by a singular ClusterServiceVersion (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions:

  • All existing serving versions in the current CRD are present in the new CRD.
  • All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD.
4.4.9.3.1. Adding a new CRD version

Procedure

To add a new version of a CRD to your Operator:

  1. Add a new entry in the CRD resource under the versions section of your CSV.

    For example, if the current CRD has one version v1alpha1 and you want to add a new version v1beta1 and mark it as the new storage version:

    versions:
      - name: v1alpha1
        served: true
        storage: false
      - name: v1beta1 1
        served: true
        storage: true
    1
    Add a new entry for v1beta1.
  2. Ensure the referencing version of the CRD in the owned section of your CSV is updated if the CSV intends to use the new version:

    customresourcedefinitions:
      owned:
      - name: cluster.example.com
        version: v1beta1 1
        kind: cluster
        displayName: Cluster
    1
    Update the version.
  3. Push the updated CRD and CSV to your bundle.
4.4.9.3.2. Deprecating or removing a CRD version

OLM does not allow a serving version of a CRD to be removed right away. Instead, a deprecated version of the CRD must first be disabled by setting the served field in the CRD to false. Then, the non-serving version can be removed on the subsequent CRD upgrade.

Procedure

To deprecate and remove a specific version of a CRD:

  1. Mark the deprecated version as non-serving to indicate this version is no longer in use and may be removed in a subsequent upgrade. For example:

    versions:
      - name: v1alpha1
        served: false 1
        storage: true
    1
    Set to false.
  2. Switch the storage version to a serving version if the version to be deprecated is currently the storage version. For example:

    versions:
      - name: v1alpha1
        served: false
        storage: false 1
      - name: v1beta1
        served: true
        storage: true 2
    1 2
    Update the storage fields accordingly.
    Note

    In order to remove a specific version that is or was the storage version from a CRD, that version must be removed from the storedVersions field in the CRD’s status. OLM attempts to do this for you if it detects that a stored version no longer exists in the new CRD.

  3. Upgrade the CRD with the above changes.
  4. In subsequent upgrade cycles, the non-serving version can be removed completely from the CRD. For example:

    versions:
      - name: v1beta1
        served: true
        storage: true
  5. Ensure the referencing version of the CRD in your CSV’s owned section is updated accordingly if that version is removed from the CRD.

4.4.9.4. CRD templates

Users of your Operator will need to be aware of which options are required versus optional. You can provide templates for each of your Custom Resource Definitions (CRDs) with a minimum set of configuration as an annotation named alm-examples. Compatible UIs will pre-fill this template for users to further customize.

The annotation consists of a list of the kind, for example, the CRD name and the corresponding metadata and spec of the Kubernetes object.

The following full example provides templates for EtcdCluster, EtcdBackup and EtcdRestore:

metadata:
  annotations:
    alm-examples: >-
      [{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdCluster","metadata":{"name":"example","namespace":"default"},"spec":{"size":3,"version":"3.2.13"}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdRestore","metadata":{"name":"example-etcd-cluster"},"spec":{"etcdCluster":{"name":"example-etcd-cluster"},"backupStorageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdBackup","metadata":{"name":"example-etcd-cluster-backup"},"spec":{"etcdEndpoints":["<etcd-cluster-endpoints>"],"storageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}}]

4.4.9.5. Hiding internal objects

It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for users to manipulate and can be confusing to users of the Operator. For example, a database Operator might have a Replication CRD that is created whenever a user creates a Database object with replication: true.

As an Operator author, you can hide any CRDs in the user interface that are not meant for user manipulation by adding the operators.operatorframework.io/internal-objects annotation to the ClusterServiceVersion (CSV) of your Operator.

Procedure

  1. Before marking one of your CRDs as internal, ensure that any debugging information or configuration that might be required to manage the application is reflected on the status or spec block of your CR, if applicable to your Operator.
  2. Add the operators.operatorframework.io/internal-objects annotation to the CSV of your Operator to specify any internal objects to hide in the user interface:

    Internal object annotation

    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
      name: my-operator-v1.2.3
      annotations:
        operators.operatorframework.io/internal-objects: '["my.internal.crd1.io","my.internal.crd2.io"]' 1
    ...

    1
    Set any internal CRDs as an array of strings.

4.4.9.6. Initializing required custom resources

An Operator might require the user to instantiate a custom resource before the Operator can be fully functional. However, it can be challenging for a user to determine what is required or how to define the resource.

As an Operator developer, you can specify a single required custom resource that must be created at the time that the Operator is installed by adding the operatorframework.io/initialization-resource annotation to the ClusterServiceVersion (CSV). The annotation must include a template that contains a complete YAML definition that is required to initialize the resource during installation.

If this annotation is defined, after installing the Operator from the OpenShift Container Platform web console, the user is prompted to create the resource using the template provided in the CSV.

Procedure

  • Add the operatorframework.io/initialization-resource annotation to the CSV of your Operator to specify a required custom resource. For example, the following annotation requires the creation of a StorageCluster resource and provides a full YAML definition:

    Initialization resource annotation

    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
      name: my-operator-v1.2.3
      annotations:
        operatorframework.io/initialization-resource: |-
            {
                "apiVersion": "ocs.openshift.io/v1",
                "kind": "StorageCluster",
                "metadata": {
                    "name": "example-storagecluster"
                },
                "spec": {
                    "manageNodes": false,
                    "monPVCTemplate": {
                        "spec": {
                            "accessModes": [
                                "ReadWriteOnce"
                            ],
                            "resources": {
                                "requests": {
                                    "storage": "10Gi"
                                }
                            },
                            "storageClassName": "gp2"
                        }
                    },
                    "storageDeviceSets": [
                        {
                            "count": 3,
                            "dataPVCTemplate": {
                                "spec": {
                                    "accessModes": [
                                        "ReadWriteOnce"
                                    ],
                                    "resources": {
                                        "requests": {
                                            "storage": "1Ti"
                                        }
                                    },
                                    "storageClassName": "gp2",
                                    "volumeMode": "Block"
                                }
                            },
                            "name": "example-deviceset",
                            "placement": {},
                            "portable": true,
                            "resources": {}
                        }
                    ]
                }
            }
    ...

4.4.10. Understanding your API services

As with CRDs, there are two types of APIServices that your Operator may use: owned and required.

4.4.10.1. Owned APIServices

When a CSV owns an APIService, it is responsible for describing the deployment of the extension api-server that backs it and the group-version-kinds it provides.

An APIService is uniquely identified by the group-version it provides and can be listed multiple times to denote the different kinds it is expected to provide.

Table 4.11. Owned APIService fields

Field | Description | Required/Optional

Group

Group that the APIService provides, for example database.example.com.

Required

Version

Version of the APIService, for example v1alpha1.

Required

Kind

A kind that the APIService is expected to provide.

Required

Name

The plural name for the APIService provided.

Required

DeploymentName

Name of the deployment defined by your CSV that corresponds to your APIService (required for owned APIServices). During the CSV pending phase, the OLM Operator searches the InstallStrategy of your CSV for a Deployment spec with a matching name, and if not found, does not transition the CSV to the install ready phase.

Required

DisplayName

A human readable version of your APIService name, for example MongoDB Standalone.

Required

Description

A short description of how this APIService is used by the Operator or a description of the functionality provided by the APIService.

Required

Resources

Your APIServices own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the Service or Ingress rule that exposes a database.

It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, ConfigMaps that store internal state that should not be modified by a user should not appear here.

Optional

SpecDescriptors, StatusDescriptors, and ActionDescriptors

Essentially the same as for owned CRDs.

Optional
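
The following is a minimal sketch of how these fields might appear under the spec.apiservicedefinitions.owned section of a CSV. The packages.example.com group, PackageManifest kind, and packageserver deployment name are hypothetical values used only for illustration:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: my-operator-v1.2.3
spec:
  apiservicedefinitions:
    owned:
    - group: packages.example.com     # hypothetical group
      version: v1alpha1
      kind: PackageManifest           # hypothetical kind
      name: packagemanifests
      deploymentName: packageserver   # must match a deployment name in the install strategy
      displayName: Package Manifest
      description: Holds information about a package provided by the API service.
      resources:
      - kind: Service
        version: v1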

4.4.10.1.1. APIService Resource Creation

Operator Lifecycle Manager (OLM) is responsible for creating or replacing the Service and APIService resources for each unique owned APIService:

  • Service Pod selectors are copied from the CSV deployment matching the APIServiceDescription’s DeploymentName.
  • A new CA key/cert pair is generated for each installation and the base64-encoded CA bundle is embedded in the respective APIService resource.
4.4.10.1.2. APIService Serving Certs

OLM handles generating a serving key/cert pair whenever an owned APIService is being installed. The serving certificate has a CN containing the host name of the generated Service resource and is signed by the private key of the CA bundle embedded in the corresponding APIService resource.

The cert is stored as a type kubernetes.io/tls Secret in the deployment namespace, and a Volume named apiservice-cert is automatically appended to the Volumes section of the deployment in the CSV matching the APIServiceDescription’s DeploymentName field.

If one does not already exist, a VolumeMount with a matching name is also appended to all containers of that deployment. This allows users to define a VolumeMount with the expected name to accommodate any custom path requirements. The generated VolumeMount’s path defaults to /apiserver.local.config/certificates and any existing VolumeMounts with the same path are replaced.
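
As a rough sketch of the result, the appended volume and volume mount in the matching CSV deployment might look like the following. The container name is hypothetical, and the secret name is a placeholder because OLM determines the actual Secret name during installation:

spec:
  template:
    spec:
      containers:
      - name: example-apiserver           # hypothetical container name
        volumeMounts:
        - name: apiservice-cert
          mountPath: /apiserver.local.config/certificates
      volumes:
      - name: apiservice-cert
        secret:
          secretName: <generated_serving_cert_secret>   # placeholder; set by OLM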

4.4.10.2. Required APIServices

OLM ensures all required CSVs have an APIService that is available and all expected group-version-kinds are discoverable before attempting installation. This allows a CSV to rely on specific kinds provided by APIServices it does not own.

Table 4.12. Required APIService fields

Field | Description | Required/Optional

Group

Group that the APIService provides, for example database.example.com.

Required

Version

Version of the APIService, for example v1alpha1.

Required

Kind

A kind that the APIService is expected to provide.

Required

DisplayName

A human readable version of your APIService name, for example MongoDB Standalone.

Required

Description

A short description of how this APIService is used by the Operator or a description of the functionality provided by the APIService.

Required
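
As a brief sketch, a required APIService entry might appear under the spec.apiservicedefinitions.required section of a CSV as follows; the group, kind, and description are illustrative only:

spec:
  apiservicedefinitions:
    required:
    - group: database.example.com
      version: v1alpha1
      kind: DatabaseCluster            # hypothetical kind provided by another Operator
      displayName: Database Cluster
      description: Represents a database cluster provided by another Operator.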

4.5. Working with bundle images

You can use the Operator SDK to package Operators using the Bundle Format.

4.5.1. Building a bundle image

You can build, push, and validate an Operator bundle image using the Operator SDK.

Prerequisites

  • Operator SDK version 0.19.4
  • podman version 1.4.4+
  • An Operator project generated using the Operator SDK
  • Access to a registry that supports Docker v2-2

Procedure

  1. From your Operator project directory, build the bundle image using the Operator SDK:

    $ operator-sdk bundle create \
        <registry>/<namespace>/<bundle_image_name>:<tag> \ 1
        -b podman 2
    1
    The image tag that you want the bundle image to have.
    2
    The CLI tool to use for building the container image, either docker (default), podman, or buildah. This example uses podman.
    Note

    If your local manifests are not located in the default <project_root>/deploy/olm-catalog/<bundle_name>/manifests, specify the location with the --directory flag.

  2. Log in to the registry where you want to push the bundle image. For example:

    $ podman login <registry>
  3. Push the bundle image to the registry:

    $ podman push <registry>/<namespace>/<bundle_image_name>:<tag>
  4. Validate the bundle image in the remote registry:

    $ operator-sdk bundle validate \
        <registry>/<namespace>/<bundle_image_name>:<tag> \
        -b podman

    Example output

    INFO[0000] Unpacked image layers                                 bundle-dir=/tmp/bundle-041168359 container-tool=podman
    INFO[0000] running podman pull                                   bundle-dir=/tmp/bundle-041168359 container-tool=podman
    INFO[0002] running podman save                                   bundle-dir=/tmp/bundle-041168359 container-tool=podman
    INFO[0002] All validation tests have completed successfully      bundle-dir=/tmp/bundle-041168359 container-tool=podman

4.5.2. Additional resources

4.6. Validating Operators using the scorecard

Operator authors should validate that their Operator is packaged correctly and free of syntax errors. As an Operator author, you can use the Operator SDK’s scorecard tool to validate your Operator packaging and run tests.

Note

OpenShift Container Platform 4.6 supports Operator SDK v0.19.4.

4.6.1. About the scorecard tool

To validate an Operator, the Operator SDK’s scorecard tool begins by creating all resources required by any related Custom Resources (CRs) and the Operator. The scorecard then creates a proxy container in the Operator’s Deployment which is used to record calls to the API server and run some of the tests. The tests performed also examine some of the parameters in the CRs.

4.6.2. Scorecard configuration

The scorecard tool uses a configuration file that allows you to configure internal plug-ins, as well as several global configuration options.

4.6.2.1. Configuration file

The default location for the scorecard tool’s configuration is <project_dir>/.osdk-scorecard.*. The following is an example of a YAML-formatted configuration file:

Scorecard configuration file

scorecard:
  output: json
  plugins:
    - basic: 1
        cr-manifest:
          - "deploy/crds/cache.example.com_v1alpha1_memcached_cr.yaml"
          - "deploy/crds/cache.example.com_v1alpha1_memcachedrs_cr.yaml"
    - olm: 2
        cr-manifest:
          - "deploy/crds/cache.example.com_v1alpha1_memcached_cr.yaml"
          - "deploy/crds/cache.example.com_v1alpha1_memcachedrs_cr.yaml"
        csv-path: "deploy/olm-catalog/memcached-operator/0.0.3/memcached-operator.v0.0.3.clusterserviceversion.yaml"

1
basic tests configured to test two CRs.
2
olm tests configured to test two CRs.

Configuration methods for global options take the following priority, highest to lowest:

Command arguments (if available) → configuration file → default

The configuration file must be in YAML format. As the configuration file might be extended to allow configuration of all operator-sdk subcommands in the future, the scorecard’s configuration must be under a scorecard subsection.

Note

Configuration file support is provided by the viper package. For more information on how viper configuration works, see the viper package README.

4.6.2.2. Command arguments

While most of the scorecard tool’s configuration is done using a configuration file, you can also use the following arguments:

Table 4.13. Scorecard tool arguments

Flag | Type | Description

--bundle, -b

string

The path to a bundle directory used for the bundle validation test.

--config

string

The path to the scorecard configuration file. The default is <project_dir>/.osdk-scorecard. The file type and extension must be .yaml. If a configuration file is not provided or found at the default location, the scorecard exits with an error.

--output, -o

string

Output format. Valid options are text and json. The default format is text, which is designed to be a human readable format. The json format uses the JSON schema output format used for plug-ins defined later.

--kubeconfig

string

The path to the kubeconfig file. It sets the kubeconfig for internal plug-ins.

--version

string

The version of scorecard to run. The default and only valid option is v1alpha2.

--selector, -l

string

The label selector to filter tests on.

--list, -L

bool

If true, only print the test names that would be run based on selector filtering.
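
For example, the following illustrative invocation combines several of these flags, assuming the configuration file is in its default location:

$ operator-sdk scorecard \
    --output json \
    --selector=test=checkspectest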

4.6.2.3. Configuration file options

The scorecard configuration file provides the following options:

Table 4.14. Scorecard configuration file options

Option | Type | Description

bundle

string

Equivalent of the --bundle flag. The path to the OLM bundle directory; when specified, the bundle validation test is run.

output

string

Equivalent of the --output flag. If this option is defined by both the configuration file and the flag, the flag’s value takes priority.

kubeconfig

string

Equivalent of the --kubeconfig flag. If this option is defined by both the configuration file and the flag, the flag’s value takes priority.

plugins

array

An array of plug-in names.

4.6.2.3.1. Basic and OLM plug-ins

The scorecard supports the internal basic and olm plug-ins, which are configured by a plugins section in the configuration file.

Table 4.15. Plug-in options

Option | Type | Description

cr-manifest

[]string

The path(s) for CRs being tested. Required if olm-deployed is unset or false.

csv-path

string

The path to the CSV for the Operator. Required for OLM tests or if olm-deployed is set to true.

olm-deployed

bool

Indicates that the CSV and relevant CRDs have been deployed onto the cluster by OLM.

kubeconfig

string

The path to the kubeconfig file. If both the global kubeconfig and this field are set, this field is used for the plug-in.

namespace

string

The namespace to run the plug-ins in. If unset, the default specified by the kubeconfig file is used.

init-timeout

int

Time in seconds until a timeout during initialization of the Operator.

crds-dir

string

The path to the directory containing CRDs that must be deployed to the cluster.

namespaced-manifest

string

The manifest file with all resources that run within a namespace. By default, the scorecard combines the service_account.yaml, role.yaml, role_binding.yaml, and operator.yaml files from the deploy directory into a temporary manifest to use as the namespaced manifest.

global-manifest

string

The manifest containing required resources that run globally (not namespaced). By default, the scorecard combines all CRDs in the crds-dir directory into a temporary manifest to use as the global manifest.

Note

Currently, using the scorecard with a CSV does not permit multiple CR manifests to be set through the CLI, configuration file, or CSV annotations. You must tear down your Operator in the cluster, re-deploy, and re-run the scorecard for each CR that is tested.

Additional resources

  • You can either set cr-manifest or your CSV’s metadata.annotations['alm-examples'] to provide CRs to the scorecard, but not both. See CRD templates for details.

4.6.3. Tests performed

By default, the scorecard tool has eight internal tests available across two internal plug-ins. If multiple CRs are specified for a plug-in, the test environment is fully cleaned up after each CR so that each CR gets a clean testing environment.

Each test has a short name that uniquely identifies the test. This is useful when selecting a specific test or tests to run. For example:

$ operator-sdk scorecard -o text --selector=test=checkspectest
$ operator-sdk scorecard -o text --selector='test in (checkspectest,checkstatustest)'

4.6.3.1. Basic plug-in

The following basic Operator tests are available from the basic plug-in:

Table 4.16. basic plug-in tests

Test | Description | Short name

Spec Block Exists

This test checks the Custom Resource(s) created in the cluster to make sure that all CRs have a spec block. This test has a maximum score of 1.

checkspectest

Status Block Exists

This test checks the Custom Resource(s) created in the cluster to make sure that all CRs have a status block. This test has a maximum score of 1.

checkstatustest

Writing Into CRs Has An Effect

This test reads the scorecard proxy’s logs to verify that the Operator is making PUT or POST, or both, requests to the API server, indicating that it is modifying resources. This test has a maximum score of 1.

writingintocrshaseffecttest

4.6.3.2. OLM plug-in

The following OLM integration tests are available from the olm plug-in:

Test | Description | Short name

OLM Bundle Validation

This test validates the OLM bundle manifests found in the bundle directory as specified by the bundle flag. If the bundle contents contain errors, then the test result output includes the validator log as well as error messages from the validation library.

bundlevalidationtest

Provided APIs Have Validation

This test verifies that the CRDs for the provided CRs contain a validation section and that there is validation for each spec and status field detected in the CR. This test has a maximum score equal to the number of CRs provided by the cr-manifest option.

crdshavevalidationtest

Owned CRDs Have Resources Listed

This test makes sure that the CRDs for each CR provided by the cr-manifest option have a resources subsection in the owned CRDs section of the CSV. If the test detects used resources that are not listed in the resources section, it lists them in the suggestions at the end of the test. This test has a maximum score equal to the number of CRs provided by the cr-manifest option.

crdshaveresourcestest

Spec Fields With Descriptors

This test verifies that every field in the Custom Resources' spec sections has a corresponding descriptor listed in the CSV. This test has a maximum score equal to the total number of fields in the spec sections of each custom resource passed in by the cr-manifest option.

specdescriptorstest

Status Fields With Descriptors

This test verifies that every field in the Custom Resources' status sections has a corresponding descriptor listed in the CSV. This test has a maximum score equal to the total number of fields in the status sections of each custom resource passed in by the cr-manifest option.

statusdescriptorstest

4.6.4. Running the scorecard

Prerequisites

The following prerequisites for the Operator project are checked by the scorecard tool:

  • Access to a cluster running Kubernetes 1.11.3 or later.
  • If you want to use the scorecard to check the integration of your Operator project with Operator Lifecycle Manager (OLM), then a ClusterServiceVersion (CSV) file is also required. This is a requirement when the olm-deployed option is used.
  • For Operators that were not generated using the Operator SDK (non-SDK Operators):

    • Resource manifests for installing and configuring the Operator and CRs.
    • Configuration getter that supports reading from the KUBECONFIG environment variable, such as the clientcmd or controller-runtime configuration getters. This is required for the scorecard proxy to work correctly.

Procedure

  1. Define a .osdk-scorecard.yaml configuration file in your Operator project.
  2. Create the namespace defined in the RBAC files (role_binding).
  3. Run the scorecard from the root directory of your Operator project:

    $ operator-sdk scorecard

    The scorecard return code is 1 if any of the executed tests did not pass and 0 if all selected tests passed.

4.6.5. Running the scorecard with an OLM-managed Operator

The scorecard can be run using a ClusterServiceVersion (CSV), providing a way to test cluster-ready and non-SDK Operators.

Procedure

  1. The scorecard requires a proxy container in the Operator’s Deployment Pod to read Operator logs. A few modifications to your CSV and creation of one extra object are required to run the proxy before deploying your Operator with OLM.

    This step can be performed manually or automated using bash functions. Choose one of the following methods.

    • Manual method:

      1. Create a proxy server Secret containing a local Kubeconfig.

        1. Generate a user name using the scorecard proxy’s namespaced owner reference.

          $ echo '{"apiVersion":"","kind":"","name":"scorecard","uid":"","Namespace":"'<namespace>'"}' | base64 -w 0 1
          1
          Replace <namespace> with the namespace your Operator will deploy in.
        2. Write a Config manifest scorecard-config.yaml using the following template, replacing <username> with the base64 user name generated in the previous step:

          apiVersion: v1
          kind: Config
          clusters:
          - cluster:
              insecure-skip-tls-verify: true
              server: http://<username>@localhost:8889
            name: proxy-server
          contexts:
          - context:
              cluster: proxy-server
              user: admin/proxy-server
            name: <namespace>/proxy-server
          current-context: <namespace>/proxy-server
          preferences: {}
          users:
          - name: admin/proxy-server
            user:
              username: <username>
              password: unused
        3. Encode the Config as base64:

          $ cat scorecard-config.yaml | base64 -w 0
        4. Create a Secret manifest scorecard-secret.yaml:

          apiVersion: v1
          kind: Secret
          metadata:
            name: scorecard-kubeconfig
            namespace: <namespace> 1
          data:
            kubeconfig: <kubeconfig_base64> 2
          1
          Replace <namespace> with the namespace your Operator will deploy in.
          2
          Replace <kubeconfig_base64> with the Config encoded as base64.
        5. Apply the Secret:

          $ oc apply -f scorecard-secret.yaml
        6. Insert a volume referring to the Secret into the Deployment for the Operator:

          spec:
            install:
              spec:
                deployments:
                - name: memcached-operator
                  spec:
                    ...
                    template:
                      ...
                      spec:
                        containers:
                        ...
                        volumes:
                        - name: scorecard-kubeconfig 1
                          secret:
                            secretName: scorecard-kubeconfig
                            items:
                            - key: kubeconfig
                              path: config
          1
          Scorecard kubeconfig volume.
      2. Insert a volume mount and KUBECONFIG environment variable into each container in your Operator’s Deployment:

        spec:
          install:
            spec:
              deployments:
              - name: memcached-operator
                spec:
                  ...
                  template:
                    ...
                    spec:
                      containers:
                      - name: container1
                        ...
                        volumeMounts:
                        - name: scorecard-kubeconfig 1
                          mountPath: /scorecard-secret
                        env:
                        - name: KUBECONFIG 2
                          value: /scorecard-secret/config
                      - name: container2 3
                        ...
        1
        Scorecard kubeconfig volume mount.
        2
        Scorecard kubeconfig environment variable.
        3
        Repeat the same for this and all other containers.
      3. Insert the scorecard proxy container into the Operator’s Deployment:

        spec:
          install:
            spec:
              deployments:
              - name: memcached-operator
                spec:
                  ...
                  template:
                    ...
                    spec:
                      containers:
                      ...
                      - name: scorecard-proxy 1
                        command:
                        - scorecard-proxy
                        env:
                        - name: WATCH_NAMESPACE
                          valueFrom:
                            fieldRef:
                              apiVersion: v1
                              fieldPath: metadata.namespace
                        image: quay.io/operator-framework/scorecard-proxy:master
                        imagePullPolicy: Always
                        ports:
                        - name: proxy
                          containerPort: 8889
        1
        Scorecard proxy container.
    • Automated method:

      The community-operators repository has several bash functions that can perform the previous steps in the procedure for you.

      1. Run the following curl command:

        $ curl -Lo csv-manifest-modifiers.sh \
            https://raw.githubusercontent.com/operator-framework/community-operators/master/scripts/lib/file
      2. Source the csv-manifest-modifiers.sh file:

        $ . ./csv-manifest-modifiers.sh
      3. Create the Kubeconfig Secret file:

        $ create_kubeconfig_secret_file scorecard-secret.yaml "<namespace>" 1
        1
        Replace <namespace> with the namespace your Operator will deploy in.
      4. Apply the Secret:

        $ oc apply -f scorecard-secret.yaml
      5. Insert the Kubeconfig volume:

        $ insert_kubeconfig_volume "<csv_file>" 1
        1
        Replace <csv_file> with the path to your Operator’s CSV manifest.
      6. Insert the Kubeconfig Secret mount:

        $ insert_kubeconfig_secret_mount "<csv_file>"
      7. Insert the proxy container:

        $ insert_proxy_container "<csv_file>" "quay.io/operator-framework/scorecard-proxy:master"
  2. After inserting the proxy container, follow the steps in the Getting started with the Operator SDK guide to bundle your CSV and CRDs and deploy your Operator on OLM.
  3. After your Operator has been deployed on OLM, define a .osdk-scorecard.yaml configuration file in your Operator project and ensure both the csv-path: <csv_manifest_path> and olm-deployed options are set.
  4. Run the scorecard with both the csv-path: <csv_manifest_path> and olm-deployed options set in your scorecard configuration file:

    $ operator-sdk scorecard

4.7. Configuring built-in monitoring with Prometheus

This guide describes the built-in monitoring support provided by the Operator SDK using the Prometheus Operator and details usage for Operator authors.

4.7.1. Prometheus Operator support

Prometheus is an open-source systems monitoring and alerting toolkit. The Prometheus Operator creates, configures, and manages Prometheus clusters running on Kubernetes-based clusters, such as OpenShift Container Platform.

Helper functions exist in the Operator SDK by default to automatically set up metrics in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed.

4.7.2. Metrics helper

In Go-based Operators generated using the Operator SDK, the following function exposes general metrics about the running program:

func ExposeMetricsPort(ctx context.Context, port int32) (*v1.Service, error)

These metrics are inherited from the controller-runtime library API. By default, the metrics are served on 0.0.0.0:8383/metrics.

A Service object is created with the metrics port exposed, which can then be accessed by Prometheus. The Service object is garbage collected when the leader Pod’s root owner is deleted.

The following example is present in the cmd/manager/main.go file in all Operators generated using the Operator SDK:

import (
    "github.com/operator-framework/operator-sdk/pkg/metrics"
    "sigs.k8s.io/controller-runtime/pkg/manager"
)

var (
    // Change the below variables to serve metrics on a different host or port.
    metricsHost       = "0.0.0.0" 1
    metricsPort int32 = 8383 2
)
...
func main() {
    ...
    // Pass metrics address to controller-runtime manager
    mgr, err := manager.New(cfg, manager.Options{
        Namespace:          namespace,
        MetricsBindAddress: fmt.Sprintf("%s:%d", metricsHost, metricsPort),
    })

    ...
    // Create Service object to expose the metrics port.
    _, err = metrics.ExposeMetricsPort(ctx, metricsPort)
    if err != nil {
        // handle error
        log.Info(err.Error())
    }
    ...
}
1
The host that the metrics are exposed on.
2
The port that the metrics are exposed on.

4.7.2.1. Modifying the metrics port

Operator authors can modify the port that metrics are exposed on.

Prerequisites

  • Go-based Operator generated using the Operator SDK
  • Kubernetes-based cluster with the Prometheus Operator deployed

Procedure

  • In the generated Operator’s cmd/manager/main.go file, change the value of metricsPort in the line var metricsPort int32 = 8383, as shown in the following sketch.
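
    For example, to serve metrics on port 9090 instead (an arbitrary value chosen only for illustration), the variable declaration changes as follows:

    var (
        // Change the below variables to serve metrics on a different host or port.
        metricsHost       = "0.0.0.0"
        metricsPort int32 = 9090 // changed from the default 8383
    )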

4.7.3. ServiceMonitor resources

A ServiceMonitor is a Custom Resource Definition (CRD) provided by the Prometheus Operator that discovers the Endpoints in Service objects and configures Prometheus to monitor those pods.

In Go-based Operators generated using the Operator SDK, the GenerateServiceMonitor() helper function can take a Service object and generate a ServiceMonitor Custom Resource (CR) based on it.
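
The following is a minimal sketch of using GenerateServiceMonitor() together with the Service returned by the ExposeMetricsPort() helper described earlier; the setupMetrics function name and the error handling are illustrative:

import (
    "context"

    "github.com/operator-framework/operator-sdk/pkg/metrics"
)

func setupMetrics(ctx context.Context) {
    // Create the metrics Service and capture the returned object.
    service, err := metrics.ExposeMetricsPort(ctx, 8383)
    if err != nil {
        // Handle errors here.
        return
    }

    // Generate a ServiceMonitor CR based on the metrics Service.
    serviceMonitor := metrics.GenerateServiceMonitor(service)

    // The generated object can then be created with a Kubernetes client.
    _ = serviceMonitor
}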

4.7.3.1. Creating ServiceMonitor resources

Operator authors can add Service target discovery of created monitoring Services using the metrics.CreateServiceMonitor() helper function, which accepts the newly created Service.

Prerequisites

  • Go-based Operator generated using the Operator SDK
  • Kubernetes-based cluster with the Prometheus Operator deployed

Procedure

  • Add the metrics.CreateServiceMonitor() helper function to your Operator code:

    import (
        "k8s.io/api/core/v1"
        "github.com/operator-framework/operator-sdk/pkg/metrics"
        "sigs.k8s.io/controller-runtime/pkg/client/config"
    )
    func main() {
    
        ...
        // Populate below with the Service(s) for which you want to create ServiceMonitors.
        services := []*v1.Service{}
        // Create one ServiceMonitor per application per namespace.
        // Change the below value to name of the Namespace you want the ServiceMonitor to be created in.
        ns := "default"
        // restConfig is used for talking to the Kubernetes apiserver.
        restConfig, err := config.GetConfig()
        if err != nil {
            // Handle errors here.
        }
    
        // Pass the Service(s) to the helper function, which in turn returns the array of ServiceMonitor objects.
        serviceMonitors, err := metrics.CreateServiceMonitors(restConfig, ns, services)
        if err != nil {
            // Handle errors here.
        }
        ...
    }

4.8. Configuring leader election

During the lifecycle of an Operator, it is possible that there may be more than one instance running at any given time, for example when rolling out an upgrade for the Operator. In such a scenario, it is necessary to avoid contention between multiple Operator instances using leader election. This ensures only one leader instance handles the reconciliation while the other instances are inactive but ready to take over when the leader steps down.

There are two different leader election implementations to choose from, each with its own trade-off:

  • Leader-for-life: The leader Pod only gives up leadership (using garbage collection) when it is deleted. This implementation precludes the possibility of two instances mistakenly running as leaders (split brain). However, this method can be subject to a delay in electing a new leader. For example, when the leader Pod is on an unresponsive or partitioned node, the pod-eviction-timeout dictates how long it takes for the leader Pod to be deleted from the node and step down (default 5m). See the Leader-for-life Go documentation for more.
  • Leader-with-lease: The leader Pod periodically renews the leader lease and gives up leadership when it cannot renew the lease. This implementation allows for a faster transition to a new leader when the existing leader is isolated, but there is a possibility of split brain in certain situations. See the Leader-with-lease Go documentation for more.

By default, the Operator SDK enables the Leader-for-life implementation. Consult the related Go documentation for both approaches to consider the trade-offs that make sense for your use case.

The following examples illustrate how to use the two options.

4.8.1. Using Leader-for-life election

With the Leader-for-life election implementation, a call to leader.Become() blocks the Operator as it retries until it can become the leader by creating the ConfigMap named memcached-operator-lock:

import (
  ...
  "github.com/operator-framework/operator-sdk/pkg/leader"
)

func main() {
  ...
  err = leader.Become(context.TODO(), "memcached-operator-lock")
  if err != nil {
    log.Error(err, "Failed to retry for leader lock")
    os.Exit(1)
  }
  ...
}

If the Operator is not running inside a cluster, leader.Become() simply returns without error to skip the leader election since it cannot detect the Operator’s namespace.

4.8.2. Using Leader-with-lease election

The Leader-with-lease implementation can be enabled using the Manager Options for leader election:

import (
  ...
  "sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
  ...
  opts := manager.Options{
    ...
    LeaderElection: true,
    LeaderElectionID: "memcached-operator-lock",
  }
  mgr, err := manager.New(cfg, opts)
  ...
}

When the Operator is not running in a cluster, the Manager returns an error when starting since it cannot detect the Operator’s namespace in order to create the ConfigMap for leader election. You can override this namespace by setting the Manager’s LeaderElectionNamespace option.

4.9. Operator SDK CLI reference

This guide documents the Operator SDK CLI commands and their syntax:

$ operator-sdk <command> [<subcommand>] [<argument>] [<flags>]

4.9.1. build

The operator-sdk build command compiles the code and builds the executables. After build completes, the image is built locally in docker. It must then be pushed to a remote registry.

Table 4.17. build arguments

Argument | Description

<image>

The container image to be built, e.g., quay.io/example/operator:v0.0.1.

Table 4.18. build flags

Flag | Description

--enable-tests (bool)

Enable in-cluster testing by adding test binary to the image.

--namespaced-manifest (string)

Path of namespaced resources manifest for tests. Default: deploy/operator.yaml.

--test-location (string)

Location of tests. Default: ./test/e2e

-h, --help

Usage help output.

If --enable-tests is set, the build command also builds the testing binary, adds it to the container image, and generates a deploy/test-pod.yaml file that allows a user to run the tests as a Pod on a cluster.

For example:

$ operator-sdk build quay.io/example/operator:v0.0.1

Example output

building example-operator...

building container quay.io/example/operator:v0.0.1...
Sending build context to Docker daemon  163.9MB
Step 1/4 : FROM alpine:3.6
 ---> 77144d8c6bdc
Step 2/4 : ADD tmp/_output/bin/example-operator /usr/local/bin/example-operator
 ---> 2ada0d6ca93c
Step 3/4 : RUN adduser -D example-operator
 ---> Running in 34b4bb507c14
Removing intermediate container 34b4bb507c14
 ---> c671ec1cff03
Step 4/4 : USER example-operator
 ---> Running in bd336926317c
Removing intermediate container bd336926317c
 ---> d6b58a0fcb8c
Successfully built d6b58a0fcb8c
Successfully tagged quay.io/example/operator:v0.0.1

4.9.2. completion

The operator-sdk completion command generates shell completions to make issuing CLI commands quicker and easier.

Table 4.19. completion subcommands

Subcommand | Description

bash

Generate bash completions.

zsh

Generate zsh completions.

Table 4.20. completion flags

Flag | Description

-h, --help

Usage help output.

For example:

$ operator-sdk completion bash

Example output

# bash completion for operator-sdk                         -*- shell-script -*-
...
# ex: ts=4 sw=4 et filetype=sh

4.9.3. print-deps

The operator-sdk print-deps command prints the most recent Golang packages and versions required by Operators. It prints in columnar format by default.

Table 4.21. print-deps flags

Flag | Description

--as-file

Print packages and versions in Gopkg.toml format.

For example:

$ operator-sdk print-deps --as-file

Example output

required = [
  "k8s.io/code-generator/cmd/defaulter-gen",
  "k8s.io/code-generator/cmd/deepcopy-gen",
  "k8s.io/code-generator/cmd/conversion-gen",
  "k8s.io/code-generator/cmd/client-gen",
  "k8s.io/code-generator/cmd/lister-gen",
  "k8s.io/code-generator/cmd/informer-gen",
  "k8s.io/code-generator/cmd/openapi-gen",
  "k8s.io/gengo/args",
]

[[override]]
  name = "k8s.io/code-generator"
  revision = "6702109cc68eb6fe6350b83e14407c8d7309fd1a"
...

4.9.4. generate

The operator-sdk generate command invokes a specific generator to generate code as needed.

4.9.4.1. crds

The generate crds subcommand generates CRDs or updates them if they exist, under deploy/crds/<group>_<resource>_crd.yaml. OpenAPI V3 validation YAML is generated as a validation object.

Table 4.22. generate crds flags

Flag | Description

--crd-version (string)

CRD version to generate (default v1beta1)

-h, --help

Help for generate crds

For example:

$ operator-sdk generate crds
$ tree deploy/crds

Example output

├── deploy/crds/app.example.com_v1alpha1_appservice_cr.yaml
└── deploy/crds/app.example.com_appservices_crd.yaml

4.9.4.2. csv

The csv subcommand writes a ClusterServiceVersion (CSV) manifest for use with Operator Lifecycle Manager (OLM). It also optionally writes CustomResourceDefinition (CRD) files to deploy/olm-catalog/<operator_name>/<csv_version>.

Table 4.23. generate csv flags

Flag | Description

--csv-channel (string)

The channel the CSV should be registered under in the package manifest.

--csv-config (string)

The path to the CSV configuration file. Default: deploy/olm-catalog/csv-config.yaml.

--csv-version (string)

The semantic version of the CSV manifest. Required.

--default-channel

Use the channel passed to --csv-channel as the package manifests' default channel. Only valid when --csv-channel is set.

--from-version (string)

The semantic version of CSV manifest to use as a base for a new version.

--operator-name

The Operator name to use while generating the CSV.

--update-crds

Updates CRD manifests in deploy/olm-catalog/<operator_name>/<csv_version> using the latest CRD manifests.

For example:

$ operator-sdk generate csv --csv-version 0.1.0 --update-crds

Example output

INFO[0000] Generating CSV manifest version 0.1.0
INFO[0000] Fill in the following required fields in file deploy/olm-catalog/operator-name/0.1.0/operator-name.v0.1.0.clusterserviceversion.yaml:
	spec.keywords
	spec.maintainers
	spec.provider
	spec.labels
INFO[0000] Created deploy/olm-catalog/operator-name/0.1.0/operator-name.v0.1.0.clusterserviceversion.yaml

4.9.4.3. k8s

The k8s subcommand runs the Kubernetes code-generators for all CRD APIs under pkg/apis/. Currently, k8s only runs deepcopy-gen to generate the required DeepCopy() functions for all Custom Resource (CR) types.

Note

This command must be run every time the API (spec and status) for a custom resource type is updated.

For example:

$ tree pkg/apis/app/v1alpha1/

Example output

pkg/apis/app/v1alpha1/
├── appservice_types.go
├── doc.go
└── register.go

$ operator-sdk generate k8s

Example output

Running code-generation for Custom Resource (CR) group versions: [app:v1alpha1]
Generating deepcopy funcs

$ tree pkg/apis/app/v1alpha1/

Example output

pkg/apis/app/v1alpha1/
├── appservice_types.go
├── doc.go
├── register.go
└── zz_generated.deepcopy.go

4.9.5. new

The operator-sdk new command creates a new Operator application and generates (or scaffolds) a default project directory layout based on the input <project_name>.

Table 4.24. new arguments

Argument | Description

<project_name>

Name of the new project.

Table 4.25. new flags

Flag | Description

--api-version

CRD APIVersion in the format $GROUP_NAME/$VERSION, for example app.example.com/v1alpha1. Used with ansible or helm types.

--crd-version

CRD version to generate, like v1. Default setting is v1beta1.

--generate-playbook

Generate an Ansible playbook skeleton. Used with ansible type.

--header-file <string>

Path to file containing headers for generated Go files. Copied to hack/boilerplate.go.txt.

--helm-chart <string>

Initialize Helm operator with existing Helm chart: <url>, <repo>/<name>, or local path.

--helm-chart-repo <string>

Chart repository URL for the requested Helm chart.

--helm-chart-version <string>

Specific version of the Helm chart. Default is latest version.

--help, -h

Usage and help output.

--kind <string>

CRD Kind, for example AppService. Used with ansible or helm types.

--skip-git-init

Do not initialize the directory as a Git repository.

--type

Type of Operator to initialize: go, ansible or helm. Default is go.

Note

Starting with Operator SDK v0.12.0, the --dep-manager flag and support for dep-based projects have been removed. Go projects are now scaffolded to use Go modules.

Example usage for Go project

$ mkdir $GOPATH/src/github.com/example.com/

$ cd $GOPATH/src/github.com/example.com/
$ operator-sdk new app-operator

Example usage for Ansible project

$ operator-sdk new app-operator \
    --type=ansible \
    --api-version=app.example.com/v1alpha1 \
    --kind=AppService

4.9.6. add

The operator-sdk add command adds a controller or resource to the project. The command must be run from the Operator project root directory.

Table 4.26. add subcommands

Subcommand | Description

api

Adds a new API definition for a new Custom Resource (CR) under pkg/apis and generates the Custom Resource Definition (CRD) and Custom Resource (CR) files under deploy/crds/. If the API already exists at pkg/apis/<group>/<version>, then the command does not overwrite and returns an error.

controller

Adds a new controller under pkg/controller/<kind>/. The controller expects to use the CR type that should already be defined under pkg/apis/<group>/<version> via the operator-sdk add api --kind=<kind> --api-version=<group/version> command. If the controller package for that Kind already exists at pkg/controller/<kind>, then the command does not overwrite and returns an error.

crd

Adds a CRD and the CR files. The <project-name>/deploy path must already exist. The --api-version and --kind flags are required to generate the new Operator application.

  • Generated CRD filename: <project-name>/deploy/crds/<group>_<version>_<kind>_crd.yaml
  • Generated CR filename: <project-name>/deploy/crds/<group>_<version>_<kind>_cr.yaml

Table 4.27. add api flags

Flag | Description

--api-version (string)

CRD APIVersion in the format $GROUP_NAME/$VERSION (e.g., app.example.com/v1alpha1).

--crd-version

CRD version to generate, like v1. Default setting is v1beta1.

--kind (string)

CRD Kind. For example, AppService.

Table 4.28. add crd flags

Flag | Description

--api-version (string)

CRD APIVersion in the format $GROUP_NAME/$VERSION. For example, app.example.com/v1alpha1.

--crd-version

CRD version to generate, like v1. Default setting is v1beta1.

--kind (string)

CRD Kind. For example, AppService.

For example:

$ operator-sdk add api --api-version app.example.com/v1alpha1 --kind AppService

Example output

Create pkg/apis/app/v1alpha1/appservice_types.go
Create pkg/apis/addtoscheme_app_v1alpha1.go
Create pkg/apis/app/v1alpha1/register.go
Create pkg/apis/app/v1alpha1/doc.go
Create deploy/crds/app_v1alpha1_appservice_cr.yaml
Create deploy/crds/app_v1alpha1_appservice_crd.yaml
Running code-generation for Custom Resource (CR) group versions: [app:v1alpha1]
Generating deepcopy funcs

$ tree pkg/apis

Example output

pkg/apis/
├── addtoscheme_app_appservice.go
├── apis.go
└── app
    └── v1alpha1
        ├── doc.go
        ├── register.go
        └── types.go

$ operator-sdk add controller --api-version app.example.com/v1alpha1 --kind AppService

Example output

Create pkg/controller/appservice/appservice_controller.go
Create pkg/controller/add_appservice.go

$ tree pkg/controller

Example output

pkg/controller/
├── add_appservice.go
├── appservice
│   └── appservice_controller.go
└── controller.go

$ operator-sdk add crd --api-version app.example.com/v1alpha1 --kind AppService

Example output

Generating Custom Resource Definition (CRD) files
Create deploy/crds/app_v1alpha1_appservice_crd.yaml
Create deploy/crds/app_v1alpha1_appservice_cr.yaml

4.9.7. test

The operator-sdk test command can test the Operator locally.

4.9.7.1. local

The local subcommand runs Go tests built using the test framework of the Operator SDK locally.

Table 4.29. test local arguments

Arguments | Description

<test_location> (string)

Location of e2e test files (e.g., ./test/e2e/).

Table 4.30. test local flags

Flags | Description

--kubeconfig (string)

Location of kubeconfig for a cluster. Default: ~/.kube/config.

--global-manifest (string)

Path to manifest for global resources. Default: deploy/crd.yaml.

--namespaced-manifest (string)

Path to manifest for per-test, namespaced resources. Default: combines deploy/service_account.yaml, deploy/rbac.yaml, and deploy/operator.yaml.

--namespace (string)

If non-empty, a single namespace to run tests in (e.g., operator-test). Default: ""

--go-test-flags (string)

Extra arguments to pass to go test (e.g., -f "-v -parallel=2").

--up-local

Enable running the Operator locally with go run instead of as an image in the cluster.

--no-setup

Disable test resource creation.

--image (string)

Use a different Operator image from the one specified in the namespaced manifest.

-h, --help

Usage help output.

For example:

$ operator-sdk test local ./test/e2e/

Example output

ok  	github.com/operator-framework/operator-sdk-samples/memcached-operator/test/e2e	20.410s

4.9.8. run

The operator-sdk run command provides options that can launch the Operator in various environments.

Table 4.31. run arguments

Arguments | Description

--kubeconfig (string)

The file path to a Kubernetes configuration file. Default: $HOME/.kube/config

--local

The Operator is run locally by building the Operator binary with the ability to access a Kubernetes cluster using a kubeconfig file.

--namespace (string)

The namespace where the Operator watches for changes. Default: default

--operator-flags

Flags that the local Operator may need. Example: --flag1 value1 --flag2=value2. For use with the --local flag only.

-h, --help

Usage help output.

4.9.8.1. --local

The --local flag launches the Operator on the local machine by building the Operator binary with the ability to access a Kubernetes cluster using a kubeconfig file.

For example:

$ operator-sdk run --local \
  --kubeconfig "mycluster.kubecfg" \
  --namespace "default" \
  --operator-flags "--flag1 value1 --flag2=value2"

The following example uses the default kubeconfig, the default namespace environment variable, and passes in flags for the Operator. To use the Operator flags, your Operator must know how to handle the option. For example, for an Operator that understands the resync-interval flag:

$ operator-sdk run --local --operator-flags "--resync-interval 10"

If you are planning on using a different namespace than the default, use the --namespace flag to change where the Operator is watching for Custom Resources (CRs) to be created:

$ operator-sdk run --local --namespace "testing"

For this to work, your Operator must handle the WATCH_NAMESPACE environment variable. This can be accomplished using the utility function k8sutil.GetWatchNamespace in your Operator.
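
The following is a minimal sketch of reading that environment variable with the k8sutil helper from the Operator SDK; the fallback behavior shown is illustrative:

import (
    "github.com/operator-framework/operator-sdk/pkg/k8sutil"
)

func main() {
    // Read the namespace to watch from the WATCH_NAMESPACE environment
    // variable, which the --namespace flag sets for the local run.
    namespace, err := k8sutil.GetWatchNamespace()
    if err != nil {
        // Handle errors here, for example by falling back to a default namespace.
        namespace = "default"
    }
    _ = namespace
    // ...
}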

4.10. Appendices

4.10.1. Operator project scaffolding layout

The operator-sdk CLI generates a number of packages for each Operator project. The following sections provide a basic rundown of each generated file and directory.

4.10.1.1. Go-based projects

Go-based Operator projects (the default type) generated using the operator-sdk new command contain the following directories and files:

File/folders | Purpose

cmd/

Contains the manager/main.go file, which is the main program of the Operator. This instantiates a new manager, which registers all Custom Resource Definitions under pkg/apis/ and starts all controllers under pkg/controller/.

pkg/apis/

Contains the directory tree that defines the APIs of the Custom Resource Definitions (CRDs). Users are expected to edit the pkg/apis/<group>/<version>/<kind>_types.go files to define the API for each resource type and import these packages in their controllers to watch for these resource types.

pkg/controller

This pkg contains the controller implementations. Users are expected to edit the pkg/controller/<kind>/<kind>_controller.go files to define the controller’s reconcile logic for handling a resource type of the specified kind.

build/

Contains the Dockerfile and build scripts used to build the Operator.

deploy/

Contains various YAML manifests for registering CRDs, setting up RBAC, and deploying the Operator as a Deployment.

Gopkg.toml
Gopkg.lock

The Go Dep manifests that describe the external dependencies of this Operator.

vendor/

The golang vendor folder that contains the local copies of the external dependencies that satisfy the imports of this project. Go Dep manages the vendor directory.

4.10.1.2. Helm-based projects

Helm-based Operator projects generated using the operator-sdk new --type helm command contain the following directories and files:

File/folders | Purpose

deploy/

Contains various YAML manifests for registering CRDs, setting up RBAC, and deploying the Operator as a Deployment.

helm-charts/<kind>

Contains a Helm chart initialized using the equivalent of the helm create command.

build/

Contains the Dockerfile and build scripts used to build the Operator.

watches.yaml

Contains the Group, Version, Kind, and Helm chart location, as shown in the sketch that follows this table.
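
A minimal watches.yaml sketch, assuming a hypothetical AppService kind and the default chart location:

- group: app.example.com
  version: v1alpha1
  kind: AppService
  chart: helm-charts/appservice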

Chapter 5. Red Hat Operators

5.1. Cloud Credential Operator

Purpose

The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). The CCO syncs on credentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run.

By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in several different modes. If no mode is specified, or the credentialsMode parameter is set to an empty string (""), the CCO operates in its default mode.
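
For example, an install-config.yaml excerpt that explicitly selects a mode might look like the following; the surrounding values are placeholders, and Mint is only one of the modes described below:

apiVersion: v1
baseDomain: example.com
credentialsMode: Mint
metadata:
  name: example-cluster
platform:
  aws:
    region: us-east-1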

Default behavior

For platforms where multiple modes are supported (AWS, Azure, and GCP), when the CCO operates in its default mode, it checks the provided credentials dynamically to determine for which mode they are sufficient to process credentialsRequest CRs.

By default, the CCO determines whether the credentials are sufficient for mint mode, which is the preferred mode of operation, and uses those credentials to create appropriate credentials for components in the cluster. If the credentials are not sufficient for mint mode, it determines whether they are sufficient for passthrough mode. If the credentials are not sufficient for passthrough mode, the CCO cannot adequately process credentialsRequest CRs.

Note

The CCO cannot verify whether Azure credentials are sufficient for passthrough mode. If Azure credentials are insufficient for mint mode, the CCO operates with the assumption that the credentials are sufficient for passthrough mode.

If the provided credentials are determined to be insufficient during installation, the installation fails. For AWS, the installer fails early in the process and indicates which required permissions are missing. Other providers might not provide specific information about the cause of the error until errors are encountered.

If the credentials are changed after a successful installation and the CCO determines that the new credentials are insufficient, the CCO puts conditions on any new credentialsRequest CRs to indicate that it cannot process them because of the insufficient credentials.

To resolve insufficient credentials issues, provide a credential with sufficient permissions. If an error occurred during installation, try installing again. For issues with new credentialsRequest CRs, wait for the CCO to try to process the CR again. As an alternative, you can manually create IAM for AWS, Azure, or GCP. For details, see the Manually creating IAM section of the installation content for AWS, Azure, or GCP.

Modes

By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in mint, passthrough, or manual mode. These options provide transparency and flexibility in how the CCO uses cloud credentials to process credentialsRequest CRs in the cluster, and allow the CCO to be configured to suit the security requirements of your organization. Not all CCO modes are supported for all cloud providers.

Mint mode

Mint mode is supported for AWS, Azure, and GCP.

Mint mode is the default and recommended best practice setting for the CCO to use. In this mode, the CCO uses the provided admin-level cloud credential to run the cluster.

If the credential is not removed after installation, it is stored and used by the CCO to process credentialsRequest CRs for components in the cluster and create new credentials for each with only the specific permissions that are required. The continuous reconciliation of cloud credentials in mint mode allows actions that require additional credentials or permissions, such as upgrading, to proceed.

The requirement that mint mode stores the admin-level credential in the cluster kube-system namespace might not suit the security requirements of every organization.

When using the CCO in mint mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. If the provided credentials are not sufficient for mint mode, the CCO cannot create an IAM user.

Table 5.1. Mint mode credential requirements

Cloud | Permissions

AWS

  • iam:CreateAccessKey
  • iam:CreateUser
  • iam:DeleteAccessKey
  • iam:DeleteUser
  • iam:DeleteUserPolicy
  • iam:GetUser
  • iam:GetUserPolicy
  • iam:ListAccessKeys
  • iam:PutUserPolicy
  • iam:TagUser
  • iam:SimulatePrincipalPolicy

Azure

Service principal with the permissions specified in the Creating a service principal section of the Configuring an Azure account content.

GCP

  • resourcemanager.projects.get
  • serviceusage.services.list
  • iam.serviceAccountKeys.create
  • iam.serviceAccountKeys.delete
  • iam.serviceAccounts.create
  • iam.serviceAccounts.delete
  • iam.serviceAccounts.get
  • iam.roles.get
  • resourcemanager.projects.getIamPolicy
  • resourcemanager.projects.setIamPolicy

Mint mode with removal or rotation of the admin-level credential

Mint mode with removal or rotation of the admin-level credential is supported for AWS in OpenShift Container Platform version 4.4 and later.

This option requires the presence of the admin-level credential during installation, but the credential is not stored in the cluster permanently and does not need to be long-lived.

After installing OpenShift Container Platform in mint mode, you can remove the admin-level credential Secret from the cluster. If you remove the Secret, the CCO uses a previously minted read-only credential that allows it to verify whether all credentialsRequest CRs have their required permissions. Once removed, the associated credential can be destroyed on the underlying cloud if desired.

The admin-level credential is not required unless something that requires an admin-level credential needs to be changed, for instance during an upgrade. Prior to each upgrade, you must reinstate the credential Secret with the admin-level credential. If the credential is not present, the upgrade might be blocked.

Passthrough mode

Passthrough mode is supported for AWS, Azure, GCP, Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere.

In passthrough mode, the CCO passes the provided cloud credential to the components that request cloud credentials. The credential must have permissions to perform the installation and complete the operations that are required by components in the cluster, but does not need to be able to create new credentials. The CCO does not attempt to create additional limited-scoped credentials in passthrough mode.

Passthrough mode permissions requirements

When using the CCO in passthrough mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. If the credentials that the CCO passes to a component that creates a credentialsRequest CR are not sufficient, that component reports an error when it tries to call an API that it does not have permission to call.

The credential you provide for passthrough mode in AWS, Azure, or GCP must have all the requested permissions for all credentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the credentialsRequest CRs that are required for your cloud provider, see the Manually creating IAM section of the installation content for AWS, Azure, or GCP.

To install an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP), the CCO requires a credential with the permissions of a member user role.

To install an OpenShift Container Platform cluster on Red Hat Virtualization (RHV), the CCO requires a credential with the following privileges:

  • DiskOperator
  • DiskCreator
  • UserTemplateBasedVm
  • TemplateOwner
  • TemplateCreator
  • ClusterAdmin on the specific cluster that is targeted for OpenShift Container Platform deployment

To install an OpenShift Container Platform cluster on VMware vSphere, the CCO requires a credential with the following vSphere privileges:

Table 5.2. Required vSphere privileges

Category | Privileges

Datastore

Allocate space

Folder

Create folder, Delete folder

vSphere Tagging

All privileges

Network

Assign network

Resource

Assign virtual machine to resource pool

Profile-driven storage

All privileges

vApp

All privileges

Virtual machine

All privileges

Passthrough mode credential maintenance

If credentialsRequest CRs change over time as the cluster is upgraded, you must manually update the passthrough mode credential to meet the requirements. To avoid credentials issues during an upgrade, check the credentialsRequest CRs in the release image for the new version of OpenShift Container Platform before upgrading. To locate the credentialsRequest CRs that are required for your cloud provider, see the Manually creating IAM section of the installation content for AWS, Azure, or GCP.

Reducing permissions after installation

When using passthrough mode, each component has the same permissions used by all other components. If you do not reduce the permissions after installing, all components have the broad permissions that are required to run the installer.

After installation, you can reduce the permissions on your credential to only those that are required to run the cluster, as defined by the credentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are using.

To locate the credentialsRequest CRs that are required for AWS, Azure, or GCP and learn how to change the permissions the CCO uses, see the Manually creating IAM section of the installation content for AWS, Azure, or GCP.

Manual mode

Manual mode is supported for AWS.

In manual mode, a user manages cloud credentials instead of the CCO. To use this mode, you must examine the credentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are running or installing, create corresponding credentials in the underlying cloud provider, and create Kubernetes Secrets in the correct namespaces to satisfy all credentialsRequest CRs for the cluster’s cloud provider.

Using manual mode allows each cluster component to have only the permissions it requires, without storing an admin-level credential in the cluster. This mode also does not require connectivity to the AWS public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade.
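
As a sketch of what satisfying a single credentialsRequest CR might look like on AWS, the following Secret assumes the CR's spec.secretRef points at installer-cloud-credentials in the openshift-image-registry namespace; the namespace, name, and key values must be taken from the actual CR and your own limited-permission credential:

apiVersion: v1
kind: Secret
metadata:
  namespace: openshift-image-registry          # must match spec.secretRef.namespace in the credentialsRequest CR (example value)
  name: installer-cloud-credentials            # must match spec.secretRef.name in the credentialsRequest CR (example value)
stringData:
  aws_access_key_id: <ACCESS_KEY_ID>           # placeholder: component-scoped access key ID
  aws_secret_access_key: <SECRET_ACCESS_KEY>   # placeholder: component-scoped secret access key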

For information about configuring AWS to use manual mode, see Manually creating IAM for AWS.

Disabled CCO

Disabled CCO is supported for Azure and GCP.

To manually manage credentials for Azure or GCP, you must disable the CCO. Disabling the CCO has many of the same configuration and maintenance requirements as running the CCO in manual mode, but is accomplished by a different process. For more information, see the Manually creating IAM section of the installation content for Azure or GCP.

Project

openshift-cloud-credential-operator

CRDs

  • credentialsrequests.cloudcredential.openshift.io

    • Scope: Namespaced
    • CR: credentialsrequest
    • Validation: Yes

Configuration objects

No configuration required.

5.2. Cluster Authentication Operator

Purpose

The Cluster Authentication Operator installs and maintains the Authentication Custom Resource in a cluster, which can be viewed with:

$ oc get clusteroperator authentication -o yaml

Project

cluster-authentication-operator

5.3. Cluster Autoscaler Operator

Purpose

The Cluster Autoscaler Operator manages deployments of the OpenShift Cluster Autoscaler using the cluster-api provider.

Project

cluster-autoscaler-operator

CRDs

  • ClusterAutoscaler: This is a singleton resource that controls the configuration of the cluster’s autoscaler instance. The Operator only responds to the ClusterAutoscaler resource named default in the managed namespace, which is the value of the WATCH_NAMESPACE environment variable.
  • MachineAutoscaler: This resource targets a node group and manages the annotations that enable and configure autoscaling for that group, including the minimum and maximum size. Currently, only MachineSet objects can be targeted. A minimal example follows this list.
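
The following MachineAutoscaler manifest is a minimal sketch; the resource name and the targeted MachineSet name (worker-us-east-1a) are placeholders and must match a MachineSet that exists in your cluster:

apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a                  # placeholder name for the autoscaler resource
  namespace: openshift-machine-api
spec:
  minReplicas: 1                           # minimum size of the targeted node group
  maxReplicas: 6                           # maximum size of the targeted node group
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: worker-us-east-1a                # placeholder: name of the MachineSet to scale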

5.4. Cluster Image Registry Operator

Purpose

The Cluster Image Registry Operator manages a singleton instance of the OpenShift Container Platform registry. It manages all configuration of the registry, including creating storage.

On initial start up, the Operator creates a default image-registry resource instance based on the configuration detected in the cluster. The detected configuration determines which cloud storage type to use based on the cloud provider.

If insufficient information is available to define a complete image-registry resource, then an incomplete resource is defined and the Operator updates the resource status with information about what is missing.

The Cluster Image Registry Operator runs in the openshift-image-registry namespace and it also manages the registry instance in that location. All configuration and workload resources for the registry reside in that namespace.

Project

cluster-image-registry-operator

5.5. Cluster Monitoring Operator

Purpose

The Cluster Monitoring Operator manages and updates the Prometheus-based cluster monitoring stack deployed on top of OpenShift Container Platform.

Project

openshift-monitoring

CRDs

  • alertmanagers.monitoring.coreos.com

    • Scope: Namespaced
    • CR: alertmanager
    • Validation: Yes
  • prometheuses.monitoring.coreos.com

    • Scope: Namespaced
    • CR: prometheus
    • Validation: Yes
  • prometheusrules.monitoring.coreos.com

    • Scope: Namespaced
    • CR: prometheusrule
    • Validation: Yes
  • servicemonitors.monitoring.coreos.com

    • Scope: Namespaced
    • CR: servicemonitor
    • Validation: Yes

Configuration objects

$ oc -n openshift-monitoring edit cm cluster-monitoring-config
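
The following is a hedged sketch of the shape of that config map; the prometheusK8s.retention setting is only one example of the options that can appear under the config.yaml key:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 24h    # example setting: how long the platform Prometheus instance retains metrics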

5.6. Cluster Network Operator

Purpose

The Cluster Network Operator installs and upgrades the networking components on an OpenShift Kubernetes cluster.

5.7. OpenShift Controller Manager Operator

Purpose

The OpenShift Controller Manager Operator installs and maintains the OpenShiftControllerManager Custom Resource in a cluster, which can be viewed with:

$ oc get clusteroperator openshift-controller-manager -o yaml

The Custom Resource Definition openshiftcontrollermanagers.operator.openshift.io can be viewed in a cluster with:

$ oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml

Project

cluster-openshift-controller-manager-operator

5.8. Cluster Samples Operator

Purpose

The Cluster Samples Operator manages the sample imagestreams and templates stored in the openshift namespace.

On initial start up, the Operator creates the default samples configuration resource to initiate the creation of the imagestreams and templates. The configuration object is a cluster-scoped object named cluster of type configs.samples.

The imagestreams are the Red Hat Enterprise Linux CoreOS (RHCOS)-based OpenShift Container Platform imagestreams pointing to images on registry.redhat.io. Similarly, the templates are those categorized as OpenShift Container Platform templates.

The Cluster Samples Operator deployment is contained within the openshift-cluster-samples-operator namespace. On start up, the install pull secret is used by the imagestream import logic in the internal registry and API server to authenticate with registry.redhat.io. An administrator can create any additional secrets in the openshift namespace if they change the registry used for the sample imagestreams. If created, those secrets contain the content of a Docker config.json needed to facilitate image import.
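
A hedged example of creating such a secret from an existing Docker config.json; the secret name and file path are placeholders:

$ oc create secret generic samples-registry-credentials \
    --from-file=.dockerconfigjson=<path/to/config.json> \
    --type=kubernetes.io/dockerconfigjson \
    -n openshift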

The image for the Cluster Samples Operator contains imagestream and template definitions for the associated OpenShift Container Platform release. After the Cluster Samples Operator creates a sample, it adds an annotation that denotes the OpenShift Container Platform version that it is compatible with. The Operator uses this annotation to ensure that each sample matches its release version. Samples outside of its inventory are ignored, as are skipped samples. Modifications to any samples that are managed by the Operator are allowed as long as the version annotation is not modified or deleted. However, on an upgrade, because the version annotation changes, those modifications can be replaced when the sample is updated to the newer version. The Jenkins images are part of the image payload from the installation and are tagged into the imagestreams directly.

The samples resource includes a finalizer, which cleans up the following upon its deletion:

  • Operator-managed imagestreams
  • Operator-managed templates
  • Operator-generated configuration resources
  • Cluster status resources

Upon deletion of the samples resource, the Cluster Samples Operator recreates the resource using the default configuration.

Project

cluster-samples-operator

5.9. Cluster Storage Operator

Purpose

The Cluster Storage Operator sets OpenShift Container Platform cluster-wide storage defaults. It ensures a default storage class exists for OpenShift Container Platform clusters.

Project

cluster-storage-operator

Configuration

No configuration is required.

Notes

  • The Cluster Storage Operator supports Amazon Web Services (AWS) and Red Hat OpenStack Platform (RHOSP).
  • The created storage class can be made non-default by editing its annotation, as shown in the example following this list, but the storage class cannot be deleted as long as the Operator runs.
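
For example, assuming the created storage class is named standard (a placeholder), marking it non-default could look like the following, using the standard Kubernetes default-class annotation:

$ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'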

5.10. Cluster Version Operator

Purpose

Project

cluster-version-operator

5.11. Console Operator

Purpose

The Console Operator installs and maintains the OpenShift Container Platform web console on a cluster.

Project

console-operator

5.12. DNS Operator

Purpose

The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods that enables DNS-based Kubernetes Service discovery in OpenShift Container Platform.

The Operator creates a working default deployment based on the cluster’s configuration.

  • The default cluster domain is cluster.local.
  • Configuration of the CoreDNS Corefile or Kubernetes plug-in is not yet supported.

The DNS Operator manages CoreDNS as a Kubernetes daemon set exposed as a service with a static IP. CoreDNS runs on all nodes in the cluster.
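
To inspect the default DNS configuration that the Operator manages, you can view its custom resource, for example:

$ oc describe dns.operator/default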

Project

cluster-dns-operator

5.13. etcd cluster Operator

Purpose

The etcd cluster Operator automates etcd cluster scaling, enables etcd monitoring and metrics, and simplifies disaster recovery procedures.

Project

cluster-etcd-operator

CRDs

  • etcds.operator.openshift.io

    • Scope: Cluster
    • CR: etcd
    • Validation: Yes

Configuration objects

$ oc edit etcd cluster

5.14. Ingress Operator

Purpose

The Ingress Operator configures and manages the OpenShift Container Platform router.

Project

openshift-ingress-operator

CRDs

  • clusteringresses.ingress.openshift.io

    • Scope: Namespaced
    • CR: clusteringresses
    • Validation: No

Configuration objects

  • Cluster config

    • Type Name: clusteringresses.ingress.openshift.io
    • Instance Name: default
    • View Command:

      $ oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml

Notes

The Ingress Operator sets up the router in the openshift-ingress project and creates the deployment for the router:

$ oc get deployment -n openshift-ingress

The Ingress Operator uses the clusterNetwork[].cidr from the network/cluster status to determine what mode (IPv4, IPv6, or dual stack) the managed ingress controller (router) should operate in. For example, if clusterNetwork contains only a v6 cidr, then the ingress controller will operate in v6-only mode. In the following example, ingress controllers managed by the Ingress Operator will run in v4-only mode because only one cluster network exists and the network is a v4 cidr:

$ oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'

Example output

map[cidr:10.128.0.0/14 hostPrefix:23]

5.15. Kubernetes API Server Operator

Purpose

The Kubernetes API Server Operator manages and updates the Kubernetes API server deployed on top of OpenShift Container Platform. The Operator is based on the OpenShift library-go framework and it is installed using the Cluster Version Operator (CVO).

Project

openshift-kube-apiserver-operator

CRDs

  • kubeapiservers.operator.openshift.io

    • Scope: Cluster
    • CR: kubeapiserver
    • Validation: Yes

Configuration objects

$ oc edit kubeapiserver

5.16. Kubernetes Controller Manager Operator

Purpose

The Kubernetes Controller Manager Operator manages and updates the Kubernetes Controller Manager deployed on top of OpenShift Container Platform. The Operator is based on the OpenShift library-go framework and is installed by the Cluster Version Operator (CVO).

It contains the following components:

  • Operator
  • Bootstrap manifest renderer
  • Installer based on static pods
  • Configuration observer

By default, the Operator exposes Prometheus metrics through the metrics service.

Project

cluster-kube-controller-manager-operator

5.17. Kubernetes Scheduler Operator

Purpose

The Kubernetes Scheduler Operator manages and updates the Kubernetes Scheduler deployed on top of OpenShift Container Platform. The Operator is based on the OpenShift Container Platform library-go framework and is installed with the Cluster Version Operator (CVO).

The Kubernetes Scheduler Operator contains the following components:

  • Operator
  • Bootstrap manifest renderer
  • Installer based on static pods
  • Configuration observer

By default, the Operator exposes Prometheus metrics through the metrics service.

Project

cluster-kube-scheduler-operator

Configuration

The configuration for the Kubernetes Scheduler is the result of merging:

  • A default configuration.
  • An observed configuration from the spec of schedulers.config.openshift.io.

Both are sparse configurations: unvalidated JSON snippets that are merged to form a valid configuration at the end.
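
The observed configuration comes from the cluster-scoped Scheduler resource named cluster, which you can inspect with, for example:

$ oc get schedulers.config.openshift.io cluster -o yaml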

5.18. Machine API Operator

Purpose

The Machine API Operator manages the lifecycle of specific purpose CRDs, controllers, and RBAC objects that extend the Kubernetes API. This declares the desired state of machines in a cluster.

Project

machine-api-operator

CRDs

  • MachineSet
  • Machine
  • MachineHealthCheck
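
These resources live in the openshift-machine-api namespace. For example, to list the MachineSet and Machine objects that declare the desired machine state:

$ oc get machinesets -n openshift-machine-api
$ oc get machines -n openshift-machine-api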

5.19. Machine Config Operator

Purpose

The Machine Config Operator manages and applies configuration and updates of the base operating system and container runtime, including everything between the kernel and kubelet.

There are four components:

  • machine-config-server: Provides Ignition configuration to new machines joining the cluster.
  • machine-config-controller: Coordinates the upgrade of machines to the desired configurations defined by a MachineConfig object. Options are provided to control the upgrade for sets of machines individually.
  • machine-config-daemon: Applies new machine configuration during update. Validates and verifies the machine’s state against the requested machine configuration.
  • machine-config: Provides a complete source of machine configuration at installation, first start up, and updates for a machine.

Project

openshift-machine-config-operator

5.20. Marketplace Operator

Purpose

The Marketplace Operator is a conduit to bring off-cluster Operators to your cluster.

Project

operator-marketplace

5.21. Node Tuning Operator

Purpose

The Node Tuning Operator helps you manage node-level tuning by orchestrating the Tuned daemon. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs.

The Operator manages the containerized Tuned daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized Tuned daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node.
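
The tuning specifications are expressed as Tuned custom resources in the Operator’s namespace. For example, to list them:

$ oc get tuned -n openshift-cluster-node-tuning-operator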

Node-level settings applied by the containerized Tuned daemon are rolled back on an event that triggers a profile change or when the containerized Tuned daemon is terminated gracefully by receiving and handling a termination signal.

The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later.

Project

cluster-node-tuning-operator

5.22. Operator Lifecycle Manager Operators

Purpose

Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of all Operators and their associated services running across their OpenShift Container Platform clusters. It is part of the Operator Framework, an open source toolkit designed to manage Kubernetes native applications (Operators) in an effective, automated, and scalable way.

Figure 5.1. Operator Lifecycle Manager workflow


OLM runs by default in OpenShift Container Platform 4.6, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster.

For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data services without having to be subject matter experts, because the Operator has that knowledge baked into it.

CRDs

Operator Lifecycle Manager (OLM) is composed of two Operators: the OLM Operator and the Catalog Operator.

Each of these Operators is responsible for managing the Custom Resource Definitions (CRDs) that are the basis for the OLM framework:

Table 5.3. CRDs managed by OLM and Catalog Operators

  Resource                Short name   Owner     Description
  ClusterServiceVersion   csv          OLM       Application metadata: name, version, icon, required resources, installation, and so on.
  InstallPlan             ip           Catalog   Calculated list of resources to be created to automatically install or upgrade a CSV.
  CatalogSource           catsrc       Catalog   A repository of CSVs, CRDs, and packages that define an application.
  Subscription            sub          Catalog   Used to keep CSVs up to date by tracking a channel in a package.
  OperatorGroup           og           OLM       Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide.

Each of these Operators is also responsible for creating resources:

Table 5.4. Resources created by OLM and Catalog Operators

  Resource                              Owner
  Deployments                           OLM
  ServiceAccounts                       OLM
  (Cluster)Roles                        OLM
  (Cluster)RoleBindings                 OLM
  Custom Resource Definitions (CRDs)    Catalog
  ClusterServiceVersions (CSVs)         Catalog

OLM Operator

The OLM Operator is responsible for deploying applications defined by CSV resources after the required resources specified in the CSV are present in the cluster.

The OLM Operator is not concerned with the creation of the required resources; you can choose to manually create these resources using the CLI or using the Catalog Operator. This separation of concern allows users incremental buy-in in terms of how much of the OLM framework they choose to leverage for their application.

The OLM Operator uses the following workflow:

  1. Watch for ClusterServiceVersions (CSVs) in a namespace and check that requirements are met.
  2. If requirements are met, run the install strategy for the CSV.

    Note

    A CSV must be an active member of an OperatorGroup for the install strategy to run.

Catalog Operator

The Catalog Operator is responsible for resolving and installing CSVs and the required resources they specify. It is also responsible for watching CatalogSources for updates to packages in channels and upgrading them, automatically if desired, to the latest available versions.

To track a package in a channel, you can create a Subscription resource configuring the desired package, channel, and the CatalogSource you want to pull updates from. When updates are found, an appropriate InstallPlan is written into the namespace on behalf of the user.
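
A minimal Subscription sketch follows; the Operator package name, channel, and catalog source are placeholders and must match entries that actually exist in a CatalogSource on your cluster:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator                  # placeholder Subscription name
  namespace: openshift-operators
spec:
  name: example-operator                  # placeholder: package name in the catalog
  channel: stable                         # placeholder: channel to track for updates
  source: redhat-operators                # placeholder: CatalogSource to pull updates from
  sourceNamespace: openshift-marketplace  # namespace where the CatalogSource resides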

The Catalog Operator uses the following workflow:

  1. Connect to each CatalogSource in the cluster.
  2. Watch for unresolved InstallPlans created by a user, and if found:

    1. Find the CSV matching the name requested and add the CSV as a resolved resource.
    2. For each managed or required CRD, add the CRD as a resolved resource.
    3. For each required CRD, find the CSV that manages it.
  3. Watch for resolved InstallPlans and create all of the discovered resources for it, if approved by a user or automatically.
  4. Watch for CatalogSources and Subscriptions and create InstallPlans based on them.

Catalog Registry

The Catalog Registry stores CSVs and CRDs for creation in a cluster and stores metadata about packages and channels.

A package manifest is an entry in the Catalog Registry that associates a package identity with sets of CSVs. Within a package, channels point to a particular CSV. Because CSVs explicitly reference the CSV that they replace, a package manifest provides the Catalog Operator with all of the information that is required to update a CSV to the latest version in a channel, stepping through each intermediate version.

Additional resources

For more information, see the sections on understanding Operator Lifecycle Manager (OLM).

5.23. OpenShift API Server Operator

Purpose

The OpenShift API Server Operator installs and maintains the openshift-apiserver on a cluster.

Project

openshift-apiserver-operator

CRDs

  • openshiftapiservers.operator.openshift.io

    • Scope: Cluster
    • CR: openshiftapiserver
    • Validation: Yes

5.24. Prometheus Operator

Purpose

The Prometheus Operator for Kubernetes provides easy monitoring definitions for Kubernetes services and deployment and management of Prometheus instances.

Once installed, the Prometheus Operator provides the following features:

  • Create and Destroy: Easily launch a Prometheus instance for your Kubernetes namespace, a specific application, or a team using the Operator.
  • Simple Configuration: Configure the fundamentals of Prometheus like versions, persistence, retention policies, and replicas from a native Kubernetes resource.
  • Target Services via Labels: Automatically generate monitoring target configurations based on familiar Kubernetes label queries; no need to learn a Prometheus specific configuration language.

Project

prometheus-operator

Legal Notice

Copyright © 2020 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.