Chapter 11. Operator SDK

11.1. Getting started with the Operator SDK

This guide outlines the basics of the Operator SDK and walks Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) through an example of building a simple Go-based Memcached Operator and managing its lifecycle from installation to upgrade.

This is accomplished using two centerpieces of the Operator Framework: the Operator SDK (the operator-sdk CLI tool and controller-runtime library API) and the Operator Lifecycle Manager (OLM).

Note

OpenShift Container Platform 4 supports Operator SDK v0.7.0 or later.

11.1.1. Architecture of the Operator SDK

The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. Operators take advantage of Kubernetes' extensibility to deliver the automation advantages of cloud services like provisioning, scaling, and backup and restore, while being able to run anywhere that Kubernetes can run.

Operators make it easy to manage complex, stateful applications on top of Kubernetes. However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate, and a lack of modularity, which leads to duplication.

The Operator SDK is a framework designed to make writing Operators easier by providing:

  • High-level APIs and abstractions to write the operational logic more intuitively
  • Tools for scaffolding and code generation to quickly bootstrap a new project
  • Extensions to cover common Operator use cases

11.1.1.1. Workflow

The Operator SDK provides the following workflow to develop a new Operator:

  1. Create a new Operator project using the Operator SDK command line interface (CLI).
  2. Define new resource APIs by adding Custom Resource Definitions (CRDs).
  3. Specify resources to watch using the Operator SDK API.
  4. Define the Operator reconciling logic in a designated handler and use the Operator SDK API to interact with resources.
  5. Use the Operator SDK CLI to build and generate the Operator deployment manifests.

Figure 11.1. Operator SDK workflow

osdk workflow

At a high level, an Operator using the Operator SDK processes events for watched resources in an Operator author-defined handler and takes actions to reconcile the state of the application.
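
In terms of the controller-runtime library that the SDK builds on, this author-defined handler is a Reconciler implementation. The following sketch shows the shape of that interface for the library versions this guide targets; compare it with the Reconcile() method excerpt shown later in this chapter. The comments are illustrative, not copied from the library source:

// Reconciler is the interface from sigs.k8s.io/controller-runtime/pkg/reconcile.
// The Operator author supplies the Reconcile method, and the controller calls it
// for every request generated by a watched event.
type Reconciler interface {
    // Reconcile compares the actual cluster state for the requested object with
    // the desired state declared in its spec and makes whatever changes are needed.
    Reconcile(reconcile.Request) (reconcile.Result, error)
}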

11.1.1.2. Manager file

The main program for the Operator is the manager file at cmd/manager/main.go. The manager automatically registers the scheme for all Custom Resources (CRs) defined under pkg/apis/ and runs all controllers under pkg/controller/.

The manager can restrict the namespace that all controllers watch for resources:

mgr, err := manager.New(cfg, manager.Options{Namespace: namespace})

By default, this is the namespace that the Operator is running in. To watch all namespaces, you can leave the namespace option empty:

mgr, err := manager.New(cfg, manager.Options{Namespace: ""})
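
Putting these pieces together, the scaffolded cmd/manager/main.go roughly follows the pattern below. This is a condensed sketch rather than the exact generated code: the import paths for apis and controller correspond to the example-inc project created later in this guide, and the generated file also wires up logging, metrics, and a signal handler that are omitted here. Details vary between SDK versions.

package main

import (
    "github.com/example-inc/memcached-operator/pkg/apis"       // generated API registration for this guide's example project
    "github.com/example-inc/memcached-operator/pkg/controller" // generated controller registration for this guide's example project

    "sigs.k8s.io/controller-runtime/pkg/client/config"
    "sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
    // Get a config to talk to the API server.
    cfg, err := config.GetConfig()
    if err != nil {
        panic(err)
    }

    // Create a new manager. Namespace restricts the cache to a single
    // namespace; an empty string watches all namespaces.
    mgr, err := manager.New(cfg, manager.Options{Namespace: "default"})
    if err != nil {
        panic(err)
    }

    // Register all Custom Resource schemes defined under pkg/apis/.
    if err := apis.AddToScheme(mgr.GetScheme()); err != nil {
        panic(err)
    }

    // Register all controllers defined under pkg/controller/.
    if err := controller.AddToManager(mgr); err != nil {
        panic(err)
    }

    // Start the manager. This blocks until the stop channel is closed;
    // the generated file wires this to a signal handler instead.
    stop := make(chan struct{})
    if err := mgr.Start(stop); err != nil {
        panic(err)
    }
}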

11.1.1.3. Prometheus Operator support

Prometheus is an open-source systems monitoring and alerting toolkit. The Prometheus Operator creates, configures, and manages Prometheus clusters running on Kubernetes-based clusters, such as OpenShift Container Platform.

Helper functions exist in the Operator SDK by default to automatically set up metrics in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed.
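
For illustration only, the generated main.go in SDK releases around v0.8 exposed the metrics port with a helper similar to the following. The helper name, signature, and port value shown here are assumptions to verify against your own scaffolded project, because they changed between SDK versions:

package main

import (
    "context"
    "fmt"

    "github.com/operator-framework/operator-sdk/pkg/metrics" // assumed helper package in SDK versions around v0.8
)

// metricsPort is the port on which the Operator serves Prometheus metrics.
const metricsPort int32 = 8383

func exposeMetrics() {
    // Create a Service object so that Prometheus, typically via a ServiceMonitor
    // managed by the Prometheus Operator, can scrape the Operator's metrics.
    if _, err := metrics.ExposeMetricsPort(context.TODO(), metricsPort); err != nil {
        // Metrics are optional, so the generated code only logs the error.
        fmt.Println(err.Error())
    }
}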

11.1.2. Installing the Operator SDK CLI

The Operator SDK has a CLI tool that assists developers in creating, building, and deploying a new Operator project. You can install the SDK CLI on your workstation so you are prepared to start authoring your own Operators.

Note

This guide uses minikube v0.25.0+ as the local Kubernetes cluster and Quay.io for the public registry.

11.1.2.1. Installing from GitHub release

You can download and install a pre-built release binary of the SDK CLI from the project on GitHub.

Prerequisites

  • docker v17.03+
  • OpenShift CLI (oc) v4.1+ installed
  • Access to a cluster based on Kubernetes v1.11.3+
  • Access to a container registry

Procedure

  1. Set the release version variable:

    RELEASE_VERSION=v0.8.0
  2. Download the release binary.

    • For Linux:

      $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    • For macOS:

      $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
  3. Verify the downloaded release binary.

    1. Download the provided ASC file.

      • For Linux:

        $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
      • For macOS:

        $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
    2. Place the binary and corresponding ASC file into the same directory and run the following command to verify the binary:

      • For Linux:

        $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
      • For macOS:

        $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc

      If you do not have the maintainer’s public key on your workstation, you will get the following error:

      $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
      gpg: assuming signed data in 'operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin'
      gpg: Signature made Fri Apr  5 20:03:22 2019 CEST
      gpg:                using RSA key <key_id> 1
      gpg: Can't check signature: No public key
      1
      RSA key string.

      To download the key, run the following command, replacing <key_id> with the RSA key string provided in the output of the previous command:

      $ gpg [--keyserver keys.gnupg.net] --recv-key "<key_id>" 1
      1
      If you do not have a key server configured, specify one with the --keyserver option.
  4. Install the release binary in your PATH:

    • For Linux:

      $ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
      $ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu /usr/local/bin/operator-sdk
      $ rm operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    • For macOS:

      $ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
      $ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin /usr/local/bin/operator-sdk
      $ rm operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
  5. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

11.1.2.2. Installing from Homebrew

You can install the SDK CLI using Homebrew.

Prerequisites

  • Homebrew
  • docker v17.03+
  • OpenShift CLI (oc) v4.1+ installed
  • Access to a cluster based on Kubernetes v1.11.3+
  • Access to a container registry

Procedure

  1. Install the SDK CLI using the brew command:

    $ brew install operator-sdk
  2. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

11.1.2.3. Compiling and installing from source

You can obtain the Operator SDK source code to compile and install the SDK CLI.

Prerequisites

  • dep v0.5.0+
  • Git
  • Go v1.10+
  • docker v17.03+
  • OpenShift CLI (oc) v4.1+ installed
  • Access to a cluster based on Kubernetes v1.11.3+
  • Access to a container registry

Procedure

  1. Clone the operator-sdk repository:

    $ mkdir -p $GOPATH/src/github.com/operator-framework
    $ cd $GOPATH/src/github.com/operator-framework
    $ git clone https://github.com/operator-framework/operator-sdk
    $ cd operator-sdk
  2. Check out the desired branch, for example master:

    $ git checkout master
  3. Compile and install the SDK CLI:

    $ make dep
    $ make install

    This installs the CLI binary operator-sdk at $GOPATH/bin.

  4. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

11.1.3. Building a Go-based Memcached Operator using the Operator SDK

The Operator SDK makes it easier to build Kubernetes native applications, a process that can require deep, application-specific operational knowledge. The SDK not only lowers that barrier, but it also helps reduce the amount of boilerplate code needed for many common management capabilities, such as metering or monitoring.

This procedure walks through an example of building a simple Memcached Operator using tools and libraries provided by the SDK.

Prerequisites

  • Operator SDK CLI installed on the development workstation
  • Operator Lifecycle Manager (OLM) installed on a Kubernetes-based cluster (v1.8 or above to support the apps/v1beta2 API group), for example OpenShift Container Platform 4.2
  • Access to the cluster using an account with cluster-admin permissions
  • OpenShift CLI (oc) v4.1+ installed

Procedure

  1. Create a new project.

    Use the CLI to create a new memcached-operator project:

    $ mkdir -p $GOPATH/src/github.com/example-inc/
    $ cd $GOPATH/src/github.com/example-inc/
    $ operator-sdk new memcached-operator --dep-manager dep
    $ cd memcached-operator
  2. Add a new Custom Resource Definition (CRD).

    1. Use the CLI to add a new CRD API called Memcached, with APIVersion set to cache.example.com/v1alpha1 and Kind set to Memcached:

      $ operator-sdk add api \
          --api-version=cache.example.com/v1alpha1 \
          --kind=Memcached

      This scaffolds the Memcached resource API under pkg/apis/cache/v1alpha1/.

    2. Modify the spec and status of the Memcached Custom Resource (CR) at the pkg/apis/cache/v1alpha1/memcached_types.go file:

      type MemcachedSpec struct {
      	// Size is the size of the memcached deployment
      	Size int32 `json:"size"`
      }
      type MemcachedStatus struct {
      	// Nodes are the names of the memcached pods
      	Nodes []string `json:"nodes"`
      }
    3. After modifying the *_types.go file, always run the following command to update the generated code for that resource type:

      $ operator-sdk generate k8s
  3. Add a new Controller.

    1. Add a new Controller to the project to watch and reconcile the Memcached resource:

      $ operator-sdk add controller \
          --api-version=cache.example.com/v1alpha1 \
          --kind=Memcached

      This scaffolds a new Controller implementation under pkg/controller/memcached/.

    2. For this example, replace the generated controller file pkg/controller/memcached/memcached_controller.go with the example implementation.

      The example controller executes the following reconciliation logic for each Memcached CR:

      • Create a Memcached Deployment if it does not exist.
      • Ensure that the Deployment size is the same as specified by the Memcached CR spec.
      • Update the Memcached CR status with the names of the Memcached Pods.

      The next two sub-steps inspect how the Controller watches resources and how the reconcile loop is triggered. You can skip these steps to go directly to building and running the Operator.

    3. Inspect the Controller implementation at the pkg/controller/memcached/memcached_controller.go file to see how the Controller watches resources.

      The first watch is for the Memcached type as the primary resource. For each Add, Update, or Delete event, the reconcile loop is sent a reconcile Request (a <namespace>:<name> key) for that Memcached object:

      err := c.Watch(
        &source.Kind{Type: &cachev1alpha1.Memcached{}}, &handler.EnqueueRequestForObject{})

      The next watch is for Deployments, but the event handler maps each event to a reconcile Request for the owner of the Deployment. In this case, this is the Memcached object for which the Deployment was created. This allows the controller to watch Deployments as a secondary resource:

      err := c.Watch(&source.Kind{Type: &appsv1.Deployment{}}, &handler.EnqueueRequestForOwner{
      		IsController: true,
      		OwnerType:    &cachev1alpha1.Memcached{},
      	})
    4. Every Controller has a Reconciler object with a Reconcile() method that implements the reconcile loop. The reconcile loop is passed the Request argument, which is a <namespace>:<name> key used to look up the primary resource object, Memcached, from the cache:

      func (r *ReconcileMemcached) Reconcile(request reconcile.Request) (reconcile.Result, error) {
        // Lookup the Memcached instance for this reconcile request
        memcached := &cachev1alpha1.Memcached{}
        err := r.client.Get(context.TODO(), request.NamespacedName, memcached)
        ...
      }

      Based on the return value of Reconcile() the reconcile Request may be requeued and the loop may be triggered again:

      // Reconcile successful - don't requeue
      return reconcile.Result{}, nil
      // Reconcile failed due to error - requeue
      return reconcile.Result{}, err
      // Requeue for any reason other than error
      return reconcile.Result{Requeue: true}, nil
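
      Putting these pieces together, the reconciliation logic described earlier (create the Deployment if it is missing and keep its size in sync with spec.size) can be sketched as follows. This is a condensed illustration, not the full example controller, which also updates the CR status and handles errors more thoroughly. The deploymentForMemcached helper stands in for a function that builds the Deployment object, and the sketch assumes the standard imports from the scaffolded controller: context, k8s.io/apimachinery/pkg/api/errors, k8s.io/apimachinery/pkg/types, appsv1, and the generated cachev1alpha1 package.

      func (r *ReconcileMemcached) Reconcile(request reconcile.Request) (reconcile.Result, error) {
          // Fetch the Memcached instance for this request.
          memcached := &cachev1alpha1.Memcached{}
          if err := r.client.Get(context.TODO(), request.NamespacedName, memcached); err != nil {
              if errors.IsNotFound(err) {
                  // The CR was deleted; owned objects are garbage collected.
                  return reconcile.Result{}, nil
              }
              return reconcile.Result{}, err
          }

          // Create the Deployment if it does not exist.
          found := &appsv1.Deployment{}
          err := r.client.Get(context.TODO(), types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found)
          if errors.IsNotFound(err) {
              dep := r.deploymentForMemcached(memcached) // helper that builds the Deployment object
              if err := r.client.Create(context.TODO(), dep); err != nil {
                  return reconcile.Result{}, err
              }
              return reconcile.Result{Requeue: true}, nil
          } else if err != nil {
              return reconcile.Result{}, err
          }

          // Ensure the Deployment size matches the Memcached CR spec.
          size := memcached.Spec.Size
          if *found.Spec.Replicas != size {
              found.Spec.Replicas = &size
              if err := r.client.Update(context.TODO(), found); err != nil {
                  return reconcile.Result{}, err
              }
          }

          return reconcile.Result{}, nil
      }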
  4. Build and run the Operator.

    1. Before running the Operator, the CRD must be registered with the Kubernetes API server:

      $ oc create \
          -f deploy/crds/cache_v1alpha1_memcached_crd.yaml
    2. After registering the CRD, there are two options for running the Operator:

      • As a Deployment inside a Kubernetes cluster
      • As a Go program outside the cluster

      Choose one of the following methods.

      1. Option A: Running as a Deployment inside the cluster.

        1. Build the memcached-operator image and push it to a registry:

          $ operator-sdk build quay.io/example/memcached-operator:v0.0.1
        2. The Deployment manifest is generated at deploy/operator.yaml. Update the Deployment image as follows since the default is just a placeholder:

          $ sed -i 's|REPLACE_IMAGE|quay.io/example/memcached-operator:v0.0.1|g' deploy/operator.yaml
        3. Ensure you have an account on Quay.io for the next step, or substitute your preferred container registry. On the registry, create a new public image repository named memcached-operator.
        4. Push the image to the registry:

          $ docker push quay.io/example/memcached-operator:v0.0.1
        5. Setup RBAC and deploy memcached-operator:

          $ oc create -f deploy/role.yaml
          $ oc create -f deploy/role_binding.yaml
          $ oc create -f deploy/service_account.yaml
          $ oc create -f deploy/operator.yaml
        6. Verify that memcached-operator is up and running:

          $ oc get deployment
          NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
          memcached-operator       1         1         1            1           1m
      2. Option B: Running locally outside the cluster.

        This method is preferred during the development cycle because it speeds up deployment and testing.

        Run the Operator locally with the default Kubernetes configuration file present at $HOME/.kube/config:

        $ operator-sdk up local --namespace=default

        To use a specific kubeconfig, add the --kubeconfig=<path/to/kubeconfig> flag.

  5. Verify that the Operator can deploy a Memcached application by creating a Memcached CR.

    1. Create the example Memcached CR that was generated at deploy/crds/cache_v1alpha1_memcached_cr.yaml:

      $ cat deploy/crds/cache_v1alpha1_memcached_cr.yaml
      apiVersion: "cache.example.com/v1alpha1"
      kind: "Memcached"
      metadata:
        name: "example-memcached"
      spec:
        size: 3
      
      $ oc apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    2. Ensure that memcached-operator creates the Deployment for the CR:

      $ oc get deployment
      NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      memcached-operator       1         1         1            1           2m
      example-memcached        3         3         3            3           1m
    3. Check the Pods and CR status to confirm the status is updated with the memcached Pod names:

      $ oc get pods
      NAME                                  READY     STATUS    RESTARTS   AGE
      example-memcached-6fd7c98d8-7dqdr     1/1       Running   0          1m
      example-memcached-6fd7c98d8-g5k7v     1/1       Running   0          1m
      example-memcached-6fd7c98d8-m7vn7     1/1       Running   0          1m
      memcached-operator-7cc7cfdf86-vvjqk   1/1       Running   0          2m
      
      $ oc get memcached/example-memcached -o yaml
      apiVersion: cache.example.com/v1alpha1
      kind: Memcached
      metadata:
        clusterName: ""
        creationTimestamp: 2018-03-31T22:51:08Z
        generation: 0
        name: example-memcached
        namespace: default
        resourceVersion: "245453"
        selfLink: /apis/cache.example.com/v1alpha1/namespaces/default/memcacheds/example-memcached
        uid: 0026cc97-3536-11e8-bd83-0800274106a1
      spec:
        size: 3
      status:
        nodes:
        - example-memcached-6fd7c98d8-7dqdr
        - example-memcached-6fd7c98d8-g5k7v
        - example-memcached-6fd7c98d8-m7vn7
  6. Verify that the Operator can manage a deployed Memcached application by updating the size of the deployment.

    1. Change the spec.size field in the memcached CR from 3 to 4:

      $ cat deploy/crds/cache_v1alpha1_memcached_cr.yaml
      apiVersion: "cache.example.com/v1alpha1"
      kind: "Memcached"
      metadata:
        name: "example-memcached"
      spec:
        size: 4
    2. Apply the change:

      $ oc apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    3. Confirm that the Operator changes the Deployment size:

      $ oc get deployment
      NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      example-memcached    4         4         4            4           5m
  7. Clean up the resources:

    $ oc delete -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    $ oc delete -f deploy/crds/cache_v1alpha1_memcached_crd.yaml
    $ oc delete -f deploy/operator.yaml
    $ oc delete -f deploy/role.yaml
    $ oc delete -f deploy/role_binding.yaml
    $ oc delete -f deploy/service_account.yaml

11.1.4. Managing a Memcached Operator using the Operator Lifecycle Manager

The previous section covered running an Operator manually. The next sections explore the Operator Lifecycle Manager (OLM), which enables a more robust deployment model for Operators running in production environments.

The OLM helps you to install, update, and generally manage the lifecycle of all of the Operators (and their associated services) on a Kubernetes cluster. It runs as a Kubernetes extension and lets you use oc for all the Operator lifecycle management functions without any additional tools.

Prerequisites

  • OLM installed on a Kubernetes-based cluster (v1.8 or above to support the apps/v1beta2 API group), for example OpenShift Container Platform 4.2 with OLM enabled
  • Memcached Operator built

Procedure

  1. Generate an Operator manifest.

    An Operator manifest describes how to display, create, and manage the application, in this case Memcached, as a whole. It is defined by a ClusterServiceVersion (CSV) object and is required for the OLM to function.

    You can use the following command to generate CSV manifests:

    $ operator-sdk olm-catalog gen-csv --csv-version 0.0.1
    Note

    This command is run from the memcached-operator/ directory that was created when you built the Memcached Operator.

    For the purposes of this guide, continue with this predefined manifest file for the next steps. You can optionally alter the image field within the manifest to reflect the image you built in previous steps, but it is not required.

    Note

    See Building a CSV for the Operator Framework for more information on manually defining a manifest file.

  2. Deploy the Operator.

    1. Create an OperatorGroup that specifies the namespaces that the Operator will target. Create the following OperatorGroup in the namespace where you will create the CSV. In this example, the default namespace is used:

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: memcached-operator-group
        namespace: default
      spec:
        targetNamespaces:
        - default
    2. Apply the Operator’s CSV manifest to the specified namespace in the cluster:

      $ curl -Lo memcachedoperator.0.0.1.csv.yaml https://raw.githubusercontent.com/operator-framework/getting-started/master/memcachedoperator.0.0.1.csv.yaml
      $ oc apply -f memcachedoperator.0.0.1.csv.yaml
      $ oc get csv memcachedoperator.v0.0.1 -n default -o json | jq '.status'

      When you apply this manifest, the cluster does not immediately update because it does not yet meet the requirements specified in the manifest.

    3. Create the role, role binding, and service account to grant resource permissions to the Operator, and the Custom Resource Definition (CRD) to create the Memcached type that the Operator manages:

      $ oc create -f deploy/crds/cache_v1alpha1_memcached_crd.yaml
      $ oc create -f deploy/service_account.yaml
      $ oc create -f deploy/role.yaml
      $ oc create -f deploy/role_binding.yaml
      Note

      These files were generated into the deploy/ directory by the Operator SDK when you built the Memcached Operator.

      Because the OLM creates Operators in a particular namespace when a manifest is applied, administrators can leverage the native Kubernetes RBAC permission model to restrict which users are allowed to install Operators.

  3. Create an application instance.

    The Memcached Operator is now running in the default namespace. Users interact with Operators via instances of CustomResources; in this case, the resource has the kind Memcached. Native Kubernetes RBAC also applies to CustomResources, providing administrators control over who can interact with each Operator.

    Creating instances of Memcached in this namespace will now trigger the Memcached Operator to instantiate pods running the memcached server that are managed by the Operator. The more CustomResources you create, the more unique instances of Memcached are managed by the Memcached Operator running in this namespace.

    $ cat <<EOF | oc apply -f -
    apiVersion: "cache.example.com/v1alpha1"
    kind: "Memcached"
    metadata:
      name: "memcached-for-wordpress"
    spec:
      size: 1
    EOF
    
    $ cat <<EOF | oc apply -f -
    apiVersion: "cache.example.com/v1alpha1"
    kind: "Memcached"
    metadata:
      name: "memcached-for-drupal"
    spec:
      size: 1
    EOF
    
    $ oc get Memcached
    NAME                      AGE
    memcached-for-drupal      22s
    memcached-for-wordpress   27s
    
    $ oc get pods
    NAME                                       READY     STATUS    RESTARTS   AGE
    memcached-app-operator-66b5777b79-pnsfj    1/1       Running   0          14m
    memcached-for-drupal-5476487c46-qbd66      1/1       Running   0          3s
    memcached-for-wordpress-65b75fd8c9-7b9x7   1/1       Running   0          8s
  4. Update an application.

    Manually apply an update to the Operator by creating a new Operator manifest with a replaces field that references the old Operator manifest. The OLM ensures that all resources being managed by the old Operator have their ownership moved to the new Operator without any interruption to the running workloads. It is up to the Operators themselves to execute any data migrations required to upgrade resources to run under a new version of the Operator.
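
    For illustration only, the parts of the version 0.0.2 manifest that express this upgrade relationship look similar to the following sketch. The real file in the getting-started repository contains many additional fields, such as the install strategy and owned CRDs:

    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
      name: memcachedoperator.v0.0.2
      namespace: default   # the namespace where the CSV is applied in this guide
    spec:
      version: 0.0.2
      # replaces names the CSV of the previous version so that the OLM can hand
      # ownership of the managed resources over to the new Operator.
      replaces: memcachedoperator.v0.0.1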

    The following command demonstrates applying a new Operator manifest file using a new version of the Operator and shows that the pods remain executing:

    $ curl -Lo memcachedoperator.0.0.2.csv.yaml https://raw.githubusercontent.com/operator-framework/getting-started/master/memcachedoperator.0.0.2.csv.yaml
    $ oc apply -f memcachedoperator.0.0.2.csv.yaml
    $ oc get pods
    NAME                                       READY     STATUS    RESTARTS   AGE
    memcached-app-operator-66b5777b79-pnsfj    1/1       Running   0          3s
    memcached-for-drupal-5476487c46-qbd66      1/1       Running   0          14m
    memcached-for-wordpress-65b75fd8c9-7b9x7   1/1       Running   0          14m

11.1.5. Additional resources

11.2. Creating Ansible-based Operators

This guide outlines Ansible support in the Operator SDK and walks Operator authors through examples building and running Ansible-based Operators with the operator-sdk CLI tool that use Ansible playbooks and modules.

11.2.1. Ansible support in the Operator SDK

The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. This framework includes the Operator SDK, which assists developers in bootstrapping and building an Operator based on their expertise without requiring knowledge of Kubernetes API complexities.

One of the Operator SDK’s options for generating an Operator project includes leveraging existing Ansible playbooks and modules to deploy Kubernetes resources as a unified application, without having to write any Go code.

11.2.1.1. Custom Resource files

Operators use the Kubernetes extension mechanism, Custom Resource Definitions (CRDs), so your Custom Resource (CR) looks and acts just like the built-in, native Kubernetes objects.

The CR file format is a Kubernetes resource file. The object has mandatory and optional fields:

Table 11.1. Custom Resource fields

apiVersion: Version of the CR to be created.
kind: Kind of the CR to be created.
metadata: Kubernetes-specific metadata to be created.
spec (optional): Key-value list of variables which are passed to Ansible. This field is empty by default.
status: Summarizes the current state of the object. For Ansible-based Operators, the status subresource is enabled for CRDs and managed by the k8s_status Ansible module by default, which includes condition information in the CR’s status.
annotations: Kubernetes-specific annotations to be appended to the CR.

The following CR annotations modify the behavior of the Operator:

Table 11.2. Ansible-based Operator annotations

ansible.operator-sdk/reconcile-period: Specifies the reconciliation interval for the CR. This value is parsed using the standard Golang package time. Specifically, ParseDuration is used, which applies the default suffix of s, giving the value in seconds.

Example Ansible-based Operator annotation

apiVersion: "foo.example.com/v1alpha1"
kind: "Foo"
metadata:
  name: "example"
annotations:
  ansible.operator-sdk/reconcile-period: "30s"

11.2.1.2. Watches file

The Watches file contains a list of mappings from Custom Resources (CRs), identified by their group, version, and kind (GVK), to an Ansible role or playbook. The Operator expects this mapping file in a predefined location, /opt/ansible/watches.yaml.

Table 11.3. Watches file mappings

group: Group of the CR to watch.
version: Version of the CR to watch.
kind: Kind of the CR to watch.
role (default): Path to the Ansible role added to the container. For example, if your roles directory is at /opt/ansible/roles/ and your role is named busybox, this value would be /opt/ansible/roles/busybox. This field is mutually exclusive with the playbook field.
playbook: Path to the Ansible playbook added to the container. This playbook is expected to be simply a way to call roles. This field is mutually exclusive with the role field.
reconcilePeriod (optional): The reconciliation interval, how often the role or playbook is run, for a given CR.
manageStatus (optional): When set to true (the default), the Operator manages the status of the CR generically. When set to false, the status of the CR is managed elsewhere, by the specified role or playbook or in a separate controller.

Example Watches file

- version: v1alpha1 1
  group: foo.example.com
  kind: Foo
  role: /opt/ansible/roles/Foo

- version: v1alpha1 2
  group: bar.example.com
  kind: Bar
  playbook: /opt/ansible/playbook.yml

- version: v1alpha1 3
  group: baz.example.com
  kind: Baz
  playbook: /opt/ansible/baz.yml
  reconcilePeriod: 0
  manageStatus: false

1
Simple example mapping Foo to the Foo role.
2
Simple example mapping Bar to a playbook.
3
More complex example for the Baz kind. Disables re-queuing and managing the CR status in the playbook.

11.2.1.2.1. Advanced options

Advanced features can be enabled by adding them to your Watches file per GVK (group, version, and kind). They can go below the group, version, kind and playbook or role fields.

Some features can be overridden per resource using an annotation on that Custom Resource (CR). The options that can be overridden have the annotation specified below.

Table 11.4. Advanced Watches file options

Reconcile period (reconcilePeriod): Time between reconcile runs for a particular CR. Override annotation: ansible.operator-sdk/reconcile-period. Default value: 1m.

Manage status (manageStatus): Allows the Operator to manage the conditions section of each CR’s status section. No override annotation. Default value: true.

Watch dependent resources (watchDependentResources): Allows the Operator to dynamically watch resources that are created by Ansible. No override annotation. Default value: true.

Watch cluster-scoped resources (watchClusterScopedResources): Allows the Operator to watch cluster-scoped resources that are created by Ansible. No override annotation. Default value: false.

Max runner artifacts (maxRunnerArtifacts): Manages the number of artifact directories that Ansible Runner keeps in the Operator container for each individual resource. Override annotation: ansible.operator-sdk/max-runner-artifacts. Default value: 20.

Example Watches file with advanced options

- version: v1alpha1
  group: app.example.com
  kind: AppService
  playbook: /opt/ansible/playbook.yml
  maxRunnerArtifacts: 30
  reconcilePeriod: 5s
  manageStatus: False
  watchDependentResources: False

11.2.1.3. Extra variables sent to Ansible

Extra variables can be sent to Ansible and are managed by the Operator. The spec section of the Custom Resource (CR) passes along its key-value pairs as extra variables. This is equivalent to extra variables passed to the ansible-playbook command.

The Operator also passes along additional variables under the meta field for the name of the CR and the namespace of the CR.

For the following CR example:

apiVersion: "app.example.com/v1alpha1"
kind: "Database"
metadata:
  name: "example"
spec:
  message:"Hello world 2"
  newParameter: "newParam"

The structure passed to Ansible as extra variables is:

{ "meta": {
        "name": "<cr_name>",
        "namespace": "<cr_namespace>",
  },
  "message": "Hello world 2",
  "new_parameter": "newParam",
  "_app_example_com_database": {
     <full_crd>
   },
}

The message and newParameter fields are set in the top level as extra variables (note that newParameter is converted to new_parameter), and meta provides the relevant metadata for the CR as defined in the Operator. The meta fields can be accessed using dot notation in Ansible, for example:

- debug:
    msg: "name: {{ meta.name }}, {{ meta.namespace }}"

11.2.1.4. Ansible Runner directory

Ansible Runner keeps information about Ansible runs in the container. This is located at /tmp/ansible-operator/runner/<group>/<version>/<kind>/<namespace>/<name>.

Additional resources

11.2.2. Installing the Operator SDK CLI

The Operator SDK has a CLI tool that assists developers in creating, building, and deploying a new Operator project. You can install the SDK CLI on your workstation so you are prepared to start authoring your own Operators.

Note

This guide uses minikube v0.25.0+ as the local Kubernetes cluster and Quay.io for the public registry.

11.2.2.1. Installing from GitHub release

You can download and install a pre-built release binary of the SDK CLI from the project on GitHub.

Prerequisites

  • docker v17.03+
  • OpenShift CLI (oc) v4.1+ installed
  • Access to a cluster based on Kubernetes v1.11.3+
  • Access to a container registry

Procedure

  1. Set the release version variable:

    RELEASE_VERSION=v0.8.0
  2. Download the release binary.

    • For Linux:

      $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    • For macOS:

      $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
  3. Verify the downloaded release binary.

    1. Download the provided ASC file.

      • For Linux:

        $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
      • For macOS:

        $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
    2. Place the binary and corresponding ASC file into the same directory and run the following command to verify the binary:

      • For Linux:

        $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
      • For macOS:

        $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc

      If you do not have the maintainer’s public key on your workstation, you will get the following error:

      $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
      gpg: assuming signed data in 'operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin'
      gpg: Signature made Fri Apr  5 20:03:22 2019 CEST
      gpg:                using RSA key <key_id> 1
      gpg: Can't check signature: No public key
      1
      RSA key string.

      To download the key, run the following command, replacing <key_id> with the RSA key string provided in the output of the previous command:

      $ gpg [--keyserver keys.gnupg.net] --recv-key "<key_id>" 1
      1
      If you do not have a key server configured, specify one with the --keyserver option.
  4. Install the release binary in your PATH:

    • For Linux:

      $ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
      $ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu /usr/local/bin/operator-sdk
      $ rm operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    • For macOS:

      $ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
      $ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin /usr/local/bin/operator-sdk
      $ rm operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
  5. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

11.2.2.2. Installing from Homebrew

You can install the SDK CLI using Homebrew.

Prerequisites

  • Homebrew
  • docker v17.03+
  • OpenShift CLI (oc) v4.1+ installed
  • Access to a cluster based on Kubernetes v1.11.3+
  • Access to a container registry

Procedure

  1. Install the SDK CLI using the brew command:

    $ brew install operator-sdk
  2. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

11.2.2.3. Compiling and installing from source

You can obtain the Operator SDK source code to compile and install the SDK CLI.

Prerequisites

  • dep v0.5.0+
  • Git
  • Go v1.10+
  • docker v17.03+
  • OpenShift CLI (oc) v4.1+ installed
  • Access to a cluster based on Kubernetes v1.11.3+
  • Access to a container registry

Procedure

  1. Clone the operator-sdk repository:

    $ mkdir -p $GOPATH/src/github.com/operator-framework
    $ cd $GOPATH/src/github.com/operator-framework
    $ git clone https://github.com/operator-framework/operator-sdk
    $ cd operator-sdk
  2. Check out the desired branch, for example master:

    $ git checkout master
  3. Compile and install the SDK CLI:

    $ make dep
    $ make install

    This installs the CLI binary operator-sdk at $GOPATH/bin.

  4. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

11.2.3. Building an Ansible-based Operator using the Operator SDK

This procedure walks through an example of building a simple Memcached Operator powered by Ansible playbooks and modules using tools and libraries provided by the Operator SDK.

Prerequisites

  • Operator SDK CLI installed on the development workstation
  • Access to a Kubernetes-based cluster v1.11.3+ (for example OpenShift Container Platform 4.2) using an account with cluster-admin permissions
  • OpenShift CLI (oc) v4.1+ installed
  • ansible v2.6.0+
  • ansible-runner v1.1.0+
  • ansible-runner-http v1.0.0+

Procedure

  1. Create a new Operator project, either namespace-scoped or cluster-scoped, using the operator-sdk new command. Choose one of the following:

    1. A namespace-scoped Operator (the default) watches and manages resources in a single namespace. Namespace-scoped operators are preferred because of their flexibility. They enable decoupled upgrades, namespace isolation for failures and monitoring, and differing API definitions.

      To create a new Ansible-based, namespace-scoped memcached-operator project and change to its directory, use the following commands:

      $ operator-sdk new memcached-operator \
          --api-version=cache.example.com/v1alpha1 \
          --kind=Memcached \
          --type=ansible
      $ cd memcached-operator

      This creates the memcached-operator project specifically for watching the Memcached resource with APIVersion cache.example.com/v1alpha1 and Kind Memcached.

    2. A cluster-scoped Operator watches and manages resources cluster-wide, which can be useful in certain cases. For example, the cert-manager operator is often deployed with cluster-scoped permissions and watches so that it can manage issuing certificates for an entire cluster.

      To create your memcached-operator project to be cluster-scoped and change to its directory, use the following commands:

      $ operator-sdk new memcached-operator \
          --cluster-scoped \
          --api-version=cache.example.com/v1alpha1 \
          --kind=Memcached \
          --type=ansible
      $ cd memcached-operator

      Using the --cluster-scoped flag scaffolds the new Operator with the following modifications:

      • deploy/operator.yaml: Set WATCH_NAMESPACE="" instead of setting it to the Pod’s namespace.
      • deploy/role.yaml: Use ClusterRole instead of Role.
      • deploy/role_binding.yaml:

        • Use ClusterRoleBinding instead of RoleBinding.
        • Set the subject namespace to REPLACE_NAMESPACE. This must be changed to the namespace in which the Operator is deployed.
  2. Customize the Operator logic.

    For this example, the memcached-operator executes the following reconciliation logic for each Memcached Custom Resource (CR):

    • Create a memcached Deployment if it does not exist.
    • Ensure that the Deployment size is the same as specified by the Memcached CR.

    By default, the memcached-operator watches Memcached resource events as shown in the watches.yaml file and executes the Ansible role Memcached:

    - version: v1alpha1
      group: cache.example.com
      kind: Memcached

    You can optionally customize the following logic in the watches.yaml file:

    1. Specifying a role option configures the Operator to use this specified path when launching ansible-runner with an Ansible role. By default, the new command fills in an absolute path to where your role should go:

      - version: v1alpha1
        group: cache.example.com
        kind: Memcached
        role: /opt/ansible/roles/memcached
    2. Specifying a playbook option in the watches.yaml file configures the Operator to use this specified path when launching ansible-runner with an Ansible playbook:

      - version: v1alpha1
        group: cache.example.com
        kind: Memcached
        playbook: /opt/ansible/playbook.yaml
  3. Build the Memcached Ansible role.

    Modify the generated Ansible role under the roles/memcached/ directory. This Ansible role controls the logic that is executed when a resource is modified.

    1. Define the Memcached spec.

      Defining the spec for an Ansible-based Operator can be done entirely in Ansible. The Ansible Operator passes all key-value pairs listed in the CR spec field along to Ansible as variables. The names of all variables in the spec field are converted to snake case (lowercase with an underscore) by the Operator before running Ansible. For example, serviceAccount in the spec becomes service_account in Ansible.

      Tip

      You should perform some type validation in Ansible on the variables to ensure that your application is receiving expected input.

      In case the user does not set the spec field, set a default by modifying the roles/memcached/defaults/main.yml file:

      size: 1
    2. Define the Memcached Deployment.

      With the Memcached spec now defined, you can define the Ansible logic that is executed when the resource changes. Because this is an Ansible role, the default behavior executes the tasks in the roles/memcached/tasks/main.yml file.

      The goal is for Ansible to create a Deployment if it does not exist, which runs the memcached:1.4.36-alpine image. Ansible 2.7+ supports the k8s Ansible module, which this example leverages to control the Deployment definition.

      Modify the roles/memcached/tasks/main.yml to match the following:

      - name: start memcached
        k8s:
          definition:
            kind: Deployment
            apiVersion: apps/v1
            metadata:
              name: '{{ meta.name }}-memcached'
              namespace: '{{ meta.namespace }}'
            spec:
              replicas: "{{size}}"
              selector:
                matchLabels:
                  app: memcached
              template:
                metadata:
                  labels:
                    app: memcached
                spec:
                  containers:
                  - name: memcached
                    command:
                    - memcached
                    - -m=64
                    - -o
                    - modern
                    - -v
                    image: "docker.io/memcached:1.4.36-alpine"
                    ports:
                      - containerPort: 11211
      Note

      This example uses the size variable to control the number of replicas of the Memcached Deployment. The default is set to 1, but any user can create a CR that overrides the default.

  4. Deploy the CRD.

    Before running the Operator, Kubernetes needs to know about the new Custom Resource Definition (CRD) the Operator will be watching. Deploy the Memcached CRD:

    $ oc create -f deploy/crds/cache_v1alpha1_memcached_crd.yaml
  5. Build and run the Operator.

    There are two ways to build and run the Operator:

    • As a Pod inside a Kubernetes cluster.
    • As a Go program outside the cluster using the operator-sdk up command.

    Choose one of the following methods:

    1. Run as a Pod inside a Kubernetes cluster. This is the preferred method for production use.

      1. Build the memcached-operator image and push it to a registry:

        $ operator-sdk build quay.io/example/memcached-operator:v0.0.1
        $ podman push quay.io/example/memcached-operator:v0.0.1
      2. Deployment manifests are generated in the deploy/operator.yaml file. The deployment image in this file needs to be modified from the placeholder REPLACE_IMAGE to the previously built image. To do this, run:

        $ sed -i 's|REPLACE_IMAGE|quay.io/example/memcached-operator:v0.0.1|g' deploy/operator.yaml
      3. If you created your Operator using the --cluster-scoped flag, update the service account namespace in the generated ClusterRoleBinding to match where you are deploying your Operator:

        $ export OPERATOR_NAMESPACE=$(oc config view --minify -o jsonpath='{.contexts[0].context.namespace}')
        $ sed -i "s|REPLACE_NAMESPACE|$OPERATOR_NAMESPACE|g" deploy/role_binding.yaml

        If you are performing these steps on macOS, use the following commands instead:

        $ sed -i "" 's|REPLACE_IMAGE|quay.io/example/memcached-operator:v0.0.1|g' deploy/operator.yaml
        $ sed -i "" "s|REPLACE_NAMESPACE|$OPERATOR_NAMESPACE|g" deploy/role_binding.yaml
      4. Deploy the memcached-operator:

        $ oc create -f deploy/service_account.yaml
        $ oc create -f deploy/role.yaml
        $ oc create -f deploy/role_binding.yaml
        $ oc create -f deploy/operator.yaml
      5. Verify that the memcached-operator is up and running:

        $ oc get deployment
        NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
        memcached-operator       1         1         1            1           1m
    2. Run outside the cluster. This method is preferred during the development cycle to speed up deployment and testing.

      Ensure that Ansible Runner and the Ansible Runner HTTP plug-in are installed, or you will see unexpected errors from Ansible Runner when a CR is created.

      It is also important that the role path referenced in the watches.yaml file exists on your machine. Because the role is normally placed on disk inside the Operator container image, when running locally you must manually copy the role to the configured Ansible roles path (for example /etc/ansible/roles).

      1. To run the Operator locally with the default Kubernetes configuration file present at $HOME/.kube/config:

        $ operator-sdk up local

        To run the Operator locally with a provided Kubernetes configuration file:

        $ operator-sdk up local --kubeconfig=config
  6. Create a Memcached CR.

    1. Modify the deploy/crds/cache_v1alpha1_memcached_cr.yaml file as shown and create a Memcached CR:

      $ cat deploy/crds/cache_v1alpha1_memcached_cr.yaml
      apiVersion: "cache.example.com/v1alpha1"
      kind: "Memcached"
      metadata:
        name: "example-memcached"
      spec:
        size: 3
      
      $ oc apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    2. Ensure that the memcached-operator creates the Deployment for the CR:

      $ oc get deployment
      NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      memcached-operator       1         1         1            1           2m
      example-memcached        3         3         3            3           1m
    3. Check the Pods to confirm three replicas were created:

      $ oc get pods
      NAME                                  READY     STATUS    RESTARTS   AGE
      example-memcached-6fd7c98d8-7dqdr     1/1       Running   0          1m
      example-memcached-6fd7c98d8-g5k7v     1/1       Running   0          1m
      example-memcached-6fd7c98d8-m7vn7     1/1       Running   0          1m
      memcached-operator-7cc7cfdf86-vvjqk   1/1       Running   0          2m
  7. Update the size.

    1. Change the spec.size field in the memcached CR from 3 to 4 and apply the change:

      $ cat deploy/crds/cache_v1alpha1_memcached_cr.yaml
      apiVersion: "cache.example.com/v1alpha1"
      kind: "Memcached"
      metadata:
        name: "example-memcached"
      spec:
        size: 4
      
      $ oc apply -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    2. Confirm that the Operator changes the Deployment size:

      $ oc get deployment
      NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      example-memcached    4         4         4            4           5m
  8. Clean up the resources:

    $ oc delete -f deploy/crds/cache_v1alpha1_memcached_cr.yaml
    $ oc delete -f deploy/operator.yaml
    $ oc delete -f deploy/role_binding.yaml
    $ oc delete -f deploy/role.yaml
    $ oc delete -f deploy/service_account.yaml
    $ oc delete -f deploy/crds/cache_v1alpha1_memcached_crd.yaml

11.2.4. Managing application lifecycle using the k8s Ansible module

To manage the lifecycle of your application on Kubernetes using Ansible, you can use the k8s Ansible module. This Ansible module allows a developer to either leverage their existing Kubernetes resource files (written in YAML) or express the lifecycle management in native Ansible.

One of the biggest benefits of using Ansible in conjunction with existing Kubernetes resource files is the ability to use Jinja templating so that you can customize resources with the simplicity of a few variables in Ansible.

This section goes into detail on usage of the k8s Ansible module. To get started, install the module on your local workstation and test it using a playbook before moving on to using it within an Operator.

11.2.4.1. Installing the k8s Ansible module

To install the k8s Ansible module on your local workstation:

Procedure

  1. Install Ansible 2.6+:

    $ sudo yum install ansible
  2. Install the OpenShift Python client package using pip:

    $ pip install openshift

11.2.4.2. Testing the k8s Ansible module locally

Sometimes, it is beneficial for a developer to run the Ansible code from their local machine as opposed to running and rebuilding the Operator each time.

Procedure

  1. Initialize a new Ansible-based Operator project:

    $ operator-sdk new --type ansible --kind Foo --api-version foo.example.com/v1alpha1 foo-operator
    Create foo-operator/tmp/init/galaxy-init.sh
    Create foo-operator/tmp/build/Dockerfile
    Create foo-operator/tmp/build/test-framework/Dockerfile
    Create foo-operator/tmp/build/go-test.sh
    Rendering Ansible Galaxy role [foo-operator/roles/Foo]...
    Cleaning up foo-operator/tmp/init
    Create foo-operator/watches.yaml
    Create foo-operator/deploy/rbac.yaml
    Create foo-operator/deploy/crd.yaml
    Create foo-operator/deploy/cr.yaml
    Create foo-operator/deploy/operator.yaml
    Run git init ...
    Initialized empty Git repository in /home/dymurray/go/src/github.com/dymurray/opsdk/foo-operator/.git/
    Run git init done
    $ cd foo-operator
  2. Modify the roles/Foo/tasks/main.yml file with the desired Ansible logic. This example creates and deletes a namespace with the switch of a variable.

    - name: set test namespace to {{ state }}
      k8s:
        api_version: v1
        kind: Namespace
        name: test
        state: "{{ state }}"
      ignore_errors: true 1
    1
    Setting ignore_errors: true ensures that deleting a nonexistent namespace does not fail.
  3. Modify the roles/Foo/defaults/main.yml file to set state to present by default.

    state: present
  4. Create an Ansible playbook playbook.yml in the top-level directory, which includes the Foo role:

    - hosts: localhost
      roles:
        - Foo
  5. Run the playbook:

    $ ansible-playbook playbook.yml
     [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
    
    PLAY [localhost] ***************************************************************************
    
    TASK [Gathering Facts] *********************************************************************
    ok: [localhost]
    
    TASK [Foo : set test namespace to present] ************************************************
    changed: [localhost]
    
    PLAY RECAP *********************************************************************************
    localhost                  : ok=2    changed=1    unreachable=0    failed=0
  6. Check that the namespace was created:

    $ oc get namespace
    NAME          STATUS    AGE
    default       Active    28d
    kube-public   Active    28d
    kube-system   Active    28d
    test          Active    3s
  7. Rerun the playbook setting state to absent:

    $ ansible-playbook playbook.yml --extra-vars state=absent
     [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
    
    PLAY [localhost] ***************************************************************************
    
    TASK [Gathering Facts] *********************************************************************
    ok: [localhost]
    
    TASK [Foo : set test namespace to absent] *************************************************
    changed: [localhost]
    
    PLAY RECAP *********************************************************************************
    localhost                  : ok=2    changed=1    unreachable=0    failed=0
  8. Check that the namespace was deleted:

    $ oc get namespace
    NAME          STATUS    AGE
    default       Active    28d
    kube-public   Active    28d
    kube-system   Active    28d

11.2.4.3. Testing the k8s Ansible module inside an Operator

After you are familiar with using the k8s Ansible module locally, you can trigger the same Ansible logic inside of an Operator when a Custom Resource (CR) changes. This example maps an Ansible role to a specific Kubernetes resource that the Operator watches. This mapping is done in the Watches file.

11.2.4.3.1. Testing an Ansible-based Operator locally

After getting comfortable testing Ansible workflows locally, you can test the logic inside of an Ansible-based Operator running locally.

To do so, use the operator-sdk up local command from the top-level directory of your Operator project. This command reads from the ./watches.yaml file and uses the ~/.kube/config file to communicate with a Kubernetes cluster just as the k8s Ansible module does.

Procedure

  1. Because the up local command reads from the ./watches.yaml file, there are options available to the Operator author. If the role field is left at its default value (/opt/ansible/roles/<name>), you must copy the role from your Operator project into the /opt/ansible/roles/ directory.

    This is cumbersome because changes are not reflected from the current directory. Instead, change the role field to point to the current directory and comment out the existing line:

    - version: v1alpha1
      group: foo.example.com
      kind: Foo
      #  role: /opt/ansible/roles/Foo
      role: /home/user/foo-operator/Foo
  2. Create a Custom Resource Definition (CRD) and proper role-based access control (RBAC) definitions for the Custom Resource (CR) Foo. The operator-sdk command autogenerates these files inside of the deploy/ directory:

    $ oc create -f deploy/crds/foo_v1alpha1_foo_crd.yaml
    $ oc create -f deploy/service_account.yaml
    $ oc create -f deploy/role.yaml
    $ oc create -f deploy/role_binding.yaml
  3. Run the up local command:

    $ operator-sdk up local
    [...]
    INFO[0000] Starting to serve on 127.0.0.1:8888
    INFO[0000] Watching foo.example.com/v1alpha1, Foo, default
  4. Now that the Operator is watching the resource Foo for events, the creation of a CR triggers your Ansible role to execute. View the deploy/cr.yaml file:

    apiVersion: "foo.example.com/v1alpha1"
    kind: "Foo"
    metadata:
      name: "example"

    Because the spec field is not set, Ansible is invoked with no extra variables. The next section covers how extra variables are passed from a CR to Ansible. This is why it is important to set sane defaults for the Operator.

  5. Create a CR instance of Foo with the default variable state set to present:

    $ oc create -f deploy/cr.yaml
  6. Check that the namespace test was created:

    $ oc get namespace
    NAME          STATUS    AGE
    default       Active    28d
    kube-public   Active    28d
    kube-system   Active    28d
    test          Active    3s
  7. Modify the deploy/cr.yaml file to set the state field to absent:

    apiVersion: "foo.example.com/v1alpha1"
    kind: "Foo"
    metadata:
      name: "example"
    spec:
      state: "absent"
  8. Apply the changes and confirm that the namespace is deleted:

    $ oc apply -f deploy/cr.yaml
    
    $ oc get namespace
    NAME          STATUS    AGE
    default       Active    28d
    kube-public   Active    28d
    kube-system   Active    28d

11.2.4.3.2. Testing an Ansible-based Operator on a cluster

After getting familiar running Ansible logic inside of an Ansible-based Operator locally, you can test the Operator inside of a Pod on a Kubernetes cluster, such as OpenShift Container Platform. Running as a Pod on a cluster is preferred for production use.

Procedure

  1. Build the foo-operator image and push it to a registry:

    $ operator-sdk build quay.io/example/foo-operator:v0.0.1
    $ podman push quay.io/example/foo-operator:v0.0.1
  2. Deployment manifests are generated in the deploy/operator.yaml file. The Deployment image in this file must be modified from the placeholder REPLACE_IMAGE to the previously-built image. To do so, run the following command:

    $ sed -i 's|REPLACE_IMAGE|quay.io/example/foo-operator:v0.0.1|g' deploy/operator.yaml

    If you are performing these steps on macOS, use the following command instead:

    $ sed -i "" 's|REPLACE_IMAGE|quay.io/example/foo-operator:v0.0.1|g' deploy/operator.yaml
  3. Deploy the foo-operator:

    $ oc create -f deploy/crds/foo_v1alpha1_foo_crd.yaml # if CRD doesn't exist already
    $ oc create -f deploy/service_account.yaml
    $ oc create -f deploy/role.yaml
    $ oc create -f deploy/role_binding.yaml
    $ oc create -f deploy/operator.yaml
  4. Verify that the foo-operator is up and running:

    $ oc get deployment
    NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    foo-operator       1         1         1            1           1m

11.2.5. Managing Custom Resource status using the k8s_status Ansible module

Ansible-based Operators automatically update Custom Resource (CR) status subresources with generic information about the previous Ansible run. This includes the number of successful and failed tasks and relevant error messages as shown:

status:
  conditions:
    - ansibleResult:
        changed: 3
        completion: 2018-12-03T13:45:57.13329
        failures: 1
        ok: 6
        skipped: 0
      lastTransitionTime: 2018-12-03T13:45:57Z
      message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno
        113] No route to host>'
      reason: Failed
      status: "True"
      type: Failure
    - lastTransitionTime: 2018-12-03T13:46:13Z
      message: Running reconciliation
      reason: Running
      status: "True"
      type: Running

Ansible-based Operators also allow Operator authors to supply custom status values with the k8s_status Ansible module. This allows the author to update the status from within Ansible with any key-value pair as desired.

By default, Ansible-based Operators always include the generic Ansible run output as shown above. If you would prefer your application did not update the status with Ansible output, you can track the status manually from your application.

Procedure

  1. To track CR status manually from your application, update the Watches file with a manageStatus field set to false:

    - version: v1
      group: api.example.com
      kind: Foo
      role: /opt/ansible/roles/Foo
      manageStatus: false
  2. Then, use the k8s_status Ansible module to update the subresource. For example, to update with key foo and value bar, k8s_status can be used as shown:

    - k8s_status:
        api_version: api.example.com/v1
        kind: Foo
        name: "{{ meta.name }}"
        namespace: "{{ meta.namespace }}"
        status:
          foo: bar
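
    After this task runs, the supplied key-value pair appears in the CR status subresource, for example:

    status:
      foo: bar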

11.2.5.1. Using the k8s_status Ansible module when testing locally

If your Operator takes advantage of the k8s_status Ansible module and you want to test the Operator locally with the operator-sdk up local command, you must install the module in a location that Ansible expects. This is done with the library configuration option for Ansible.

For this example, assume the user is placing third-party Ansible modules in the /usr/share/ansible/library/ directory.

Procedure

  1. To install the k8s_status module, set the ansible.cfg file to search in the /usr/share/ansible/library/ directory for installed Ansible modules:

    $ echo "library=/usr/share/ansible/library/" >> /etc/ansible/ansible.cfg
  2. Add the k8s_status.py file to the /usr/share/ansible/library/ directory:

    $ wget https://raw.githubusercontent.com/openshift/ocp-release-operator-sdk/master/library/k8s_status.py -O /usr/share/ansible/library/k8s_status.py
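
Optionally, assuming the ansible-doc command is available and reads the same ansible.cfg, you can confirm that Ansible now resolves the module from the configured library path:

$ ansible-doc k8s_status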

11.2.6. Additional resources

11.3. Creating Helm-based Operators

This guide outlines Helm chart support in the Operator SDK and walks Operator authors through an example of building and running an Nginx Operator with the operator-sdk CLI tool that uses an existing Helm chart.

11.3.1. Helm chart support in the Operator SDK

The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. This framework includes the Operator SDK, which assists developers in bootstrapping and building an Operator based on their expertise without requiring knowledge of Kubernetes API complexities.

One of the Operator SDK’s options for generating an Operator project includes leveraging an existing Helm chart to deploy Kubernetes resources as a unified application, without having to write any Go code. Such Helm-based Operators are designed to excel at stateless applications that require very little logic when rolled out, because changes should be applied to the Kubernetes objects that are generated as part of the chart. This may sound limiting, but it can be sufficient for a surprising number of use cases, as shown by the proliferation of Helm charts built by the Kubernetes community.

The main function of an Operator is to read from a custom object that represents your application instance and have its desired state match what is running. In the case of a Helm-based Operator, the object’s spec field is a list of configuration options that are typically described in Helm’s values.yaml file. Instead of setting these values with flags using the Helm CLI (for example, helm install -f values.yaml), you can express them within a Custom Resource (CR), which, as a native Kubernetes object, enables the benefits of RBAC applied to it and an audit trail.

For example, consider a simple CR called Tomcat:

apiVersion: apache.org/v1alpha1
kind: Tomcat
metadata:
  name: example-app
spec:
  replicaCount: 2

The replicaCount value, 2 in this case, is propagated into the chart’s templates where the following is used:

{{ .Values.replicaCount }}
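
For illustration, a chart template might consume this value in its Deployment spec. The following is a hedged sketch, not the actual contents of the Tomcat chart:

apiVersion: apps/v1
kind: Deployment
spec:
  replicas: {{ .Values.replicaCount }}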

After an Operator is built and deployed, you can deploy a new instance of an app by creating a new instance of a CR, or list the different instances running in all environments using the oc command:

$ oc get Tomcats --all-namespaces

There is no requirement to use the Helm CLI or install Tiller; Helm-based Operators import code from the Helm project. All you have to do is have an instance of the Operator running and register the CR with a Custom Resource Definition (CRD). And because it obeys RBAC, you can more easily prevent unintended production changes.

11.3.2. Installing the Operator SDK CLI

The Operator SDK has a CLI tool that assists developers in creating, building, and deploying a new Operator project. You can install the SDK CLI on your workstation so you are prepared to start authoring your own Operators.

Note

This guide uses minikube v0.25.0+ as the local Kubernetes cluster and Quay.io for the public registry.

11.3.2.1. Installing from GitHub release

You can download and install a pre-built release binary of the SDK CLI from the project on GitHub.

Prerequisites

  • docker v17.03+
  • OpenShift CLI (oc) v4.1+ installed
  • Access to a cluster based on Kubernetes v1.11.3+
  • Access to a container registry

Procedure

  1. Set the release version variable:

    RELEASE_VERSION=v0.8.0
  2. Download the release binary.

    • For Linux:

      $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    • For macOS:

      $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
  3. Verify the downloaded release binary.

    1. Download the provided ASC file.

      • For Linux:

        $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
      • For macOS:

        $ curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
    2. Place the binary and corresponding ASC file into the same directory and run the following command to verify the binary:

      • For Linux:

        $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu.asc
      • For macOS:

        $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc

      If you do not have the maintainer’s public key on your workstation, you will get the following error:

      $ gpg --verify operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin.asc
      gpg: assuming signed data in 'operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin'
      gpg: Signature made Fri Apr  5 20:03:22 2019 CEST
      gpg:                using RSA key <key_id> 1
      gpg: Can't check signature: No public key
      1
      RSA key string.

      To download the key, run the following command, replacing <key_id> with the RSA key string provided in the output of the previous command:

      $ gpg [--keyserver keys.gnupg.net] --recv-key "<key_id>" 1
      1
      If you do not have a key server configured, specify one with the --keyserver option.
  4. Install the release binary in your PATH:

    • For Linux:

      $ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
      $ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu /usr/local/bin/operator-sdk
      $ rm operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    • For macOS:

      $ chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
      $ sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin /usr/local/bin/operator-sdk
      $ rm operator-sdk-${RELEASE_VERSION}-x86_64-apple-darwin
  5. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

11.3.2.2. Installing from Homebrew

You can install the SDK CLI using Homebrew.

Prerequisites

  • Homebrew
  • docker v17.03+
  • OpenShift CLI (oc) v4.1+ installed
  • Access to a cluster based on Kubernetes v1.11.3+
  • Access to a container registry

Procedure

  1. Install the SDK CLI using the brew command:

    $ brew install operator-sdk
  2. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

11.3.2.3. Compiling and installing from source

You can obtain the Operator SDK source code to compile and install the SDK CLI.

Prerequisites

  • dep v0.5.0+
  • Git
  • Go v1.10+
  • docker v17.03+
  • OpenShift CLI (oc) v4.1+ installed
  • Access to a cluster based on Kubernetes v1.11.3+
  • Access to a container registry

Procedure

  1. Clone the operator-sdk repository:

    $ mkdir -p $GOPATH/src/github.com/operator-framework
    $ cd $GOPATH/src/github.com/operator-framework
    $ git clone https://github.com/operator-framework/operator-sdk
    $ cd operator-sdk
  2. Check out the master branch (or the desired release branch):

    $ git checkout master
  3. Compile and install the SDK CLI:

    $ make dep
    $ make install

    This installs the CLI binary operator-sdk at $GOPATH/bin.

  4. Verify that the CLI tool was installed correctly:

    $ operator-sdk version

11.3.3. Building a Helm-based Operator using the Operator SDK

This procedure walks through an example of building a simple Nginx Operator powered by a Helm chart using tools and libraries provided by the Operator SDK.

Tip

It is best practice to build a new Operator for each chart. This can allow for more native-behaving Kubernetes APIs (for example, oc get Nginx) and flexibility if you ever want to write a fully-fledged Operator in Go, migrating away from a Helm-based Operator.

Prerequisites

  • Operator SDK CLI installed on the development workstation
  • Access to a Kubernetes-based cluster v1.11.3+ (for example OpenShift Container Platform 4.2) using an account with cluster-admin permissions
  • OpenShift CLI (oc) v4.1+ installed

Procedure

  1. Create a new Operator project, either namespace-scoped or cluster-scoped, using the operator-sdk new command. Choose one of the following:

    1. A namespace-scoped Operator (the default) watches and manages resources in a single namespace. Namespace-scoped operators are preferred because of their flexibility. They enable decoupled upgrades, namespace isolation for failures and monitoring, and differing API definitions.

      To create a new Helm-based, namespace-scoped nginx-operator project, use the following command:

      $ operator-sdk new nginx-operator \
        --api-version=example.com/v1alpha1 \
        --kind=Nginx \
        --type=helm
      $ cd nginx-operator

      This creates the nginx-operator project specifically for watching the Nginx resource with APIVersion example.com/v1alpha1 and Kind Nginx.

    2. A cluster-scoped Operator watches and manages resources cluster-wide, which can be useful in certain cases. For example, the cert-manager operator is often deployed with cluster-scoped permissions and watches so that it can manage issuing certificates for an entire cluster.

      To create your nginx-operator project to be cluster-scoped, use the following command:

      $ operator-sdk new nginx-operator \
          --cluster-scoped \
          --api-version=example.com/v1alpha1 \
          --kind=Nginx \
          --type=helm

      Using the --cluster-scoped flag scaffolds the new Operator with the following modifications:

      • deploy/operator.yaml: Set WATCH_NAMESPACE="" instead of setting it to the Pod’s namespace.
      • deploy/role.yaml: Use ClusterRole instead of Role.
      • deploy/role_binding.yaml:

        • Use ClusterRoleBinding instead of RoleBinding.
        • Set the subject namespace to REPLACE_NAMESPACE. This must be changed to the namespace in which the Operator is deployed.
  2. Customize the Operator logic.

    For this example, the nginx-operator executes the following reconciliation logic for each Nginx Custom Resource (CR):

    • Create an Nginx Deployment if it does not exist.
    • Create an Nginx Service if it does not exist.
    • Create an Nginx Ingress if it is enabled and does not exist.
    • Ensure that the Deployment, Service, and optional Ingress match the desired configuration (for example, replica count, image, service type) as specified by the Nginx CR.

    By default, the nginx-operator watches Nginx resource events as shown in the watches.yaml file and executes Helm releases using the specified chart:

    - version: v1alpha1
      group: example.com
      kind: Nginx
      chart: /opt/helm/helm-charts/nginx
    1. Review the Nginx Helm chart.

      When a Helm Operator project is created, the Operator SDK creates an example Helm chart that contains a set of templates for a simple Nginx release.

      For this example, templates are available for Deployment, Service, and Ingress resources, along with a NOTES.txt template, which Helm chart developers use to convey helpful information about a release.

      If you are not already familiar with Helm Charts, take a moment to review the Helm Chart developer documentation.

    2. Understand the Nginx CR spec.

      Helm uses a concept called values to provide customizations to a Helm chart’s defaults, which are defined in the Helm chart’s values.yaml file.

      Override these defaults by setting the desired values in the CR spec. You can use the number of replicas as an example:

      1. First, inspect the helm-charts/nginx/values.yaml file to find that the chart has a value called replicaCount and it is set to 1 by default. To have 2 Nginx instances in your deployment, your CR spec must contain replicaCount: 2.

        Update the deploy/crds/example_v1alpha1_nginx_cr.yaml file to look like the following:

        apiVersion: example.com/v1alpha1
        kind: Nginx
        metadata:
          name: example-nginx
        spec:
          replicaCount: 2
      2. Similarly, the default service port is set to 80. To instead use 8080, update the deploy/crds/example_v1alpha1_nginx_cr.yaml file again by adding the service port override:

        apiVersion: example.com/v1alpha1
        kind: Nginx
        metadata:
          name: example-nginx
        spec:
          replicaCount: 2
          service:
            port: 8080

        The Helm Operator applies the entire spec as if it were the contents of a values file, just as the helm install -f ./overrides.yaml command does.

  3. Deploy the CRD.

    Before running the Operator, Kubernetes must know about the new Custom Resource Definition (CRD) that the Operator will be watching. Deploy the following CRD:

    $ oc create -f deploy/crds/example_v1alpha1_nginx_crd.yaml
  4. Build and run the Operator.

    There are two ways to build and run the Operator:

    • As a Pod inside a Kubernetes cluster.
    • As a Go program outside the cluster using the operator-sdk up command.

    Choose one of the following methods:

    1. Run as a Pod inside a Kubernetes cluster. This is the preferred method for production use.

      1. Build the nginx-operator image and push it to a registry:

        $ operator-sdk build quay.io/example/nginx-operator:v0.0.1
        $ docker push quay.io/example/nginx-operator:v0.0.1
      2. Deployment manifests are generated in the deploy/operator.yaml file. The Deployment image in this file must be modified from the placeholder REPLACE_IMAGE to the previously built image. To do so, run:

        $ sed -i 's|REPLACE_IMAGE|quay.io/example/nginx-operator:v0.0.1|g' deploy/operator.yaml
      3. If you created your Operator using the --cluster-scoped flag, update the service account namespace in the generated ClusterRoleBinding to match where you are deploying your Operator:

        $ export OPERATOR_NAMESPACE=$(oc config view --minify -o jsonpath='{.contexts[0].context.namespace}')
        $ sed -i "s|REPLACE_NAMESPACE|$OPERATOR_NAMESPACE|g" deploy/role_binding.yaml

        If you are performing these steps on macOS, use the following commands instead:

        $ sed -i "" 's|REPLACE_IMAGE|quay.io/example/nginx-operator:v0.0.1|g' deploy/operator.yaml
        $ sed -i "" "s|REPLACE_NAMESPACE|$OPERATOR_NAMESPACE|g" deploy/role_binding.yaml
      4. Deploy the nginx-operator:

        $ oc create -f deploy/service_account.yaml
        $ oc create -f deploy/role.yaml
        $ oc create -f deploy/role_binding.yaml
        $ oc create -f deploy/operator.yaml
      5. Verify that the nginx-operator is up and running:

        $ oc get deployment
        NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
        nginx-operator       1         1         1            1           1m
    2. Run outside the cluster. This method is preferred during the development cycle to speed up deployment and testing.

      It is important that the chart path referenced in the watches.yaml file exists on your machine. By default, the watches.yaml file is scaffolded to work with an Operator image built with the operator-sdk build command. When developing and testing your operator with the operator-sdk up local command, the SDK looks in your local file system for this path.

      1. Create a symlink at this location to point to your Helm chart’s path:

        $ sudo mkdir -p /opt/helm/helm-charts
        $ sudo ln -s $PWD/helm-charts/nginx /opt/helm/helm-charts/nginx
      2. To run the Operator locally with the default Kubernetes configuration file present at $HOME/.kube/config:

        $ operator-sdk up local

        To run the Operator locally with a provided Kubernetes configuration file:

        $ operator-sdk up local --kubeconfig=<path_to_config>
  5. Deploy the Nginx CR.

    Apply the Nginx CR that you modified earlier:

    $ oc apply -f deploy/crds/example_v1alpha1_nginx_cr.yaml

    Ensure that the nginx-operator creates the Deployment for the CR:

    $ oc get deployment
    NAME                                           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    example-nginx-b9phnoz9spckcrua7ihrbkrt1        2         2         2            2           1m

    Check the Pods to confirm two replicas were created:

    $ oc get pods
    NAME                                                      READY     STATUS    RESTARTS   AGE
    example-nginx-b9phnoz9spckcrua7ihrbkrt1-f8f9c875d-fjcr9   1/1       Running   0          1m
    example-nginx-b9phnoz9spckcrua7ihrbkrt1-f8f9c875d-ljbzl   1/1       Running   0          1m

    Check that the Service port is set to 8080:

    $ oc get service
    NAME                                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
    example-nginx-b9phnoz9spckcrua7ihrbkrt1   ClusterIP   10.96.26.3   <none>        8080/TCP   1m
  6. Update the replicaCount and remove the port.

    Change the spec.replicaCount field from 2 to 3, remove the spec.service field, and apply the change:

    $ cat deploy/crds/example_v1alpha1_nginx_cr.yaml
    apiVersion: "example.com/v1alpha1"
    kind: "Nginx"
    metadata:
      name: "example-nginx"
    spec:
      replicaCount: 3
    
    $ oc apply -f deploy/crds/example_v1alpha1_nginx_cr.yaml

    Confirm that the Operator changes the Deployment size:

    $ oc get deployment
    NAME                                           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    example-nginx-b9phnoz9spckcrua7ihrbkrt1        3         3         3            3           1m

    Check that the Service port is set to the default 80:

    $ oc get service
    NAME                                      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)  AGE
    example-nginx-b9phnoz9spckcrua7ihrbkrt1   ClusterIP   10.96.26.3   <none>        80/TCP   1m
  7. Clean up the resources:

    $ oc delete -f deploy/crds/example_v1alpha1_nginx_cr.yaml
    $ oc delete -f deploy/operator.yaml
    $ oc delete -f deploy/role_binding.yaml
    $ oc delete -f deploy/role.yaml
    $ oc delete -f deploy/service_account.yaml
    $ oc delete -f deploy/crds/example_v1alpha1_nginx_crd.yaml

11.3.4. Additional resources

11.4. Generating a ClusterServiceVersion (CSV)

A ClusterServiceVersion (CSV) is a YAML manifest created from Operator metadata that assists the Operator Lifecycle Manager (OLM) in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information like its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which Custom Resources (CRs) it manages or depends on.

The Operator SDK includes the olm-catalog gen-csv subcommand to generate a ClusterServiceVersion (CSV) for the current Operator project customized using information contained in manually-defined YAML manifests and Operator source files.

The CSV-generating command relieves Operator authors of needing in-depth Operator Lifecycle Manager (OLM) knowledge in order for their Operator to interact with OLM or publish metadata to the Catalog Registry. Further, because the CSV spec will likely change over time as new Kubernetes and OLM features are implemented, the Operator SDK is equipped to easily extend its update system to handle new CSV features going forward.

The CSV version is the same as the Operator’s, and a new CSV is generated when upgrading Operator versions. Operator authors can use the --csv-version flag to have their Operators' state encapsulated in a CSV with the supplied semantic version:

$ operator-sdk olm-catalog gen-csv --csv-version <version>

This action is idempotent and only updates the CSV file when a new version is supplied, or a YAML manifest or source file is changed. Operator authors should not have to directly modify most fields in a CSV manifest. Those that require modification are defined in this guide. For example, the CSV version must be included in metadata.name.

11.4.1. How CSV generation works

An Operator project’s deploy/ directory is the standard location for all manifests required to deploy an Operator. The Operator SDK can use data from manifests in deploy/ to write a CSV. The following command:

$ operator-sdk olm-catalog gen-csv --csv-version <version>

writes a CSV YAML file to the deploy/olm-catalog/ directory by default.

Exactly three types of manifests are required to generate a CSV:

  • operator.yaml
  • *_{crd,cr}.yaml
  • RBAC role files, for example role.yaml

Operator authors may have different versioning requirements for these files and can configure which specific files are included in the deploy/olm-catalog/csv-config.yaml file.

Workflow

Depending on whether an existing CSV is detected, and assuming all configuration defaults are used, the olm-catalog gen-csv subcommand either:

  • Creates a new CSV, with the same location and naming convention as exists currently, using available data in YAML manifests and source files.

    1. The update mechanism checks for an existing CSV in deploy/. When one is not found, it creates a ClusterServiceVersion object, referred to here as a cache, and populates fields easily derived from Operator metadata, such as Kubernetes API ObjectMeta.
    2. The update mechanism searches deploy/ for manifests that contain data a CSV uses, such as a Deployment resource, and sets the appropriate CSV fields in the cache with this data.
    3. After the search completes, every cache field populated is written back to a CSV YAML file.

or:

  • Updates an existing CSV at the currently pre-defined location, using available data in YAML manifests and source files.

    1. The update mechanism checks for an existing CSV in deploy/. When one is found, the CSV YAML file contents are marshaled into a ClusterServiceVersion cache.
    2. The update mechanism searches deploy/ for manifests that contain data a CSV uses, such as a Deployment resource, and sets the appropriate CSV fields in the cache with this data.
    3. After the search completes, every cache field populated is written back to a CSV YAML file.
Note

Individual YAML fields are overwritten and not the entire file, as descriptions and other non-generated parts of a CSV should be preserved.

11.4.2. CSV composition configuration

Operator authors can configure CSV composition by populating several fields in the deploy/olm-catalog/csv-config.yaml file:

Field | Description

operator-path (string)

The Operator resource manifest file path. Defaults to deploy/operator.yaml.

crd-cr-path-list (string(, string)*)

A list of CRD and CR manifest file paths. Defaults to [deploy/crds/*_{crd,cr}.yaml].

rbac-path-list (string(, string)*)

A list of RBAC role manifest file paths. Defaults to [deploy/role.yaml].
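
For example, a hypothetical deploy/olm-catalog/csv-config.yaml that overrides these defaults might look as follows; the paths and list syntax shown are illustrative:

operator-path: deploy/operator.yaml
crd-cr-path-list:
  - deploy/crds/app_v1alpha1_appservice_crd.yaml
  - deploy/crds/app_v1alpha1_appservice_cr.yaml
rbac-path-list:
  - deploy/role.yaml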

11.4.3. Manually-defined CSV fields

Many CSV fields cannot be populated using generated, non-SDK-specific manifests. These fields are mostly human-written, English metadata about the Operator and various Custom Resource Definitions (CRDs).

Operator authors must directly modify their CSV YAML file, adding personalized data to the following required fields. The Operator SDK gives a warning during CSV generation when a lack of data in any of the required fields is detected.

Table 11.5. Required

Field | Description

metadata.name

A unique name for this CSV. Operator version should be included in the name to ensure uniqueness, for example app-operator.v0.1.1.

metadata.capabilities

The Operator’s capability level according to the Operator maturity model, for example Seamless Upgrades.

spec.displayName

A public name to identify the Operator.

spec.description

A short description of the Operator’s functionality.

spec.keywords

Keywords describing the operator.

spec.maintainers

Human or organizational entities maintaining the Operator, with a name and email.

spec.provider

The Operator’s provider (usually an organization), with a name.

spec.labels

Key-value pairs to be used by Operator internals.

spec.version

Semantic version of the Operator, for example 0.1.1.

spec.customresourcedefinitions

Any CRDs the Operator uses. This field is populated automatically by the Operator SDK if any CRD YAML files are present in deploy/. However, several fields not in the CRD manifest spec require user input:

  • description: description of the CRD.
  • resources: any Kubernetes resources leveraged by the CRD, for example Pods and StatefulSets.
  • specDescriptors: UI hints for inputs and outputs of the Operator.

Table 11.6. Optional

Field | Description

spec.replaces

The name of the CSV being replaced by this CSV.

spec.links

URLs (for example, websites and documentation) pertaining to the Operator or application being managed, each with a name and url.

spec.selector

Selectors by which the Operator can pair resources in a cluster.

spec.icon

A base64-encoded icon unique to the Operator, set in a base64data field with a mediatype.

spec.maturity

The level of maturity the software has achieved at this version, for example alpha, beta, stable.

Further details on what data each field above should hold are found in the CSV spec.
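
Pulling the required fields together, a hand-edited portion of a CSV might resemble the following sketch; all values are examples and the exact schema should be checked against the CSV spec:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: app-operator.v0.1.1
spec:
  displayName: App Operator
  description: Manages example App workloads.
  keywords:
    - app
    - operator
  maintainers:
    - name: Example Maintainer
      email: maintainer@example.com
  provider:
    name: Example Org
  version: 0.1.1
  maturity: alpha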

Note

Several YAML fields currently requiring user intervention can potentially be parsed from Operator code; such Operator SDK functionality will be addressed in a future design document.

11.4.4. Generating a CSV

Prerequisites

  • An Operator project generated using the Operator SDK

Procedure

  1. In your Operator project, configure your CSV composition by modifying the deploy/olm-catalog/csv-config.yaml file, if desired.
  2. Generate the CSV:

    $ operator-sdk olm-catalog gen-csv --csv-version <version>
  3. In the new CSV generated in the deploy/olm-catalog/ directory, ensure all required, manually-defined fields are set appropriately.

11.4.5. Understanding your Custom Resource Definitions (CRDs)

There are two types of Custom Resource Definitions (CRDs) that your Operator may use: ones that are owned by it and ones that it depends on, which are required.

11.4.5.1. Owned CRDs

The CRDs owned by your Operator are the most important part of your CSV. This establishes the link between your Operator and the required RBAC rules, dependency management, and other Kubernetes concepts.

It is common for your Operator to use multiple CRDs to link together concepts, such as top-level database configuration in one object and a representation of ReplicaSets in another. Each one should be listed out in the CSV file.

Table 11.7. Owned CRD fields

Field | Description | Required/Optional

Name

The full name of your CRD.

Required

Version

The version of that object API.

Required

Kind

The machine readable name of your CRD.

Required

DisplayName

A human readable version of your CRD name, for example MongoDB Standalone.

Required

Description

A short description of how this CRD is used by the Operator or a description of the functionality provided by the CRD.

Required

Group

The API group that this CRD belongs to, for example database.example.com.

Optional

Resources

Your CRDs own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the Service or Ingress rule that exposes a database.

It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, ConfigMaps that store internal state that should not be modified by a user should not appear here.

Optional

SpecDescriptors, StatusDescriptors, and ActionDescriptors

These Descriptors are a way to hint UIs with certain inputs or outputs of your Operator that are most important to an end user. If your CRD contains the name of a Secret or ConfigMap that the user must provide, you can specify that here. These items are linked and highlighted in compatible UIs.

There are three types of descriptors:

  • SpecDescriptors: A reference to fields in the spec block of an object.
  • StatusDescriptors: A reference to fields in the status block of an object.
  • ActionDescriptors: A reference to actions that can be performed on an object.

All Descriptors accept the following fields:

  • DisplayName: A human readable name for the Spec, Status, or Action.
  • Description: A short description of the Spec, Status, or Action and how it is used by the Operator.
  • Path: A dot-delimited path of the field on the object that this descriptor describes.
  • X-Descriptors: Used to determine which "capabilities" this descriptor has and which UI component to use. See the openshift/console project for a canonical list of React UI X-Descriptors for OpenShift Container Platform.

Also see the openshift/console project for more information on Descriptors in general.

Optional

The following example depicts a MongoDB Standalone CRD that requires some user input in the form of a Secret and ConfigMap, and orchestrates Services, StatefulSets, Pods and ConfigMaps:

Example owned CRD

      - displayName: MongoDB Standalone
        group: mongodb.com
        kind: MongoDbStandalone
        name: mongodbstandalones.mongodb.com
        resources:
          - kind: Service
            name: ''
            version: v1
          - kind: StatefulSet
            name: ''
            version: v1beta2
          - kind: Pod
            name: ''
            version: v1
          - kind: ConfigMap
            name: ''
            version: v1
        specDescriptors:
          - description: Credentials for Ops Manager or Cloud Manager.
            displayName: Credentials
            path: credentials
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret'
          - description: Project this deployment belongs to.
            displayName: Project
            path: project
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap'
          - description: MongoDB version to be installed.
            displayName: Version
            path: version
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:label'
        statusDescriptors:
          - description: The status of each of the Pods for the MongoDB cluster.
            displayName: Pod Status
            path: pods
            x-descriptors:
              - 'urn:alm:descriptor:com.tectonic.ui:podStatuses'
        version: v1
        description: >-
          MongoDB Deployment consisting of only one host. No replication of
          data.

11.4.5.2. Required CRDs

Relying on other required CRDs is completely optional and only exists to reduce the scope of individual Operators and provide a way to compose multiple Operators together to solve an end-to-end use case.

An example of this is an Operator that might set up an application and install an etcd cluster (from an etcd Operator) to use for distributed locking and a Postgres database (from a Postgres Operator) for data storage.

The Operator Lifecycle Manager (OLM) checks against the available CRDs and Operators in the cluster to fulfill these requirements. If suitable versions are found, the Operators are started within the desired namespace and a Service Account created for each Operator to create, watch, and modify the Kubernetes resources required.

Table 11.8. Required CRD fields

Field | Description | Required/Optional

Name

The full name of the CRD you require.

Required

Version

The version of that object API.

Required

Kind

The Kubernetes object kind.

Required

DisplayName

A human readable version of the CRD.

Required

Description

A summary of how the component fits in your larger architecture.

Required

Example required CRD

    required:
    - name: etcdclusters.etcd.database.coreos.com
      version: v1beta2
      kind: EtcdCluster
      displayName: etcd Cluster
      description: Represents a cluster of etcd nodes.

11.4.5.3. CRD templates

Users of your Operator will need to be aware of which options are required versus optional. You can provide templates for each of your CRDs with a minimum set of configuration as an annotation named alm-examples. Compatible UIs will pre-fill this template for users to further customize.

The annotation consists of a list of kind entries (for example, the CRD name) along with the corresponding metadata and spec of the Kubernetes object.

The following full example provides templates for EtcdCluster, EtcdBackup and EtcdRestore:

metadata:
  annotations:
    alm-examples: >-
      [{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdCluster","metadata":{"name":"example","namespace":"default"},"spec":{"size":3,"version":"3.2.13"}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdRestore","metadata":{"name":"example-etcd-cluster"},"spec":{"etcdCluster":{"name":"example-etcd-cluster"},"backupStorageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdBackup","metadata":{"name":"example-etcd-cluster-backup"},"spec":{"etcdEndpoints":["<etcd-cluster-endpoints>"],"storageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}}]

11.4.6. Understanding your API services

As with CRDs, there are two types of APIServices that your Operator may use: owned and required.

11.4.6.1. Owned APIServices

When a CSV owns an APIService, it is responsible for describing the deployment of the extension api-server that backs it and the group-version-kinds it provides.

An APIService is uniquely identified by the group-version it provides and can be listed multiple times to denote the different kinds it is expected to provide.

Table 11.9. Owned APIService fields

Field | Description | Required/Optional

Group

Group that the APIService provides, for example database.example.com.

Required

Version

Version of the APIService, for example v1alpha1.

Required

Kind

A kind that the APIService is expected to provide.

Required

Name

The plural name for the APIService provided.

Required

DeploymentName

Name of the deployment defined by your CSV that corresponds to your APIService (required for owned APIServices). During the CSV pending phase, the OLM Operator searches your CSV’s InstallStrategy for a deployment spec with a matching name, and if not found, does not transition the CSV to the install ready phase.

Required

DisplayName

A human readable version of your APIService name, for example MongoDB Standalone.

Required

Description

A short description of how this APIService is used by the Operator or a description of the functionality provided by the APIService.

Required

Resources

Your APIServices own one or more types of Kubernetes objects. These are listed in the resources section to inform your users of the objects they might need to troubleshoot or how to connect to the application, such as the Service or Ingress rule that exposes a database.

It is recommended to only list out the objects that are important to a human, not an exhaustive list of everything you orchestrate. For example, ConfigMaps that store internal state that should not be modified by a user should not appear here.

Optional

SpecDescriptors, StatusDescriptors, and ActionDescriptors

Essentially the same as for owned CRDs.

Optional
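
By analogy with the owned CRD example earlier, an owned APIService entry might resemble the following sketch; every value here is hypothetical and the field names mirror the table above:

      - group: database.example.com
        version: v1alpha1
        kind: MongoDbProxy
        name: mongodbproxies.database.example.com
        deploymentName: example-apiserver
        displayName: MongoDB Proxy
        description: Aggregated API served by the example api-server deployment.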

11.4.6.1.1. APIService Resource Creation

The Operator Lifecycle Manager (OLM) is responsible for creating or replacing the Service and APIService resources for each unique owned APIService:

  • Service Pod selectors are copied from the CSV deployment matching the APIServiceDescription’s DeploymentName.
  • A new CA key/cert pair is generated for each installation and the base64-encoded CA bundle is embedded in the respective APIService resource.
11.4.6.1.2. APIService Serving Certs

The OLM handles generating a serving key/cert pair whenever an owned APIService is being installed. The serving certificate has a CN containing the host name of the generated Service resource and is signed by the private key of the CA bundle embedded in the corresponding APIService resource.

The cert is stored as a type kubernetes.io/tls Secret in the deployment namespace, and a Volume named apiservice-cert is automatically appended to the Volumes section of the deployment in the CSV matching the APIServiceDescription’s DeploymentName field.

If one does not already exist, a VolumeMount with a matching name is also appended to all containers of that deployment. This allows users to define a VolumeMount with the expected name to accommodate any custom path requirements. The generated VolumeMount’s path defaults to /apiserver.local.config/certificates and any existing VolumeMounts with the same path are replaced.
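
Put together, the pieces that OLM manages on the matching Deployment resemble the following sketch; the Secret and container names are hypothetical:

volumes:
  - name: apiservice-cert
    secret:
      secretName: example-apiserver-cert
containers:
  - name: example-apiserver
    volumeMounts:
      - name: apiservice-cert
        mountPath: /apiserver.local.config/certificates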

11.4.6.2. Required APIServices

The OLM ensures all required CSVs have an APIService that is available and all expected group-version-kinds are discoverable before attempting installation. This allows a CSV to rely on specific kinds provided by APIServices it does not own.

Table 11.10. Required APIService fields

Field | Description | Required/Optional

Group

Group that the APIService provides, for example database.example.com.

Required

Version

Version of the APIService, for example v1alpha1.

Required

Kind

A kind that the APIService is expected to provide.

Required

DisplayName

A human readable version of your APIService name, for example MongoDB Standalone.

Required

Description

A short description of how this APIService is used by the Operator or a description of the functionality provided by the APIService.

Required

11.5. Configuring built-in monitoring with Prometheus

This guide describes the built-in monitoring support provided by the Operator SDK using the Prometheus Operator and details usage for Operator authors.

11.5.1. Prometheus Operator support

Prometheus is an open-source systems monitoring and alerting toolkit. The Prometheus Operator creates, configures, and manages Prometheus clusters running on Kubernetes-based clusters, such as OpenShift Container Platform.

Helper functions exist in the Operator SDK by default to automatically set up metrics in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed.

11.5.2. Metrics helper

In Go-based Operators generated using the Operator SDK, the following function exposes general metrics about the running program:

func ExposeMetricsPort(ctx context.Context, port int32) (*v1.Service, error)

These metrics are inherited from the controller-runtime library API. By default, the metrics are served on 0.0.0.0:8383/metrics.

A Service object is created with the metrics port exposed, which can then be accessed by Prometheus. The Service object is garbage collected when the leader Pod’s root owner is deleted.

The following example is present in the cmd/manager/main.go file in all Operators generated using the Operator SDK:

import (
    "github.com/operator-framework/operator-sdk/pkg/metrics"
    "sigs.k8s.io/controller-runtime/pkg/manager"
)

var (
    // Change the below variables to serve metrics on a different host or port.
    metricsHost       = "0.0.0.0" 1
    metricsPort int32 = 8383 2
)
...
func main() {
    ...
    // Pass metrics address to controller-runtime manager
    mgr, err := manager.New(cfg, manager.Options{
        Namespace:          namespace,
        MetricsBindAddress: fmt.Sprintf("%s:%d", metricsHost, metricsPort),
    })

    ...
    // Create Service object to expose the metrics port.
    _, err = metrics.ExposeMetricsPort(ctx, metricsPort)
    if err != nil {
        // handle error
        log.Info(err.Error())
    }
    ...
}
1
The host that the metrics are exposed on.
2
The port that the metrics are exposed on.

11.5.2.1. Modifying the metrics port

Operator authors can modify the port that metrics are exposed on.

Prerequisites

  • Go-based Operator generated using the Operator SDK
  • Kubernetes-based cluster with the Prometheus Operator deployed

Procedure

  • In the generated Operator’s cmd/manager/main.go file, change the value of metricsPort in the line var metricsPort int32 = 8383.
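
For example, to serve metrics on port 8686 instead (the port number here is arbitrary), the variable block would read:

var (
    // Change the below variables to serve metrics on a different host or port.
    metricsHost       = "0.0.0.0"
    metricsPort int32 = 8686
)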

11.5.3. ServiceMonitor resources

A ServiceMonitor is a Custom Resource Definition (CRD) provided by the Prometheus Operator that discovers the Endpoints in Service objects and configures Prometheus to monitor those Pods.

In Go-based Operators generated using the Operator SDK, the GenerateServiceMonitor() helper function can take a Service object and generate a ServiceMonitor Custom Resource (CR) based on it.

11.5.3.1. Creating ServiceMonitor resources

Operator authors can add Service target discovery of created monitoring Services using the metrics.CreateServiceMonitor() helper function, which accepts the newly created Service.

Prerequisites

  • Go-based Operator generated using the Operator SDK
  • Kubernetes-based cluster with the Prometheus Operator deployed

Procedure

  • Add the metrics.CreateServiceMonitor() helper function to your Operator code:

    import (
        "k8s.io/api/core/v1"
        "github.com/operator-framework/operator-sdk/pkg/metrics"
        "sigs.k8s.io/controller-runtime/pkg/client/config"
    )
    func main() {
    
        ...
        // Populate below with the Service(s) for which you want to create ServiceMonitors.
        services := []*v1.Service{}
        // Create one ServiceMonitor per application per namespace.
        // Change the value below to the name of the namespace you want the ServiceMonitor to be created in.
        ns := "default"
        // restConfig is used for talking to the Kubernetes API server.
        restConfig, err := config.GetConfig()
        if err != nil {
            // Handle errors here.
        }
    
        // Pass the Service(s) to the helper function, which in turn returns the array of ServiceMonitor objects.
        serviceMonitors, err := metrics.CreateServiceMonitors(restConfig, ns, services)
        if err != nil {
            // Handle errors here.
        }
        ...
    }

11.6. Configuring leader election

During the lifecycle of an Operator, it is possible that there may be more than one instance running at any given time, for example when rolling out an upgrade for the Operator. In such a scenario, it is necessary to use leader election to avoid contention between multiple Operator instances. This ensures that only one leader instance handles the reconciliation while the other instances are inactive but ready to take over when the leader steps down.

There are two different leader election implementations to choose from, each with its own trade-off:

  • Leader-for-life: The leader Pod only gives up leadership (using garbage collection) when it is deleted. This implementation precludes the possibility of two instances mistakenly running as leaders (split brain). However, this method can be subject to a delay in electing a new leader. For example, when the leader Pod is on an unresponsive or partitioned node, the pod-eviction-timeout dictates how long it takes for the leader Pod to be deleted from the node and step down (default 5m). See the Leader-for-life Go documentation for more.
  • Leader-with-lease: The leader Pod periodically renews the leader lease and gives up leadership when it cannot renew the lease. This implementation allows for a faster transition to a new leader when the existing leader is isolated, but there is a possibility of split brain in certain situations. See the Leader-with-lease Go documentation for more.

By default, the Operator SDK enables the Leader-for-life implementation. Consult the related Go documentation for both approaches to consider the trade-offs that make sense for your use case.

The following examples illustrate how to use the two options.

11.6.1. Using Leader-for-life election

With the Leader-for-life election implementation, a call to leader.Become() blocks the Operator as it retries until it can become the leader by creating the ConfigMap named memcached-operator-lock:

import (
  ...
  "github.com/operator-framework/operator-sdk/pkg/leader"
)

func main() {
  ...
  err = leader.Become(context.TODO(), "memcached-operator-lock")
  if err != nil {
    log.Error(err, "Failed to retry for leader lock")
    os.Exit(1)
  }
  ...
}

If the Operator is not running inside a cluster, leader.Become() simply returns without error to skip the leader election since it cannot detect the Operator’s namespace.

11.6.2. Using Leader-with-lease election

The Leader-with-lease implementation can be enabled using the Manager Options for leader election:

import (
  ...
  "sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
  ...
  opts := manager.Options{
    ...
    LeaderElection: true,
    LeaderElectionID: "memcached-operator-lock"
  }
  mgr, err := manager.New(cfg, opts)
  ...
}

When the Operator is not running in a cluster, the Manager returns an error when starting since it cannot detect the Operator’s namespace in order to create the ConfigMap for leader election. You can override this namespace by setting the Manager’s LeaderElectionNamespace option.
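
For example, a minimal sketch of overriding the namespace for local runs; the namespace value shown is illustrative:

opts := manager.Options{
    LeaderElection:   true,
    LeaderElectionID: "memcached-operator-lock",
    // Required when running outside the cluster, for example during local development:
    LeaderElectionNamespace: "default",
}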

11.7. Operator SDK CLI reference

This guide documents the Operator SDK CLI commands and their syntax:

$ operator-sdk <command> [<subcommand>] [<argument>] [<flags>]

11.7.1. build

The operator-sdk build command compiles the code and builds the executables. After the build completes, the image is built locally in Docker. It must then be pushed to a remote registry.

Table 11.11. build arguments

Argument | Description

<image>

The container image to be built, e.g., quay.io/example/operator:v0.0.1.

Table 11.12. build flags

Flag | Description

--enable-tests (bool)

Enable in-cluster testing by adding test binary to the image.

--namespaced-manifest (string)

Path of namespaced resources manifest for tests. Default: deploy/operator.yaml.

--test-location (string)

Location of tests. Default: ./test/e2e

-h, --help

Usage help output.

If --enable-tests is set, the build command also builds the testing binary, adds it to the container image, and generates a deploy/test-pod.yaml file that allows a user to run the tests as a Pod on a cluster.

Example output

$ operator-sdk build quay.io/example/operator:v0.0.1

building example-operator...

building container quay.io/example/operator:v0.0.1...
Sending build context to Docker daemon  163.9MB
Step 1/4 : FROM alpine:3.6
 ---> 77144d8c6bdc
Step 2/4 : ADD tmp/_output/bin/example-operator /usr/local/bin/example-operator
 ---> 2ada0d6ca93c
Step 3/4 : RUN adduser -D example-operator
 ---> Running in 34b4bb507c14
Removing intermediate container 34b4bb507c14
 ---> c671ec1cff03
Step 4/4 : USER example-operator
 ---> Running in bd336926317c
Removing intermediate container bd336926317c
 ---> d6b58a0fcb8c
Successfully built d6b58a0fcb8c
Successfully tagged quay.io/example/operator:v0.0.1

11.7.2. completion

The operator-sdk completion command generates shell completions to make issuing CLI commands quicker and easier.

Table 11.13. completion subcommands

Subcommand | Description

bash

Generate bash completions.

zsh

Generate zsh completions.

Table 11.14. completion flags

Flag | Description

-h, --help

Usage help output.

Example output

$ operator-sdk completion bash

# bash completion for operator-sdk                         -*- shell-script -*-
...
# ex: ts=4 sw=4 et filetype=sh
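
To use the completions in your current bash session, one common approach is to source the generated script:

$ source <(operator-sdk completion bash)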

11.7.3. print-deps

The operator-sdk print-deps command prints the most recent Golang packages and versions required by Operators. It prints in columnar format by default.

Table 11.15. print-deps flags

Flag | Description

--as-file

Print packages and versions in Gopkg.toml format.

Example output

$ operator-sdk print-deps --as-file
required = [
  "k8s.io/code-generator/cmd/defaulter-gen",
  "k8s.io/code-generator/cmd/deepcopy-gen",
  "k8s.io/code-generator/cmd/conversion-gen",
  "k8s.io/code-generator/cmd/client-gen",
  "k8s.io/code-generator/cmd/lister-gen",
  "k8s.io/code-generator/cmd/informer-gen",
  "k8s.io/code-generator/cmd/openapi-gen",
  "k8s.io/gengo/args",
]

[[override]]
  name = "k8s.io/code-generator"
  revision = "6702109cc68eb6fe6350b83e14407c8d7309fd1a"
...

11.7.4. generate

The operator-sdk generate command invokes a specific generator to generate code as needed.

Table 11.16. generate subcommands

Subcommand | Description

k8s

Runs the Kubernetes code-generators for all CRD APIs under pkg/apis/. Currently, k8s only runs deepcopy-gen to generate the required DeepCopy() functions for all Custom Resource (CR) types.

Note

This command must be run every time the API (spec and status) for a custom resource type is updated.

Example output

$ tree pkg/apis/app/v1alpha1/
pkg/apis/app/v1alpha1/
├── appservice_types.go
├── doc.go
├── register.go

$ operator-sdk generate k8s
Running code-generation for Custom Resource (CR) group versions: [app:v1alpha1]
Generating deepcopy funcs

$ tree pkg/apis/app/v1alpha1/
pkg/apis/app/v1alpha1/
├── appservice_types.go
├── doc.go
├── register.go
└── zz_generated.deepcopy.go

11.7.5. olm-catalog

The operator-sdk olm-catalog command is the parent command for all Operator Lifecycle Manager (OLM) Catalog-related commands.

11.7.5.1. gen-csv

The gen-csv subcommand writes a Cluster Service Version (CSV) manifest and optionally Custom Resource Definition (CRD) files to deploy/olm-catalog/<operator_name>/<csv_version>.

Table 11.17. olm-catalog gen-csv flags

Flag | Description

--csv-version (string)

Semantic version of the CSV manifest. Required.

--from-version (string)

Semantic version of CSV manifest to use as a base for a new version.

--csv-config (string)

Path to CSV configuration file. Default: deploy/olm-catalog/csv-config.yaml.

--update-crds

Updates CRD manifests in deploy/<operator_name>/<csv_version> using the latest CRD manifests.

Example output

$ operator-sdk olm-catalog gen-csv --csv-version 0.1.0 --update-crds
INFO[0000] Generating CSV manifest version 0.1.0
INFO[0000] Fill in the following required fields in file deploy/olm-catalog/operator-name/0.1.0/operator-name.v0.1.0.clusterserviceversion.yaml:
	spec.keywords
	spec.maintainers
	spec.provider
	spec.labels
INFO[0000] Created deploy/olm-catalog/operator-name/0.1.0/operator-name.v0.1.0.clusterserviceversion.yaml

11.7.6. new

The operator-sdk new command creates a new Operator application and generates (or scaffolds) a default project directory layout based on the input <project_name>.

Table 11.18. new arguments

Argument | Description

<project_name>

Name of the new project.

Table 11.19. new flags

Flag | Description

--api-version

CRD APIVersion in the format $GROUP_NAME/$VERSION, for example app.example.com/v1alpha1. Used with ansible or helm types.

--dep-manager [dep|modules]

Dependency manager the new project will use. Used with go type. (Default: modules)

--generate-playbook

Generate an Ansible playbook skeleton. Used with ansible type.

--header-file <string>

Path to file containing headers for generated Go files. Copied to hack/boilerplate.go.txt.

--helm-chart <string>

Initialize Helm operator with existing Helm chart: <url>, <repo>/<name>, or local path.

--helm-chart-repo <string>

Chart repository URL for the requested Helm chart.

--helm-chart-version <string>

Specific version of the Helm chart. (Default: latest version)

--help, -h

Usage and help output.

--kind <string>

CRD Kind, for example AppService. Used with ansible or helm types.

--skip-git-init

Do not initialize the directory as a Git repository.

--type

Type of Operator to initialize: go, ansible or helm. (Default: go)

Example usage for Go project

$ mkdir $GOPATH/src/github.com/example.com/
$ cd $GOPATH/src/github.com/example.com/
$ operator-sdk new app-operator

Example usage for Ansible project

$ operator-sdk new app-operator \
    --type=ansible \
    --api-version=app.example.com/v1alpha1 \
    --kind=AppService

11.7.7. add

The operator-sdk add command adds a controller or resource to the project. The command must be run from the Operator project root directory.

Table 11.20. add subcommands

Subcommand | Description

api

Adds a new API definition for a new Custom Resource (CR) under pkg/apis and generates the Custom Resource Definition (CRD) and Custom Resource (CR) files under deploy/crds/. If the API already exists at pkg/apis/<group>/<version>, then the command does not overwrite and returns an error.

controller

Adds a new controller under pkg/controller/<kind>/. The controller expects to use the CR type that should already be defined under pkg/apis/<group>/<version> via the operator-sdk add api --kind=<kind> --api-version=<group/version> command. If the controller package for that Kind already exists at pkg/controller/<kind>, then the command does not overwrite and returns an error.

crd

Adds a CRD and the CR files. The <project-name>/deploy path must already exist. The --api-version and --kind flags are required to generate the new Operator application.

  • Generated CRD filename: <project-name>/deploy/crds/<group>_<version>_<kind>_crd.yaml
  • Generated CR filename: <project-name>/deploy/crds/<group>_<version>_<kind>_cr.yaml

Table 11.21. add api flags

Flag | Description

--api-version (string)

CRD APIVersion in the format $GROUP_NAME/$VERSION (e.g., app.example.com/v1alpha1).

--kind (string)

CRD Kind (e.g., AppService).

Example add api output

$ operator-sdk add api --api-version app.example.com/v1alpha1 --kind AppService
Create pkg/apis/app/v1alpha1/appservice_types.go
Create pkg/apis/addtoscheme_app_v1alpha1.go
Create pkg/apis/app/v1alpha1/register.go
Create pkg/apis/app/v1alpha1/doc.go
Create deploy/crds/app_v1alpha1_appservice_cr.yaml
Create deploy/crds/app_v1alpha1_appservice_crd.yaml
Running code-generation for Custom Resource (CR) group versions: [app:v1alpha1]
Generating deepcopy funcs

$ tree pkg/apis
pkg/apis/
├── addtoscheme_app_v1alpha1.go
├── apis.go
└── app
	└── v1alpha1
		├── appservice_types.go
		├── doc.go
		└── register.go
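
The generated <kind>_types.go file is where you then define the API for the new Kind. The following is a minimal sketch, not the literal generated content, of what a filled-in pkg/apis/app/v1alpha1/appservice_types.go can look like; the Size and Nodes fields are illustrative:

package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// AppServiceSpec defines the desired state of AppService.
type AppServiceSpec struct {
	// Size is an illustrative field: the number of replicas to run.
	Size int32 `json:"size"`
}

// AppServiceStatus defines the observed state of AppService.
type AppServiceStatus struct {
	// Nodes is an illustrative field: the names of the Pods backing the service.
	Nodes []string `json:"nodes"`
}

// AppService is the Schema for the appservices API.
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
type AppService struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   AppServiceSpec   `json:"spec,omitempty"`
	Status AppServiceStatus `json:"status,omitempty"`
}

// AppServiceList contains a list of AppService objects.
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
type AppServiceList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []AppService `json:"items"`
}

After editing a *_types.go file, regenerate the deepcopy code (for example, with operator-sdk generate k8s) so that the generated code stays in sync with the API definition.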

Example add controller output

$ operator-sdk add controller --api-version app.example.com/v1alpha1 --kind AppService
Create pkg/controller/appservice/appservice_controller.go
Create pkg/controller/add_appservice.go

$ tree pkg/controller
pkg/controller/
├── add_appservice.go
├── appservice
│   └── appservice_controller.go
└── controller.go
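
The generated appservice_controller.go file is where the reconcile logic for the new Kind lives. The following is a minimal sketch of the shape of that logic, assuming the controller-runtime API bundled with Operator SDK v0.x; the module path github.com/example-inc/app-operator is illustrative:

package appservice

import (
	"context"

	appv1alpha1 "github.com/example-inc/app-operator/pkg/apis/app/v1alpha1"

	"k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// ReconcileAppService is a sketch of the scaffolded reconciler type.
type ReconcileAppService struct {
	// client reads and writes objects in the cluster.
	client client.Client
}

// Reconcile reads the cluster state for an AppService object and moves the
// current state toward the desired state described in its spec.
func (r *ReconcileAppService) Reconcile(request reconcile.Request) (reconcile.Result, error) {
	// Fetch the AppService instance named in the request.
	instance := &appv1alpha1.AppService{}
	if err := r.client.Get(context.TODO(), request.NamespacedName, instance); err != nil {
		if errors.IsNotFound(err) {
			// The CR was deleted after the request was queued; nothing to do.
			return reconcile.Result{}, nil
		}
		return reconcile.Result{}, err
	}

	// Compare the observed cluster state with instance.Spec and create or
	// update the owned resources (Deployments, Services, and so on) here.
	return reconcile.Result{}, nil
}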

Example add crd output

$ operator-sdk add crd --api-version app.example.com/v1alpha1 --kind AppService
Generating Custom Resource Definition (CRD) files
Create deploy/crds/app_v1alpha1_appservice_crd.yaml
Create deploy/crds/app_v1alpha1_appservice_cr.yaml

11.7.8. test

The operator-sdk test command can test the Operator locally.

11.7.8.1. local

The local subcommand runs Go tests built using the Operator SDK’s test framework locally.

Table 11.22. test local arguments

Argument | Description

<test_location> (string)

Location of e2e test files (e.g., ./test/e2e/).

Table 11.23. test local flags

Flag | Description

--kubeconfig (string)

Location of kubeconfig for a cluster. Default: ~/.kube/config.

--global-manifest (string)

Path to manifest for global resources. Default: deploy/crd.yaml.

--namespaced-manifest (string)

Path to manifest for per-test, namespaced resources. Default: combines deploy/service_account.yaml, deploy/rbac.yaml, and deploy/operator.yaml.

--namespace (string)

If non-empty, a single namespace to run tests in (e.g., operator-test). Default: ""

--go-test-flags (string)

Extra arguments to pass to go test, for example "-v -parallel=2".

--up-local

Enable running the Operator locally with go run instead of as an image in the cluster.

--no-setup

Disable test resource creation.

--image (string)

Use a different Operator image from the one specified in the namespaced manifest.

-h, --help

Usage help output.

Example output

$ operator-sdk test local ./test/e2e/

# Output:
ok  	github.com/operator-framework/operator-sdk-samples/memcached-operator/test/e2e	20.410s

11.7.9. up

The operator-sdk up command has subcommands that can launch the Operator in various ways.

11.7.9.1. local

The local subcommand launches the Operator on the local machine by building the Operator binary and running it with access to a Kubernetes cluster through a kubeconfig file.

Table 11.24. up local flags

Flag | Description

--kubeconfig (string)

The file path to a Kubernetes configuration file. Default: $HOME/.kube/config

--namespace (string)

The namespace where the Operator watches for changes. Default: default

--operator-flags

Flags that the local Operator may need. Example: --flag1 value1 --flag2=value2

-h, --help

Usage help output.

Example output

$ operator-sdk up local \
  --kubeconfig "mycluster.kubecfg" \
  --namespace "default" \
  --operator-flags "--flag1 value1 --flag2=value2"

The following example uses the default kubeconfig and the default namespace, and passes flags to the Operator. To use Operator flags, your Operator must know how to handle them. For example, for an Operator that understands the resync-interval flag:

$ operator-sdk up local --operator-flags "--resync-interval 10"

If you are planning on using a different namespace than the default, use the --namespace flag to change where the Operator is watching for Custom Resources (CRs) to be created:

$ operator-sdk up local --namespace "testing"

For this to work, your Operator must handle the WATCH_NAMESPACE environment variable. This can be accomplished using the utility function k8sutil.GetWatchNamespace in your Operator.
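
For reference, the following is a minimal sketch of how cmd/manager/main.go can consume the watch namespace, assuming the Operator SDK v0.x pkg/k8sutil package and the controller-runtime manager API:

package main

import (
	"log"

	"github.com/operator-framework/operator-sdk/pkg/k8sutil"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
	// GetWatchNamespace reads the WATCH_NAMESPACE environment variable, which is
	// set by operator-sdk up local --namespace or by the Operator Deployment manifest.
	namespace, err := k8sutil.GetWatchNamespace()
	if err != nil {
		log.Fatalf("failed to get watch namespace: %v", err)
	}

	cfg, err := config.GetConfig()
	if err != nil {
		log.Fatalf("failed to get kubeconfig: %v", err)
	}

	// Restrict the manager, and therefore every controller it runs, to that namespace.
	mgr, err := manager.New(cfg, manager.Options{Namespace: namespace})
	if err != nil {
		log.Fatalf("failed to create manager: %v", err)
	}
	_ = mgr // register the scheme and controllers, then start the manager
}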

11.8. Appendices

11.8.1. Operator project scaffolding layout

The operator-sdk CLI generates a number of packages for each Operator project. The following sections provide a basic rundown of each generated file and directory.

11.8.1.1. Go-based projects

Go-based Operator projects (the default type) generated using the operator-sdk new command contain the following directories and files:

File/folder | Purpose

cmd/

Contains the manager/main.go file, which is the main program of the Operator. It instantiates a new manager, which registers all Custom Resource Definitions under pkg/apis/ and starts all controllers under pkg/controller/ (see the sketch after this table).

pkg/apis/

Contains the directory tree that defines the APIs of the Custom Resource Definitions (CRDs). Users are expected to edit the pkg/apis/<group>/<version>/<kind>_types.go files to define the API for each resource type and import these packages in their controllers to watch for these resource types.

pkg/controller

This package contains the controller implementations. Users are expected to edit the pkg/controller/<kind>/<kind>_controller.go files to define the controller’s reconcile logic for handling a resource type of the specified kind.

build/

Contains the Dockerfile and build scripts used to build the Operator.

deploy/

Contains various YAML manifests for registering CRDs, setting up RBAC, and deploying the Operator as a Deployment.

Gopkg.toml
Gopkg.lock

The Go Dep manifests that describe the external dependencies of this Operator.

vendor/

The golang vendor/ folder that contains the local copies of the external dependencies that satisfy the imports of this project. Go Dep manages this directory directly.
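
To make the relationship between these directories concrete, the following is a minimal sketch of the wiring that cmd/manager/main.go performs, assuming the AddToScheme and AddToManager helpers that the scaffolding generates under pkg/apis/ and pkg/controller/; the module path github.com/example-inc/app-operator and the exact Start signature (which varies across controller-runtime releases) are assumptions:

package main

import (
	"log"

	"github.com/example-inc/app-operator/pkg/apis"
	"github.com/example-inc/app-operator/pkg/controller"

	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
	cfg, err := config.GetConfig()
	if err != nil {
		log.Fatal(err)
	}

	// Watch all namespaces; set a specific namespace here to restrict the Operator.
	mgr, err := manager.New(cfg, manager.Options{Namespace: ""})
	if err != nil {
		log.Fatal(err)
	}

	// Register every API type defined under pkg/apis/ with the manager's scheme.
	if err := apis.AddToScheme(mgr.GetScheme()); err != nil {
		log.Fatal(err)
	}

	// Register every controller defined under pkg/controller/ with the manager.
	if err := controller.AddToManager(mgr); err != nil {
		log.Fatal(err)
	}

	// Run until the stop channel is closed.
	if err := mgr.Start(make(chan struct{})); err != nil {
		log.Fatal(err)
	}
}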

11.8.1.2. Helm-based projects

Helm-based Operator projects generated using the operator-sdk new --type helm command contain the following directories and files:

File/folder | Purpose

deploy/

Contains various YAML manifests for registering CRDs, setting up RBAC, and deploying the Operator as a Deployment.

helm-charts/<kind>

Contains a Helm chart initialized using the equivalent of the helm create command.

build/

Contains the Dockerfile and build scripts used to build the Operator.

watches.yaml

Contains the Group, Version, and Kind of the watched Custom Resource and the location of the Helm chart used to deploy it.