Chapter 2. Deploying AMQ Broker on OpenShift Container Platform using an Operator

2.1. Overview of the AMQ Broker Operator

Kubernetes - and, by extension, OpenShift Container Platform - includes features such as secret handling, load balancing, service discovery, and autoscaling that enable you to build complex distributed systems. Operators are programs that enable you to package, deploy, and manage Kubernetes applications. Often, Operators automate common or complex tasks.

Commonly, Operators are intended to provide:

  • Consistent, repeatable installations
  • Health checks of system components
  • Over-the-air (OTA) updates
  • Managed upgrades

Operators use Kubernetes extension mechanisms called Custom Resource Definitions and corresponding Custom Resources to ensure that your custom objects look and act just like native, built-in Kubernetes objects. Custom Resource Definitions and Custom Resources are how you specify the configuration of the OpenShift objects that you plan to deploy.

Previously, you could use only application templates to deploy AMQ Broker on OpenShift Container Platform. While templates are effective for creating an initial deployment, they do not provide a mechanism for updating the deployment. Operators enable you to make changes while your broker instances are running, because they are always listening for changes to the Custom Resources in which you specify your configuration. When you make changes to a Custom Resource, the Operator reconciles the changes with the existing broker installation in your project and updates it to reflect your changes.

2.2. Overview of Custom Resource Definitions

In general, a Custom Resource Definition (CRD) is a schema of configuration items that you can modify for a custom OpenShift object deployed with an Operator. An accompanying Custom Resource (CR) file enables you to specify values for configuration items in the CRD. If you are an Operator developer, what you expose through a CRD essentially becomes the API for how a deployed object is configured and used. Because Kubernetes automatically exposes each CRD through its API server, you can access a CRD directly with regular HTTP requests (for example, using curl), in the same way that kubectl and oc communicate with Kubernetes over HTTP.
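For example, after you install the AMQ Broker CRDs (as described later in this chapter), you can inspect the main broker CRD directly. The following is a sketch, assuming the CRD name activemqartemises.broker.amq.io (the plural form of the ActiveMQArtemis kind):

    $ oc get crd activemqartemises.broker.amq.io -o yaml

    # Equivalent raw HTTP request, authenticated with your current session token:
    $ curl -k -H "Authorization: Bearer $(oc whoami -t)" \
        https://<api_server_host>:6443/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/activemqartemises.broker.amq.io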

The main broker CRD for AMQ Broker 7.6 is the broker_activemqartemis_crd file in the deploy/crds directory of the archive that you download and extract when installing the Operator. This CRD enables you to configure a broker deployment in a given OpenShift project. The other CRDs in the deploy/crds directory are for configuring addresses and for the Operator to use when instantiating a scaledown controller.

When the Operator is deployed, each CRD has a corresponding controller that runs independently within the Operator.

For a complete configuration reference for each CRD, see:

2.2.1. Sample broker Custom Resources

The AMQ Broker Operator archive that you download and extract during installation includes sample Custom Resource (CR) files in the deploy/crs directory. These sample CR files enable you to:

  • Deploy a minimal broker without SSL or clustering.
  • Define addresses.

The broker Operator archive that you download and extract also includes CRs for example deployments in the deploy/examples directory, as listed below.

artemis-basic-deployment.yaml
Basic broker deployment.
artemis-persistence-deployment.yaml
Broker deployment with persistent storage.
artemis-cluster-deployment.yaml
Deployment of clustered brokers.
artemis-persistence-cluster-deployment.yaml
Deployment of clustered brokers with persistent storage.
artemis-ssl-deployment.yaml
Broker deployment with SSL security.
artemis-ssl-persistence-deployment.yaml
Broker deployment with SSL security and persistent storage.
artemis-aio-journal.yaml
Use of asynchronous I/O (AIO) with the broker journal.
address-queue-create.yaml
Address and queue creation.

The procedures in the following sections show you how to use the Operator and CRs to create some container-based broker deployments on OpenShift Container Platform. When you have successfully completed the procedures, you will have the Operator running in an individual Pod. Each broker instance that you create will run in a separate StatefulSet containing a Pod in the project. You will use a dedicated CR to define addresses in your broker deployments.

Note

You cannot create more than one broker deployment in a given OpenShift project by deploying multiple broker CR instances. However, when you have created a broker deployment in a project, you can deploy multiple CR instances for addresses.

2.3. Installing the AMQ Broker Operator using the CLI

The procedures in this section show how to use the OpenShift command-line interface (CLI) to install and deploy the latest version of the Operator for AMQ Broker 7.6 in a given OpenShift project. In subsequent procedures, you use this Operator to deploy some broker instances.

For an alternative method of installing the AMQ Broker Operator that uses the OperatorHub graphical interface, see Section 2.4, “Managing the AMQ Broker Operator using the Operator Lifecycle Manager”.

Important
  • Deploying the Custom Resource Definitions (CRDs) that accompany the AMQ Broker Operator requires cluster administrator privileges for your OpenShift cluster. When the Operator is deployed, non-administrator users can create broker instances via corresponding Custom Resources (CRs). To enable regular users to deploy CRs, the cluster administrator must first assign roles and permissions to the CRDs. For more information, see Creating cluster roles for Custom Resource Definitions in the OpenShift Container Platform documentation.
  • If you previously deployed the Operator for AMQ Broker 7.4 (that is, version 0.6), you must remove all CRDs in your cluster and the main broker Custom Resource (CR) in your OpenShift project before installing the latest version of the Operator for AMQ Broker 7.6. When you have installed the latest version of the Operator in your project, you can deploy CRs that correspond to the latest CRDs to create a new broker deployment. These steps are described in the procedures in this section.
  • If you previously deployed the Operator for AMQ Broker 7.5 (that is, version 0.9), you can upgrade your project to use the latest version of the AMQ Broker Operator without removing the existing CRDs in your cluster. However, as part of this upgrade, you must delete the main broker CR from your project. These steps are described in Section 4.1.2, “Upgrading the Operator using the CLI”.
  • When you update your cluster with the latest CRDs (required as part of a new Operator installation or an upgrade), this update affects all projects in the cluster. In particular, any broker Pods previously deployed from version 0.6 or version 0.9 of the AMQ Broker Operator become unable to update their status in the OpenShift Container Platform web console. When you click the Logs tab of a running broker Pod, you see messages in the log indicating that 'UpdatePodStatus' has failed. Despite this update failure, the broker Pods and broker Operator in that project continue to work as expected. To fix this issue for an affected project, that project must also be updated to use the latest version of the AMQ Broker Operator, either through a new installation or an upgrade.

2.3.1. Getting the Operator code

This procedure shows you how to access and prepare the code you need to install the latest version of the Operator for AMQ Broker 7.6.

Procedure

  1. In your web browser, navigate to the AMQ Broker Software Downloads page.
  2. In the Version drop-down box, ensure that the value is set to the latest Broker version, 7.6.0.
  3. Next to AMQ Broker 7.6 Operator Installation Files, click Download.

    Download of the amq-broker-operator-7.6.0-ocp-install-examples.zip compressed archive automatically begins.

  4. When the download has completed, move the archive to your chosen installation directory. The following example moves the archive to a directory called /broker/operator.

    sudo mv amq-broker-operator-7.6.0-ocp-install-examples.zip /broker/operator
  5. In your chosen installation directory, extract the contents of the archive. For example:

    cd /broker/operator
    sudo unzip amq-broker-operator-7.6.0-ocp-install-examples.zip
  6. Log in to OpenShift Container Platform as a cluster administrator. For example:

    $ oc login -u system:admin
  7. Specify the project in which you want to install the Operator. You can create a new project or switch to an existing one.

    1. Create a new project:

      $ oc new-project <project_name>
    2. Or, switch to an existing project:

      $ oc project <project_name>
  8. Specify a service account to use with the Operator.

    1. In the deploy directory of the Operator archive that you extracted, open the service_account.yaml file.
    2. Ensure that the kind element is set to ServiceAccount.
    3. In the metadata section, assign a custom name to the service account, or use the default name. The default name is amq-broker-operator.
    4. Create the service account in your project.

      $ oc create -f deploy/service_account.yaml
  9. Specify a role name for the Operator.

    1. Open the role.yaml file. This file specifies the resources that the Operator can use and modify.
    2. Ensure that the kind element is set to Role.
    3. In the metadata section, assign a custom name to the role, or use the default name. The default name is amq-broker-operator.
    4. Create the role in your project.

      $ oc create -f deploy/role.yaml
  10. Specify a role binding for the Operator. The role binding binds the previously-created service account to the Operator role, based on the names you specified.

    1. Open the role_binding.yaml file. Ensure that the name values for ServiceAccount and Role match those specified in the service_account.yaml and role.yaml files. For example:

      metadata:
          name: amq-broker-operator
      subjects:
          - kind: ServiceAccount
            name: amq-broker-operator
      roleRef:
          kind: Role
          name: amq-broker-operator
    2. Create the role binding in your project.

      $ oc create -f deploy/role_binding.yaml
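
(Optional) Verify the resources that you created. For example:

    $ oc get serviceaccount,role,rolebinding

The output should list the service account, role, and role binding, each with the name that you assigned (amq-broker-operator by default).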

2.3.2. Deploying the AMQ Broker Operator using the CLI

The procedure in this section shows how to use the OpenShift command-line interface (CLI) to deploy the latest version of the Operator for AMQ Broker 7.6 in your OpenShift project.

Note

If you previously deployed the Operator for AMQ Broker 7.5, you can upgrade your project to use the latest version of the Operator for AMQ Broker 7.6 without performing a new deployment. For more information, see Section 4.1.2, “Upgrading the Operator using the CLI”.

Prerequisites

  • You have already prepared your OpenShift project for the Operator deployment. See Section 2.3.1, “Getting the Operator code”.
  • Starting in AMQ Broker 7.3, you use a new version of the Red Hat Container Registry to access container images. This new version of the registry requires you to become an authenticated user before you can access images. Before you can follow the procedure in this section, you must first complete the steps described in Red Hat Container Registry Authentication.
  • If you intend to deploy brokers with persistent storage and do not have container-native storage in your OpenShift cluster, you need to manually provision persistent volumes and ensure that they are available to be claimed by the Operator. For example, if you want to create a cluster of two brokers with persistent storage (that is, by setting persistenceEnabled=true in your Custom Resource), you need to have two persistent volumes available. By default, each broker instance requires storage of 2 GiB.

    If you specify persistenceEnabled=false in your Custom Resource, the deployed brokers use ephemeral storage. With ephemeral storage, any existing data is lost every time you restart the broker Pods. For an example of a manually provisioned volume, see the sketch after this list.

    For more information about provisioning persistent storage in OpenShift Container Platform, see Understanding persistent storage in the OpenShift Container Platform documentation.
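
The following is a minimal sketch of a manually provisioned persistent volume. The name and path shown are illustrative only, and a hostPath volume is suitable only for single-node test clusters; use an appropriate storage backend in production:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: broker-pv-0                  # example name
    spec:
      capacity:
        storage: 2Gi                     # default per-broker requirement
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      hostPath:
        path: /data/broker-pv-0          # test-only storage backend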

Procedure

  1. In the OpenShift Container Platform web console, open the project in which you want your broker deployment.

    If you created a new project, it is currently empty. Observe that there are no deployments, StatefulSets, Pods, Services, or Routes.

  2. If you deployed an earlier version of the AMQ Broker Operator in the project, remove the main broker Custom Resource (CR) from the project. Deleting the main CR removes the existing broker deployment in the project. For example:

    $ oc delete -f deploy/crs/broker_v1alpha1_activemqartemis_cr.yaml
  3. If you deployed an earlier version of the AMQ Broker Operator in the project, delete this Operator instance. For example:

    $ oc delete -f deploy/operator.yaml
  4. If you deployed Custom Resource Definitions (CRDs) in your OpenShift cluster for an earlier version of the AMQ Broker Operator, remove these CRDs from the cluster. For example:

    $ oc delete -f deploy/crds/broker_v1alpha1_activemqartemis_crd.yaml
    $ oc delete -f deploy/crds/broker_v1alpha1_activemqartemisaddress_crd.yaml
    $ oc delete -f deploy/crds/broker_v1alpha1_activemqartemisscaledown_crd.yaml
  5. Deploy the CRDs that are included in the deploy/crds directory of the Operator archive that you downloaded and extracted. You must install the latest CRDs in your OpenShift cluster before deploying and starting the Operator.

    1. Deploy the main broker CRD.

      $ oc create -f deploy/crds/broker_activemqartemis_crd.yaml
    2. Deploy the addressing CRD.

      $ oc create -f deploy/crds/broker_activemqartemisaddress_crd.yaml
    3. Deploy the scaledown controller CRD.

      $ oc create -f deploy/crds/broker_activemqartemisscaledown_crd.yaml
  6. Link the pull secret associated with the account used for authentication in the Red Hat Container Registry with the default, deployer, and builder service accounts for your OpenShift project.

    $ oc secrets link --for=pull default <secret-name>
    $ oc secrets link --for=pull deployer <secret-name>
    $ oc secrets link --for=pull builder <secret-name>
    Note

    In OpenShift Container Platform 4.1 or later, you can also use the web console to associate a pull secret with a project in which you want to deploy container images such as the AMQ Broker Operator. To do this, click Administration → Service Accounts. Specify the pull secret associated with the account that you use for authentication in the Red Hat Container Registry.
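
    If you have not yet created a pull secret in your project, you can create one from your Red Hat Container Registry service account credentials. The following is a sketch; the secret name redhat-registry-secret is an example:

    $ oc create secret docker-registry redhat-registry-secret \
        --docker-server=registry.redhat.io \
        --docker-username=<registry_service_account_username> \
        --docker-password=<registry_service_account_password>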

  7. In the deploy directory of the Operator archive that you downloaded and extracted, open the operator.yaml file. Update spec.containers.image with the full path to the latest Operator image for AMQ Broker 7.6 in the Red Hat Container Registry.

    spec:
        template:
            spec:
                containers:
                    - image: registry.redhat.io/amq7/amq-broker-rhel7-operator:0.13
  8. Deploy the Operator.

    $ oc create -f deploy/operator.yaml

    In your OpenShift project, the amq-broker-operator image that you deployed starts in a new Pod.

    The information on the Events tab of the new Pod confirms that OpenShift has deployed the Operator image you specified, assigned a new container to a node in your OpenShift cluster, and started the new container.

    In addition, if you click the Logs tab within the Pod, the output should include lines resembling the following:

    ...
    {"level":"info","ts":1553619035.8302743,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"activemqartemisaddress-controller"}
    {"level":"info","ts":1553619035.830541,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"activemqartemis-controller"}
    {"level":"info","ts":1553619035.9306898,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"activemqartemisaddress-controller","worker count":1}
    {"level":"info","ts":1553619035.9311671,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"activemqartemis-controller","worker count":1}

    The preceding output confirms that the newly-deployed Operator is communicating with Kubernetes, that the controllers for the broker and addressing are running, and that these controllers have started some workers.
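
    You can also confirm from the command line that the Operator is running:

    $ oc get pods

    The output should include a running Pod whose name begins with the name of the Operator Deployment specified in operator.yaml.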

Note

It is recommended that you deploy only a single instance of the AMQ Broker Operator in a given OpenShift project. Specifically, do not set the replicas element of your Operator deployment to a value greater than 1, and do not deploy the Operator more than once in the same project.

Additional resources

2.4. Managing the AMQ Broker Operator using the Operator Lifecycle Manager

2.4.1. Overview of the Operator Lifecycle Manager

In OpenShift Container Platform 4.0 and later, the Operator Lifecycle Manager (OLM) helps users install, update, and generally manage the lifecycle of all Operators and their associated services running across their clusters. It is part of the Operator Framework, an open source toolkit designed to manage Kubernetes native applications (Operators) in an effective, automated, and scalable way.

The OLM runs by default in OpenShift Container Platform 4.0, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster.

OperatorHub is the graphical interface that OpenShift cluster administrators use to discover, install, and upgrade Operators. With one click, these Operators can be pulled from OperatorHub, installed on the cluster, and managed by the OLM, ready for engineering teams to self-service manage the software in development, test, and production environments.

When you have installed the AMQ Broker Operator in OperatorHub, you can deploy Custom Resources (CRs) to create broker deployments such as standalone and clustered brokers.

2.4.2. Installing the AMQ Broker Operator in OperatorHub

If you do not see the latest version of the Operator for AMQ Broker 7.6 automatically available in OperatorHub, follow this procedure to manually install the latest version of the Operator in OperatorHub.

Procedure

  1. In your web browser, navigate to the AMQ Broker Software Downloads page.
  2. In the Version drop-down box, ensure the value is set to the latest Broker version, 7.6.0.
  3. Next to AMQ Broker 7.6 Operator Installation Files, click Download.

    Download of the amq-broker-operator-7.6.0-ocp-install-examples.zip compressed archive automatically begins.

  4. When the download has completed, move the archive to your chosen installation directory. The following example moves the archive to a directory called /broker/operator.

    sudo mv amq-broker-operator-7.6.0-ocp-install-examples.zip /broker/operator
  5. In your chosen installation directory, extract the contents of the archive. For example:

    cd /broker/operator
    unzip amq-broker-operator-7.6.0-ocp-install-examples.zip
  6. Switch to the directory for the Operator archive that you extracted. For example:

    cd amq-broker-operator-7.6.0-ocp-install-examples
  7. Log in to OpenShift Container Platform as a cluster administrator. For example:

    $ oc login -u system:admin
  8. Install the Operator in OperatorHub.

    $ oc create -f deploy/catalog_resources/amq-broker-operatorsource.yaml

    After a few minutes, the AMQ Broker Operator is available in the OperatorHub section of the OpenShift Container Platform web console.

2.4.3. Deploying the AMQ Broker Operator from OperatorHub

This procedure shows how to deploy the latest version of the Operator for AMQ Broker 7.6 from OperatorHub to a specified OpenShift project.

Prerequisites

Procedure

  1. From the Projects drop-down menu in the OpenShift Container Platform web console, select the project in which you want to install the AMQ Broker Operator.
  2. In the web console, click Operators → OperatorHub.
  3. In OperatorHub, click the AMQ Broker Operator.

    If more than one instance of the AMQ Broker Operator is installed in OperatorHub, click each instance and review the information pane that opens. The version of the latest Operator for AMQ Broker 7.6 is 0.13.

  4. To install the Operator, click Install.
  5. On the Operator Subscription page:

    1. Under Installation Mode, ensure that the radio button entitled A specific namespace on the cluster is selected. From the drop-down menu, select the project in which you want to install the AMQ Broker Operator.
    2. Under Update Channel, ensure that the radio button entitled current-76 is selected. This option specifies the channel used to track and receive updates for the Operator. The current-76 value specifies that the AMQ Broker 7.6 channel is used.
    3. Under Approval Strategy, ensure that the radio button entitled Automatic is selected. This option specifies that updates to the Operator do not require manual approval for installation to take place.
    4. Click Subscribe.

When the Operator installation is complete, the Installed Operators page opens. You should see that the AMQ Broker Operator is installed in the project namespace that you specified.

Additional resources

2.5. Deploying a basic broker

The following procedure shows how to deploy a basic broker instance in your OpenShift project when you have installed the AMQ Broker Operator.

Note

You cannot create more than one broker deployment in a given OpenShift project by deploying multiple Custom Resource (CR) instances. However, when you have created a broker deployment in a project, you can deploy multiple CR instances for addresses.

Prerequisites

Procedure

When you have successfully installed the Operator, the Operator is running and listening for changes related to your CRs. This example procedure shows how to use a CR to deploy a basic broker in your project.

  1. In the deploy/crs directory of the Operator archive that you downloaded and extracted, open the broker_activemqartemis_cr.yaml file. This file is an instance of a basic broker CR. The contents of the file look as follows:

    apiVersion: broker.amq.io/v2alpha2
    kind: ActiveMQArtemis
    metadata:
      name: ex-aao
      application: ex-aao-app
    spec:
      deploymentPlan:
        size: 2
        image: registry.redhat.io/amq7/amq-broker:7.6
        ...
  2. The deploymentPlan.size value specifies the number of brokers to deploy. The default value of 2 specifies a clustered broker deployment of two brokers. However, to deploy a single broker instance, change the value to 1.
  3. The deploymentPlan.image value specifies the container image to use to launch the broker. Ensure that this value specifies the latest version of the AMQ Broker 7.6 broker container image in the Red Hat Container Registry, as shown below.

    image: registry.redhat.io/amq7/amq-broker:7.6
  4. Save any changes that you made to the CR.
  5. Deploy the CR.

    $ oc create -f deploy/crs/broker_activemqartemis_cr.yaml
  6. In the OpenShift Container Platform web console, click Workloads → Stateful Sets (OpenShift Container Platform 4.1 or later) or Applications → Stateful Sets (OpenShift Container Platform 3.11). You see a new Stateful Set called ex-aao-ss.

    Expand the ex-aao-ss Stateful Set section. You see that there is one Pod, corresponding to the single broker that you defined in the CR.

    On the Events tab of the running Pod, you see that the broker container has started. The Logs tab shows that the broker itself is running.

Additional resources

2.6. Applying Custom Resource changes to running broker deployments

The following are some important things to note about applying Custom Resource (CR) changes to running broker deployments:

  • You cannot dynamically update the persistenceEnabled attribute in your CR. To change this attribute, scale your cluster down to zero brokers. Delete the existing CR. Then, recreate and redeploy the CR with your changes, also specifying a deployment size.
  • If the image attribute in your CR uses a floating tag such as 7.6, then your deployment automatically pulls new image versions as they become available in the Red Hat Container Registry, provided that the imagePullPolicy attribute in your Stateful Set is set to Always. For example, if your deployment currently uses a broker image version, 7.6-2, and a newer broker image version, 7.6-3, becomes available, then your deployment automatically pulls and uses the new image version. To use the new image, each broker in the deployment is restarted. If you have multiple brokers in your deployment, brokers are restarted one at a time.
  • The value of the deploymentPlan.size attribute in your CR overrides any change you make to the size of your broker deployment via the oc scale command. For example, suppose you use oc scale to change the size of a deployment from three brokers to two, but the value of deploymentPlan.size in your CR is still 3. In this case, OpenShift initially scales the deployment down to two brokers. However, when the scaledown operation is complete, the Operator restores the deployment to three brokers, as specified in the CR. To make a lasting size change, update the CR instead, as shown in the example after this list.
  • As described in Section 2.3.2, “Deploying the AMQ Broker Operator using the CLI”, if you create a broker deployment with persistent storage (that is, by setting persistenceEnabled=true in your CR), you might need to provision persistent volumes (PVs) for the AMQ Broker Operator to claim for your broker Pods. If you scale down the size of your broker deployment, the Operator releases any PVs that it previously claimed for the broker Pods that are now shut down. However, if you remove your broker deployment by deleting your CR, the AMQ Broker Operator does not release persistent volume claims for any broker Pods that are still in the deployment. In addition, these unreleased PVs are unavailable to any new deployment. In this case, you need to manually release the volumes. For more information, see Releasing volumes in the OpenShift documentation.
  • During an active scaling event, any further changes that you apply are queued by the Operator and executed only when scaling is complete. For example, suppose that you scale the size of your deployment down from four brokers to one. Then, while scaledown is taking place, you also change the values of the broker administrator user name and password. In this case, the Operator queues the user name and password changes until the deployment is running with one active broker.
  • All Custom Resource changes – apart from changing the size of your deployment, or changing the value of the expose attribute for acceptors, connectors, or the console – cause existing brokers to be restarted. If you have multiple brokers in your deployment, only one broker restarts at a time.
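
For example, to make a size change that the Operator preserves, edit the deploymentPlan.size value in your CR and then reapply the CR, rather than running oc scale:

    $ oc apply -f deploy/crs/broker_activemqartemis_cr.yaml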

2.7. Configuring Operator-based broker deployments for client connections

2.7.1. Configuring acceptors

To enable client connections to a broker Pod in your OpenShift deployment, you define acceptors on the Pod. Acceptors define how the broker accepts connections. You define acceptors in the Custom Resource (CR) used for your broker deployment. When you create an acceptor, you specify information such as the messaging protocols to enable on the acceptor, and the port on the broker Pod to use for these protocols.

The following procedure shows how to define a new acceptor in the CR for your broker deployment.

Prerequisites

Procedure

  1. In the deploy/crs directory of the Operator archive that you downloaded and extracted during your initial installation, open the broker_activemqartemis_cr.yaml Custom Resource (CR).
  2. In the acceptors element, add a named acceptor. Typically, you also specify a minimal set of attributes such as the protocols to be used by the acceptor and the port on the broker Pod to expose for those protocols. An example is shown below.

    spec:
    ...
      acceptors:
      - name: amqp_acceptor
        protocols: amqp
        port: 5672
        sslEnabled: false
    ...

    The preceding example shows configuration of a simple AMQP acceptor. The acceptor exposes port 5672 to AMQP clients.

2.7.1.1. Additional acceptor configuration notes

This section describes some additional things to note about acceptor configurations.

  • You can define acceptors either for internal clients (that is, client applications in the same OpenShift cluster as the broker Pod), or for both internal and external clients (that is, applications outside OpenShift). To also expose an acceptor to external clients, set the expose parameter of the acceptor configuration to true. The default value of this parameter is false.
  • A single acceptor can accept multiple client connections, up to a maximum limit specified by the connectionsAllowed parameter of your acceptor configuration.
  • If you do not define any acceptors in your CR, the broker Pods in your deployment use a single acceptor, created by default, on port 61616. This default acceptor has only the Core protocol specified.
  • Port 8161 is automatically exposed on the broker Pod for use by the AMQ Broker management console. Within the OpenShift network, this port can be accessed via the headless service that runs in your broker deployment. For more information, see Section 2.7.5.1, “Accessing the broker management console”.
  • You can enable SSL on the acceptor by setting sslEnabled to true. You can specify additional information such as:

    • The secret name used to store SSL credentials (required).
    • The cipher suites and protocols to use for SSL communication.
    • Whether the acceptor uses two-way SSL, that is, mutual authentication between the broker and the client.

    If the acceptor that you define uses SSL, then the SSL credentials used by the acceptor must be stored in a secret. You can create your own secret and specify this secret name in the sslSecret parameter of your acceptor configuration. If you do not specify a custom secret name in the sslSecret parameter, the acceptor uses a default secret. The default secret name uses the format <CustomResourceName>-<AcceptorName>-secret. For example, ex-aao-amqp-secret.

    The SSL credentials required in the secret are broker.ks, which must be a base64-encoded keystore, client.ts, which must be a base64-encoded truststore, and keyStorePassword and trustStorePassword, which are passwords specified in raw text. This requirement is the same for any connectors that you configure. For information about generating credentials for SSL connections, see Section 2.7.3, “Generating credentials for SSL connections”.
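
For example, you might create a custom secret from the keystore and truststore files generated in Section 2.7.3, “Generating credentials for SSL connections”. The following is a sketch only; the secret name my-broker-ssl-secret is hypothetical, and OpenShift base64-encodes the file contents automatically when it creates the secret:

    $ oc create secret generic my-broker-ssl-secret \
        --from-file=broker.ks \
        --from-file=client.ts \
        --from-literal=keyStorePassword=<keystore_password> \
        --from-literal=trustStorePassword=<truststore_password>

You would then set sslSecret: my-broker-ssl-secret in your acceptor configuration.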

Additional resources

2.7.2. Connecting to the broker from internal and external clients

  • An internal client can connect to the broker Pod by specifying an address in the format <PodName>:<AcceptorPortNumber>. OpenShift DNS successfully resolves addresses in this format because the Stateful Sets created by Operator-based broker deployments provide stable Pod names.
  • When you expose an acceptor to external clients, a dedicated Service and Route are automatically created. To see the Routes configured on a given broker Pod, select the Pod in the OpenShift Container Platform web console and click the Routes tab. An external client can connect to the broker by specifying the full host name of the Route created for the acceptor. You can use a curl command to test external access to this full host name. For example:

    $ curl https://ex-aao-0-svc-my_project.my_openshift_domain

    The full host name for the Route must resolve to the node that’s hosting the OpenShift router. The OpenShift router uses the host name to determine where to send the traffic inside the OpenShift internal network.

    By default, the OpenShift router listens to port 80 for non-secured (that is, non-SSL) traffic and port 443 for secured (that is, SSL-encrypted) traffic. For an HTTP connection, the router automatically directs traffic to port 443 if you specify a secure connection URL (that is, https), or to port 80 if you specify a non-secure connection URL (that is, http).

    By contrast, a messaging client that uses TCP must explicitly specify the port number as part of the connection URL. For example:

    tcp://ex-aao-0-svc-my_project.my_openshift_domain:443
  • As an alternative to using a Route, an OpenShift administrator can configure a NodePort to connect to a broker Pod from a client outside OpenShift. The NodePort should map to one of the protocol-specific ports specified by the acceptors configured for the broker. By default, NodePorts are in the range 30000 to 32767, which means that a NodePort typically does not match the intended port on the broker Pod. To connect from a client outside OpenShift to the broker via a NodePort, you specify a URI in the format <Protocol>://<OCPNodeIP>:<NodePortNumber>. An example Service definition follows this list.
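
The following is a minimal sketch of such a NodePort Service for the AMQP acceptor defined earlier. The Service name is hypothetical, and the selector is an assumption; check the labels on your broker Pods (for example, with oc get pods --show-labels) and adjust the selector accordingly:

    apiVersion: v1
    kind: Service
    metadata:
      name: amqp-nodeport           # hypothetical name
    spec:
      type: NodePort
      ports:
        - port: 5672                # acceptor port on the broker Pod
          nodePort: 30672           # must fall within the cluster NodePort range
      selector:
        application: ex-aao-app     # assumed Pod label; verify before use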

Additional resources

2.7.3. Generating credentials for SSL connections

For SSL connections, AMQ Broker requires a broker keystore, a client keystore, and a client truststore that includes the broker certificate. This procedure shows you how to generate the credentials. The procedure uses Java Keytool, a utility included with the Java Development Kit.

Procedure

  1. Generate a self-signed certificate for the broker keystore.

    $ keytool -genkey -alias broker -keyalg RSA -keystore broker.ks
  2. Export the certificate, so that it can be shared with clients.

    $ keytool -export -alias broker -keystore broker.ks -file broker_cert
  3. Generate a self-signed certificate for the client keystore.

    $ keytool -genkey -alias client -keyalg RSA -keystore client.ks
  4. Create a client truststore that imports the broker certificate.

    $ keytool -import -alias broker -keystore client.ts -file broker_cert
  5. Use the broker keystore file to create a secret to store the SSL credentials, as shown in the example below.

    $ oc secrets new amq-app-secret broker.ks client.ts
  6. Add the secret to the service account that you created when installing the Operator, as shown in the example below.

    $ oc secrets add sa/amq-broker-operator secret/amq-app-secret
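
With the secret in place, an acceptor in your Custom Resource can reference it. The following sketch combines the acceptor parameters described in Section 2.7.1.1, “Additional acceptor configuration notes”; the connectionsAllowed value shown is illustrative:

    spec:
      ...
      acceptors:
      - name: amqp_acceptor
        protocols: amqp
        port: 5672
        sslEnabled: true
        sslSecret: amq-app-secret
        expose: true
        connectionsAllowed: 5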

2.7.4. Networking services in your broker deployments

On the Networking pane of the OpenShift Container Platform web console for your broker deployment, there are two running services: a headless service and a ping service. The default name of the headless service uses the format <Custom Resource name>-hdls-svc, for example, ex-aao-hdls-svc. The default name of the ping service uses the format <Custom Resource name>-ping-svc, for example, ex-aao-ping-svc.

The headless service provides access to ports 8161 and 61616 on each broker Pod. Port 8161 is used by the broker management console, and port 61616 is used for broker clustering.

The ping service is used by the brokers for discovery, and enables brokers to form a cluster within the OpenShift environment. Internally, this service exposes port 8888.
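
You can list both services from the command line. For example:

    $ oc get services

For a deployment created from the sample ex-aao CR, the output includes the ex-aao-hdls-svc and ex-aao-ping-svc services.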

2.7.5. Connecting to the AMQ Broker management console

The broker hosts its own management console at port 8161. Each broker Pod in your deployment has a Service and Route that provide access to the console.

The following procedure shows how to connect to the AMQ Broker management console of a running broker instance.

Prerequisites

2.7.5.1. Accessing the broker management console

Each broker Pod in your deployment has a service that provides access to the console. The default name of this service uses the format <Custom Resource name>-wconsj-<broker Pod ordinal>-svc. For example, ex-aao-wconsj-0-svc. Each Service has a corresponding Route that uses the format <Custom Resource name>-wconsj-<broker Pod ordinal>-svc-rte. For example, ex-aao-wconsj-0-svc-rte.

This procedure shows you how to access the AMQ Broker management console for a running broker instance.

Procedure

  1. In the OpenShift Container Platform web console, click Networking → Routes (OpenShift Container Platform 4.1 or later) or Applications → Routes (OpenShift Container Platform 3.11).

    On the Routes pane, you see a Route corresponding to the wconsj Service.

  2. Under Hostname, note the complete URL. You need to specify this URL to access the console.
  3. In a web browser, enter the host name URL.

    1. If your console configuration does not use SSL, specify http in the URL. In this case, DNS resolution of the host name directs traffic to port 80 of the OpenShift router.
    2. If your console configuration uses SSL, specify https in the URL. In this case, your browser defaults to port 443 of the OpenShift router. This enables a successful connection to the console if the OpenShift router also uses port 443 for SSL traffic, which the router does by default.
  4. To log in to the management console, enter the user name and password specified in the adminUser and adminPassword parameters of your broker deployment Custom Resource. If there are no values specified for adminUser and adminPassword, follow the instructions in Accessing management console login credentials to retrieve the credentials required to log in to the console.

2.7.5.2. Accessing management console login credentials

If you did not specify a value for adminUser and adminPassword in your broker Custom Resource (CR), the Operator automatically generates the broker user name and password (required to log in to the AMQ Broker management console) and stores these credentials in a secret. The default secret name has a format of <Custom Resource name>-credentials-secret, for example, ex-aao-credentials-secret.

This procedure shows you how to access the login credentials required to log in to the management console.

Procedure

  1. See the complete list of secrets in your OpenShift project.

    1. From the OpenShift Container Platform web console, click Workloads → Secrets (OpenShift Container Platform 4.1 or later) or Resources → Secrets (OpenShift Container Platform 3.11).
    2. From the command line:

      $ oc get secrets
  2. Open the appropriate secret to reveal the console login credentials.

    1. From the OpenShift Container Platform web console, click the secret that includes your broker Custom Resource instance in its name. To see the base64-encoded user name and password values, click the YAML tab (OpenShift Container Platform 4.1 or later) or Actions → Edit YAML (OpenShift Container Platform 3.11).
    2. From the command line:

      $ oc edit secret <my_custom_resource_name-credentials-secret>
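
    Alternatively, you can decode the values directly at the command line. The following is a sketch, assuming that the secret stores the user name and password under the keys AMQ_USER and AMQ_PASSWORD; verify the key names in your secret first:

      $ oc get secret ex-aao-credentials-secret -o jsonpath='{.data.AMQ_USER}' | base64 --decode
      $ oc get secret ex-aao-credentials-secret -o jsonpath='{.data.AMQ_PASSWORD}' | base64 --decode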

2.8. Operator-based broker deployment examples

2.8.1. Deploying clustered brokers

If there are two or more broker Pods running in your project, the Pods automatically form a broker cluster. A clustered configuration enables brokers to connect to each other and redistribute messages as needed, for load balancing.

The following procedure shows you how to deploy clustered brokers. By default, the brokers in this deployment use on demand load balancing, meaning that brokers will forward messages only to other brokers that have matching consumers.

Prerequisites

Procedure

  1. In the deploy/crs directory of the Operator archive that you downloaded and extracted, open the broker_activemqartemis_cr.yaml Custom Resource file.
  2. For a minimally-sized clustered deployment, ensure that the value of deploymentPlan.size is 2.
  3. At the command line, apply the change:

    $ oc apply -f deploy/crs/broker_activemqartemis_cr.yaml

    In the OpenShift Container Platform web console, a second Pod starts in your project, for the additional broker that you specified in your CR. By default, the two brokers running in your project are clustered.

  4. Open the Logs tab of each Pod. The logs show that OpenShift has established a cluster connection bridge on each broker. Specifically, the log output includes a line like the following:

    targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@6f13fb88

2.8.2. Creating queues in a broker cluster

The following procedure shows you how to use a Custom Resource Definition (CRD) and example Custom Resource (CR) to add and remove a queue from a broker cluster deployed using an Operator.

Prerequisites

Procedure

  1. Deploy the addressing CRD.

    $ oc create -f deploy/crds/broker_activemqartemisaddress_crd.yaml
  2. An example CR file, broker_activemqartemisaddress_cr.yaml, was included in the Operator archive that you downloaded and extracted. The example Custom Resource includes the following:

    spec:
      # Add fields here
      addressName: myAddress0
      queueName: myQueue0
      routingType: anycast

    With your broker cluster already deployed and running via the Operator, use the example Custom Resource to create an address on every running broker in your cluster.

    $ oc create -f deploy/crs/broker_activemqartemisaddress_cr.yaml

    Deploying the example CR creates an address myAddress0 with a queue named myQueue0 that has an anycast routing type. This address is created on every running broker.

    Note

    To create multiple addresses and/or queues in your broker cluster, you need to create separate CR files and deploy them individually, specifying new address and/or queue names in each case.

    Note

    If you add brokers to your cluster after deploying the addressing CR, the new brokers will not have the address you previously created. In this case, you need to delete the addresses and redeploy the addressing CR.

  3. To delete queues created from the example CR, use the following command:

    $ oc delete -f deploy/crs/broker_activemqartemisaddress_cr.yaml
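
As noted above, each additional address or queue requires its own CR file. The following sketch shows what a hypothetical second file might contain; the apiVersion and kind values are assumptions based on the sample file, so in practice, copy broker_activemqartemisaddress_cr.yaml and adapt it:

    apiVersion: broker.amq.io/v2alpha1
    kind: ActiveMQArtemisAddress
    metadata:
      name: my-address-1    # hypothetical CR name
    spec:
      addressName: myAddress1
      queueName: myQueue1
      routingType: anycast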

2.9. Migrating messages upon scaledown

To migrate messages upon scaledown of your broker deployment, use the main broker Custom Resource (CR) to enable message migration. The AMQ Broker Operator automatically runs a dedicated scaledown controller to execute message migration when you scale down a clustered broker deployment.

With message migration enabled, the scaledown controller within the Operator detects shutdown of a broker Pod and starts a drainer Pod to execute message migration. The drainer Pod connects to one of the other live broker Pods in the cluster and migrates messages over to that live broker Pod. After migration is complete, the scaledown controller shuts down.

Note

A scaledown controller operates only within a single OpenShift project. The controller cannot migrate messages between brokers in separate projects.

Note

If you scale a broker deployment down to 0 (zero), message migration does not occur, since there is no running broker Pod to which the messaging data can be migrated. However, if you scale a deployment down to zero brokers and then back up to only some of the brokers that were in the original deployment, drainer Pods are started for the brokers that remain shut down.

The following example procedure shows the behavior of the scaledown controller.

Prerequisites

Procedure

  1. In the deploy/crs directory of the Operator archive that you originally downloaded and extracted, open the main broker CR, broker_activemqartemis_cr.yaml.
  2. In the main broker CR, set messageMigration and persistenceEnabled to true.

    These settings mean that when you later scale down the size of your clustered broker deployment, the Operator automatically starts a scaledown controller and migrates messages to a broker Pod that is still running.
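
    For example, the relevant part of the CR might look like the following sketch. This assumes that both attributes belong under deploymentPlan, as in the sample CR for this Operator version; check the configuration reference for your CRD if your file differs.

      spec:
        deploymentPlan:
          size: 2
          messageMigration: true
          persistenceEnabled: true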

  3. In your existing broker deployment, verify which Pods are running.

    $ oc get pods

    You see output that looks like the following.

    activemq-artemis-operator-8566d9bf58-9g25l   1/1   Running   0   3m38s
    ex-aao-ss-0                                  1/1   Running   0   112s
    ex-aao-ss-1                                  1/1   Running   0   8s

    The preceding output shows that there are three Pods running: one for the broker Operator itself, and a separate Pod for each broker in the deployment.

  4. Log in to each Pod and send some messages to each broker.

    1. Supposing that Pod ex-aao-ss-0 has a cluster IP address of 172.17.0.6, run the following command:

      $ /opt/amq-broker/bin/artemis producer --url tcp://172.17.0.6:61616 --user admin --password admin
    2. Supposing that Pod ex-aao-ss-1 has a cluster IP address of 172.17.0.7, run the following command:

      $ /opt/amq-broker/bin/artemis producer --url tcp://172.17.0.7:61616 --user admin --password admin

      The preceding commands create a queue called TEST on each broker and add 1000 messages to each queue.

  5. Scale the cluster down from two brokers to one.

    1. Open the main broker CR, broker_activemqartemis_cr.yaml.
    2. In the CR, set deploymentPlan.size to 1.
    3. At the command line, apply the change:

      $ oc apply -f deploy/crs/broker_activemqartemis_cr.yaml

      You see that the Pod ex-aao-ss-1 starts to shut down. The scaledown controller starts a new drainer Pod of the same name. This drainer Pod also shuts down after it migrates all messages from broker Pod ex-aao-ss-1 to the other broker Pod in the cluster, ex-aao-ss-0.

  6. When the drainer Pod is shut down, check the message count on the TEST queue of broker Pod ex-aao-ss-0. You see that the number of messages in the queue is 2000, indicating that the drainer Pod successfully migrated 1000 messages from the broker Pod that shut down.
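
    You can verify this count from the command line after logging in to the remaining broker Pod (for example, with oc rsh ex-aao-ss-0). The following is a sketch, assuming the same cluster IP address and credentials used earlier in this procedure; the queue stat command is part of the Artemis CLI:

      $ /opt/amq-broker/bin/artemis queue stat --url tcp://172.17.0.6:61616 --user admin --password admin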