Chapter 4. Get Started

4.1. Installing the A-MQ for OpenShift Image Streams and Application Templates

If you are using an older version of OpenShift, you might find that the Red Hat AMQ 6.3 image is not installed or that it is not up to date. To install the Red Hat AMQ 6.3 image for the first time or to update an existing image, perform the following steps:

  1. Log in to OpenShift as a cluster administrator (or as a user that has project administrator access to the global openshift project), for example:

    $ oc login -u system:admin
  2. Run the following commands to update the core Red Hat AMQ 6.3 OpenShift image stream in the openshift project:

    $ oc create -n openshift -f \
     https://raw.githubusercontent.com/jboss-openshift/application-templates/ose-v1.4.15/amq/amq63-image-stream.json
    
    $ oc replace -n openshift -f \
     https://raw.githubusercontent.com/jboss-openshift/application-templates/ose-v1.4.15/amq/amq63-image-stream.json
    
    $ oc -n openshift import-image jboss-amq-63:1.4
    Note

    It is normal to see error messages saying that some image streams already exist when invoking the create command. The oc client does not have a command that creates or replaces in a single action.

  3. Run the following command to update the A-MQ templates:

    $ for template in amq63-persistent-ssl.json \
     amq63-basic.json \
     amq63-ssl.json \
     amq63-persistent.json;
     do
     oc create -n openshift -f \
     https://raw.githubusercontent.com/jboss-openshift/application-templates/ose-v1.4.15/amq/${template}
     oc replace -n openshift -f \
     https://raw.githubusercontent.com/jboss-openshift/application-templates/ose-v1.4.15/amq/${template}
     done
    Note

    It is normal to see "already exists" error messages while invoking the create command.
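
After completing these steps, you can verify that the image stream and templates are present in the openshift project, for example:

$ oc get imagestreams -n openshift | grep jboss-amq
$ oc get templates -n openshift | grep amq63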

4.2. Deployment Considerations for the A-MQ for OpenShift Image

4.2.1. Service Accounts

The A-MQ for OpenShift image requires a service account for deployments. Service accounts are API objects that exist within each project. Three service accounts are created automatically in every project: builder, deployer, and default.

  • builder: This service account is used by build pods. It has the system:image-builder role, which allows pushing images to any image stream in the project using the internal Docker registry.
  • deployer: This service account is used by deployment pods. It has the system:deployer role, which allows viewing and modifying replication controllers and pods in the project.
  • default: This service account is used to run all other pods unless you specify a different service account.
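
You can list the service accounts in a project to see these defaults, for example:

$ oc get serviceaccounts -n <project-name>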

4.2.2. Creating the Service Account

Service accounts are API objects that exist within each project and can be created or deleted like any other API object. For multiple node deployments, the service account must have the view role enabled so that it can discover and manage the various pods in the cluster. In addition, you must configure SSL to enable connections to A-MQ from outside of the OpenShift instance. Two discovery protocols can be used to discover AMQ mesh endpoints: a DNS-based discovery protocol, which uses the OpenShift DNS service, and a Kubernetes-based discovery protocol, which uses the Kubernetes REST API. To use the Kubernetes-based discovery protocol, create a new service account and grant the view role to the newly created service account.

  1. Create the service account:

    $ echo '{"kind": "ServiceAccount", "apiVersion": "v1", "metadata": {"name": "<service-account-name>"}}' | oc create -f -

    OpenShift 3.2 users can use the following command to create the service account:

    $ oc create serviceaccount <service-account-name>
  2. Add the view role to the service account:

    $ oc policy add-role-to-user view system:serviceaccount:<project-name>:<service-account-name>
  3. Edit the deployment configuration to run the AMQ pod with the newly created service account.

    $ oc edit dc/<deployment_config>

    Add the serviceAccount and serviceAccountName parameters to the spec field, and specify the service account you want to use.

    spec:
          securityContext: {}
          serviceAccount: <service_account>
          serviceAccountName: <service_account>
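
Alternatively, you can apply the same change without opening an editor by using oc patch. The following sketch assumes the <deployment_config> and <service_account> placeholders from the previous steps:

$ oc patch dc/<deployment_config> --type=merge \
 -p '{"spec":{"template":{"spec":{"serviceAccountName":"<service_account>"}}}}'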

4.2.3. Configuring SSL

For a minimal SSL configuration to allow for connections outside of OpenShift, A-MQ requires a broker keyStore, a client keyStore, and a client trustStore that includes the broker certificate. The broker keyStore is also used to create a secret for the A-MQ for OpenShift image, which is added to the service account.
The following example commands use keytool, a utility included with the Java Development Kit, to generate the necessary certificates and stores:

  1. Generate a self-signed certificate for the broker keyStore:

    $ keytool -genkey -alias broker -keyalg RSA -keystore broker.ks
  2. Export the certificate so that it can be shared with clients:

    $ keytool -export -alias broker -keystore broker.ks -file broker_cert
  3. Generate a self-signed certificate for the client keyStore:

    $ keytool -genkey -alias client -keyalg RSA -keystore client.ks
  4. Create a client trustStore that imports the broker certificate:

    $ keytool -import -alias broker -keystore client.ts -file broker_cert
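
To confirm that the stores contain the expected entries, you can inspect them with keytool, for example:

$ keytool -list -keystore broker.ks
$ keytool -list -keystore client.ts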

4.2.4. Generating the A-MQ Secret

The broker keyStore can then be used to generate a secret for the namespace, which is also added to the service account so that the applications can be authorized:

$ oc secrets new <secret-name> <broker-keystore> <broker-truststore>
$ oc secrets add sa/<service-account-name> secret/<secret-name>
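
For example, using the illustrative names amq-app-secret and amq-service-account with the broker keyStore generated in the previous section:

$ oc secrets new amq-app-secret broker.ks
$ oc secrets add sa/amq-service-account secret/amq-app-secret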

4.2.5. Creating a Route

After the A-MQ for OpenShift image has been deployed, an SSL route needs to be created for the A-MQ transport protocol port to allow connections to A-MQ outside of OpenShift.
When creating the route, select Passthrough for TLS Termination so that all communication is relayed to the A-MQ broker without the OpenShift router decrypting and re-sending it. Only SSL routes can be exposed because the OpenShift router requires SNI to send traffic to the correct service. See Secured Routes for more information.
The default ports for the various A-MQ transport protocols are:

  • 61616/TCP (OpenWire)
  • 61617/TCP (OpenWire+SSL)
  • 5672/TCP (AMQP)
  • 5671/TCP (AMQP+SSL)
  • 1883/TCP (MQTT)
  • 8883/TCP (MQTT+SSL)
  • 61613/TCP (STOMP)
  • 61612/TCP (STOMP+SSL)
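
A passthrough route can also be created from the command line. The following sketch assumes that your template created an SSL transport service for port 61617; the service name shown is illustrative:

$ oc create route passthrough --service=<application-name>-amq-tcp-ssl --port=61617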

4.2.6. Scaling Up and Persistent Storage Partitioning

There are two methods for deploying A-MQ with persistent storage: single-node and multi-node partitioning. Single-node partitioning stores the A-MQ logs and the kahadb store directory, which contains the message queue data, in the storage volume. Multi-node partitioning creates additional, independent split-n directories to store the message queue data for each broker, where n is an incremental integer. The association between a broker and its split directory is not altered if a broker pod is updated, goes down unexpectedly, or is redeployed. When the broker pod is operational again, it reconnects to the associated split directory and continues as before. If a new broker pod is added, a corresponding split-n directory is created for that broker.

Note

To enable a multi-node configuration, set the AMQ_SPLIT parameter to true. This results in the server creating an independent split-n directory for each instance within the Persistent Volume, which is then used as its data store. This is now the default setting in all persistent templates.
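
For example, assuming an existing deployment configuration, the parameter can be set with oc set env (this triggers a new deployment):

$ oc set env dc/<deployment_config> AMQ_SPLIT=true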

Important

Due to the different storage methods of single-node and multi-node partitioning, changing a deployment from single-node to multi-node results in the application losing all previously stored messages. This is also true if changing a deployment from multi-node to single-node, as the storage paths will not match.

Similarly, if a Rolling Strategy is implemented, the maxSurge parameter must be set to 0%; otherwise, the new broker creates a new partition and is unable to connect to the stored messages.
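
For example, the maxSurge setting can be applied to an existing deployment configuration with oc patch; the JSON path follows the DeploymentConfig rolling strategy schema:

$ oc patch dc/<deployment_config> --type=merge \
 -p '{"spec":{"strategy":{"rollingParams":{"maxSurge":"0%"}}}}'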

In multi-node partitioning, OpenShift routes new connections to the broker pod with the fewest connections. Once this connection has been made, messages from that client are sent to the same broker every time, even if the client is run multiple times. This is because the OpenShift router is set to route requests from a client with the same IP address to the same pod.

You can see which broker pod is connected to which split directory by viewing the logs for the pod, or by connecting to the broker console. In the ActiveMQ tab of the console, the PersistenceAdapter shows the KahaDBPersistenceAdapter, which includes the split directory as part of its name.

4.2.7. Scaling Down and Message Migration

When A-MQ is deployed using a multi-node configuration, messages can be left in the kahadb store directory of a terminating pod when the cluster is scaled down. To prevent messages from remaining in the kahadb store of the terminating pod until the cluster next scales up, each A-MQ persistent template creates a second deployment containing a drainer pod that is responsible for managing the migration of messages. The drainer pod scans each independent split-n directory within the A-MQ persistent volume, identifies data stores associated with terminating pods, and executes an application to migrate the remaining messages from those pods to other active members of the cluster.
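
Scaling itself is done in the usual way, for example:

$ oc scale dc/<deployment_config> --replicas=<number>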

Important

Only messages sent through Message Queues will be migrated to other instances of the cluster when scaling down. Messages sent via topics will remain in storage until the cluster scales back up. Support for migrating Virtual Topics will be introduced in a future release.

4.2.8. Customizing A-MQ Configuration Files for Deployment

If using a template from an alternate repository, A-MQ configuration files such as user.properties can be included. When the image is downloaded for deployment, these files are copied to the <amq-home>/amq/conf/ directory on the broker, and the resulting changes are committed to the container and pushed to the registry.

Note

If using this method, it is important that the placeholders in the configuration files (such as ##### AUTHENTICATION #####) are not removed, as these placeholders are necessary for building the A-MQ for OpenShift image.

4.2.9. Configuring Client Connections

Clients for the A-MQ for OpenShift image must specify the OpenShift router port (443) when setting the broker URL for SSL connections. Otherwise, A-MQ attempts to use the default SSL port (61617). Including the failover protocol in the URL preserves the client connection in case the pod is restarted or upgraded, or there is a disruption on the router.

...
// Assumes the ActiveMQ JMS client; the connection factory shown is illustrative.
ActiveMQSslConnectionFactory factory = new ActiveMQSslConnectionFactory();
factory.setBrokerURL("failover://ssl://<route-to-broker-pod>:443");
...

4.3. Upgrading the Image Repository

To upgrade the A-MQ image to the latest version, perform the following steps:

  1. On your master host(s), ensure you are logged into the CLI as a cluster administrator or user that has project administrator access to the global openshift project. For example:

    $ oc login -u system:admin
  2. Run the following command to update the core A-MQ OpenShift image stream in the openshift project:

    $ oc -n openshift import-image jboss-amq-63:VERSION

    Where VERSION is the new image version that you want to update to.

    Note

    To discover the latest available VERSION tag for jboss-amq-63, consult the Red Hat Container Catalog.

  3. To force OpenShift to deploy the latest image, enter the following command:

    $ oc deploy amq63-amq --latest

Depending on the deployment configuration, OpenShift deletes one of the broker pods and starts a new, upgraded pod. The new pod connects to the same persistent storage so that no messages are lost in the process. Once the upgraded pod is running, the process is repeated for the next pod until all of the pods have been upgraded.
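
You can watch the pods being replaced during the upgrade, for example:

$ oc get pods -w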

If a Rolling Strategy has been configured, OpenShift deletes and recreates pods based on the rolling update settings. Any new pod connects to the same persistent storage only if the maxSurge parameter is set to 0%; otherwise, the new pod creates a new partition and cannot connect to the stored messages in the previous partition.