Chapter 4. Creating the Environment
4.1. Overview
This reference architecture provides an accompanying OpenShift S2I image and template set for the deployment of pre-configured services that can be used as a target for the included test suite in its current form. The code may be found in a publicly available repository:
$ git clone git@github.com:RHsyseng/amq7.git
While a pre-configured OpenShift Container Platform project is provided for convenience, the reader may also elect to utilize a single Red Hat Enterprise Linux server to recreate the single-broker, symmetric cluster, and replication topologies individually. A cluster of RHEL servers may also serve to replicate the Interconnect topology in lieu of OpenShift. Readers who forego OpenShift Container Platform may skip ahead to Deployment on Red Hat Enterprise Linux at this time.
4.2. Deployment on OpenShift Container Platform
The AMQ/OpenShift configuration described and/or provided herein uses community software and thus is not an officially supported Red Hat configuration at the time of writing.
4.2.1. Prerequisites
A pre-existing OpenShift Container Platform installation, with a configured user called ocuser, capable of project ownership, is assumed in the following steps. More information on installation and configuration of the Red Hat Enterprise Linux Operating System, OpenShift Container Platform, and more can be found in Section 3: Creating the Environment of a previous Reference Architecture, Building JBoss EAP 7 Microservices on OpenShift Container Platform.
The URL cluster-ingress.example.com is used throughout the provided templates to allow external access to the brokers and management consoles. Add the URL to your /etc/hosts file so that it resolves to the proper OpenShift Container Platform node for desired routing.
10.1.2.300 cluster-ingress.example.com #link to OpenShift-routable node address
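To confirm the entry is picked up by the local resolver, a simple check such as the following can be used (assuming the line above has been added with a valid, routable node address):
$ getent hosts cluster-ingress.example.com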
4.2.2. Create Project
Utilize remote or direct terminal access to log in to the OpenShift environment as the user who will create and have ownership of the new project:
$ oc login -u ocuser
Create the new project which will house the various builds, deployments and services:
$ oc new-project amq --display-name="AMQ 7 Reference Architecture Example" --description="Showcase of various AMQ 7 features and topologies"
4.2.3. Base Image Template Deployment
Within the new project, execute the provided YAML template to configure and instantiate the base image stream and build config:
$ oc process -f https://raw.githubusercontent.com/RHsyseng/amq7/master/S2I-Base-Image/yaml_templates/amq_image_template.yaml | oc create -f -
4.2.4. Image Build
Kick off a build of the S2I base image stream and monitor for completion:
$ oc start-build amq7-image --follow
4.2.5. Topology Template Deployments
Once the initial build is complete, deploy the single-broker template:
$ oc process -f https://raw.githubusercontent.com/RHsyseng/amq7/master/S2I-Base-Image/yaml_templates/amq_single_template.yaml | oc create -f -
The remainder of the topology templates have been separated for convenience of cherry-picking, but in order to execute the full test suite, all three must be applied:
$ oc process -f https://raw.githubusercontent.com/RHsyseng/amq7/master/S2I-Base-Image/yaml_templates/amq_symmetric_template.yaml | oc create -f -
$ oc process -f https://raw.githubusercontent.com/RHsyseng/amq7/master/S2I-Base-Image/yaml_templates/amq_replicated_template.yaml | oc create -f -
$ oc process -f https://raw.githubusercontent.com/RHsyseng/amq7/master/S2I-Base-Image/yaml_templates/amq_interconnect_template.yaml | oc create -f -
While any of the last three templates (symmetric, replication, and interconnect) can be skipped or deployed at will, the single-broker service must be deployed for any of them to function properly, as it provides ingress into the cluster for the various ports utilized by the remaining templates.
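Once the templates have been processed, a quick way to verify that the resulting pods, services, and routes were created is to list them within the project (exact object names depend on the templates and may vary):
$ oc get pods,svc,routes -n amq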
4.3. Deployment on Red Hat Enterprise Linux
4.3.1. Prerequisites
To install AMQ 7 to a Linux platform, you must first download the installation archive from the Red Hat Customer Portal.
4.3.2. Single Broker Configuration
The broker utilizes a few ports for inbound connectivity, including TCP ports 61616, 5672, 61613, and 1883 for the various message protocols and TCP port 8161 for access to the HTTP AMQ Console. If the server is running a firewall, these ports must be allowed connectivity prior to beginning installation and configuration. More information on inspecting and configuring the firewall for Red Hat Enterprise Linux can be found here.
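As a sketch of that prerequisite, assuming the server uses firewalld, the required ports could be opened as follows; adjust to match your environment:
$ sudo firewall-cmd --permanent --add-port=61616/tcp --add-port=5672/tcp --add-port=61613/tcp --add-port=1883/tcp --add-port=8161/tcp
$ sudo firewall-cmd --reload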
1) Create a new user named amq-broker and provide a password.
$ sudo useradd amq-broker
$ sudo passwd amq-broker
2) Create the directory /opt/redhat/amq-broker and make the new amq-broker user and group the owners.
$ sudo mkdir -p /opt/redhat/amq-broker
$ sudo chown -R amq-broker:amq-broker /opt/redhat/amq-broker
3) Change the owner of the archive to the new user.
$ sudo chown amq-broker:amq-broker amq-broker-7.x.x.zip
4) Move the installation archive to the directory you just created.
$ sudo mv amq-broker-7.x.x.zip /opt/redhat/amq-broker
5) As the new user amq-broker, extract the contents with a single unzip command.
$ su - amq-broker
$ cd /opt/redhat/amq-broker
$ unzip amq-broker-7.x.x.zip
6) A directory named something similar to amq-broker-7.x.x will be created, further referred to herein as the INSTALL_DIR. Next, create a directory location for the broker instance and assign the user you created during installation as its owner.
$ sudo mkdir /var/opt/amq-broker
$ sudo chown -R amq-broker:amq-broker /var/opt/amq-broker
7) Navigate to the new directory and use the artemis create command to create a broker. Note in the example below, the user that was created during installation is the one to run the create command.
$ su - amq-broker
$ cd /var/opt/amq-broker
$ /opt/redhat/amq-broker/bin/artemis create /var/opt/amq-broker/mybroker --user amq-broker --password password --allow-anonymous
Use the artemis help create command to view a list/descriptions of available parameters for instance configuration.
8) If desired, view the contents of the BROKER_INSTANCE_DIR/etc/broker.xml configuration file. A simple, single-broker localhost configuration has already been configured by the artemis tool.
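For example, the acceptors generated for the supported protocols can be listed with a simple search of that file (the path follows from the instance created above):
$ grep -i acceptor /var/opt/amq-broker/mybroker/etc/broker.xml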
9) Next, use the artemis script in the BROKER_INSTANCE_DIR/bin directory to start the broker.
$ su - amq-broker
$ /var/opt/amq-broker/mybroker/bin/artemis run
Use the artemis help run command to view a list/descriptions of available parameters for runtime configuration.
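As an optional smoke test, the artemis CLI includes simple producer and consumer clients that can be pointed at the running broker; the URL and message count below are illustrative:
$ /var/opt/amq-broker/mybroker/bin/artemis producer --url tcp://localhost:61616 --message-count 10
$ /var/opt/amq-broker/mybroker/bin/artemis consumer --url tcp://localhost:61616 --message-count 10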
4.3.3. Symmetric Cluster Configuration
Typical production environments utilize multiple servers for clustering; however, for demonstration purposes, the following cluster is set up to reside on a single server, utilizing unique, individual ports for inbound connections and a separate, shared UDP port for group connectivity.
4.3.3.1. Single-Server Cluster
To prepare multiple brokers on a single server, complete the following steps.
1) As with the single-broker example, complete steps 1-6 to establish a user and prepare the AMQ artemis environment.
2) Next, create directories for each broker instance of the cluster:
$ mkdir /var/opt/amq-broker/run && cd "$_"
$ mkdir artemis_1 artemis_2 artemis_3
3) Similar to before, the artemis create command will be used to populate each directory with a broker instance.
$ /opt/redhat/amq-broker/bin/artemis create artemis_1 \
--user amq-broker --password password --allow-anonymous --clustered \
--cluster-user amqUser --cluster-password password --host localhost \
--name artemis-1 --port-offset 10
$ /opt/redhat/amq-broker/bin/artemis create artemis_2 \
--user amq-broker --password password --allow-anonymous --clustered \
--cluster-user amqUser --cluster-password password --host localhost \
--name artemis-2 --port-offset 20
$ /opt/redhat/amq-broker/bin/artemis create artemis_3 \
--user amq-broker --password password --allow-anonymous --clustered \
--cluster-user amqUser --cluster-password password --host localhost \
--name artemis-3 --port-offset 30
4) Once each broker directory has been populated, start each broker as a service.
$ artemis_1/bin/artemis-service start
Starting artemis-service
artemis-service is now running (109195)
$ artemis_2/bin/artemis-service start
Starting artemis-service
artemis-service is now running (109307)
$ artemis_3/bin/artemis-service start
Starting artemis-service
artemis-service is now running (109432)
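Because each instance was created with a port offset, the default acceptor of each broker listens on its own port (61626, 61636, and 61646 for offsets 10, 20, and 30). A quick per-broker connectivity check might look like the following sketch, using the bundled CLI clients:
$ artemis_1/bin/artemis producer --url tcp://localhost:61626 --message-count 10
$ artemis_1/bin/artemis consumer --url tcp://localhost:61626 --message-count 10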
5) Examination of the logs reveals information about the connectivity established between the brokers.
$ cat artemis_3/log/artemis.log
...
AMQ221001: [truncated for brevity]...[artemis_3, nodeID=83c078d5-ad4f-11e7-a3e2-000c2906f4ae]
AMQ221027: Bridge ClusterConnectionBridge@7a5f143 [name=$.artemis.internal.sf.my-cluster.77770624-ad4f-11e7-8acd-000c2906f4ae
[truncated for brevity]?port=61626&host=localhost], discoveryGroupConfiguration=null]] is connected
AMQ221027: Bridge ClusterConnectionBridge@7241b488 [name=$.artemis.internal.sf.my-cluster.7d6ebb4d-ad4f-11e7-a379-000c2906f4ae
[truncated for brevity]?port=61636&host=localhost], discoveryGroupConfiguration=null]] is connected
...
4.3.3.2. Multi-Server Cluster
To prepare multiple servers, each with its own broker, several ports must be opened for intra-node and inbound communications. This includes UDP port 9876 for multicast group connectivity, TCP ports 61616, 5672, 61613, and 1883 for the various message protocols, and TCP port 8161 for access to the HTTP AMQ Console. If the servers are running a firewall, these ports must be allowed connectivity prior to beginning installation and configuration. More information on inspecting and configuring the firewall for Red Hat Enterprise Linux can be found here.
Ensure the network is properly configured for multicast and that multicast addresses have been verified during the configuration stage of the environment, otherwise HA functionality might not work as intended. If using Red Hat Enterprise Linux, also ensure that HTTP traffic is allowed through to the AMQ Console, as RHEL does not permit inbound HTTP traffic by default.
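Building on the single-broker firewall example, the additional multicast group port can be opened on each server as follows (a sketch assuming firewalld):
$ sudo firewall-cmd --permanent --add-port=9876/udp
$ sudo firewall-cmd --reload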
Each of the following steps should be performed on every server within the cluster.
1) Once port access has been configured, complete steps 1-6 to establish a user and prepare the AMQ artemis environment.
2) Next, create a directory for the broker instance:
$ mkdir /var/opt/amq-broker/run && cd "$_"
$ mkdir artemis_1
3) Similar to before, the artemis create command will be used to populate the directory with a broker instance.
$ /opt/redhat/amq-broker/bin/artemis create artemis_1 \
--user amq-broker --password password --allow-anonymous --clustered \
--cluster-user amqUser --cluster-password password --host 10.10.X.X \
--name artemis-broker
4) Once the broker directory has been populated, start the broker as a service.
$ artemis_1/bin/artemis-service start
Starting artemis-service
artemis-service is now running (109195)
5) As with the single-server configuration above, cluster connectivity can be monitored by examining the logs of one of the servers.
$ tail -f artemis_1/log/artemis.log
4.3.4. Replication Cluster Configuration
Typical production environments utilize multiple servers for clustering; however, for demonstration purposes, the following cluster is set up to reside on a single server, utilizing unique, individual ports for inbound connections and a separate, shared UDP or TCP (via JGroups) port for group connectivity.
4.3.4.1. Single-Server Cluster
To prepare multiple brokers on a single server, complete the following steps.
1) As with the single-broker example, complete steps 1-6 to establish a user and prepare the AMQ artemis environment.
2) Next, create directories for each broker instance of the cluster:
$ mkdir /var/opt/amq-broker/run && cd "$_"
$ mkdir artemis_1 artemis_2 artemis_3 artemis_4 artemis_5 artemis_6
3) Similar to before, the artemis create command will be used to populate each directory with a broker instance.
$ /opt/redhat/amq-broker/bin/artemis create artemis_1 \
--user amq-broker --password password --allow-anonymous --clustered \
--cluster-user amqUser --cluster-password password --host localhost \
--name artemis-1 --port-offset 10 --replicated
$ /opt/redhat/amq-broker/bin/artemis create artemis_2 \
--user amq-broker --password password --allow-anonymous --clustered \
--cluster-user amqUser --cluster-password password --host localhost \
--name artemis-2 --port-offset 20 --replicated
$ /opt/redhat/amq-broker/bin/artemis create artemis_3 \
--user amq-broker --password password --allow-anonymous --clustered \
--cluster-user amqUser --cluster-password password --host localhost \
--name artemis-3 --port-offset 30 --replicated
$ /opt/redhat/amq-broker/bin/artemis create artemis_4 \
--user amq-broker --password password --allow-anonymous --clustered \
--cluster-user amqUser --cluster-password password --host localhost \
--name artemis-4 --port-offset 10 --replicated --slave
$ /opt/redhat/amq-broker/bin/artemis create artemis_5 \
--user amq-broker --password password --allow-anonymous --clustered \
--cluster-user amqUser --cluster-password password --host localhost \
--name artemis-5 --port-offset 20 --replicated --slave
$ /opt/redhat/amq-broker/bin/artemis create artemis_6 \
--user amq-broker --password password --allow-anonymous --clustered \
--cluster-user amqUser --cluster-password password --host localhost \
--name artemis-6 --port-offset 30 --replicated --slave
4) Once broker directories have been populated, start each broker as a service.
$ artemis_1/bin/artemis-service start
Starting artemis-service
artemis-service is now running (109195)
[repeat for artemis_2-6]
5) Examination of the logs reveals information about group connectivity and the replica data sent from master nodes to slave nodes.
$ cat artemis_3/log/artemis.log
...
AMQ221001: [truncated for brevity]...[artemis_2, nodeID=83c078d5-ad4f-11e7-a3e2-000c2906f4ae]
AMQ221027: Bridge ClusterConnectionBridge@7a5f143 [name=$.artemis.internal.sf.my-cluster.77770624-ad4f-11e7-8acd-000c2906f4ae
[truncated for brevity]?port=61626&host=localhost], discoveryGroupConfiguration=null]] is connected
AMQ221027: Bridge ClusterConnectionBridge@7241b488 [name=$.artemis.internal.sf.my-cluster.7d6ebb4d-ad4f-11e7-a379-000c2906f4ae
[truncated for brevity]?port=61646&host=localhost], discoveryGroupConfiguration=null]] is connected
...
AMQ221025: Replication: sending AIOSequentialFile:/var/opt/run/artemis_2/./data/journal/activemq-data-2.amq (size=10,485,760) to replica.
AMQ221025: Replication: sending NIOSequentialFile /var/opt/run/artemis_2/./data/bindings/activemq-bindings-4.bindings (size=1,048,576) to replica.
AMQ221025: Replication: sending NIOSequentialFile /var/opt/run/artemis_2/./data/bindings/activemq-bindings-2.bindings (size=1,048,576) to replica.
4.3.4.2. Multi-Server Cluster
As with the multi-server symmetric cluster, once port access has been configured for all intended cluster members, the broker can be added to each server:
For master nodes, use the following artemis create command format, where N indicates the node number:
$ /opt/redhat/amq-broker/bin/artemis create artemis_N \
--user amq-broker --password password --allow-anonymous --clustered \
--cluster-user amqUser --cluster-password password --host 10.10.X.X \
--name artemis-N --replicated
For slave nodes, use the following artemis create command format, where N indicates the node number:
$ /opt/redhat/amq-broker/bin/artemis create artemis_N \
--user amq-broker --password password --allow-anonymous --clustered \
--cluster-user amqUser --cluster-password password --host 10.10.X.X \
--name artemis-N --replicated --slave
4.3.5. Interconnect Network Configuration
Given the nature of the Interconnect executable and the need to demonstrate various network proximity routing mechanisms, a multi-server configuration is recommended. However, the routers only utilize AMQP ports 5672, 5673, and 5674 for all communication needs, therefore multicast is not required and firewall rules are significantly simplified. Every server will need to expose the aforementioned three ports.
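Assuming firewalld is in use, the three router ports could be opened on each server with commands along these lines; adjust to your environment:
$ sudo firewall-cmd --permanent --add-port=5672/tcp --add-port=5673/tcp --add-port=5674/tcp
$ sudo firewall-cmd --reload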
4.3.5.1. Installation
In order to install the Interconnect router on Red Hat Enterprise Linux, complete the following steps for each server intended to join the topology:
1) Ensure your subscription has been activated and your system is registered. For more information about using the customer portal to activate your Red Hat subscription and register your system for packages, refer to the documentation on using your subscription.
2) Subscribe to the required repositories:
$ sudo subscription-manager repos --enable=amq-interconnect-1-for-rhel-7-server-rpms --enable=a-mq-clients-1-for-rhel-7-server-rpms
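Optionally, confirm that the repositories are now enabled before installing the packages:
$ sudo subscription-manager repos --list-enabled | grep -E 'interconnect|clients'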
3) Use the yum command to install the qpid-dispatch-router and qpid-dispatch-tools packages and libaio dependency for ASYNCIO journaling:
$ sudo yum install libaio qpid-dispatch-router qpid-dispatch-tools
4) Use the which command to verify that the qdrouterd executable is present.
$ which qdrouterd
/usr/sbin/qdrouterd
The qdrouterd executable should be located at /usr/sbin/qdrouterd.
4.3.5.2. Configuration
The configuration file for the router is located at /etc/qpid-dispatch/qdrouterd.conf. The following is a complete example configuration as used in the included OpenShift Container Platform example:
router { 1
mode: interior
id: US
}
listener { 2
role: normal
host: 0.0.0.0
port: amqp
authenticatePeer: no
saslMechanisms: ANONYMOUS
}
listener { 3
role: normal
host: 0.0.0.0
port: 5673
http: yes
authenticatePeer: no
saslMechanisms: ANONYMOUS
}
listener { 4
role: inter-router
host: 0.0.0.0
port: 5674
authenticatePeer: no
saslMechanisms: ANONYMOUS
}
connector { 5
role: inter-router
host: interconnect-eu.amq.svc.cluster.local
port: 5674
saslMechanisms: ANONYMOUS
}
address { 6
prefix: multicast
distribution: multicast
}
log { 7
module: DEFAULT
enable: debug+
timestamp: yes
}
1 - Basic router configuration. By default, a router operates in standalone mode; this example indicates that the router is part of a router network, operating in interior mode.
2 - Listener for inbound AMQP client connections.
3 - Listener for inbound AMQ Console connections over HTTP.
4 - Listener for connections from other router entities.
5 - Connector specifying another router entity that this router should attempt to connect to.
6 - Address configuration indicating that any address beginning with the prefix multicast (such as a queue whose name starts with multicast) should utilize the multicast routing pattern in delivery. Other options include closest and balanced.
7 - Log module configuration: specifies that all Interconnect modules should log messages at debug level and above and include a timestamp. Note that individual module configuration is possible.
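For reference, the peer router named by the connector above (interconnect-eu in the OpenShift example) could use a minimal interior configuration of its own, such as the following sketch; the id value and the omission of a connector are assumptions, since either side of an inter-router link may initiate the connection:
router {
mode: interior
id: EU
}
listener {
role: normal
host: 0.0.0.0
port: amqp
authenticatePeer: no
saslMechanisms: ANONYMOUS
}
listener {
role: inter-router
host: 0.0.0.0
port: 5674
authenticatePeer: no
saslMechanisms: ANONYMOUS
}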
Further information on Interconnect router configuration can be found in the Using AMQ Interconnect for Red Hat JBoss AMQ 7 Guide.
4.3.5.3. Execution
Once the router network has been configured as desired, start each router with the following command:
$ systemctl start qdrouterd.service
4.3.5.4. Log Monitoring
Router status and log output can be viewed with the following command:
$ qdstat --log
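Other useful qdstat queries include the routers known to the network, open connections, and configured addresses (output depends on your topology):
$ qdstat -n
$ qdstat -c
$ qdstat -a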
