Can you cluster JBoss nodes on OpenShift?
Environment
- OpenShift Online
Issue
- Is it possible to cluster JBoss nodes (gears) with OpenShift?
- Do scaled JBoss nodes (gears) get clustered together?
Resolution
"Cluster" consists of two basic things:
- Proxying / load balancing traffic out to the different members of the cluster and detecting outages.
  - In OpenShift, this is covered by a generic HAProxy that simply balances based on connection count.
- Session replication, such that if one member goes down, the app user's session information is available on a different cluster member.
  - Session replication is handled by the members contacting each other directly, using JGroups to transmit the session contents as they change; the OpenShift cartridge opens a port for each gear and manages an environment variable that tells them where to find each other.
This is done using the OPENSHIFT_JBOSSEAP_CLUSTER environment variable. When this variable is set to a list of the cluster members, each JBoss gear that starts will form the cluster and make itself known to the other members.
<property name="initial_hosts">${env.OPENSHIFT_JBOSSEAP_CLUSTER}</property>
The hooks/set-jboss-cluster script can also be used to aid in this process.
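For context, that property normally sits inside the JGroups subsystem's TCP stack in standalone.xml. The fragment below is only a sketch of such a stack; the namespace version and exact protocol list vary between JBoss AS 7 / EAP releases, so treat the details as illustrative:

<subsystem xmlns="urn:jboss:domain:jgroups:1.1" default-stack="tcp">
    <stack name="tcp">
        <transport type="TCP" socket-binding="jgroups-tcp"/>
        <!-- TCPPING discovers peers from a static member list instead of UDP multicast -->
        <protocol type="TCPPING">
            <!-- Populated by the cartridge with the other gears' proxied host:port pairs -->
            <property name="initial_hosts">${env.OPENSHIFT_JBOSSEAP_CLUSTER}</property>
            <property name="port_range">0</property>
        </protocol>
        <!-- Remaining protocols (FD_SOCK, FD, pbcast.NAKACK, pbcast.GMS, and so on)
             follow the stock tcp stack shipped with the server profile -->
    </stack>
</subsystem>

Because peer discovery is driven entirely by initial_hosts, no multicast support is required from the underlying network.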
Red Hat OpenShift JBoss cartridges can be clustered together. The latest OpenShift Online provides a "JEE Full Profile on JBoss" cartridge that lets you take advantage of JBoss clustering. It uses HAProxy as a load balancer in front of the application and creates multiple replicas of your application; scaling can be automatic or manual, as you choose.
For example:
- Create a scaled JBossAS7 app, which creates 2 gears. Assume the gears are on 2 nodes (node1.com and node2.com). Gear-1 binds to 127.250.0.1 (app) and 127.250.0.2 (haproxy) and Gear-2 binds to 127.250.0.3. The app-level HAProxy binds to 127.250.0.2 and is configured to load-balance HTTP traffic between 127.250.0.1:8080 and 127.250.0.3:8080.
- JBoss needs to communicate over a port (7600) for clustering, so an ephemeral port (37600) is created and each Node-level HAProxy listens on it (node1.com:37600 and node2.com:37600).
- When the Node1 JBoss instance communicates with the Node2 JBoss instance, it sends the traffic to node2.com:37600, which hits the node2 Node-level HAProxy. The node2 Node-level HAProxy then routes the traffic to 127.250.0.3:7600, and vice versa for node2 to node1. The cluster members therefore reach each other through these proxied ports, as illustrated below.
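In this layout, the OPENSHIFT_JBOSSEAP_CLUSTER variable seen by each gear would carry the proxied peer endpoints in the JGroups initial_hosts syntax of host[port] pairs; the value below is purely illustrative:

OPENSHIFT_JBOSSEAP_CLUSTER=node1.com[37600],node2.com[37600]

Each gear still binds its own JGroups transport locally on port 7600; the traffic arriving from other gears reaches it through the proxied port.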
Note that only TCP-based clustering (not UDP multicast) is possible with OpenShift right now.
Attached is a test application, clusteredwebapp-master.zip, that can be used to test session replication.
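For session replication to take effect, the deployed web application must also be marked as distributable. A minimal web.xml sketch is shown below; the attached test application is assumed to contain something equivalent:

<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <!-- Tells the container to replicate HTTP sessions across cluster members -->
    <distributable/>
</web-app>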
For more information also see https://access.redhat.com/site/articles/478923.
Root Cause
- The JBoss EAP cartridge provided with OpenShift has support for clustering. This means that you can create numerous JBoss EAP gears and they will cluster together. If the application is scaled, the entire cluster scales together, as one.
- This clustering is made possible by some handy work in the cartridge. First, each JBoss instance (gear) listens on an internal, locally bound port on the node that hosts it; this is how you would typically access the JBoss instance if you were ssh'd into the gear. In addition, a public port is also exposed, and this port forwards requests to the internal, locally bound port; this is where the gear can receive requests from other gears.
- You can see what these ports are by reviewing the Publisher and Subscriber, or the Endpoint, sections of the cartridge.
- When a JBoss node is added, each of the nodes in the cluster runs a script to determine which other gears are part of the scaled application. This script is denoted by the set-jboss-cluster subscriber lines in the cartridge's configuration, and it determines the hostname and port of each node in the cluster.
- The script then presents this information as an instance environment variable. As JBoss starts up, standalone.conf reads the environment variables presented to the gear and creates Java properties from them that JBoss can use. When each of the nodes starts, they are able to talk to one another using the variable referenced in standalone.xml, as sketched below.
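As a rough sketch of how these pieces line up in a gear's configuration (values illustrative; the real cartridge templates differ in detail), the JGroups transport binds to the gear's internal address on the standard port, while peer discovery uses the proxied endpoints taken from the environment variable:

<!-- Local side: the gear's JBoss instance binds JGroups to its internal address/port -->
<socket-binding name="jgroups-tcp" port="7600"/>

<!-- Discovery side: the proxied public host:port pairs come from the environment variable -->
<protocol type="TCPPING">
    <property name="initial_hosts">${env.OPENSHIFT_JBOSSEAP_CLUSTER}</property>
</protocol>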
Attachments
- clusteredwebapp-master.zip
