HornetQ JMS Servers not clustering properly
Issue
- server1 (101.XX.XX.10) is the hostname for node 1 and server2 (101.XX.XX.11) is the hostname for node 2. The JBoss EAP 6 servers are set up as a domain. I enabled JMX for the HornetQ subsystem and took a screenshot of the relevant attribute to show the error we see in JMS. Node 1 thinks server1:101.XX.XX.9:5445 is Node 2, but that address is actually itself, so messages flow from Node 2 to Node 1 but not from Node 1 to Node 2.
- The key point is that setting the interface for the messaging socket binding to global breaks JMS clustering; when we use public, the cluster works fine. The reason we are required to use global is that our load balancer probes 101.XX.XX.10:5445 to check whether the service is UP/DOWN on server1, and probes 101.XX.XX.11:5445 to check whether the service is UP/DOWN on server2. The client machines send their requests to 101.XX.XX.9:5445 (the load-balanced virtual IP), and the load balancer redirects each request to either 101.XX.XX.10 or 101.XX.XX.11 in a round-robin fashion. server1 and server2 therefore both have a Microsoft Loopback Adapter configured with 101.XX.XX.9 so they can accept the forwarded requests. Since JMS is accessed on port 5445 on two IP addresses (the physical NIC and the virtual MS Loopback adapter), it needs to bind to global (the any address); the two interface definitions are sketched after the configuration snippet below.
- I believe that binding to the any address (0.0.0.0) results in the wrong information being sent in the multicast from Node 1. If Node 1 is brought offline and then back online, it correctly discovers Node 2 and the JMS cluster is fine. (The messaging subsystem elements involved in the broadcast are also sketched below.)
<socket-binding name="messaging" interface="global" port="5445"/>
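
For context, this is roughly how the two interface definitions and the messaging socket bindings could look for this setup. The element names follow the standard EAP 6 configuration, but the addresses shown are assumptions based on the description above, not the actual domain configuration.

<!-- Sketch of the interface definitions (domain.xml/host.xml); addresses are examples for server1 -->
<interfaces>
    <!-- "public" binds only to the physical NIC address of the node -->
    <interface name="public">
        <inet-address value="${jboss.bind.address:101.XX.XX.10}"/>
    </interface>
    <!-- "global" binds to the any address (0.0.0.0), which also covers the MS Loopback VIP 101.XX.XX.9 -->
    <interface name="global">
        <any-address/>
    </interface>
</interfaces>

<!-- Inside the socket-binding-group: with interface="global" the messaging acceptor listens on
     0.0.0.0:5445; with interface="public" it listens only on the NIC address -->
<socket-binding name="messaging" interface="global" port="5445"/>
<!-- The multicast binding used for cluster broadcast/discovery (default values shown) -->
<socket-binding name="messaging-group" port="0" multicast-address="${jboss.messaging.group.address:231.7.7.7}" multicast-port="${jboss.messaging.group.port:9876}"/>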
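
The reason the binding matters for what gets broadcast can be seen in the messaging subsystem: the cluster connection broadcasts the connector named in its connector-ref, and that connector's advertised host and port are derived from the socket binding it references. The sketch below follows the shape of the default EAP 6 full-ha profile; names such as netty, bg-group1, dg-group1 and my-cluster are the usual defaults and are assumed here, and the exact subsystem namespace version may differ between EAP 6.x releases.

<subsystem xmlns="urn:jboss:domain:messaging:1.4">
    <hornetq-server>
        <!-- The connector whose host/port come from the "messaging" socket binding;
             this is the information the other nodes receive over multicast -->
        <connectors>
            <netty-connector name="netty" socket-binding="messaging"/>
        </connectors>
        <acceptors>
            <netty-acceptor name="netty" socket-binding="messaging"/>
        </acceptors>
        <!-- The broadcast group periodically multicasts the connector above to the cluster -->
        <broadcast-groups>
            <broadcast-group name="bg-group1">
                <socket-binding>messaging-group</socket-binding>
                <broadcast-period>5000</broadcast-period>
                <connector-ref>netty</connector-ref>
            </broadcast-group>
        </broadcast-groups>
        <discovery-groups>
            <discovery-group name="dg-group1">
                <socket-binding>messaging-group</socket-binding>
                <refresh-timeout>10000</refresh-timeout>
            </discovery-group>
        </discovery-groups>
        <!-- The cluster connection ties discovery and the broadcast connector together -->
        <cluster-connections>
            <cluster-connection name="my-cluster">
                <address>jms</address>
                <connector-ref>netty</connector-ref>
                <discovery-group-ref discovery-group-name="dg-group1"/>
            </cluster-connection>
        </cluster-connections>
    </hornetq-server>
</subsystem>

If the messaging binding resolves to the any address, the host actually advertised for the connector may not be the one the remote node can or should use, which would match the behaviour described above.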
Environment
- Red Hat JBoss Enterprise Application Platform (EAP) 6.2.0
