JMS clustering over TCP

Latest response

In our existing configuration we are unable to switch JMS clustering from UDP to TCP.

In the existing messaging-activemq subsystem we have the configuration below, which works over UDP (multicast):

<subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
                <server name="default-server">
                    <security enabled="false"/>
                    <journal type="NIO"/>
                    <shared-store-master/>
                    <security-setting name="#">
                        <role name="guest" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/>
                    </security-setting>
                    <address-setting name="xng/jms/queue/oss-core-nd/execution-queue" max-delivery-attempts="4" dead-letter-address="xng/jms/queue/oss-core-nd/execution-errors-queue"/>
                    <address-setting name="#" redistribution-delay="1000" message-counter-history-day-limit="10" address-full-policy="BLOCK" max-size-bytes="10485760" expiry-address="jms.queue.ExpiryQueue" dead-letter-address="jms.queue.DLQ"/>
                    <address-setting name="xng/jms/queue/oss-core-nm/number-release-queue" max-delivery-attempts="4" dead-letter-address="xng/jms/queue/oss-core-nm/error-queue"/>
                    <address-setting name="xng/jms/queue/oss-core-nm/threshold-queue" max-delivery-attempts="4" dead-letter-address="xng/jms/queue/oss-core-nm/error-queue"/>
                    <address-setting name="xng/jms/queue/oss-core-nm/vanity-queue" max-delivery-attempts="4" dead-letter-address="xng/jms/queue/oss-core-nm/error-queue"/>
                    <address-setting name="xng/jms/queue/oss-core-nm/number-range-queue" max-delivery-attempts="4" dead-letter-address="xng/jms/queue/oss-core-nm/error-queue"/>
                    <address-setting name="xng/jms/queue/oss-core-nm/number-summary-queue" max-delivery-attempts="4" dead-letter-address="xng/jms/queue/oss-core-nm/error-queue"/>
                    <address-setting name="xng/jms/queue/oss-core-nm/number-return-queue" max-delivery-attempts="4" dead-letter-address="xng/jms/queue/oss-core-nm/error-queue"/>
                    <address-setting name="xng/jms/queue/oss-core-nm/number-cancel-queue" max-delivery-attempts="4" dead-letter-address="xng/jms/queue/oss-core-nm/error-queue"/>
                    <http-connector name="http-connector" endpoint="http-acceptor" socket-binding="https">
                        <param name="http-upgrade-endpoint" value="http-acceptor"/>
                        <param name="ssl-enabled" value="true"/>
                    </http-connector>
                    <http-connector name="http-connector-throughput" endpoint="http-acceptor-throughput" socket-binding="https">
                        <param name="http-upgrade-endpoint" value="http-acceptor-throughput"/>
                        <param name="batch-delay" value="50"/>
                        <param name="ssl-enabled" value="true"/>
                    </http-connector>
                    <remote-connector name="netty" socket-binding="messaging"/>
                    <remote-connector name="netty-throughput" socket-binding="messaging-throughput">
                        <param name="batch-delay" value="50"/>
                    </remote-connector>
                    <in-vm-connector name="in-vm" server-id="0"/>
                    <connector name="netty-remote" factory-class="org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory">
                        <param name="host" value="host-1"/>
                        <param name="port" value="5445"/>
                    </connector>
                    <http-acceptor name="http-acceptor" http-listener="https"/>
                    <http-acceptor name="http-acceptor-throughput" http-listener="https">
                        <param name="batch-delay" value="50"/>
                        <param name="direct-deliver" value="false"/>
                    </http-acceptor>
                    <remote-acceptor name="netty" socket-binding="messaging"/>
                    <remote-acceptor name="netty-throughput" socket-binding="messaging-throughput">
                        <param name="batch-delay" value="50"/>
                        <param name="direct-deliver" value="false"/>
                    </remote-acceptor>
                    <in-vm-acceptor name="in-vm" server-id="0"/>
                    <broadcast-group name="bg-group1" connectors="netty" broadcast-period="5000" socket-binding="messaging-group"/>
                    <discovery-group name="dg-group1" refresh-timeout="10000" socket-binding="messaging-group"/>
                    <cluster-connection name="my-cluster" discovery-group="dg-group1" connector-name="netty" address="jms"/>
                </server>
            </subsystem>

Here the cluster configuration uses the discovery group and the netty connector, both of which rely on the messaging-group socket binding (multicast).
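
For context, the messaging-group socket binding that bg-group1 and dg-group1 point at is a multicast (UDP) binding. A typical definition, shown here with the EAP default values rather than the exact values from our domain.xml, looks like this:

<!-- sketch using EAP default values; the actual binding in the
     attached domain.xml may differ -->
<socket-binding name="messaging-group" interface="private"
        multicast-address="${jboss.messaging.group.address:231.7.7.7}"
        multicast-port="${jboss.messaging.group.port:9876}"/>

As long as the broadcast group and discovery group reference this binding, the cluster members find each other over UDP multicast regardless of which connectors are defined.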
Following the solution at https://access.redhat.com/solutions/2858391 , I modified the configuration as follows, but the clustering still relies on UDP.

  • Created a jms-remote connector:
 <http-connector name="jms-remote-connector" endpoint="http-acceptor" socket-binding="jms-remote"/>
  • Commented out the existing cluster connection and discovery group as per the guide, and added the new cluster configuration:
<cluster-connection name="my-cluster" static-connectors="jms-remote-connector" connector-name="http-connector" address="jms"/>
  • Added an outbound socket binding pointing at the remote node of the cluster:
<outbound-socket-binding name="jms-remote">
    <remote-destination host="slave-node" port="8080"/>
</outbound-socket-binding>
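
Taken together (and assuming slave-node resolves to the peer and 8080 is its HTTP port, as in the snippet above), the intended TCP-only cluster section would look roughly like this:

<!-- sketch: TCP static cluster; broadcast-group and discovery-group
     are removed entirely, not just left unreferenced -->
<http-connector name="jms-remote-connector" endpoint="http-acceptor" socket-binding="jms-remote"/>
<cluster-connection name="my-cluster" address="jms" connector-name="http-connector" static-connectors="jms-remote-connector"/>

<!-- in the socket-binding-group used by this server group -->
<outbound-socket-binding name="jms-remote">
    <remote-destination host="slave-node" port="8080"/>
</outbound-socket-binding>

Here connector-name names the local connector that this node advertises to the rest of the cluster, while static-connectors lists the connectors used to reach the other members.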

Performed the corresponding change on the slave node.

However, when publishing to a JMS queue, I do not see the messages being distributed to the other node, which implies this cluster configuration is not working.

Sharing the domain.xml of both nodes for reference; any help will be highly appreciated.
Version: JBoss EAP 7.0.8

Attachments

Responses

When you say that you were unable to switch from UDP to TCP, what happened when you tried?

I did not see any change in the messaging cluster behavior. The server logs show that ActiveMQ is still using the netty connector by default instead of the jms-remote connector.
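
One way to confirm which connector the running cluster-connection has actually picked up is to read it back from the management model with jboss-cli (the host and server names below are placeholders, not taken from the attached domain.xml):

# jboss-cli sketch; replace master/server-one with the actual host and server names
/host=master/server=server-one/subsystem=messaging-activemq/server=default/cluster-connection=my-cluster:read-resource(include-runtime=true)

If the output still shows a discovery-group or a connector-name of netty, then the TCP change has not taken effect on that server, e.g. because the old cluster-connection definition is still the one in force.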