Chapter 8. Configuring the Messaging Transports

This section describes the concepts critical to understanding JBoss EAP messaging transports, specifically connectors and acceptors. Acceptors are used on the server to define how it can accept connections, while connectors are used by the client to define how it connects to a server. Each concept is discussed in turn and then a practical example shows how clients can make connections to a JBoss EAP messaging server, using JNDI or the Core API.

8.1. Acceptor and Connector Types

There are three main types of acceptor and connector defined in the configuration of JBoss EAP.

in-vm: In-vm is short for Intra Virtual Machine. Use this connector type when both the client and the server are running in the same JVM, for example, Message Driven Beans (MDBs) running in the same instance of JBoss EAP.

http: Used when client and server are running in different JVMs. Uses the undertow subsystem’s default port of 8080 and is thus able to multiplex messaging communications over HTTP. Red Hat recommends using the http connector when the client and server are running in different JVMs due to considerations such as port management, especially in a cloud environment.

remote: Remote transports are Netty-based components used for native TCP communication when the client and server are running in different JVMs. Use remote as an alternative when http cannot be used.

A client must use a connector that is compatible with one of the server’s acceptors. For example, only an in-vm-connector can connect to an in-vm-acceptor, only an http-connector can connect to an http-acceptor, and so on.

You can have the management CLI list the attributes for a given acceptor or connector type using the read-children-resources operation. For example, to see the attributes of all the http-connectors for the default messaging server you would enter:

/subsystem=messaging-activemq/server=default:read-children-resources(child-type=http-connector)

The attributes of all the http-acceptors are read using a similar command:

/subsystem=messaging-activemq/server=default:read-children-resources(child-type=http-acceptor)

The other acceptor and connector types follow the same syntax. Just provide child-type with the acceptor or connector type, for example, remote-connector or in-vm-acceptor.

8.2. Acceptors

An acceptor defines which types of connection are accepted by the JBoss EAP integrated messaging server. You can define any number of acceptors per server. The sample configuration below is modified from the default full-ha configuration profile and provides an example of each acceptor type.

<subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0">
  <server name="default">
    <http-acceptor name="http-acceptor" http-listener="default"/>
    <remote-acceptor name="legacy-messaging-acceptor" socket-binding="legacy-messaging"/>
    <in-vm-acceptor name="in-vm" server-id="0"/>

In the above configuration, the http-acceptor is using Undertow’s default http-listener which listens on JBoss EAP’s default http port, 8080. The http-listener is defined in the undertow subsystem:

<subsystem xmlns="urn:jboss:domain:undertow:7.0">
  <server name="default-server">
    <http-listener name="default" redirect-socket="https" socket-binding="http"/>

Also note how the remote-acceptor above uses the socket-binding named legacy-messaging, which is defined later in the configuration as part of the server’s default socket-binding-group.

<server xmlns="urn:jboss:domain:8.0">
  <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
      <socket-binding name="legacy-messaging" port="5445"/>

In this example, the legacy-messaging socket-binding binds JBoss EAP to port 5445, and the remote-acceptor above claims the port on behalf of the messaging-activemq subsystem for use by legacy clients.

Lastly, the in-vm-acceptor uses a unique value for the server-id attribute so that this server instance can be distinguished from other servers that might be running in the same JVM.

8.3. Connectors

A connector defines how to connect to an integrated JBoss EAP messaging server, and is used by a client to make connections.

You might wonder why connectors are defined on the server when they are actually used by the client. The reasons for this include:

  • In some instances, the server might act as a client when it connects to another server. For example, one server might act as a bridge to another, or it might want to participate in a cluster. In such cases, the server needs to know how to connect to other servers, and that is defined by connectors.
  • A server can provide connectors using a ConnectionFactory that is looked up by clients using JNDI, so creating connections to the server is simpler.

You can define any number of connectors per server. The sample configuration below is based on the full-ha configuration profile and includes connectors of each type.

<subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0">
  <server name="default">
    <http-connector name="http-connector" endpoint="http-acceptor" socket-binding="http" server-name="messaging-server-1"/>
    <remote-connector name="legacy-remoting-connector" socket-binding="legacy-remoting"/>
    <in-vm-connector name="in-vm" server-id="0"/>

Like the http-acceptor from the full-ha profile, the http-connector uses the default http-listener defined by the undertow subsystem. The endpoint attribute declares which http-acceptor to connect to. In this case, the connector will connect to the default http-acceptor.

JBoss EAP 7.1 introduced a new server-name attribute for the http-connector. This new attribute is optional, but it is required to be able to connect to the correct http-acceptor on a remote server that is running more than one ActiveMQ Artemis instance. If this attribute is not defined, the value is resolved at runtime to be the name of the parent ActiveMQ Artemis server in which the connector is defined.
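
If needed, the attribute can also be set explicitly from the management CLI. The command below is a sketch; it assumes a connector named http-connector and a target ActiveMQ Artemis instance named messaging-server-1, as in the sample configuration above.

```
/subsystem=messaging-activemq/server=default/http-connector=http-connector:write-attribute(name=server-name, value=messaging-server-1)
```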

Also, note that the remote-connector references the same socket-binding as its remote-acceptor counterpart. Lastly, the in-vm-connector uses the same value for server-id as the in-vm-acceptor since they both run inside the same server instance.


If the bind address for the public interface is set to 0.0.0.0, you will see the following warning in the log when you start the JBoss EAP server:

AMQ121005: Invalid "host" value "0.0.0.0" detected for "connector" connector. Switching to <HOST_NAME>. If this new address is incorrect please manually configure the connector to use the proper one.

This is because a remote connector cannot connect to a server using the 0.0.0.0 address, so the messaging-activemq subsystem tries to replace it with the server’s host name. The administrator should configure the remote connector to use a different interface address for the socket binding.

8.4. Configuring Acceptors and Connectors

There are a number of configuration options for connectors and acceptors. They appear in the configuration as child <param> elements. Each <param> element includes a name and value attribute pair that is understood and used by the default Netty-based factory class responsible for instantiating a connector or acceptor.

In the management CLI, each remote connector or acceptor element includes an internal map of the parameter name and value pairs. For example, to add a new param to a remote-connector named myRemote use the following command:

/subsystem=messaging-activemq/server=default/remote-connector=myRemote:map-put(name=params, key=foo, value=bar)

Retrieve parameter values using a similar syntax:

/subsystem=messaging-activemq/server=default/remote-connector=myRemote:map-get(name=params, key=foo)
{
    "outcome" => "success",
    "result" => "bar"
}

You can also include parameters when you create an acceptor or connector, as in the example below.

/subsystem=messaging-activemq/server=default/remote-connector=myRemote:add(socket-binding=mysocket, params={foo=bar, foo2=bar2})

Table 8.1. Transport Configuration Properties

batch-delay: Before writing packets to the transport, the messaging server can be configured to batch up writes for a maximum of batch-delay milliseconds. This increases the overall throughput for very small messages at the cost of increased average latency for message transfer. The default is 0.

direct-deliver: When a message arrives on the server and is delivered to waiting consumers, by default, the delivery is done on the same thread on which the message arrived. This gives good latency in environments with relatively small messages and a small number of consumers, but at the cost of overall throughput and scalability. For the highest throughput you can set this property to false. The default is true.

http-upgrade-enabled: Used by an http-connector to specify that it is using HTTP upgrade and is therefore multiplexing messaging traffic over HTTP. This property is set automatically by JBoss EAP to true when the http-connector is created and does not require an administrator to configure it.

http-upgrade-endpoint: Specifies the http-acceptor on the server side to which the http-connector will connect. The connection will be multiplexed over HTTP and needs this information to find the relevant http-acceptor after the HTTP upgrade. This property is set automatically by JBoss EAP when the http-connector is created and does not require an administrator to configure it.

local-address: For an http or a remote connector, this is used to specify the local address that the client will use when connecting to the remote address. If a local address is not specified, the connector will use any available local address.

local-port: For an http or a remote connector, this is used to specify which local port the client will use when connecting to the remote address. If the default value (0) is used, the connector lets the system pick an ephemeral port. Valid port values are 0 to 65535.

nio-remoting-threads: If configured to use NIO, the messaging server will by default use a number of threads equal to three times the number of cores (or hyper-threads) as reported by Runtime.getRuntime().availableProcessors() for processing incoming packets. To override this value, you can set a custom number of threads. The default is -1.

tcp-no-delay: If this is true, Nagle’s algorithm is disabled. Nagle’s algorithm improves the efficiency of TCP/IP networks by reducing the number of packets sent over a network, at the cost of added latency. The default is true.

tcp-send-buffer-size: This parameter determines the size of the TCP send buffer in bytes. The default is 32768.

tcp-receive-buffer-size: This parameter determines the size of the TCP receive buffer in bytes. The default is 32768.

use-nio-global-worker-pool: This parameter ensures all JMS connections share a single pool of Java threads, rather than each connection having its own pool. This serves to avoid exhausting the maximum number of processes on the operating system. The default is true.

8.5. Connecting to a Server

To connect a client to a server, the client needs a proper connector. There are two ways to get one. You can use a ConnectionFactory that is configured on the server and obtained via a JNDI lookup. Alternatively, you can use the ActiveMQ Artemis Core API and configure the whole ConnectionFactory on the client side.

8.5.1. JMS Connection Factories

Clients can use JNDI to look up ConnectionFactory objects which provide connections to the server. Connection Factories can expose each of the three types of connector:

A connection-factory referencing a remote-connector can be used by a remote client to send messages to or receive messages from the server (assuming the connection-factory has an appropriately exported entry). A remote-connector is associated with a socket-binding that tells the client using the connection-factory where to connect.

A connection-factory referencing an in-vm-connector is suitable to be used by a local client to either send messages to or receive messages from a local server. An in-vm-connector is associated with a server-id which tells the client using the connection-factory where to connect, since multiple messaging servers can run in a single JVM.

A connection-factory referencing an http-connector is suitable to be used by a remote client to send messages to or receive messages from the server by connecting to its HTTP port before upgrading to the messaging protocol. An http-connector is associated with the socket-binding that represents the HTTP socket, which by default is named http.

Since JMS 2.0, a default JMS connection factory is accessible to Java EE applications under the JNDI name java:comp/DefaultJMSConnectionFactory. The messaging-activemq subsystem defines a pooled-connection-factory that is used to provide this default connection factory.
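
As an illustration, a Java EE component can inject this default connection factory directly. The sketch below assumes the JMS 2.0 (javax.jms) APIs; the queue binding java:/jms/queue/ExampleQueue is hypothetical and should be replaced with a destination defined on your server.

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

@Stateless
public class DefaultFactorySender {

    // Resolves to the pooled-connection-factory exposed by the messaging-activemq subsystem
    @Resource(lookup = "java:comp/DefaultJMSConnectionFactory")
    private ConnectionFactory connectionFactory;

    // Hypothetical queue; replace with a destination configured on your server
    @Resource(lookup = "java:/jms/queue/ExampleQueue")
    private Queue queue;

    public void send(String text) {
        // JMSContext (JMS 2.0) combines a connection and a session in one object
        try (JMSContext context = connectionFactory.createContext()) {
            context.createProducer().send(queue, text);
        }
    }
}
```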

Below are the default connectors and connection factories that are included in the full configuration profile for JBoss EAP:

<subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0">
  <server name="default">
    <http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor" />
    <http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput">
      <param name="batch-delay" value="50"/>
    <in-vm-connector name="in-vm" server-id="0"/>
    <connection-factory name="InVmConnectionFactory" connectors="in-vm" entries="java:/ConnectionFactory" />
    <pooled-connection-factory name="activemq-ra" transaction="xa" connectors="in-vm" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory"/>

The entries attribute of a factory specifies the JNDI names under which the factory will be exposed. Only JNDI names bound in the java:jboss/exported namespace are available to remote clients. If a connection-factory has an entry bound in the java:jboss/exported namespace, a remote client would look up the connection-factory using the text after java:jboss/exported. For example, the RemoteConnectionFactory is bound by default to java:jboss/exported/jms/RemoteConnectionFactory, which means a remote client would look up this connection-factory using jms/RemoteConnectionFactory. A pooled-connection-factory should not have any entry bound in the java:jboss/exported namespace because a pooled-connection-factory is not suitable for remote clients.

8.5.2. Connecting to the Server Using JNDI

If the client resides within the same JVM as the server, it can use the in-vm connector provided by the InVmConnectionFactory. Here is how the InVmConnectionFactory is typically configured, as found for example in standalone-full.xml.

<connection-factory
  name="InVmConnectionFactory"
  entries="java:/ConnectionFactory"
  connectors="in-vm" />

Note the value of the entries attribute. Clients using the InVmConnectionFactory should drop the leading java:/ during lookup, as in the following example:

InitialContext ctx = new InitialContext();
ConnectionFactory cf = (ConnectionFactory)ctx.lookup("ConnectionFactory");
Connection connection = cf.createConnection();

Remote clients use the RemoteConnectionFactory, which is usually configured as below:

<connection-factory
  name="RemoteConnectionFactory"
  entries="java:jboss/exported/jms/RemoteConnectionFactory"
  connectors="http-connector"/>

Remote clients should ignore the leading java:jboss/exported/ of the value for entries, following the example of the code snippet below:

final Properties env = new Properties();
env.put(Context.INITIAL_CONTEXT_FACTORY, "org.wildfly.naming.client.WildFlyInitialContextFactory");
env.put(Context.PROVIDER_URL, "http-remoting://remotehost:8080");
InitialContext remotingCtx = new InitialContext(env);
ConnectionFactory cf = (ConnectionFactory) remotingCtx.lookup("jms/RemoteConnectionFactory");

Note the value for the PROVIDER_URL property and how the client is using the JBoss EAP http-remoting protocol. Note also how the client is using org.wildfly.naming.client.WildFlyInitialContextFactory, which implies the client has this class and its encompassing client JAR somewhere in the classpath. For Maven projects, this can be achieved by including the following dependency:

<dependency>
  <groupId>org.wildfly</groupId>
  <artifactId>wildfly-jms-client-bom</artifactId>
  <type>pom</type>
</dependency>

8.5.3. Connecting to the Server Using the Core API

You can use the Core API to make client connections without needing a JNDI lookup. Clients using the Core API require a client JAR in their classpath, just as JNDI-based clients do.


Clients use ServerLocator instances to create ClientSessionFactory instances. As their name implies, ServerLocator instances are used to locate servers and create connections to them.

In JMS terms think of a ServerLocator in the same way you would a JMS Connection Factory.

ServerLocator instances are created using the ActiveMQClient factory class.

ServerLocator locator = ActiveMQClient.createServerLocatorWithoutHA(new TransportConfiguration(InVMConnectorFactory.class.getName()));

Clients use a ClientSessionFactory to create ClientSession instances, which are basically connections to a server. In JMS terms think of them as JMS connections.

ClientSessionFactory instances are created using the ServerLocator class.

ClientSessionFactory factory = locator.createSessionFactory();

A client uses a ClientSession for consuming and producing messages and for grouping them in transactions. ClientSession instances can support both transactional and non transactional semantics and also provide an XAResource interface so messaging operations can be performed as part of a JTA transaction.

ClientSession instances group ClientConsumers and ClientProducers.

ClientSession session = factory.createSession();
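
For instance, a transacted (non-XA) session can be created by disabling auto-commit for sends and acknowledgements. A minimal sketch, reusing the factory from above; the address name example-tx is hypothetical:

```java
// xa=false, autoCommitSends=false, autoCommitAcks=false:
// sends and acknowledgements are buffered until commit() is called
ClientSession txSession = factory.createSession(false, false, false);
ClientProducer txProducer = txSession.createProducer("example-tx");
txProducer.send(txSession.createMessage(true));
txSession.commit();   // makes the sent message visible to consumers
txSession.close();
```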

The simple example below highlights some of what was just discussed:

ServerLocator locator = ActiveMQClient.createServerLocatorWithoutHA(
  new TransportConfiguration( InVMConnectorFactory.class.getName()));

// In this simple example, we just use one session for both
// producing and consuming
ClientSessionFactory factory = locator.createSessionFactory();
ClientSession session = factory.createSession();

// A producer is associated with an address ...
ClientProducer producer = session.createProducer("example");
ClientMessage message = session.createMessage(true);
message.getBodyBuffer().writeString("Hello");

// We need a queue attached to the address ...
session.createQueue("example", "example", true);

// And a consumer attached to the queue ...
ClientConsumer consumer = session.createConsumer("example");

// Once we have a queue, we can send the message ...
producer.send(message);

// We need to start the session before we can -receive- messages ...
session.start();
ClientMessage msgReceived = consumer.receive();

System.out.println("message = " + msgReceived.getBodyBuffer().readString());

session.close();


8.6. Using Messaging Behind a Load Balancer

To use messaging behind a load balancer, you must use connection-level load balancing. You can achieve this in JBoss EAP by configuring a static Undertow HTTP load balancer. The steps to accomplish this are similar to those documented in the Configure Undertow as a Static Load Balancer section of the JBoss EAP Configuration Guide, with the following differences:

Configure the HTTP Load Balancer

The following example shows how to configure a JBoss EAP instance as a static HTTP load balancer that distributes connections between two additional back-end servers. The load balancer will reverse-proxy to the back-end servers and will use the HTTP protocol.

  1. Add a reverse proxy handler.

    /subsystem=undertow/configuration=handler/reverse-proxy=my-handler:add
  2. Define the outbound socket bindings for each remote host, replacing <BACK_END_HOST1> and <BACK_END_HOST2> with the addresses of your back-end servers.

    /socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-host1:add(host=<BACK_END_HOST1>, port=8080)
    /socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-host2:add(host=<BACK_END_HOST2>, port=8080)
  3. Add each remote host to the reverse proxy handler.

    /subsystem=undertow/configuration=handler/reverse-proxy=my-handler/host=host1:add(outbound-socket-binding=remote-host1, scheme=http, instance-id=myroute, path=/)
    /subsystem=undertow/configuration=handler/reverse-proxy=my-handler/host=host2:add(outbound-socket-binding=remote-host2, scheme=http, instance-id=myroute, path=/)
  4. Add the reverse proxy location.

    /subsystem=undertow/server=default-server/host=default-host/location=\/:add(handler=my-handler)

Disable the Messaging Cluster for HTTP Load Balancing

Disable the messaging cluster by removing the cluster connection. In the default full-ha profile, the cluster connection is named my-cluster.

/subsystem=messaging-activemq/server=default/cluster-connection=my-cluster:remove

Configure the Back-end Messaging Workers

You must configure back-end messaging workers only if you plan to do JNDI lookups behind the load balancer.

  1. Create a new outbound socket binding that points to the load-balancing server, replacing <LOAD_BALANCER_HOST> with its address.

    /socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=balancer-binding:add(host=<LOAD_BALANCER_HOST>, port=8080)
  2. Create an HTTP connector that references the load-balancing server socket binding.

    /subsystem=messaging-activemq/server=default/http-connector=balancer-connector:add(socket-binding=balancer-binding, endpoint=http-acceptor)
  3. Add the HTTP connector to the connection factory used by the client.

    /subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:list-add(name=connectors, value=balancer-connector)

Limitations of the HTTP Load Balancer for Messaging

No Support for Messaging Clusters Behind the Load Balancer

The only supported configuration is one load balancer and one back-end worker server. A messaging cluster behind a load balancer is not supported.

When a client connects to a server in a cluster, it receives the topology of the cluster. This means that clients are able to connect directly to worker servers without going through the load balancer. In the situation where back-end servers are not visible, for example, if they are on a private network behind the load balancer, the client connections will fail.

Note that without a cluster there is no message redistribution, which means messaging does not work as expected unless there is only one server behind the load balancer.

Lack of Support for Other Load Balancers

The only supported load balancer is Undertow configured as a static HTTP load balancer. High availability is also not supported in this topology.