18.2. Configuration of Transports

18.2.1. About Acceptors and Connectors

HornetQ uses the concept of connectors and acceptors as a key part of the messaging system.

Acceptors and Connectors

Acceptor
An acceptor defines which types of connections are accepted by the HornetQ server.
Connector
A connector defines how to connect to a HornetQ server, and is used by the HornetQ client.
There are two types of connectors and acceptors, distinguished by whether the matched connector and acceptor pair occurs within the same JVM or not.

Invm and Netty

Invm
Invm is short for Intra Virtual Machine. It can be used when both the client and the server are running in the same JVM.
Netty
Netty is the name of a JBoss networking project. The Netty transport must be used when the client and server are running in different JVMs.
A HornetQ client must use a connector that is compatible with one of the server's acceptors. Only an Invm connector can connect to an Invm acceptor, and only a Netty connector can connect to a Netty acceptor. Connectors and acceptors are both configured on the server in the standalone.xml or domain.xml configuration file. You can use either the Management Console or the Management CLI to define them.
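For orientation, the following minimal sketch shows where a matched Invm connector and acceptor pair is defined inside the messaging subsystem of the server configuration. The subsystem namespace version is illustrative and may differ between EAP 6 releases.

<subsystem xmlns="urn:jboss:domain:messaging:1.4">
  <hornetq-server>
    <connectors>
      <!-- Invm connector: usable only by clients running in the same JVM as the server -->
      <in-vm-connector name="in-vm" server-id="0"/>
    </connectors>
    <acceptors>
      <!-- Matching Invm acceptor: accepts connections from in-vm connectors with the same server-id -->
      <in-vm-acceptor name="in-vm" server-id="0"/>
    </acceptors>
  </hornetq-server>
</subsystem>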

18.2.2. Configuring Netty TCP

Netty TCP is a simple unencrypted transport based on TCP sockets. Netty TCP can be configured to use either blocking Java IO or non-blocking Java NIO. Java NIO is recommended on the server side for better scalability with many concurrent connections. If the number of concurrent connections is small, the older blocking Java IO can give better latency than NIO.
Because it is unencrypted, Netty TCP is not recommended for connections across an untrusted network. With the Netty TCP transport, all connections are initiated from the client side.

Example 18.1. Example of Netty TCP Configuration from Default EAP Configuration

<connectors>
  <netty-connector name="netty" socket-binding="messaging"/>
  <netty-connector name="netty-throughput" socket-binding="messaging-throughput">
    <param key="batch-delay" value="50"/>
  </netty-connector>
  <in-vm-connector name="in-vm" server-id="0"/>
</connectors>
<acceptors>
  <netty-acceptor name="netty" socket-binding="messaging"/>
  <netty-acceptor name="netty-throughput" socket-binding="messaging-throughput">
    <param key="batch-delay" value="50"/>
    <param key="direct-deliver" value="false"/>
  </netty-acceptor>
  <in-vm-acceptor name="in-vm" server-id="0"/>
</acceptors>
The example configuration also shows how the JBoss EAP 6 implementation of HornetQ uses socket bindings in the acceptor and connector configuration. This differs from the standalone version of HornetQ, which requires you to declare the specific hosts and ports.
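For reference, the socket bindings referenced by the connectors and acceptors above are defined in the server's socket binding group. The sketch below reflects the usual EAP 6 defaults (ports 5445 and 5455); the attributes of the socket-binding-group element are abbreviated and may differ in your configuration.

<socket-binding-group name="standard-sockets" default-interface="public">
  <!-- Referenced by the "netty" connector and acceptor -->
  <socket-binding name="messaging" port="5445"/>
  <!-- Referenced by the "netty-throughput" connector and acceptor -->
  <socket-binding name="messaging-throughput" port="5455"/>
</socket-binding-group>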
The following table describes Netty TCP configuration properties:

Table 18.1. Netty TCP Configuration Properties

Property Default Description
batch-delay 0 milliseconds Before writing packets to the transport, HornetQ can be configured to batch up writes for a maximum of batch-delay milliseconds. This increases overall throughput for very small messages at the cost of an increase in average latency for message transfer.
direct-deliver true When a message arrives on the server and is delivered to waiting consumers, by default, the delivery is done on the same thread on which the message arrived. This gives good latency in environments with relatively small messages and a small number of consumers, but at the cost of overall throughput and scalability. For highest throughput, set this property to false.
local-address [local address available] For a netty connector, this is used to specify the local address which the client will use when connecting to the remote address. If a local address is not specified then the connector will use any available local address
local-port 0 For a netty connector, this is used to specify which local port the client will use when connecting to the remote address. If the default of 0 is used, the connector lets the system pick an ephemeral port. Valid ports are 0 to 65535.
nio-remoting-threads -1 If configured to use NIO, HornetQ will, by default, use a number of threads equal to three times the number of cores (or hyper-threads) as reported by Runtime.getRuntime().availableProcessors() for processing incoming packets. To override this value, you can set a custom value for the number of threads
tcp-no-delay true If this is true then Nagle's algorithm is disabled. Nagle's algorithm improves the efficiency of TCP/IP networks by batching small packets into fewer, larger ones, at the cost of added latency; disabling it favors low latency for small messages.
tcp-send-buffer-size 32768 bytes This parameter determines the size of the TCP send buffer in bytes
tcp-receive-buffer-size 32768 bytes This parameter determines the size of the TCP receive buffer in bytes
use-nio false If this is true then non-blocking Java NIO will be used. If set to false then blocking Java IO will be used. If you need the server to handle many concurrent connections, use non-blocking Java NIO; otherwise use the older blocking IO.
use-nio-global-worker-pool false This parameter ensures that all JMS connections share a single pool of Java threads, rather than each connection having its own pool. This avoids exhausting the operating system's limit on the number of threads it can create.
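As an illustration only, several of these properties can be combined on a single acceptor. The acceptor name and the values below are arbitrary examples, not recommended settings.

<netty-acceptor name="netty-tuned" socket-binding="messaging">
  <!-- Use non-blocking Java NIO to scale to many concurrent connections -->
  <param key="use-nio" value="true"/>
  <!-- Override the default NIO thread count (3 x available cores) -->
  <param key="nio-remoting-threads" value="6"/>
  <!-- Enlarge the TCP buffers from the 32768-byte default -->
  <param key="tcp-send-buffer-size" value="65536"/>
  <param key="tcp-receive-buffer-size" value="65536"/>
</netty-acceptor>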

Important

If you have use-nio set to true, also set the use-nio-global-worker-pool parameter to true to minimize the risk that a machine creating a large number of connections could lead to an OutOfMemory error.
<netty-connector name="netty" socket-binding="messaging">
   <param key="use-nio" value="true"/>
   <param key="use-nio-global-worker-pool" value="true"/>
</netty-connector>

Note

Netty TCP properties are valid for all types of transport (Netty SSL, Netty HTTP and Netty Servlet).

18.2.3. Configuring Netty Secure Sockets Layer (SSL)

Netty TCP is a simple unencrypted transport based on TCP sockets. Netty SSL is similar to Netty TCP, but provides enhanced security by encrypting TCP connections using the Secure Sockets Layer (SSL).

Warning

Use of the SSLv3 protocol in SSL connectors and acceptors has been disallowed because of the "POODLE" vulnerability. JMS clients attempting to connect with this protocol are refused.
The following example shows Netty configuration for one-way SSL:

Note

Most of the following parameters can be used with acceptors as well as connectors. However, some parameters work only with acceptors. The parameter descriptions explain the differences between using these parameters in connectors and in acceptors.
<acceptors>
  <netty-acceptor name="netty" socket-binding="messaging">
    <param key="ssl-enabled" value="true"/>
    <param key="key-store-password" value="[keystore password]"/>
    <param key="key-store-path" value="[path to keystore file]"/>
  </netty-acceptor>
</acceptors>

Table 18.2. Netty SSL Configuration Properties

Property Name Default Description
ssl-enabled true This enables SSL
key-store-password [keystore password]
When used on an acceptor this is the password for the server side keystore.
When used on a connector this is the password for the client-side keystore. This is only relevant for a connector if you are using two way SSL (mutual authentication). This value can be configured on the server, but it is downloaded and used by the client.
key-store-path [path to keystore file]
When used on an acceptor this is the path to the SSL key store on the server which holds the server's certificates (whether self-signed or signed by an authority).
When used on a connector this is the path to the client-side SSL key store which holds the client certificates. This is only relevant for a connector if you are using two-way SSL (mutual authentication). Although this value is configured on the server, it is downloaded and used by the client.
If you are configuring Netty for two-way SSL (mutual authentication between server and client), there are three parameters in addition to those described above for one-way SSL, as shown in the sketch after this list:
  • need-client-auth: When set to true on an acceptor, this requires two-way SSL (mutual authentication) for client connections.
  • trust-store-password: When used on an acceptor this is the password for the server-side trust store. When used on a connector this is the password for the client-side trust store. This is relevant for a connector for both one-way and two-way SSL. This value can be configured on the server, but it is downloaded and used by the client.
  • trust-store-path: When used on an acceptor this is the path to the server-side SSL trust store that holds the keys of all the clients that the server trusts. When used on a connector this is the path to the client-side SSL key store which holds the public keys of all the servers that the client trusts. This is relevant for a connector for both one-way and two-way SSL. This path can be configured on the server, but it is downloaded and used by the client.
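The following hedged sketch shows a matched acceptor and connector pair configured for two-way SSL. The names, key store and trust store paths, and passwords are placeholders that must be replaced with values for your environment.

<acceptors>
  <netty-acceptor name="netty-ssl" socket-binding="messaging">
    <param key="ssl-enabled" value="true"/>
    <!-- Server key store holding the server's own certificate -->
    <param key="key-store-path" value="[path to server keystore]"/>
    <param key="key-store-password" value="[server keystore password]"/>
    <!-- Require clients to present a trusted certificate (two-way SSL) -->
    <param key="need-client-auth" value="true"/>
    <!-- Trust store holding the certificates of the clients the server trusts -->
    <param key="trust-store-path" value="[path to server trust store]"/>
    <param key="trust-store-password" value="[server trust store password]"/>
  </netty-acceptor>
</acceptors>
<connectors>
  <netty-connector name="netty-ssl" socket-binding="messaging">
    <param key="ssl-enabled" value="true"/>
    <!-- Client key store holding the client certificate (two-way SSL only) -->
    <param key="key-store-path" value="[path to client keystore]"/>
    <param key="key-store-password" value="[client keystore password]"/>
    <!-- Trust store holding the certificates of the servers the client trusts -->
    <param key="trust-store-path" value="[path to client trust store]"/>
    <param key="trust-store-password" value="[client trust store password]"/>
  </netty-connector>
</connectors>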

18.2.4. Configuring Netty HTTP

Netty HTTP tunnels packets over the HTTP protocol. It can be useful in scenarios where firewalls allow only HTTP traffic to pass. Netty HTTP uses the same properties as Netty TCP, along with the following additional properties:

Note

The following parameters can be used with acceptors as well as connectors. The Netty HTTP transport cannot reuse the standard HTTP port (8080 by default); using the standard HTTP port results in an exception. To tunnel HornetQ connections through the standard HTTP port, use the Netty Servlet transport instead (see Section 18.2.5, “Configuring Netty Servlet”).
<socket-binding name="messaging-http" port="7080" />
<acceptors>
  <netty-acceptor name="netty" socket-binding="messaging-http">
    <param key="http-enabled" value="false"/>
    <param key="http-client-idle-time" value="500"/>
    <param key="http-client-idle-scan-period" value="500"/>
    <param key="http-response-time" value="10000"/>
   	<param key="http-server-scan-period" value="5000"/>
   	<param key="http-requires-session-id" value="false"/>
  </netty-acceptor>
</acceptors>
The following table describes the additional properties for configuring Netty HTTP:

Table 18.3. Netty HTTP Configuration Properties

Property Name Default Description
http-enabled false If this is true HTTP is enabled
http-client-idle-time 500 milliseconds How long a client can be idle before sending an empty HTTP request to keep the connection alive
http-client-idle-scan-period 500 milliseconds How often (milliseconds) to scan for idle clients
http-response-time 10000 milliseconds The time period for which the server can wait before sending an empty HTTP response to keep the connection alive
http-server-scan-period 5000 milliseconds How often, in milliseconds, to scan for clients needing responses
http-requires-session-id false If this is true then the client will wait after the first call to receive a session ID
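On the client side, a matching connector enables HTTP tunneling in the same way. The sketch below is illustrative; the connector name and socket binding are placeholders for your own configuration.

<netty-connector name="netty-http" socket-binding="messaging-http">
  <!-- Tunnel HornetQ packets over HTTP through the messaging-http socket binding -->
  <param key="http-enabled" value="true"/>
</netty-connector>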

Warning

Automatic client failover is not supported for clients connecting through the Netty HTTP transport.

18.2.5. Configuring Netty Servlet

The servlet transport allows HornetQ traffic to be tunneled over HTTP to a servlet running in a servlet engine, which then redirects it to an in-VM HornetQ server. The Netty HTTP transport acts as a web server listening for HTTP traffic on specific ports. With the servlet transport, HornetQ traffic is proxied through a servlet engine which may already be serving web sites or other applications.
To configure a servlet engine to work with the Netty Servlet transport, follow these steps:
  • Deploy the servlet: The following example describes a web application that uses the servlet:
    <web-app>
      <servlet>
        <servlet-name>HornetQServlet</servlet-name>
        <servlet-class>org.jboss.netty.channel.socket.http.HttpTunnelingServlet</servlet-class>
        <init-param>
          <param-name>endpoint</param-name>
          <param-value>local:org.hornetq</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
      </servlet>
    
      <servlet-mapping>
        <servlet-name>HornetQServlet</servlet-name>
        <url-pattern>/HornetQServlet</url-pattern>
      </servlet-mapping>
    </web-app>
    
    The endpoint init parameter specifies the host parameter of the Netty acceptor to which the servlet forwards its packets.
  • Insert the Netty servlet acceptor in the server-side configuration: The following example shows the definition of an acceptor in the server configuration files (standalone.xml and domain.xml):
    <acceptors>
       <acceptor name="netty-servlet">
          <factory-class>
             org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory
          </factory-class>
          <param key="use-servlet" value="true"/>
          <param key="host" value="org.hornetq"/>
       </acceptor>
    </acceptors>
    
  • The last step is to define a connector for the client in server configuration files (standalone.xml and domain.xml):
    <netty-connector name="netty-servlet" socket-binding="http">
       <param key="use-servlet" value="true"/>
       <param key="servlet-path" value="/messaging/HornetQServlet"/>
    </netty-connector>
    
  • It is also possible to use the servlet transport over SSL by adding the following configuration to the connector:
    <netty-connector name="netty-servlet" socket-binding="https">
       <param key="use-servlet" value="true"/>
       <param key="servlet-path" value="/messaging/HornetQServlet"/>
       <param key="ssl-enabled" value="true"/>
       <param key="key-store-path" value="path to a key-store"/>
       <param key="key-store-password" value="key-store password"/>
    </netty-connector>
    

Warning

Automatic client failover is not supported for clients connecting through the HTTP tunneling servlet.

Note

The Netty servlet transport cannot be used to configure EAP 6 servers to form a HornetQ cluster.