Chapter 22. Configuring High Availability

22.1. Introduction to High Availability

JBoss EAP provides the following high availability services to guarantee the availability of deployed Java EE applications.

Load balancing
This allows a service to handle a large number of requests by spreading the workload across multiple servers. A client can have timely responses from the service even in the event of a high volume of requests.
Failover
This allows a client to have uninterrupted access to a service even in the event of hardware or network failures. If the service fails, another cluster member takes over the client’s requests so that it can continue processing.

Clustering is a term that encompasses all of these capabilities. Members of a cluster can be configured to share workloads (load balancing) and pick up client processing in the event of a failure of another cluster member (failover).

Note

It is important to keep in mind that the JBoss EAP operating mode chosen (standalone server or managed domain) pertains to how you want to manage your servers. High availability services can be configured in JBoss EAP regardless of its operating mode.

JBoss EAP supports high availability at several different levels using various components. The following components of the runtime and your applications can be made highly available:

  • Instances of the application server
  • Web applications, when used in conjunction with the internal JBoss Web Server, Apache HTTP Server, Microsoft IIS, or Oracle iPlanet Web Server
  • Stateful and stateless session Enterprise JavaBeans (EJBs)
  • Single sign-on (SSO) mechanisms
  • HTTP sessions
  • JMS services and message-driven beans (MDBs)
  • Singleton MSC services
  • Singleton deployments

Clustering is made available to JBoss EAP by the jgroups, infinispan, and modcluster subsystems. The ha and full-ha profiles have these subsystems enabled. In JBoss EAP, these services start up and shut down on demand, but they will only start up if an application marked as distributable is deployed on the servers.

See the JBoss EAP Development Guide for how to mark an application as distributable.
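
For reference, a web application is typically marked as distributable by adding the <distributable/> element to its web.xml descriptor, as in the following minimal sketch:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         version="3.1">
    <!-- Enables session replication for this application when deployed to an ha profile -->
    <distributable/>
</web-app>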

22.2. Cluster Communication with JGroups

22.2.1. About JGroups

JGroups is a toolkit for reliable messaging and can be used to create clusters whose nodes can send messages to each other.

The jgroups subsystem provides group communication support for high availability services in JBoss EAP. It allows you to configure named channels and protocol stacks as well as view runtime statistics for channels. The jgroups subsystem is available when using a configuration that provides high availability capabilities, such as the ha or full-ha profile in a managed domain, or the standalone-ha.xml or standalone-full-ha.xml configuration file for a standalone server.

JBoss EAP is preconfigured with two JGroups stacks:

udp
The nodes in the cluster use User Datagram Protocol (UDP) multicasting to communicate with each other. This is the default stack.
tcp
The nodes in the cluster use Transmission Control Protocol (TCP) to communicate with each other.

You can use the preconfigured stacks or define your own to suit your system’s specific requirements.

Note

TCP handles error checking, packet ordering, and congestion control itself, and therefore has more overhead and is often considered slower than UDP, for which JGroups provides these features. TCP is a good choice when using JGroups on unreliable or highly congested networks, or when multicast is not available.

22.2.2. Switch the Default JGroups Channel to Use TCP

By default, cluster nodes communicate using the UDP protocol. The default ee JGroups channel uses the predefined udp protocol stack.

<channels default="ee">
  <channel name="ee" stack="udp"/>
</channels>
<stacks>
  <stack name="udp">
    <transport type="UDP" socket-binding="jgroups-udp"/>
    <protocol type="PING"/>
    ...
  </stack>
  <stack name="tcp">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <protocol type="MPING" socket-binding="jgroups-mping"/>
    ...
  </stack>
</stacks>

Some networks only allow TCP to be used. Use the following management CLI command to switch the ee channel to use the preconfigured tcp stack.

/subsystem=jgroups/channel=ee:write-attribute(name=stack,value=tcp)

This default tcp stack uses the MPING protocol, which uses IP multicast to discover the initial cluster membership. See the following sections for configuring stacks for alternative membership discovery protocols:

  • Using the TCPPING protocol to define a static cluster membership list.
  • Using the TCPGOSSIP protocol to use an external gossip router to discover the members of a cluster.

22.2.3. Configure TCPPING

This procedure creates a new JGroups stack that uses the TCPPING protocol to define a static cluster membership list. A base script is provided that creates a tcpping stack and sets the default ee channel to use this new stack. The management CLI commands in this script must be customized for your environment and will be processed as a batch.

  1. Copy the following script into a text editor and save it to the local file system.

    batch
    # Add the tcpping stack
    /subsystem=jgroups/stack=tcpping:add
    /subsystem=jgroups/stack=tcpping/transport=TCP:add(socket-binding=jgroups-tcp)
    /subsystem=jgroups/stack=tcpping/protocol=TCPPING:add
    # Set the properties for the TCPPING protocol
    /subsystem=jgroups/stack=tcpping/protocol=TCPPING:write-attribute(name=properties,value={initial_hosts="HOST_A[7600],HOST_B[7600]",port_range=0,timeout=3000})
    /subsystem=jgroups/stack=tcpping/protocol=MERGE3:add
    /subsystem=jgroups/stack=tcpping/protocol=FD_SOCK:add(socket-binding=jgroups-tcp-fd)
    /subsystem=jgroups/stack=tcpping/protocol=FD:add
    /subsystem=jgroups/stack=tcpping/protocol=VERIFY_SUSPECT:add
    /subsystem=jgroups/stack=tcpping/protocol=pbcast.NAKACK2:add
    /subsystem=jgroups/stack=tcpping/protocol=UNICAST3:add
    /subsystem=jgroups/stack=tcpping/protocol=pbcast.STABLE:add
    /subsystem=jgroups/stack=tcpping/protocol=pbcast.GMS:add
    /subsystem=jgroups/stack=tcpping/protocol=MFC:add
    /subsystem=jgroups/stack=tcpping/protocol=FRAG2:add
    # Set tcpping as the stack for the ee channel
    /subsystem=jgroups/channel=ee:write-attribute(name=stack,value=tcpping)
    run-batch
    reload

    Note that the order in which the protocols are defined is important.

  2. Modify the script for your environment.

    • If you are running in a managed domain, you must specify which profile to update by preceding the /subsystem=jgroups commands with /profile=PROFILE_NAME.
    • Adjust the TCPPING properties, which are optional, for your environment:

      • initial_hosts: A comma-separated list of the hosts, using the syntax HOST[PORT], that are considered well known and are used to look up the initial membership.
      • port_range: If desired, you can assign a port range. If you assign a port range of 2, and the initial port for a host is 7600, then TCPPING will attempt to contact the host on ports 7600-7602. The port range applies to each address specified in initial_hosts.
      • timeout: The maximum time, in milliseconds, to wait for discovery responses from the listed hosts.
  3. Run the script by passing the script file to the management CLI.

    $ EAP_HOME/bin/jboss-cli.sh --connect --file=/path/to/SCRIPT_NAME

The TCPPING stack is now available and TCP is used for network communication.
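
You can optionally confirm which stack the ee channel is now using by reading the attribute back with the management CLI:

/subsystem=jgroups/channel=ee:read-attribute(name=stack)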

22.2.4. Configure TCPGOSSIP

This procedure creates a new JGroups stack that uses the TCPGOSSIP protocol to use an external gossip router to discover the members of a cluster. A base script is provided that creates a tcpgossip stack and sets the default ee channel to use this new stack. The management CLI commands in this script must be customized for your environment and will be processed as a batch.

  1. Copy the following script into a text editor and save it to the local file system.

    batch
    # Add the tcpgossip stack
    /subsystem=jgroups/stack=tcpgossip:add
    /subsystem=jgroups/stack=tcpgossip/transport=TCP:add(socket-binding=jgroups-tcp)
    /subsystem=jgroups/stack=tcpgossip/protocol=TCPGOSSIP:add
    # Set the properties for the TCPGOSSIP protocol
    /subsystem=jgroups/stack=tcpgossip/protocol=TCPGOSSIP:write-attribute(name=properties,value={initial_hosts="HOST_A[13001]"})
    /subsystem=jgroups/stack=tcpgossip/protocol=MERGE3:add
    /subsystem=jgroups/stack=tcpgossip/protocol=FD_SOCK:add(socket-binding=jgroups-tcp-fd)
    /subsystem=jgroups/stack=tcpgossip/protocol=FD:add
    /subsystem=jgroups/stack=tcpgossip/protocol=VERIFY_SUSPECT:add
    /subsystem=jgroups/stack=tcpgossip/protocol=pbcast.NAKACK2:add
    /subsystem=jgroups/stack=tcpgossip/protocol=UNICAST3:add
    /subsystem=jgroups/stack=tcpgossip/protocol=pbcast.STABLE:add
    /subsystem=jgroups/stack=tcpgossip/protocol=pbcast.GMS:add
    /subsystem=jgroups/stack=tcpgossip/protocol=MFC:add
    /subsystem=jgroups/stack=tcpgossip/protocol=FRAG2:add
    # Set tcpgossip as the stack for the ee channel
    /subsystem=jgroups/channel=ee:write-attribute(name=stack,value=tcpgossip)
    run-batch
    reload

    Note that the order in which the protocols are defined is important.

  2. Modify the script for your environment.

    • If you are running in a managed domain, you must specify which profile to update by preceding the /subsystem=jgroups commands with /profile=PROFILE_NAME.
    • Adjust the TCPGOSSIP properties, which are optional, for your environment:

      • initial_hosts: A comma-separated list of the hosts, using the syntax HOST[PORT], that are considered well known and are used to look up the initial membership.
      • reconnect_interval: The interval, in milliseconds, at which a disconnected stub attempts to reconnect to the gossip router.
      • sock_conn_timeout: The maximum time for socket creation. The default is 1000 milliseconds.
      • sock_read_timeout: The maximum time in milliseconds to block on a read. A value of 0 will block indefinitely.
  3. Run the script by passing the script file to the management CLI.

    $ EAP_HOME/bin/jboss-cli.sh --connect --file=/path/to/SCRIPT_NAME

The TCPGOSSIP stack is now available and TCP is used for network communication. This stack is configured for use with a gossip router so that JGroups cluster members can find other cluster members.

22.2.5. Binding JGroups to a Network Interface

By default, JGroups only binds to the private network interface, which points to localhost in the default configuration. For security reasons, JGroups will not bind to the network interface defined by the -b argument specified during JBoss EAP startup, as clustering traffic should not be exposed on a public network interface.

See the Network and Port Configuration chapter in this guide for information about how to configure network interfaces.

Important

For security reasons, JGroups should only be bound to a non-public network interface. For performance reasons, we also recommend that the network interface for JGroups traffic should be part of a dedicated Virtual Local Area Network (VLAN).
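
For example, clustering traffic can be moved to a dedicated address by updating the private interface, which the default jgroups socket bindings use. The following is a sketch assuming the default interface name and an example address of 192.168.1.10; adjust both for your environment.

/interface=private:write-attribute(name=inet-address, value="${jboss.bind.address.private:192.168.1.10}")
reload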

22.2.6. Securing a Cluster

There are several concerns to address in order to run a cluster securely:

  • Preventing unauthorized nodes from joining the cluster. This is addressed by requiring authentication.
  • Preventing non-members from communicating with cluster members. This is addressed by encrypting messages.
Configuring Authentication

JGroups authentication is performed by the AUTH protocol. The purpose is to ensure that only authenticated nodes can join a cluster.

In the applicable server configuration file, add the AUTH protocol with the appropriate property settings. The AUTH protocol should be configured immediately before the pbcast.GMS protocol.

<subsystem xmlns="urn:jboss:domain:jgroups:4.0">
  <stacks>
    <stack name="udp">
      <transport type="UDP" socket-binding="jgroups-udp"/>
      <protocol type="PING"/>
      <protocol type="MERGE3"/>
      <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
      <protocol type="FD_ALL"/>
      <protocol type="VERIFY_SUSPECT"/>
      <protocol type="pbcast.NAKACK2"/>
      <protocol type="UNICAST3"/>
      <protocol type="pbcast.STABLE"/>
      <protocol type="AUTH">
        <property name="auth_class">org.jgroups.auth.MD5Token</property>
        <property name="auth_value">mytoken</property> <!-- Change this password -->
        <property name="token_hash">MD5</property>
      </protocol>
      <protocol type="pbcast.GMS"/>
      <protocol type="UFC"/>
      <protocol type="MFC"/>
      <protocol type="FRAG2"/>
    </stack>
  </stacks>
</subsystem>
Configuring Encryption

To encrypt messages, JGroups uses a secret key that is shared by the members of a cluster. The sender encrypts the message using the shared secret key, and the receiver decrypts the message using the same secret key. With symmetric encryption, which is configured using the SYM_ENCRYPT protocol, nodes use a shared keystore to retrieve the secret key. With asymmetric encryption, which is configured using the ASYM_ENCRYPT protocol, nodes retrieve the secret key from the coordinator of the cluster after being authenticated using AUTH.

Important

You must apply Red Hat JBoss Enterprise Application Platform 7.0 Update 01 or a newer cumulative patch to your JBoss EAP installation in order to have access to the SYM_ENCRYPT and ASYM_ENCRYPT protocols.

See the JBoss EAP Patching and Upgrading Guide for information on applying cumulative patches.

Using Symmetric Encryption

In order to use SYM_ENCRYPT, you must set up a keystore that will be referenced in the JGroups configuration for each node.

  1. Create a keystore.

    In the following command, replace VERSION with the appropriate JGroups JAR version and PASSWORD with a keystore password.

    $ java -cp EAP_HOME/modules/system/layers/base/org/jgroups/main/jgroups-VERSION.jar org.jgroups.demos.KeyStoreGenerator --alg AES --size 128 --storeName defaultStore.keystore --storepass PASSWORD --alias mykey

    This will generate a defaultStore.keystore file that will be referenced in the JGroups configuration.

  2. Configure the SYM_ENCRYPT protocol in the jgroups subsystem.

    In the applicable server configuration file, add the SYM_ENCRYPT protocol with the appropriate property settings. The SYM_ENCRYPT protocol should be configured immediately before the pbcast.NAKACK2 protocol.

    <subsystem xmlns="urn:jboss:domain:jgroups:4.0">
      <stacks>
        <stack name="udp">
          <transport type="UDP" socket-binding="jgroups-udp"/>
          <protocol type="PING"/>
          <protocol type="MERGE3"/>
          <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
          <protocol type="FD_ALL"/>
          <protocol type="VERIFY_SUSPECT"/>
          <protocol type="SYM_ENCRYPT">
            <property name="provider">SunJCE</property>
            <property name="sym_algorithm">AES</property>
            <property name="encrypt_entire_message">true</property>
            <property name="keystore_name">/path/to/defaultStore.keystore</property>
            <property name="store_password">PASSWORD</property>
            <property name="alias">mykey</property>
          </protocol>
          <protocol type="pbcast.NAKACK2"/>
          <protocol type="UNICAST3"/>
          <protocol type="pbcast.STABLE"/>
          <protocol type="pbcast.GMS"/>
          <protocol type="UFC"/>
          <protocol type="MFC"/>
          <protocol type="FRAG2"/>
        </stack>
      </stacks>
    </subsystem>
Note

Configuring AUTH is optional when using SYM_ENCRYPT.

Using Asymmetric Encryption
  1. Configure the ASYM_ENCRYPT protocol in the jgroups subsystem.

    In the applicable server configuration file, add the ASYM_ENCRYPT protocol with the appropriate property settings. The ASYM_ENCRYPT protocol should be configured immediately before the pbcast.NAKACK2 protocol.

    <subsystem xmlns="urn:jboss:domain:jgroups:4.0">
      <stacks>
        <stack name="udp">
          <transport type="UDP" socket-binding="jgroups-udp"/>
          <protocol type="PING"/>
          <protocol type="MERGE3"/>
          <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
          <protocol type="FD_ALL"/>
          <protocol type="VERIFY_SUSPECT"/>
          <protocol type="ASYM_ENCRYPT">
            <property name="encrypt_entire_message">true</property>
            <property name="sym_keylength">128</property>
            <property name="sym_algorithm">AES/ECB/PKCS5Padding</property>
            <property name="asym_keylength">512</property>
            <property name="asym_algorithm">RSA</property>
          </protocol>
          <protocol type="pbcast.NAKACK2"/>
          <protocol type="UNICAST3"/>
          <protocol type="pbcast.STABLE"/>
          <!-- Configure AUTH protocol here -->
          <protocol type="pbcast.GMS"/>
          <protocol type="UFC"/>
          <protocol type="MFC"/>
          <protocol type="FRAG2"/>
        </stack>
      </stacks>
    </subsystem>
  2. Configure the AUTH protocol in the jgroups subsystem.

    AUTH is required by ASYM_ENCRYPT. See the Configuring Authentication section for instructions.

22.2.7. Configure JGroups Thread Pools

The jgroups subsystem contains the default, internal, oob, and timer thread pools. These pools can be configured for any JGroups stack.

The following table lists the attributes you can configure for each thread pool and the default value for each.

Thread Pool Name | keepalive-time | max-threads | min-threads | queue-length
---------------- | -------------- | ----------- | ----------- | ------------
default          | 60000L         | 300         | 20          | 100
internal         | 60000L         | 4           | 2           | 100
oob              | 60000L         | 300         | 20          | 0
timer            | 5000L          | 4           | 2           | 500

Use the following syntax to configure a JGroups thread pool using the management CLI.

/subsystem=jgroups/stack=STACK_TYPE/transport=TRANSPORT_TYPE/thread-pool=THREAD_POOL_NAME:write-attribute(name=ATTRIBUTE_NAME, value=ATTRIBUTE_VALUE)

The following is an example of the management CLI command to set the max-threads value to 500 in the default thread pool for the udp stack.

/subsystem=jgroups/stack=udp/transport=UDP/thread-pool=default:write-attribute(name="max-threads", value="500")

22.2.8. Configure JGroups Send and Receive Buffers

Resolving Buffer Size Warnings

By default, JGroups is configured with certain send and receive buffer values. However, your operating system may limit the available buffer sizes and JBoss EAP may not be able to use its configured buffer values. In this situation you will see warnings in the JBoss EAP logs similar to the following:

WARNING [org.jgroups.protocols.UDP] (ServerService Thread Pool -- 68)
JGRP000015: the send buffer of socket DatagramSocket was set to 640KB, but the OS only allocated 212.99KB.
This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
WARNING [org.jgroups.protocols.UDP] (ServerService Thread Pool -- 68)
JGRP000015: the receive buffer of socket DatagramSocket was set to 20MB, but the OS only allocated 212.99KB.
This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)

To resolve this, consult your operating system documentation for instructions on how to increase the buffer size. For Red Hat Enterprise Linux systems, edit /etc/sysctl.conf as the root user to configure maximum values for buffer sizes that will survive system restarts. For example:

# Allow a 25MB UDP receive buffer for JGroups
net.core.rmem_max = 26214400
# Allow a 1MB UDP send buffer for JGroups
net.core.wmem_max = 1048576

After modifying /etc/sysctl.conf, run sysctl -p for the changes to take effect.

Configuring JGroups Buffer Sizes

You can configure the JGroups buffer sizes that JBoss EAP uses by setting the following transport properties on the UDP and TCP JGroups stacks.

UDP Stack
  • ucast_recv_buf_size
  • ucast_send_buf_size
  • mcast_recv_buf_size
  • mcast_send_buf_size
TCP Stack
  • recv_buf_size
  • send_buf_size

JGroups buffer sizes can be configured using the management console or the management CLI.

Use the following syntax to set a JGroups buffer size property using the management CLI.

/subsystem=jgroups/stack=STACK_NAME/transport=TRANSPORT/property=PROPERTY_NAME:add(value=BUFFER_SIZE)

The following is an example management CLI command to set the recv_buf_size property to 20000000 on the tcp stack.

/subsystem=jgroups/stack=tcp/transport=TCP/property=recv_buf_size:add(value=20000000)

JGroups buffer sizes can also be configured using the management console by navigating to the JGroups subsystem from the Configuration tab, viewing the relevant stack, selecting Transport, and selecting the transport Properties tab.

22.2.9. JGroups Troubleshooting

22.2.9.1. Nodes Do Not Form a Cluster

Make sure your machine is set up correctly for IP multicast. There are two test programs that ship with JBoss EAP that can be used to test IP multicast: McastReceiverTest and McastSenderTest.

In a terminal, start McastReceiverTest.

$ java -cp EAP_HOME/bin/client/jboss-client.jar org.jgroups.tests.McastReceiverTest -mcast_addr 230.11.11.11 -port 5555

Then in another terminal window, start McastSenderTest.

$ java -cp EAP_HOME/bin/client/jboss-client.jar org.jgroups.tests.McastSenderTest -mcast_addr 230.11.11.11 -port 5555

If you want to bind to a specific network interface card (NIC), use -bind_addr YOUR_BIND_ADDRESS, where YOUR_BIND_ADDRESS is the IP address of the NIC to which you want to bind. Use this parameter in both the sender and the receiver.

When you type in the McastSenderTest terminal window, you should see the output in the McastReceiverTest window. If you do not, try the following steps.

  • Increase the time-to-live for multicast packets by adding -ttl VALUE to the sender command. The default used by this test program is 32 and the VALUE must be no greater than 255.
  • If the machines have multiple interfaces, verify that you are using the correct interface.
  • Contact a system administrator to make sure that multicast will work on the interface you have chosen.

Once you know multicast is working properly on each machine in your cluster, you can repeat the above test to test the network, putting the sender on one machine and the receiver on another.

22.2.9.2. Causes of Missing Heartbeats in Failure Detection

Sometimes a cluster member is suspected by failure detection (FD) because a heartbeat acknowledgement has not been received for some time T, which is determined by the FD protocol's timeout and max_tries properties. A configuration sketch follows the list below.

For an example cluster of nodes A, B, C, and D, where A pings B, B pings C, C pings D, and D pings A, C can be suspected for any of the following reasons:

  • B or C are running at 100% CPU for more than T seconds. So even if C sends a heartbeat acknowledgement to B, B may not be able to process it because it is at 100% CPU usage.
  • B or C are garbage collecting, which results in the same situation as above.
  • A combination of the two cases above.
  • The network loses packets. This usually happens when there is a lot of traffic on the network, and the switch starts dropping packets, usually broadcasts first, then IP multicasts, and TCP packets last.
  • B or C are processing a callback. For example, if C received a remote method call over its channel that takes T + 1 seconds to process, during this time, C will not process any other messages, including heartbeats. Therefore, B will not receive the heartbeat acknowledgement and will suspect C.
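
For reference, T is roughly timeout × max_tries. Both properties can be tuned on the FD protocol in the stack configuration; the following sketch uses illustrative values, not recommended defaults:

<protocol type="FD">
  <property name="timeout">6000</property>   <!-- milliseconds to wait for each heartbeat acknowledgement -->
  <property name="max_tries">5</property>    <!-- missed acknowledgements before a member is suspected -->
</protocol>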

22.3. Infinispan

22.3.1. About Infinispan

Infinispan is a Java data grid platform that provides a JSR-107-compatible cache interface for managing cached data. For more information about Infinispan functionality and configuration options see the Infinispan Documentation.

The infinispan subsystem provides caching support for JBoss EAP. It allows you to configure and view runtime metrics for named cache containers and caches.

When using a configuration that provides high availability capabilities, such as the ha or full-ha profile in a managed domain, or the standalone-ha.xml or standalone-full-ha.xml configuration file for a standalone server, the infinispan subsystem provides caching, state replication, and state distribution support. In non-high-availability configurations, the infinispan subsystem provides local caching support.

Important

Infinispan is delivered as a private module in JBoss EAP to provide the caching capabilities of JBoss EAP. Infinispan is not supported for direct use by applications.

22.3.2. Cache Containers

A cache container is a repository for the caches used by a subsystem. Each cache container defines a default cache to be used.

JBoss EAP 7 defines the following default Infinispan cache containers:

  • server for singleton caching
  • web for web session clustering
  • ejb for stateful session bean clustering
  • hibernate for entity caching

Example: Default Infinispan Configuration

<subsystem xmlns="urn:jboss:domain:infinispan:4.0">
  <cache-container name="server" aliases="singleton cluster" default-cache="default" module="org.wildfly.clustering.server">
    <transport lock-timeout="60000"/>
    <replicated-cache name="default" mode="SYNC">
      <transaction mode="BATCH"/>
    </replicated-cache>
  </cache-container>
  <cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan">
    <transport lock-timeout="60000"/>
    <distributed-cache name="dist" mode="ASYNC" l1-lifespan="0" owners="2">
      <locking isolation="REPEATABLE_READ"/>
      <transaction mode="BATCH"/>
      <file-store/>
    </distributed-cache>
  </cache-container>
  <cache-container name="ejb" aliases="sfsb" default-cache="dist" module="org.wildfly.clustering.ejb.infinispan">
    <transport lock-timeout="60000"/>
    <distributed-cache name="dist" mode="ASYNC" l1-lifespan="0" owners="2">
      <locking isolation="REPEATABLE_READ"/>
      <transaction mode="BATCH"/>
      <file-store/>
    </distributed-cache>
  </cache-container>
  <cache-container name="hibernate" default-cache="local-query" module="org.hibernate.infinispan">
    <transport lock-timeout="60000"/>
    <local-cache name="local-query">
      <eviction strategy="LRU" max-entries="10000"/>
      <expiration max-idle="100000"/>
    </local-cache>
    <invalidation-cache name="entity" mode="SYNC">
      <transaction mode="NON_XA"/>
      <eviction strategy="LRU" max-entries="10000"/>
      <expiration max-idle="100000"/>
    </invalidation-cache>
    <replicated-cache name="timestamps" mode="ASYNC"/>
  </cache-container>
</subsystem>

Note the default cache defined in each cache container. For example, the web cache container defines the dist distributed cache as the default. The dist cache will therefore be used when clustering web sessions.

Important

You can add additional caches and cache containers, for example, for HTTP sessions, stateful session beans, or singleton services or deployments. Direct use of these caches by user applications is not supported.

22.3.2.1. Configure Cache Containers

Cache containers and cache attributes can be configured using the management console or management CLI.

Warning

You should avoid changing cache or cache container names, as other components in the configuration may reference them.

Configure Caches Using the Management Console

Once you navigate to the Infinispan subsystem from the Configuration tab in the management console, you can configure caches and cache containers. In a managed domain, make sure to select the appropriate profile to configure.

  • Add a cache container.

    Click the Add button next to the Cache Container heading and enter the settings for the new cache container.

  • Update cache container settings.

    Choose the appropriate cache container and select Container Settings from the drop down. Configure the cache container settings as necessary.

  • Update cache container transport settings.

    Choose the appropriate cache container and select Transport Settings from the drop down. Configure the cache container transport settings as necessary.

  • Configure caches.

    Choose the appropriate cache container and select View. From the appropriate cache tab (for example, Replicated Caches), you can add, update, and remove caches.

Configure Caches Using the Management CLI

You can configure caches and cache containers using the management CLI. In a managed domain, you must specify the profile to update by preceding these commands with /profile=PROFILE_NAME.

  • Add a cache container.

    /subsystem=infinispan/cache-container=CACHE_CONTAINER:add
  • Add a replicated cache.

    /subsystem=infinispan/cache-container=CACHE_CONTAINER/replicated-cache=CACHE:add(mode=MODE)
  • Set the default cache for a cache container.

    /subsystem=infinispan/cache-container=CACHE_CONTAINER:write-attribute(name=default-cache,value=CACHE)
  • Configure batching for a replicated cache.

    /subsystem=infinispan/cache-container=CACHE_CONTAINER/replicated-cache=CACHE/component=transaction:write-attribute(name=mode,value=BATCH)
Change the Default EJB Cache Container

You can use cache containers in the ejb3 subsystem as described below:

  • To support passivation of EJB session beans, you can use the ejb cache container defined in the infinispan subsystem to store the sessions.
  • For remote EJB clients connecting to a clustered deployment on a server, you must provide the cluster topology information to these clients, so that they can fail over to other nodes in the cluster if the node they are interacting with fails.

If you are changing or renaming the default cache container, named ejb, which supports passivation and the provision of topology information, you must add the cache-container attribute to the passivation-stores element and the cluster attribute to the remote element, as shown in the example below. If you are just adding a new cache for your own use, you need not make those changes.

<subsystem xmlns="urn:jboss:domain:ejb3:4.0">
    <passivation-stores>
        <passivation-store name="infinispan" cache-container="ejb-cltest" max-size="10000"/>
    </passivation-stores>

    <remote cluster="ejb-cltest" connector-ref="http-remoting-connector" thread-pool-name="default"/>
</subsystem>

<subsystem xmlns="urn:jboss:domain:infinispan:4.0">
    ...
    <cache-container name="ejb-cltest" aliases="sfsb" default-cache="dist" module="org.wildfly.clustering.ejb.infinispan">
</subsystem>

22.3.3. Clustering Modes

Clustering can be configured in two different ways in JBoss EAP using Infinispan. The best method for your application depends on your requirements. Each mode involves a trade-off among availability, consistency, reliability, and scalability. Before choosing a clustering mode, identify which of these characteristics matter most for your network and balance those requirements.

Cache Modes
Replicated
Replicated mode automatically detects new instances and adds them to the cluster. Changes made on these instances are replicated to all nodes in the cluster. Replicated mode typically works best in small clusters because of the amount of information that must be replicated over the network. Infinispan can be configured to use UDP multicast, which alleviates network traffic congestion to a degree.
Distributed

Distributed mode allows Infinispan to scale the cluster linearly. Distributed mode uses a consistent hash algorithm to determine where in a cluster a new node should be placed. The number of copies (owners) of information to be kept is configurable. There is a trade-off between the number of copies kept, durability of the data, and performance. The more copies that are kept, the more impact on performance, but the less likely you are to lose data in a server failure. The hash algorithm also works to reduce network traffic by locating entries without multicasting or storing metadata.

You should consider using distributed mode as a caching strategy when the cluster size exceeds 6-8 nodes. With distributed mode, data is distributed to only a subset of nodes within the cluster, as opposed to all nodes.

Synchronous and Asynchronous Replication

Replication can be performed either in synchronous or asynchronous mode, and the mode chosen depends on your requirements and your application.

Synchronous replication
With synchronous replication, the thread that handles the user request is blocked until replication has been successful. When the replication is successful, a response is sent back to the client, and only then is the thread released. Synchronous replication has an impact on network traffic because it requires a response from each node in the cluster. It has the advantage, however, of ensuring that all modifications have been made to all nodes in the cluster.
Asynchronous replication
With asynchronous replication, Infinispan uses a thread pool to carry out replication in the background. The sender does not wait for replies from other nodes in the cluster. However, cache reads for the same session will block until the previous replication completes so that stale data is not read. Replication is triggered either on a time basis or by queue size. Failed replication attempts are written to a log rather than reported in real time.

22.3.3.1. Configure the Cache Mode

You can change the default cache using the management CLI.

Note

This section shows instructions specific to configuring the web session cache, which defaults to distributed mode. The steps and management CLI commands can easily be adjusted to apply to other cache containers.

Change to Replicated Cache Mode

The default JBoss EAP 7 configuration for the web session cache does not include a repl replicated cache. This cache must first be added.

Note

The below management CLI commands are for a standalone server. When running in a managed domain, you must specify which profile to update by preceding the /subsystem=infinispan commands with /profile=PROFILE_NAME.

  1. Add the repl replicated cache.

    /subsystem=infinispan/cache-container=web/replicated-cache=repl:add(mode=ASYNC)
    /subsystem=infinispan/cache-container=web/replicated-cache=repl/component=transaction:write-attribute(name=mode,value=BATCH)
    /subsystem=infinispan/cache-container=web/replicated-cache=repl/component=locking:write-attribute(name=isolation, value=REPEATABLE_READ)
    /subsystem=infinispan/cache-container=web/replicated-cache=repl/store=file:add
  2. Change the default cache to the repl replicated cache.

    /subsystem=infinispan/cache-container=web:write-attribute(name=default-cache,value=repl)
  3. Reload the server.

    reload
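
You can optionally verify the change by reading the attribute back:

/subsystem=infinispan/cache-container=web:read-attribute(name=default-cache)
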
Change to Distributed Cache Mode

The default JBoss EAP 7 configuration for the web session cache already includes a dist distributed cache.

Note

The below management CLI commands are for a standalone server. When running in a managed domain, you must specify which profile to update by preceding the /subsystem=infinispan commands with /profile=PROFILE_NAME.

  1. Change the default cache to the dist distributed cache.

    /subsystem=infinispan/cache-container=web:write-attribute(name=default-cache,value=dist)
  2. Set the number of owners for the distributed cache. The following command sets 5 owners. The default is 2.

    /subsystem=infinispan/cache-container=web/distributed-cache=dist/:write-attribute(name=owners,value=5)
  3. Reload the server.

    reload

22.3.3.2. Cache Strategy Performance

When using a SYNC caching strategy, the cost of replication is easy to measure and directly seen in response times since the request does not complete until the replication completes.

Although it seems that the ASYNC caching strategy should result in lower response times than the SYNC caching strategy, this is only true under the right conditions. The ASYNC caching strategy is more difficult to measure, but it can provide better performance than the SYNC strategy when the duration between requests is long enough for the cache operation to complete. This is because the cost of replication is not immediately seen in response times.

If requests for the same session are made too quickly, the cost of replication for the previous request is shifted to the front of the subsequent request since it must wait for the replication from the previous request to complete. For rapidly fired requests where a subsequent request is sent immediately after a response is received, the ASYNC caching strategy will perform worse than the SYNC caching strategy. Consequently, there is a threshold for the period of time between requests for the same session where the SYNC caching strategy will actually perform better than the ASYNC caching strategy. In real world usage, requests for the same session are not normally received in rapid succession. Instead, there is typically a period of time in the order of a few seconds or more between the requests. In this case, the ASYNC caching strategy is a sensible default and provides the fastest response times.

22.3.4. Configure Infinispan Thread Pools

The infinispan subsystem contains the async-operations, expiration, listener, persistence, remote-command, state-transfer, and transport thread pools. These pools can be configured for any Infinispan cache container.

The following table lists the attributes you can configure for each thread pool in the infinispan subsystem and the default value for each.

Thread Pool Name | keepalive-time | max-threads | min-threads | queue-length
---------------- | -------------- | ----------- | ----------- | ------------
async-operations | 60000L         | 25          | 25          | 1000
expiration       | 60000L         | 1           | N/A         | N/A
listener         | 60000L         | 1           | 1           | 100000
persistence      | 60000L         | 4           | 1           | 0
remote-command   | 60000L         | 200         | 1           | 0
state-transfer   | 60000L         | 60          | 1           | 0
transport        | 60000L         | 25          | 25          | 100000

Use the following syntax to configure an Infinispan thread pool using the management CLI.

/subsystem=infinispan/cache-container=CACHE_CONTAINER_NAME/thread-pool=THREAD_POOL_NAME:write-attribute(name=ATTRIBUTE_NAME, value=ATTRIBUTE_VALUE)

The following is an example of the management CLI command to set the max-threads value to 10 in the persistence thread pool for the server cache container.

/subsystem=infinispan/cache-container=server/thread-pool=persistence:write-attribute(name="max-threads", value="10")

22.3.5. Infinispan Statistics

Runtime statistics about Infinispan caches and cache containers can be enabled for monitoring purposes. Statistics collection is not enabled by default for performance reasons.

Statistics collection can be enabled for each cache container, cache, or both. The statistics option for each cache overrides the option for the cache container. Enabling or disabling statistics collection for a cache container will cause all caches in that container to inherit the setting, unless they explicitly specify their own.

22.3.5.1. Enable Infinispan Statistics

Warning

Enabling Infinispan statistics may have a negative impact on the performance of the infinispan subsystem. Statistics should be enabled only when required.

You can enable or disable the collection of Infinispan statistics using the management console or the management CLI. From the management console, navigate to the Infinispan subsystem from the Configuration tab, select the appropriate cache or cache container, and edit the Statistics enabled attribute. Use the below commands to enable statistics using the management CLI.

Enable statistics collection for a cache container. A server reload will be required.

/subsystem=infinispan/cache-container=CACHE_CONTAINER:write-attribute(name=statistics-enabled,value=true)

Enable statistics collection for a cache. A server reload will be required.

/subsystem=infinispan/cache-container=CACHE_CONTAINER/CACHE_TYPE=CACHE:write-attribute(name=statistics-enabled,value=true)
Note

You can use the following command to undefine the statistics-enabled attribute of a cache so that it will inherit the settings of its cache container’s statistics-enabled attribute.

/subsystem=infinispan/cache-container=CACHE_CONTAINER/CACHE_TYPE=CACHE:undefine-attribute(name=statistics-enabled)
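
Once statistics are enabled, the collected metrics can be read back from the same resources using the generic read-resource operation. The following sketch assumes the default web cache container and its dist cache:

/subsystem=infinispan/cache-container=web/distributed-cache=dist:read-resource(include-runtime=true)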

22.3.6. Infinispan Partition Handling

An Infinispan cluster is built out of several nodes where data is stored. To prevent data loss if multiple nodes fail, Infinispan copies the same data over multiple nodes. This level of data redundancy is configured using the owners attribute. As long as fewer than the configured number of nodes crash simultaneously, Infinispan will have a copy of the data available.

However, there are potential catastrophic situations that could occur when too many nodes disappear from the cluster:

Split brain

This splits the cluster into two or more partitions, or sub-clusters, that operate independently. In these circumstances, multiple clients reading and writing from different partitions see different versions of the same cache entry, which for many applications is problematic.

Note

There are ways to alleviate the possibility for the split brain to happen, such as redundant networks or IP bonding. However, these only reduce the window of time for the problem to occur.

Multiple nodes crash in sequence
If multiple nodes, specifically the number of owners, crash in rapid sequence and Infinispan does not have the time to properly rebalance its state between crashes, the result is partial data loss.

The goal is to avoid situations in which incorrect data is returned to the user as a result of either split brain or multiple nodes crashing in rapid sequence.

22.3.6.1. Split Brain

In a split brain situation, each network partition will install its own JGroups view, removing the nodes from the other partitions. We do not have a direct way to determine whether the cluster has been split into two or more partitions, since the partitions are unaware of each other. Instead, we assume the cluster has split when one or more nodes disappear from the JGroups cluster without sending an explicit leave message.

With partition handling disabled, each such partition would continue to function as an independent cluster. Each partition may only see a part of the data, and each partition could write conflicting updates in the cache.

With partition handling enabled, if we detect a split, each partition does not start a rebalance immediately, but first checks whether it should enter degraded mode instead:

  • If at least one segment has lost all its owners, meaning that at least as many nodes as the configured number of owners have left since the last rebalance ended, the partition enters degraded mode.
  • If the partition does not contain a simple majority of the nodes (floor(numNodes/2) + 1) in the latest stable topology, the partition also enters degraded mode.
  • Otherwise, the partition keeps functioning normally and starts a rebalance.

The stable topology is updated every time a rebalance operation ends and the coordinator determines that another rebalance is not necessary. These rules ensure that at most one partition stays in available mode, and the other partitions enter degraded mode. For example, if a five-node cluster with owners=2 splits into partitions of three and two nodes, the three-node partition retains a majority (floor(5/2) + 1 = 3) and stays available, while the two-node partition enters degraded mode.

When a partition is in degraded mode, it only allows access to the keys that are wholly owned:

  • Requests (reads and writes) for entries that have all the copies on nodes within this partition are honored.
  • Requests for entries that are partially or totally owned by nodes that have disappeared are rejected with an AvailabilityException.

This guarantees that partitions cannot write different values for the same key (the cache is consistent), and also that one partition cannot read keys that have been updated in the other partitions (no stale data).

Note

Two partitions could start up isolated, and as long as they do not merge, they can read and write inconsistent data. In the future, we may allow custom availability strategies (e.g. check that a certain node is part of the cluster, or check that an external machine is accessible) that could handle that situation as well.

22.3.6.2. Configuring Partition Handling

Partition handling is currently disabled by default. Use the following management CLI command to enable partition handling:

/subsystem=infinispan/cache-container=web/distributed-cache=dist/component=partition-handling:write-attribute(name=enabled, value=true)

22.3.7. Externalize HTTP Sessions to JBoss Data Grid

Note

You will need a Red Hat JBoss Data Grid subscription to use this functionality.

Red Hat JBoss Data Grid can be used as an external cache container for application-specific data in JBoss EAP, such as HTTP sessions. This allows scaling of the data layer independent of the application, and enables different JBoss EAP clusters, which may reside in various domains, to access data from the same JBoss Data Grid cluster. Additionally, other applications can interface with the caches presented by Red Hat JBoss Data Grid.

The following example shows how to externalize HTTP sessions. It applies to both standalone instances of JBoss EAP as well as managed domains. However, in a managed domain, each server group requires a unique remote cache configured. While multiple server groups can utilize the same Red Hat JBoss Data Grid cluster, the respective remote caches will be unique to the JBoss EAP server group.

Note

For each distributable application, an entirely new cache must be created. It can be created in an existing cache container, for example, web.

To externalize HTTP sessions:

  1. Define the location of the remote Red Hat JBoss Data Grid server by adding the networking information to the socket-binding-group.

    Example Adding Remote Socket Bindings

    /socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-jdg-server1:add(host=JDGHostName1, port=11222)
    
    /socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-jdg-server2:add(host=JDGHostName2, port=11222)

    Resulting XML

    <socket-binding-group name="standard-sockets" ... >
      ...
      <outbound-socket-binding name="remote-jdg-server1">
        <remote-destination host="JDGHostName1" port="11222"/>
      </outbound-socket-binding>
      <outbound-socket-binding name="remote-jdg-server2">
        <remote-destination host="JDGHostName2" port="11222"/>
      </outbound-socket-binding>
    </socket-binding-group>

    Note

    You will need a remote socket binding configured for each Red Hat JBoss Data Grid server.

  2. Ensure the remote cache containers are defined in JBoss EAP’s infinispan subsystem; in the example below the cache attribute in the remote-store element defines the cache name on the remote JBoss Data Grid server.

    If you are running in a managed domain, precede these commands with /profile=PROFILE_NAME.

    Example Adding a Remote Cache Container

    /subsystem=infinispan/cache-container=web/invalidation-cache=jdg:add(mode=SYNC)
    
    /subsystem=infinispan/cache-container=web/invalidation-cache=jdg/component=locking:write-attribute(name=isolation,value=REPEATABLE_READ)
    
    /subsystem=infinispan/cache-container=web/invalidation-cache=jdg/component=transaction:write-attribute(name=mode,value=BATCH)
    
    /subsystem=infinispan/cache-container=web/invalidation-cache=jdg/store=remote:add(remote-servers=["remote-jdg-server1","remote-jdg-server2"], cache=default, socket-timeout=60000, passivation=false, purge=false, shared=true)

    Resulting XML

    <subsystem xmlns="urn:jboss:domain:infinispan:4.0">
      ...
      <cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan" statistics-enabled="true">
        <transport lock-timeout="60000"/>
        <invalidation-cache name="jdg" mode="SYNC">
          <locking isolation="REPEATABLE_READ"/>
          <transaction mode="BATCH"/>
          <remote-store cache="default" socket-timeout="60000" remote-servers="remote-jdg-server1 remote-jdg-server2" passivation="false" purge="false" shared="true"/>
        </invalidation-cache>
        ...
      </cache-container>
    </subsystem>

  3. Add cache information into the application’s jboss-web.xml. In the following example, web is the name of the cache container and jdg is the name of the appropriate cache located in this container.

    Example jboss-web.xml File

    <?xml version="1.0" encoding="UTF-8"?>
    <jboss-web xmlns="http://www.jboss.com/xml/ns/javaee"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-web_10_0.xsd"
               version="10.0">
        <replication-config>
            <replication-granularity>SESSION</replication-granularity>
            <cache-name>web.jdg</cache-name>
        </replication-config>
    </jboss-web>

22.4. Configuring JBoss EAP as a Front-end Load Balancer

You can configure JBoss EAP and the undertow subsystem to act as a front-end load balancer to proxy requests to back-end JBoss EAP servers. Since Undertow makes use of asynchronous IO, the IO thread that is responsible for the connection is the only thread that is involved in the request. That same thread is also used for the connection made to the back-end server.

You can use the following protocols:

  • HTTP over plain text (http), supporting HTTP/1 and HTTP/2 (h2c)

    Note

    HTTP/2 is provided as technology preview only.

  • HTTP over secured connection (https), supporting HTTP/1 and HTTP/2 (h2)

    Note

    HTTP/2 is provided as technology preview only.

  • AJP (ajp)

You can either define a static load balancer and specify the back-end hosts in your configuration, or use the mod_cluster front end to dynamically update the hosts.

22.4.1. Configure Undertow as a Load Balancer Using mod_cluster

You can use the built-in mod_cluster front-end load balancer to load balance other JBoss EAP instances.

This procedure assumes that you are running in a managed domain and already have the following configured:

  • A JBoss EAP server that will act as the load balancer.

    • This server uses the default profile, which is bound to the standard-sockets socket binding group.
  • Two JBoss EAP servers, which will act as the back-end servers.

    • These servers are running in a cluster and use the ha profile, which is bound to the ha-sockets socket binding group.
  • The distributable application to be load balanced deployed to the back-end servers.
Configure the mod_cluster Front-end Load Balancer

The below steps load balance servers in a managed domain, but they can be adjusted to apply to a set of standalone servers. Be sure to update the management CLI command values to suit your environment.

  1. Set the mod_cluster advertise security key.

    Adding the advertise security key allows the load balancer and servers to authenticate during discovery.

    Use the following management CLI command to set the mod_cluster advertise security key.

    /profile=ha/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=advertise-security-key, value=mypassword)
  2. Create a socket binding with the multicast address and port for mod_cluster.

    You need to create a socket configuration for mod_cluster to use for discovery and communication with the servers that it is going to load balance.

    Use the following management CLI command to add a modcluster socket binding with the appropriate multicast address and port configured.

    /socket-binding-group=standard-sockets/socket-binding=modcluster:add(multicast-port=23364, multicast-address=224.0.1.105)
  3. Include the mod_cluster load balancer.

    Once you have the advertise security key and socket binding set up, you need to add the mod_cluster filter to Undertow for the load balancer instance of JBoss EAP.

    Use the following management CLI command to add the mod_cluster filter.

    /profile=default/subsystem=undertow/configuration=filter/mod-cluster=modcluster:add(management-socket-binding=http, advertise-socket-binding=modcluster, security-key=mypassword)

    Use the following management CLI command to bind the mod_cluster filter to the default host.

    /profile=default/subsystem=undertow/server=default-server/host=default-host/filter-ref=modcluster:add
Important

It is recommended that the management and advertise socket bindings used by mod_cluster only be exposed to the internal network, not a public IP address.

The load balancer JBoss EAP server can now load balance the two back-end JBoss EAP servers.

22.4.2. Configure Undertow as a Static Load Balancer

To configure a static load balancer with Undertow, you need to configure a proxy handler in the undertow subsystem. To configure a proxy handler in Undertow, you need to do the following on your JBoss EAP instance that will serve as your static load balancer:

  1. Add a reverse proxy handler
  2. Define the outbound socket bindings for each remote host
  3. Add each remote host to the reverse proxy handler
  4. Add the reverse proxy location

The following example shows how to configure a JBoss EAP instance to be a static load balancer. The JBoss EAP instance is located at lb.example.com and will load balance between two additional servers: server1.example.com and server2.example.com. The load balancer will reverse-proxy to the location /app and will use the AJP protocol.

  1. To add a reverse proxy handler:

    /subsystem=undertow/configuration=handler/reverse-proxy=my-handler:add
  2. To define the outbound socket bindings for each remote host:

    /socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-host1/:add(host=server1.example.com, port=8009)
    
    /socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-host2/:add(host=server2.example.com, port=8009)
  3. To add each remote host to the reverse proxy handler:

    /subsystem=undertow/configuration=handler/reverse-proxy=my-handler/host=host1:add(outbound-socket-binding=remote-host1, scheme=ajp, instance-id=myroute, path=/test)
    
    /subsystem=undertow/configuration=handler/reverse-proxy=my-handler/host=host2:add(outbound-socket-binding=remote-host2, scheme=ajp, instance-id=myroute, path=/test)
  4. To add the reverse proxy location:

    /subsystem=undertow/server=default-server/host=default-host/location=\/app:add(handler=my-handler)

When accessing lb.example.com:8080/app, you will now see the content proxied from server1.example.com and server2.example.com.
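
For a quick check from a client machine, you can request the proxied location directly; the host names here are the hypothetical ones from the example above.

$ curl http://lb.example.com:8080/app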

22.5. Using an External Web Server as a Proxy Server

JBoss EAP can accept requests from an external web server using the supported HTTP, HTTPS, or AJP protocol, depending on the external web server configuration.

See Overview of HTTP Connectors for details on the supported HTTP connectors for each web server. Once you have decided which web server and HTTP connector to use, see the appropriate section for information on configuring your connector.

For the most current information about supported configurations for HTTP connectors, see JBoss EAP supported configurations.

You also will need to make sure that JBoss EAP is configured to accept requests from external web servers.

22.5.1. Overview of HTTP Connectors

JBoss EAP has the ability to use load-balancing and clustering mechanisms built into external web servers, such as Apache HTTP Server, Microsoft IIS, and Oracle iPlanet as well as through Undertow. JBoss EAP communicates with the web servers using a connector. These connectors are configured within the undertow subsystem of JBoss EAP.

The web servers include software modules that control the way HTTP requests are routed to JBoss EAP nodes. Each of these modules varies in how it works and how it is configured. The modules are configured to balance workloads across multiple JBoss EAP nodes, to move workloads to alternate servers in case of a failure event, or both.

JBoss EAP supports several different connectors. The one you choose depends on the web server in use and the functionality you need. See the tables below for comparisons of the supported configurations and features of the various HTTP connectors that are compatible with JBoss EAP.

Note

See Configure Undertow as a Load Balancer Using mod_cluster for using JBoss EAP 7 as a multi-platform load balancer.

For the most current information about supported configurations for HTTP connectors, see JBoss EAP supported configurations.

Table 22.1. HTTP Connector Supported Configurations

Connector       | Web Server                                                                                                         | Supported Operating Systems                                        | Supported Protocols
--------------- | ------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------ | ---------------------------
mod_cluster     | Red Hat JBoss Core Services Apache HTTP Server, Red Hat JBoss Web Server Apache HTTP Server, JBoss EAP (Undertow)    | Red Hat Enterprise Linux, Microsoft Windows Server, Oracle Solaris  | HTTP, HTTPS, AJP, WebSocket
mod_jk          | Red Hat JBoss Core Services Apache HTTP Server, Red Hat JBoss Web Server Apache HTTP Server                          | Red Hat Enterprise Linux, Microsoft Windows Server, Oracle Solaris  | AJP
mod_proxy       | Red Hat JBoss Core Services Apache HTTP Server, Red Hat JBoss Web Server Apache HTTP Server                          | Red Hat Enterprise Linux, Microsoft Windows Server, Oracle Solaris  | HTTP, HTTPS, AJP
ISAPI connector | Microsoft IIS                                                                                                        | Microsoft Windows Server                                           | AJP
NSAPI connector | Oracle iPlanet Web Server                                                                                            | Oracle Solaris                                                     | AJP

Table 22.2. HTTP Connector Features

| Connector | Supports Sticky Sessions | Adapts to Deployment Status |
| --- | --- | --- |
| mod_cluster | Yes | Yes. Detects deployment and undeployment of applications and dynamically decides whether to direct client requests to a server based on whether the application is deployed on that server. |
| mod_jk | Yes | No. Directs client requests to the container as long as the container is available, regardless of application status. |
| mod_proxy | Yes | No. Directs client requests to the container as long as the container is available, regardless of application status. |
| ISAPI connector | Yes | No. Directs client requests to the container as long as the container is available, regardless of application status. |
| NSAPI connector | Yes | No. Directs client requests to the container as long as the container is available, regardless of application status. |

22.5.2. Apache HTTP Server

A standalone Apache HTTP Server bundle is now available as a separate download with Red Hat JBoss Core Services. This simplifies installation and configuration, and allows for a more consistent update experience.

22.5.2.1. Installing Apache HTTP Server

For information on installing Apache HTTP Server, see the JBoss Core Services Apache HTTP Server Installation Guide.

22.5.3. Accepting Requests from External Web Servers

JBoss EAP does not require any special configuration to begin accepting requests from a proxy server as long as the correct protocol handler, for example AJP, HTTP, or HTTPS, is configured.

If the proxy server uses mod_jk, mod_proxy, ISAPI, or NSAPI, it sends requests to JBoss EAP and JBoss EAP simply provides a response. With mod_cluster, you must also configure your network to allow JBoss EAP to send information, such as its current load, application lifecycle events, and health status, to the proxy server to help it determine where to route requests. For more information about configuring a mod_cluster proxy server, see The mod_cluster HTTP Connector.

Update JBoss EAP Configuration

In the following procedure, substitute the protocols and ports in the examples with the ones you need to configure.

  1. Configure the instance-id attribute of Undertow.

    The external web server identifies the JBoss EAP instance in its connector configuration using the instance-id. Use the following management CLI command to set the instance-id attribute in Undertow.

    /subsystem=undertow:write-attribute(name=instance-id,value=node1)

    In the above example, the external web server identifies the current JBoss EAP instance as node1.

  2. Add the necessary listeners to Undertow.

    In order for an external web server to be able to connect to JBoss EAP, Undertow needs a listener. Each protocol needs its own listener, which is tied to a socket binding.

    Note

    Depending on your desired protocol and port configuration, this step may not be necessary. An HTTP listener is configured in all default JBoss EAP configurations, and an AJP listener is configured if you are using the ha or full-ha profile.

    You can check whether the required listeners are already configured by reading the default server configuration:

    /subsystem=undertow/server=default-server:read-resource

    To add a listener to Undertow, it must have a socket binding. The socket binding is added to the socket binding group used by your server or server group. The following management CLI command adds an ajp socket binding, bound to port 8009, to the standard-sockets socket binding group.

    /socket-binding-group=standard-sockets/socket-binding=ajp:add(port=8009)

    The following management CLI command adds an ajp listener to Undertow, using the ajp socket binding.

    /subsystem=undertow/server=default-server/ajp-listener=ajp:add(socket-binding=ajp)

22.6. The mod_cluster HTTP Connector

The mod_cluster connector is an Apache HTTP Server-based load balancer. It uses a communication channel to forward requests from the Apache HTTP Server to one of a set of application server nodes.

The mod_cluster connector has several advantages over other connectors.

  • The mod_cluster Management Protocol (MCMP) is an additional connection between the JBoss EAP servers and the Apache HTTP Server with the mod_cluster module enabled. It is used by the JBoss EAP servers to transmit server-side load-balance factors and lifecycle events back to the Apache HTTP Server via a custom set of HTTP methods.
  • Dynamic configuration of Apache HTTP Server with mod_cluster allows JBoss EAP servers to join the load-balancing arrangement without manual configuration.
  • JBoss EAP performs the load-balancing factor calculations, rather than relying on the Apache HTTP Server with mod_cluster. This makes load-balancing metrics more accurate than those of other connectors.
  • The mod_cluster connector gives fine-grained application lifecycle control. Each JBoss EAP server forwards web application context lifecycle events to the Apache HTTP Server, informing it to start or stop routing requests for a given context. This prevents end users from seeing HTTP errors due to unavailable resources.
  • AJP, HTTP, or HTTPS transports can be used.

For more details on the specific configuration options of the modcluster subsystem, see the ModCluster Subsystem Attributes.

22.6.1. Configure mod_cluster in Apache HTTP Server

The mod_cluster modules are already included when installing JBoss Core Services Apache HTTP Server or using JBoss Web Server and are loaded by default.

Note

Apache HTTP Server is no longer distributed with JBoss Web Server as of version 3.1.0.

See the steps below to configure the mod_cluster module to suit your environment.

Note

Red Hat customers can also use the Load Balancer Configuration Tool on the Red Hat Customer Portal to quickly generate optimal configuration templates for mod_cluster and other connectors. Note that you must be logged in to access this tool.

Configure mod_cluster

Apache HTTP Server already contains a mod_cluster configuration file, mod_cluster.conf, that loads the mod_cluster modules and provides basic configuration. The IP address, port, and other settings in this file, shown below, can be configured to suit your needs.

# mod_proxy_balancer should be disabled when mod_cluster is used
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule cluster_slotmem_module modules/mod_cluster_slotmem.so
LoadModule manager_module modules/mod_manager.so
LoadModule advertise_module modules/mod_advertise.so

MemManagerFile cache/mod_cluster

<IfModule manager_module>
  Listen 6666
  <VirtualHost *:6666>
    <Directory />
      Require ip 127.0.0.1
    </Directory>
    ServerAdvertise on
    EnableMCPMReceive
    <Location /mod_cluster_manager>
      SetHandler mod_cluster-manager
      Require ip 127.0.0.1
   </Location>
  </VirtualHost>
</IfModule>

The Apache HTTP Server is now configured as a load balancer and can work with the modcluster subsystem running on JBoss EAP. You must configure a mod_cluster worker node to make JBoss EAP aware of mod_cluster.

If you want to disable advertising for mod_cluster and configure a static proxy list instead, see Disable Advertising for mod_cluster. For more information on the available mod_cluster configuration options in Apache HTTP Server, see the Apache HTTP Server mod_cluster Directives.

For more details on configuring mod_cluster, see the Configure Load Balancing Using Apache HTTP Server and mod_cluster section of the JBoss Web Server HTTP Connectors and Load Balancing Guide.

22.6.2. Disable Advertising for mod_cluster

By default, the modcluster subsystem’s balancer uses multicast UDP to advertise its availability to the background workers. You can disable advertising and use a proxy list instead using the following procedure.

Note

The management CLI commands in the following procedure assume that you are using the full-ha profile in a managed domain. If you are using a profile other than full-ha, use the appropriate profile name in the command. If you are running a standalone server, remove the /profile=full-ha portion of the commands.

  1. Modify the Apache HTTP Server configuration.

    Edit the httpd.conf Apache HTTP Server configuration file. Make the following updates to the virtual host that listens for MCPM requests, using the EnableMCPMReceive directive.

    1. Add the directive to disable server advertisement.

      Set the ServerAdvertise directive to Off to disable server advertisement.

      ServerAdvertise Off
    2. Disable the advertise frequency.

      If your configuration specifies the AdvertiseFrequency parameter, comment it out using a # character.

      # AdvertiseFrequency 5
    3. Enable the ability to receive MCPM messages.

      Ensure that the EnableMCPMReceive directive exists, to allow the web server to receive MCPM messages from the worker nodes.

      EnableMCPMReceive
  2. Disable advertising in the JBoss EAP modcluster subsystem.

    Use the following management CLI command to disable advertising.

    /profile=full-ha/subsystem=modcluster/mod-cluster-config=configuration/:write-attribute(name=advertise,value=false)
    Important

    Be sure to continue to the next step to provide the list of proxies. Advertising will not be disabled if the list of proxies is empty.

  3. Provide a list of proxies in the JBoss EAP modcluster subsystem.

    It is necessary to provide a list of proxies because the modcluster subsystem will not be able to automatically discover proxies if advertising is disabled.

    First, define the outbound socket bindings in the appropriate socket binding group.

    /socket-binding-group=full-ha-sockets/remote-destination-outbound-socket-binding=proxy1:add(host=10.33.144.3,port=6666)
    /socket-binding-group=full-ha-sockets/remote-destination-outbound-socket-binding=proxy2:add(host=10.33.144.1,port=6666)

    Next, add the proxies to the mod_cluster configuration.

    /profile=full-ha/subsystem=modcluster/mod-cluster-config=configuration:list-add(name=proxies,value=proxy1)
    /profile=full-ha/subsystem=modcluster/mod-cluster-config=configuration:list-add(name=proxies,value=proxy2)
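    Optionally, verify the resulting list by reading the proxies attribute, a minimal sketch using the standard read-attribute operation:

    /profile=full-ha/subsystem=modcluster/mod-cluster-config=configuration:read-attribute(name=proxies)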

The Apache HTTP Server balancer no longer advertises its presence to worker nodes and UDP multicast is no longer used.

22.6.3. Configure a mod_cluster Worker Node

A mod_cluster worker node consists of a JBoss EAP server. This server can be a standalone server or part of a server group in a managed domain. A separate process within JBoss EAP, called the master, manages all of the worker nodes of the cluster.

Worker nodes in a managed domain share an identical configuration across a server group. Worker nodes running as standalone servers are configured individually. The configuration steps are otherwise identical.

  • A standalone server must be started using the standalone-ha.xml or standalone-full-ha.xml configuration file (see the example after this list).
  • A server group in a managed domain must use the ha or full-ha profile, and the ha-sockets or full-ha-sockets socket binding group. JBoss EAP ships with a cluster-enabled server group called other-server-group which meets these requirements.
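For example, a standalone server can be started with an HA configuration as follows. This is a minimal sketch; EAP_HOME refers to your JBoss EAP installation directory, as elsewhere in this guide.

$ EAP_HOME/bin/standalone.sh --server-config=standalone-ha.xml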
Configure a Worker Node

The management CLI commands in this procedure assume that you are using a managed domain with the full-ha profile. If you are running a standalone server, remove the /profile=full-ha portion of the commands.

  1. Configure the network interfaces.

    The network interfaces all default to 127.0.0.1. Every physical host that hosts either a standalone server or one or more servers in a server group needs its interfaces to be configured to use its public IP address, which the other servers can see.

    Use the following management CLI commands to modify the external IP addresses for the management, public, and unsecure interfaces as appropriate for your environment. Be sure to replace EXTERNAL_IP_ADDRESS in the command with the actual external IP address of the host.

    /interface=management:write-attribute(name=inet-address,value="${jboss.bind.address.management:EXTERNAL_IP_ADDRESS}")
    /interface=public:write-attribute(name=inet-address,value="${jboss.bind.address.public:EXTERNAL_IP_ADDRESS}")
    /interface=unsecure:write-attribute(name=inet-address,value="${jboss.bind.address.unsecure:EXTERNAL_IP_ADDRESS}")

    Reload the server.

    reload
  2. Configure host names.

    Set a unique host name for each host that participates in a managed domain. This name must be unique across slaves and is used by the slave to identify itself to the cluster, so make a note of the name you use.

    1. Start the JBoss EAP slave host, using the appropriate host.xml configuration file.

      $ EAP_HOME/bin/domain.sh --host-config=host-slave.xml
    2. Use the following management CLI command to set a unique host name. This example uses slave1 as the new host name.

      /host=EXISTING_HOST_NAME:write-attribute(name=name,value=slave1)

      For more information on configuring a host name, see Configure the Name of a Host.

  3. Configure each host to connect to the domain controller.

    Note

    This step does not apply for a standalone server.

    For newly configured hosts that need to join a managed domain, you must remove the local element and add the remote element with a host attribute that points to the domain controller.

    1. Start the JBoss EAP slave host, using the appropriate host.xml configuration file.

      $ EAP_HOME/bin/domain.sh --host-config=host-slave.xml
    2. Use the following management CLI command to configure the domain controller settings.

      /host=SLAVE_HOST_NAME:write-remote-domain-controller(host=DOMAIN_CONTROLLER_IP_ADDRESS,port=${jboss.domain.master.port:9999},security-realm="ManagementRealm")

      This modifies the XML in the host-slave.xml file as follows:

      <domain-controller>
          <remote host="DOMAIN_CONTROLLER_IP_ADDRESS" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
      </domain-controller>

      For more information, see Connect to the Domain Controller.

  4. Configure authentication for each slave host.

    Each slave server needs a username and password created in the domain controller’s or standalone master’s ManagementRealm. On the domain controller or standalone master, run the EAP_HOME/bin/add-user.sh command for each host. Add a management user for each host with the username that matches the host name of the slave.

    Be sure to answer yes to the last question, that asks "Is this new user going to be used for one AS process to connect to another AS process?", so that you are provided with a secret value.

    Example add-user script output (trimmed)

    $ EAP_HOME/bin/add-user.sh
    
    What type of user do you wish to add?
     a) Management User (mgmt-users.properties)
     b) Application User (application-users.properties)
    (a): a
    
    Username : slave1
    Password : changeme
    Re-enter Password : changeme
    What groups do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[  ]:
    About to add user 'slave1' for realm 'ManagementRealm'
    Is this correct yes/no? yes
    Is this new user going to be used for one AS process to connect to another AS process?
    e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server EJB calls.
    yes/no? yes
    To represent the user add the following to the server-identities definition <secret value="SECRET_VALUE" />

    Copy the Base64-encoded secret value provided in this output (SECRET_VALUE), which may be used in the next step.

    For more information, see the Adding a User to the Master Domain Controller section of the JBoss EAP How To Configure Server Security guide.

  5. Modify the slave host’s security realm to use the new authentication.

    You can specify the password by setting the secret value in the server configuration, getting the password from the vault, or passing the password as a system property.

    • Specify the Base64-encoded password value in the server configuration file using the Management CLI.

      Use the following management CLI command to specify the secret value. Be sure to replace the SECRET_VALUE with the secret value returned from the add-user output from the previous step.

      /host=SLAVE_HOST_NAME/core-service=management/security-realm=ManagementRealm/server-identity=secret:add(value="SECRET_VALUE")

      You will need to reload the server. The --host argument is not applicable for a standalone server.

      reload --host=HOST_NAME

      For more information, see the Configuring the Slave Controllers to Use the Credential section of the JBoss EAP How To Configure Server Security guide.

    • Configure the host to get the password from the vault.

      1. Use the EAP_HOME/bin/vault.sh script to generate a masked password. It will generate a string in the format VAULT::secret::password::VAULT_SECRET_VALUE, for example:

        VAULT::secret::password::ODVmYmJjNGMtZDU2ZC00YmNlLWE4ODMtZjQ1NWNmNDU4ZDc1TElORV9CUkVBS3ZhdWx0.
        Note

        When creating a password in the vault, it must be specified in plain text, not Base64-encoded.

      2. Use the following management CLI command to specify the secret value. Be sure to replace the VAULT_SECRET_VALUE with the masked password generated in the previous step.

        /host=master/core-service=management/security-realm=ManagementRealm/server-identity=secret:add(value="${VAULT::secret::password::VAULT_SECRET_VALUE}")

        You will need to reload the server. The --host argument is not applicable for a standalone server.

        reload --host=HOST_NAME

        For more information, see the Password Vault section of the JBoss EAP How To Configure Server Security guide.

    • Specify the password as a system property.

      The following examples use server.identity.password as the system property name for the password.

      1. Specify the system property for the password in the server configuration file.

        Use the following management CLI command to configure the secret identity to use the system property.

        /host=SLAVE_HOST_NAME/core-service=management/security-realm=ManagementRealm/server-identity=secret:add(value="${server.identity.password}")

        You will need to reload the server. The --host argument is not applicable for a standalone server.

        reload --host=master
      2. Set the password for the system property when starting the server.

        You can set the server.identity.password system property by passing it as a command line argument or in a properties file.

        1. Pass in as a plain text command line argument.

          Start the server and pass in the server.identity.password property.

          $ EAP_HOME/bin/domain.sh --host-config=host-slave.xml -Dserver.identity.password=changeme
          Warning

          The password must be entered in plain text and will be visible to anyone who issues a ps -ef command.

        2. Set the property in a properties file.

          Create a properties file and add the key/value pair to a properties file, for example:

          server.identity.password=changeme
          Warning

          The password is in plain text and will be visible to anyone who has access to this properties file.

          Start the server with the command line arguments.

          $ EAP_HOME/bin/domain.sh --host-config=host-slave.xml --properties=PATH_TO_PROPERTIES_FILE
  6. Restart the server.

    The slave will now authenticate to the master using its host name as the username and the encrypted string as its password.

Your standalone server, or servers within a server group of a managed domain, are now configured as mod_cluster worker nodes. If you deploy a clustered application, its sessions are replicated to all cluster nodes for failover, and it can accept requests from an external web server or load balancer. Each node of the cluster discovers the other nodes using automatic discovery, by default.
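You can verify that a worker node has registered with its proxies from the management CLI. This is a sketch using the read-proxies-configuration runtime operation of the modcluster subsystem, addressed through a specific host and server in a managed domain:

/host=master/server=server-one/subsystem=modcluster:read-proxies-configuration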

22.6.4. Configure the mod_cluster fail_on_status Parameter

The fail_on_status parameter lists those HTTP status codes which, when returned by a worker node in a cluster, will mark that node as having failed. The load balancer will then send future requests to another worker node in the cluster. The failed worker node will remain in a NOTOK state until it sends the load balancer a STATUS message.

Note

The fail_on_status parameter cannot be used with HP-UX v11.3 hpws httpd B.2.2.15.15 from Hewlett-Packard as it does not support the feature.

The fail_on_status parameter must be configured in the httpd configuration file of your load balancer. Multiple HTTP status codes for fail_on_status can be specified as a comma-separated list. The following example specifies the HTTP status codes 203 and 204 for fail_on_status.

Example fail_on_status Configuration

ProxyPass / balancer://MyBalancer stickysession=JSESSIONID|jsessionid nofailover=on failonstatus=203,204
ProxyPassReverse / balancer://MyBalancer
ProxyPreserveHost on

22.6.5. Migrate Traffic Between Clusters

After creating a new cluster using JBoss EAP, you can migrate traffic from the previous cluster to the new one as part of an upgrade process. This task describes a strategy for migrating this traffic with minimal outage or downtime.

  • A new cluster setup (we will call this cluster ClusterNEW).
  • An old cluster setup that is being made redundant (we will call this cluster ClusterOLD).
Upgrade Process for Clusters - Load-Balancing Groups
  1. Set up your new cluster using the steps described in the prerequisites.
  2. In both ClusterNEW and ClusterOLD, ensure that the configuration option sticky-session is set to true (this option is set to true by default). Enabling this option means that requests belonging to a session established on a cluster node in either cluster will continue to be routed to that same cluster node.

    /profile=full-ha/subsystem=modcluster/mod-cluster-config=configuration/:write-attribute(name=sticky-session,value=true)
  3. Set the load-balancing-group to ClusterOLD, assuming that all the cluster nodes in ClusterOLD are members of the ClusterOLD load-balancing group.

    /profile=full-ha/subsystem=modcluster/mod-cluster-config=configuration/:write-attribute(name=load-balancing-group,value=ClusterOLD)
    <subsystem xmlns="urn:jboss:domain:modcluster:2.0">
      <mod-cluster-config load-balancing-group="ClusterOLD" advertise-socket="modcluster" connector="ajp">
        <dynamic-load-provider>
          <load-metric type="cpu"/>
        </dynamic-load-provider>
      </mod-cluster-config>
    </subsystem>
  4. Add the nodes in ClusterNEW to the mod_cluster configuration individually using the process described in the Configure a mod_cluster Worker Node section. Additionally, use the aforementioned procedure to set their load-balancing group to ClusterNEW.

    At this point, you can see output similar to the following shortened example on the mod_cluster-manager console:

                    mod_cluster/<version>
    
        LBGroup ClusterOLD: [Enable Nodes]   [Disable Nodes]   [Stop Nodes]
            Node node-1-jvmroute (ajp://node1.oldcluster.example:8009):
                [Enable Contexts]   [Disable Contexts]   [Stop Contexts]
                Balancer: qacluster, LBGroup: ClusterOLD, Flushpackets: Off, ..., Load: 100
                Virtual Host 1:
                    Contexts:
                        /my-deployed-application-context, Status: ENABLED Request: 0 [Disable]   [Stop]
    
            Node node-2-jvmroute (ajp://node2.oldcluster.example:8009):
                [Enable Contexts]   [Disable Contexts]   [Stop Contexts]
                Balancer: qacluster, LBGroup: ClusterOLD, Flushpackets: Off, ..., Load: 100
                Virtual Host 1:
                    Contexts:
                        /my-deployed-application-context, Status: ENABLED Request: 0 [Disable]   [Stop]
    
    
        LBGroup ClusterNEW: [Enable Nodes]   [Disable Nodes]   [Stop Nodes]
            Node node-3-jvmroute (ajp://node3.newcluster.example:8009):
                [Enable Contexts]   [Disable Contexts]   [Stop Contexts]
                Balancer: qacluster, LBGroup: ClusterNEW, Flushpackets: Off, ..., Load: 100
                Virtual Host 1:
                    Contexts:
                        /my-deployed-application-context, Status: ENABLED Request: 0 [Disable]   [Stop]
    
            Node node-4-jvmroute (ajp://node4.newcluster.example:8009):
                [Enable Contexts]   [Disable Contexts]   [Stop Contexts]
                Balancer: qacluster, LBGroup: ClusterNEW, Flushpackets: Off, ..., Load: 100
                Virtual Host 1:
                    Contexts:
                        /my-deployed-application-context, Status: ENABLED Request: 0 [Disable]   [Stop]
  5. At this point, there are old active sessions within the ClusterOLD group, and any new sessions are created within either the ClusterOLD or ClusterNEW group. Next, we want to disable the whole ClusterOLD group so that we can power down its cluster nodes without causing any errors for currently active client sessions.

    Click the Disable Nodes link for LBGroup ClusterOLD in the mod_cluster-manager web console.

    From this point on, only requests belonging to already established sessions will be routed to members of the ClusterOLD load-balancing group. Any new client sessions will be created in the ClusterNEW group only. As soon as there are no active sessions within the ClusterOLD group, we can safely remove its members.

    Note

    Using Stop Nodes would command the load balancer to stop routing any requests to this domain immediately. This forces a failover to another load-balancing group, which causes session data loss for clients, assuming there is no session replication between ClusterNEW and ClusterOLD.

Default Load-Balancing Group

If the current ClusterOLD setup does not contain any load-balancing group settings (the mod_cluster-manager console shows an empty LBGroup:), you can still take advantage of disabling the ClusterOLD nodes. In this case, click Disable Contexts for each of the ClusterOLD nodes. Contexts of these nodes will be disabled, and once there are no active sessions present, they will be ready for removal. New client sessions will be created only on nodes with enabled contexts, presumably ClusterNEW members in this example.

Using the Management CLI

In addition to using the mod_cluster-manager web console, you can use the JBoss EAP management CLI to stop or disable a particular context.

Stop a Context

/host=master/server=server-one/subsystem=modcluster:stop-context(context=/my-deployed-application-context, virtualhost=default-host, waittime=0)

Stopping a context with waittime set to 0, meaning no timeout, instructs the balancer to stop routing any request to it immediately, which forces failover to another available context.

If you set a timeout value using the waittime argument, no new sessions are created on this context, but existing sessions will continue to be directed to this node until they complete or the specified timeout has elapsed. The waittime argument defaults to 10 seconds.
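For example, the following sketch of the same command gives established sessions up to 30 seconds to complete before the context is stopped:

/host=master/server=server-one/subsystem=modcluster:stop-context(context=/my-deployed-application-context, virtualhost=default-host, waittime=30)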

Disable a Context

/host=master/server=server-one/subsystem=modcluster:disable-context(context=/my-deployed-application-context, virtualhost=default-host)

Disabling a context tells the balancer that no new sessions should be created on this context.
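If you later want the balancer to resume creating new sessions on the context, you can enable it again, a sketch using the corresponding enable-context operation:

/host=master/server=server-one/subsystem=modcluster:enable-context(context=/my-deployed-application-context, virtualhost=default-host)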

22.7. Apache mod_jk HTTP Connector

Apache mod_jk is an HTTP connector that is provided for customers who need it for compatibility purposes.

JBoss EAP can accept workloads from an Apache HTTP proxy server. The proxy server accepts client requests from the web front end, and passes the work to participating JBoss EAP servers. If sticky sessions are enabled, the same client request always goes to the same JBoss EAP server, unless the server is unavailable.

mod_jk communicates over the AJP 1.3 protocol. Other protocols can be used with mod_cluster or mod_proxy. See the Overview of HTTP Connectors for more information.

Note

mod_cluster is a more advanced load balancer than mod_jk and is the recommended HTTP connector. mod_cluster provides all of the functionality of mod_jk, plus additional features. Unlike the JBoss EAP mod_cluster HTTP connector, an Apache mod_jk HTTP connector does not know the status of deployments on servers or server groups, and cannot adapt where it sends its work accordingly.

See the Apache mod_jk documentation for more information.

22.7.1. Configure mod_jk in Apache HTTP Server

The mod_jk module (mod_jk.so) is already included when installing JBoss Core Services Apache HTTP Server or using JBoss Web Server, however, it is not loaded by default.

Note

Apache HTTP Server is no longer distributed with JBoss Web Server as of version 3.1.0.

Use the following steps to load and configure mod_jk in Apache HTTP Server. Note that these steps assume that you have already navigated to the httpd/ directory for Apache HTTP Server, which will vary depending on your platform. For more information, see the installation instructions for your platform in the JBoss Core Services Apache HTTP Server Installation Guide.

Note

Red Hat customers can also use the Load Balancer Configuration Tool on the Red Hat Customer Portal to quickly generate optimal configuration templates for mod_jk and other connectors. Note that you must be logged in to access this tool.

  1. Configure the mod_jk module.

    Note

    A sample mod_jk configuration file is provided at conf.d/mod_jk.conf.sample. You can use this sample instead of creating your own file by removing the .sample extension and modifying its contents as needed.

    Create a new file called conf.d/mod_jk.conf. Add the following configuration to the file, making sure to modify the contents to suit your needs.

    # Load mod_jk module
    # Specify the filename of the mod_jk lib
    LoadModule jk_module modules/mod_jk.so
    
    # Where to find workers.properties
    JkWorkersFile conf.d/workers.properties
    
    # Where to put jk logs
    JkLogFile logs/mod_jk.log
    
    # Set the jk log level [debug/error/info]
    JkLogLevel info
    
    # Select the log format
    JkLogStampFormat  "[%a %b %d %H:%M:%S %Y]"
    
    # JkOptions indicates to send SSL KEY SIZE
    JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
    
    # JkRequestLogFormat
    JkRequestLogFormat "%w %V %T"
    
    # Mount your applications
    JkMount /application/* loadbalancer
    
    # Add shared memory.
    # This directive is present with 1.2.10 and
    # later versions of mod_jk, and is needed for
    # for load balancing to work properly
    JkShmFile logs/jk.shm
    
    # Add jkstatus for managing runtime data
    <Location /jkstatus/>
        JkMount status
        Require ip 127.0.0.1
    </Location>
    Note

    The JkMount directive specifies which URLs Apache HTTP Server must forward to the mod_jk module. Based on the directive’s configuration, mod_jk sends the received URL to the correct workers. To serve static content directly and only use the load balancer for Java applications, the URL path must be /application/*. To use mod_jk as a load balancer, use the value /* to forward all URLs to mod_jk.

    Aside from general mod_jk configuration, this file specifies to load the mod_jk.so module, and defines where to find the workers.properties file.

  2. Configure the mod_jk worker nodes.

    Note

    A sample workers configuration file is provided at conf.d/workers.properties.sample. You can use this sample instead of creating your own file by removing the .sample extension and modifying its contents as needed.

    Create a new file called conf.d/workers.properties. Add the following configuration to the file, making sure to modify the contents to suit your needs.

    # Define list of workers that will be used
    # for mapping requests
    worker.list=loadbalancer,status
    
    # Define Node1
    # modify the host as your host IP or DNS name.
    worker.node1.port=8009
    worker.node1.host=node1.mydomain.com
    worker.node1.type=ajp13
    worker.node1.ping_mode=A
    worker.node1.lbfactor=1
    
    # Define Node2
    # modify the host as your host IP or DNS name.
    worker.node2.port=8009
    worker.node2.host=node2.mydomain.com
    worker.node2.type=ajp13
    worker.node2.ping_mode=A
    worker.node2.lbfactor=1
    
    # Load-balancing behavior
    worker.loadbalancer.type=lb
    worker.loadbalancer.balance_workers=node1,node2
    worker.loadbalancer.sticky_session=1
    
    # Status worker for managing load balancer
    worker.status.type=status

    For details on the syntax of the mod_jk workers.properties file and other advanced configuration options, see mod_jk Worker Properties.

  3. Optionally, specify a JkMountFile directive.

    In addition to the JkMount directive in mod_jk.conf, you can specify a file which contains multiple URL patterns to be forwarded to mod_jk.

    1. Create a uriworkermap.properties file.

      Note

      A sample URI worker map configuration file is provided at conf.d/uriworkermap.properties.sample. You can use this sample instead of creating your own file by removing the .sample extension and modifying its contents as needed.

      Create a new file called conf.d/uriworkermap.properties. Add a line for each URL pattern to be matched, for example:

      # Simple worker configuration file
      /*=loadbalancer
    2. Update the configuration to point to the uriworkermap.properties file.

      Append the following to conf.d/mod_jk.conf.

      # Use external file for mount points.
      # It will be checked for updates each 60 seconds.
      # The format of the file is: /url=worker
      # /examples/*=loadbalancer
      JkMountFile conf.d/uriworkermap.properties

For more details on configuring mod_jk, see the Configuring Apache HTTP Server to Load mod_jk section of the JBoss Web Server HTTP Connectors and Load Balancing Guide.

22.7.2. Configure JBoss EAP to Communicate with mod_jk

The JBoss EAP undertow subsystem needs to specify a listener in order to accept requests from and send replies back to an external web server. Since mod_jk uses the AJP protocol, an AJP listener must be configured.

If you are using one of the default high availability configurations (ha or full-ha), an AJP listener is already configured.
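If you are using a configuration that does not include an AJP listener, you can add one with the following management CLI commands, repeated here from Accepting Requests from External Web Servers:

/socket-binding-group=standard-sockets/socket-binding=ajp:add(port=8009)

/subsystem=undertow/server=default-server/ajp-listener=ajp:add(socket-binding=ajp)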

For instructions, see Accepting Requests From External Web Servers.

22.8. Apache mod_proxy HTTP Connector

Apache mod_proxy is an HTTP connector that supports connections over AJP, HTTP, and HTTPS protocols. mod_proxy can be configured in load-balanced or non-load-balanced configurations, and it supports the notion of sticky sessions.

The mod_proxy module requires JBoss EAP to have the HTTP, HTTPS or AJP listener configured in the undertow subsystem, depending on which protocol you plan to use.

Note

mod_cluster is a more advanced load balancer than mod_proxy and is the recommended HTTP connector. mod_cluster provides all of the functionality of mod_proxy, plus additional features. Unlike the JBoss EAP mod_cluster HTTP connector, an Apache mod_proxy HTTP connector does not know the status of deployments on servers or server groups, and cannot adapt where it sends its work accordingly.

See the Apache mod_proxy documentation for more information.

22.8.1. Configure mod_proxy in Apache HTTP Server

The mod_proxy modules are already included when installing JBoss Core Services Apache HTTP Server or using JBoss Web Server and are loaded by default.

Note

Apache HTTP Server is no longer distributed with JBoss Web Server as of version 3.1.0.

See the appropriate section below to configure a basic load-balancing or non-load-balancing proxy. These steps assume that you have already navigated to the httpd/ directory for Apache HTTP Server, which will vary depending on your platform. For more information, see the installation instructions for your platform in the JBoss Core Services Apache HTTP Server Installation Guide. These steps also assume that the necessary HTTP listener has already been configured in the JBoss EAP undertow subsystem.

Note

Red Hat customers can also use the Load Balancer Configuration Tool on the Red Hat Customer Portal to quickly generate optimal configuration templates for mod_proxy and other connectors. Note that you must be logged in to access this tool.

Add a Non-load-balancing Proxy

Add the following configuration to your conf/httpd.conf file, directly beneath any other <VirtualHost> directives you may have. Replace the values with ones appropriate to your setup.

<VirtualHost *:80>
# Your domain name
ServerName YOUR_DOMAIN_NAME

ProxyPreserveHost On

# The IP and port of JBoss
# These represent the default values, if your httpd is on the same host
# as your JBoss managed domain or server

ProxyPass / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/

# The location of the HTML files, and access control information
DocumentRoot /var/www
<Directory /var/www>
Options -Indexes
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
Add a Load-balancing Proxy
Note

The default Apache HTTP Server configuration has the mod_proxy_balancer.so module disabled, as it is incompatible with mod_cluster. In order to complete this task, you will need to load this module and disable the mod_cluster modules.
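For example, the LoadModule directives in your configuration might be adjusted as follows. This is a sketch; the module file names assume the default modules/ directory layout, and mod_lbmethod_byrequests provides the default load-balancing method in Apache HTTP Server 2.4.

# Disable the mod_cluster modules
# LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
# LoadModule cluster_slotmem_module modules/mod_cluster_slotmem.so
# LoadModule manager_module modules/mod_manager.so
# LoadModule advertise_module modules/mod_advertise.so

# Load mod_proxy_balancer and a load-balancing method module
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so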

To use mod_proxy as a load balancer, and send work to multiple JBoss EAP instances, add the following configuration to your conf/httpd.conf file. The example IP addresses are fictional. Replace them with the appropriate values for your environment.

<Proxy balancer://mycluster>

Order deny,allow
Allow from all

# Add each JBoss Enterprise Application Server by IP address and port.
# If the route values are unique like this, one node will not fail over to the other.
BalancerMember http://192.168.1.1:8080 route=node1
BalancerMember http://192.168.1.2:8180 route=node2
</Proxy>

<VirtualHost *:80>
 # Your domain name
 ServerName YOUR_DOMAIN_NAME

 ProxyPreserveHost On
 ProxyPass / balancer://mycluster/

 # The location of the HTML files, and access control information
 DocumentRoot /var/www
 <Directory /var/www>
  Options -Indexes
  Order allow,deny
  Allow from all
 </Directory>

</VirtualHost>

The examples above all communicate using the HTTP protocol. You can use AJP or HTTPS protocols instead, if you load the appropriate mod_proxy modules. See the Apache mod_proxy documentation for more details.

Enable Sticky Sessions

A sticky session means that if a client request originally goes to a specific JBoss EAP worker, all future requests will be sent to the same worker, unless it becomes unavailable. This is almost always the recommended behavior.

To enable sticky sessions for mod_proxy, add the stickysession parameter to the ProxyPass statement.

ProxyPass / balancer://mycluster stickysession=JSESSIONID

You can specify additional parameters to the ProxyPass statement, such as lbmethod and nofailover. See the Apache mod_proxy documentation for more information on the available parameters.
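For instance, the following sketch combines several of these parameters; lbmethod and nofailover are standard mod_proxy_balancer parameters, and the values shown are illustrative:

ProxyPass / balancer://mycluster stickysession=JSESSIONID lbmethod=bytraffic nofailover=Off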

22.8.2. Configure JBoss EAP to Communicate with mod_proxy

The JBoss EAP undertow subsystem needs to specify a listener in order to accept requests from and send replies back to an external web server. Depending on the protocol that you will be using, you may need to configure a listener.

An HTTP listener is configured in the JBoss EAP default configuration. If you are using one of the default high availability configurations (ha or full-ha), an AJP listener is also preconfigured.
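You can confirm which listeners are present by reading the default server configuration, as shown earlier in this chapter:

/subsystem=undertow/server=default-server:read-resource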

For instructions, see Accepting Requests From External Web Servers.

22.9. Microsoft ISAPI Connector

The Internet Server API (ISAPI) is a set of APIs used to write OLE Server extensions and filters for web servers such as Microsoft’s Internet Information Services (IIS). isapi_redirect.dll is an extension of mod_jk adjusted to IIS. isapi_redirect.dll enables you to configure JBoss EAP instances as worker nodes with IIS as a load balancer.

Note

See JBoss EAP supported configurations for information on the supported configurations of Windows Server and IIS.

22.9.1. Configure Microsoft IIS to Use the ISAPI Connector

Download the ISAPI connector from the Red Hat Customer Portal:

  1. Open a browser and log in to the Red Hat Customer Portal JBoss Software Downloads page.
  2. Select Web Connectors in the Product drop-down menu.
  3. Select the latest JBoss Core Services version from the Version drop-down menu.
  4. Find the Red Hat JBoss Core Services ISAPI Connector in the list, and click the Download link.
  5. Extract the archive and copy the contents of the sbin directory to a location on your server. The instructions below assume that the contents were copied to C:\connectors\.

To configure the IIS Redirector Using the IIS Manager (IIS 7):

  1. Open the IIS manager by clicking Start → Run, and typing inetmgr.
  2. In the tree view pane at the left, expand IIS 7.
  3. Double-click ISAPI and CGI Registrations to open it in a new window.
  4. In the Actions pane, click Add. The Add ISAPI or CGI Restriction window opens.
  5. Specify the following values:

    • ISAPI or CGI Path: C:\connectors\isapi_redirect.dll
    • Description: jboss
    • Allow extension path to execute: select the check box.
  6. Click OK to close the Add ISAPI or CGI Restriction window.
  7. Define a JBoss Native virtual directory

    • Right-click Default Web Site, and click Add Virtual Directory. The Add Virtual Directory window opens.
    • Specify the following values to add a virtual directory:

      • Alias: jboss
      • Physical Path: C:\connectors\
    • Click OK to save the values and close the Add Virtual Directory window.
  8. Define a JBoss Native ISAPI Redirect Filter

    • In the tree view pane, expand Sites → Default Web Site.
    • Double-click ISAPI Filters. The ISAPI Filters Features view appears.
    • In the Actions pane, click Add. The Add ISAPI Filter window appears.
    • Specify the following values in the Add ISAPI Filter window:

      • Filter name: jboss
      • Executable: C:\connectors\isapi_redirect.dll
    • Click OK to save the values and close the Add ISAPI Filter window.
  9. Enable the ISAPI-dll handler

    • Double-click the IIS 7 item in the tree view pane. The IIS 7 Home Features View opens.
    • Double-click Handler Mappings. The Handler Mappings Features View appears.
    • In the Group by combo box, select State. The Handler Mappings are displayed in Enabled and Disabled Groups.
    • Find ISAPI-dll. If it is in the Disabled group, right-click it and select Edit Feature Permissions.
    • Enable the following permissions:

      • Read
      • Script
      • Execute
    • Click OK to save the values, and close the Edit Feature Permissions window.

Microsoft IIS is now configured to use the ISAPI connector.

22.9.2. Configure the ISAPI Connector to Send Client Requests to JBoss EAP

This task configures a group of JBoss EAP servers to accept requests from the ISAPI connector. It does not include configuration for load-balancing or high-availability failover.

This configuration is done on the IIS server, and assumes that you already have JBoss EAP configured to accept requests from an external web server. You also need full administrator access to the IIS server and to have configured IIS to use the ISAPI connector.

Create Property Files and Set Up Redirection
  1. Create a directory to store logs, property files, and lock files.

    The rest of this procedure assumes that you are using the directory C:\connectors\ for this purpose. If you use a different directory, modify the instructions accordingly.

  2. Create the isapi_redirect.properties file.

    Create a new file called C:\connectors\isapi_redirect.properties. Copy the following contents into the file.

    # Configuration file for the ISAPI Connector
    # Extension uri definition
    extension_uri=/jboss/isapi_redirect.dll
    
    # Full path to the log file for the ISAPI Connector
    log_file=c:\connectors\isapi_redirect.log
    
    # Log level (debug, info, warn, error or trace)
    log_level=info
    
    # Full path to the workers.properties file
    worker_file=c:\connectors\workers.properties
    
    # Full path to the uriworkermap.properties file
    worker_mount_file=c:\connectors\uriworkermap.properties
    
    #Full path to the rewrite.properties file
    rewrite_rule_file=c:\connectors\rewrite.properties

    If you do not want to use a rewrite.properties file, comment out the last line by placing a # character at the beginning of the line.

  3. Create the uriworkermap.properties file

    The uriworkermap.properties file contains mappings between deployed application URLs and which worker handles requests to them. The following example file shows the syntax of the file. Place your uriworkermap.properties file into C:\connectors\.

    # images and css files for path /status are provided by worker01
    /status=worker01
    /images/*=worker01
    /css/*=worker01
    
    # Path /web-console is provided by worker02
    # IIS (customized) error page is used for http errors with number greater than or equal to 400
    # css files are provided by worker01
    /web-console/*=worker02;use_server_errors=400
    /web-console/css/*=worker01
    
    # Example of exclusion from mapping, logo.gif won't be displayed
    # /web-console/images/logo.gif=*
    
    # Requests to /app-01 or /app-01/something will be routed to worker01
    /app-01|/*=worker01
    
    # Requests to /app-02 or /app-02/something will be routed to worker02
    /app-02|/*=worker02
  4. Create the workers.properties file.

    The workers.properties file contains mapping definitions between worker labels and server instances. This file follows the syntax of the same file used for Apache mod_jk worker properties configuration.

    The following is an example of a workers.properties file. The worker names, worker01 and worker02, must match the instance-id configured in the JBoss EAP undertow subsystem.

    Place this file into the C:\connectors\ directory.

    # An entry that lists all the workers defined
    worker.list=worker01, worker02
    
    # Entries that define the host and port associated with these workers
    
    # First JBoss EAP server definition, port 8009 is standard port for AJP in EAP
    worker.worker01.host=127.0.0.1
    worker.worker01.port=8009
    worker.worker01.type=ajp13
    
    # Second JBoss EAP server definition
    worker.worker02.host=127.0.0.100
    worker.worker02.port=8009
    worker.worker02.type=ajp13
  5. Create the rewrite.properties file.

    The rewrite.properties file contains simple URL rewriting rules for specific applications. The rewritten path is specified using name-value pairs, as shown in the example below. Place this file into the C:\connectors\ directory.

    #Simple example
    # Images are accessible under abc path
    /app-01/abc/=/app-01/images/
  6. Restart your IIS server by using the net stop and net start commands.

    C:\> net stop was /Y
    C:\> net start w3svc

The IIS server is configured to send client requests to the specific JBoss EAP servers you have configured, on an application-specific basis.

22.9.3. Configure the ISAPI Connector to Balance Client Requests Across Multiple JBoss EAP Servers

This configuration balances client requests across the JBoss EAP servers you specify. This configuration is done on the IIS server, and assumes that you already have JBoss EAP configured to accept requests from an external web server. You also need full administrator access to the IIS server and to have configured IIS to use the ISAPI connector.

Balance Client Requests Across Multiple Servers
  1. Create a directory to store logs, property files, and lock files.

    The rest of this procedure assumes that you are using the directory C:\connectors\ for this purpose. If you use a different directory, modify the instructions accordingly.

  2. Create the isapi_redirect.properties file.

    Create a new file called C:\connectors\isapi_redirect.properties. Copy the following contents into the file.

    # Configuration file for the ISAPI Connector
    # Extension uri definition
    extension_uri=/jboss/isapi_redirect.dll
    
    # Full path to the log file for the ISAPI Connector
    log_file=c:\connectors\isapi_redirect.log
    
    # Log level (debug, info, warn, error or trace)
    log_level=info
    
    # Full path to the workers.properties file
    worker_file=c:\connectors\workers.properties
    
    # Full path to the uriworkermap.properties file
    worker_mount_file=c:\connectors\uriworkermap.properties
    
    #OPTIONAL: Full path to the rewrite.properties file
    rewrite_rule_file=c:\connectors\rewrite.properties

    If you do not want to use a rewrite.properties file, comment out the last line by placing a # character at the beginning of the line.

  3. Create the uriworkermap.properties file.

    The uriworkermap.properties file contains mappings between deployed application URLs and which worker handles requests to them. The following example file shows the syntax of the file, with a load-balanced configuration. The wildcard (*) character sends all requests for various URL sub-directories to the load balancer called router. The configuration of the load balancer is covered in the next step.

    Place your uriworkermap.properties file into C:\connectors\.

    # images, css files, path /status and /web-console will be
    # provided by nodes defined in the load-balancer called "router"
    /css/*=router
    /images/*=router
    /status=router
    /web-console|/*=router
    
    # Example of exclusion from mapping, logo.gif won't be displayed
    # /web-console/images/logo.gif=*
    
    # Requests to /app-01 and /app-02 will be routed to nodes defined
    # in the load-balancer called "router"
    /app-01|/*=router
    /app-02|/*=router
    
    # mapping for management console, nodes in cluster can be enabled or disabled here
    /jkmanager|/*=status
  4. Create the workers.properties file.

    The workers.properties file contains mapping definitions between worker labels and server instances. This file follows the syntax of the same file used for Apache mod_jk worker properties configuration.

    The following is an example of a workers.properties file. The load balancer is configured near the end of the file, to comprise workers worker01 and worker02. These worker names must match the instance-id configured in the JBoss EAP undertow subsystem.

    Place this file into the C:\connectors\ directory.

    # The advanced router LB worker
    worker.list=router,status
    
    # First EAP server definition, port 8009 is standard port for AJP in EAP
    #
    # lbfactor defines how much the worker will be used.
    # The higher the number, the more requests are served
    # lbfactor is useful when one machine is more powerful
    # ping_mode=A – all possible probes will be used to determine that
    # connections are still working
    
    worker.worker01.port=8009
    worker.worker01.host=127.0.0.1
    worker.worker01.type=ajp13
    worker.worker01.ping_mode=A
    worker.worker01.socket_timeout=10
    worker.worker01.lbfactor=3
    
    # Second EAP server definition
    worker.worker02.port=8009
    worker.worker02.host=127.0.0.100
    worker.worker02.type=ajp13
    worker.worker02.ping_mode=A
    worker.worker02.socket_timeout=10
    worker.worker02.lbfactor=1
    
    # Define the LB worker
    worker.router.type=lb
    worker.router.balance_workers=worker01,worker02
    
    # Define the status worker for jkmanager
    worker.status.type=status

  5. Create the rewrite.properties file.

    The rewrite.properties file contains simple URL rewriting rules for specific applications. The rewritten path is specified using name-value pairs, as shown in the example below. Place this file into the C:\connectors\ directory.

    #Simple example
    # Images are accessible under abc path
    /app-01/abc/=/app-01/images/
  6. Restart your IIS server by using the net stop and net start commands.

    C:\> net stop was /Y
    C:\> net start w3svc

The IIS server is configured to send client requests to the JBoss EAP servers referenced in the workers.properties file, spreading the load across the servers in a 1:3 ratio. This ratio is derived from the load-balancing factor (lbfactor) assigned to each server.

22.10. Oracle NSAPI Connector

The Netscape Server API (NSAPI) is an API provided by Oracle iPlanet Web Server, formerly Netscape Web Server, for implementing extensions to the server. These extensions are known as server plugins. The NSAPI connector is used in nsapi_redirector.so, which is an extension of mod_jk adjusted to Oracle iPlanet Web Server. The NSAPI connector enables you to configure JBoss EAP instances as worker nodes with Oracle iPlanet Web Server as a load balancer.

Note

See the JBoss EAP supported configurations for information on the supported configurations of Solaris and Oracle iPlanet Web Server.

22.10.1. Configure Oracle iPlanet Web Server to use the NSAPI Connector

Prerequisites

  • JBoss EAP is installed and configured on each server that will serve as a worker.

Download the NSAPI connector from the Red Hat Customer Portal:

  1. Open a browser and log in to the Red Hat Customer Portal JBoss Software Downloads page.
  2. Select Web Connectors in the Product drop-down menu.
  3. Select the latest JBoss Core Services version from the Version drop-down menu.
  4. Find the Red Hat JBoss Core Services NSAPI Connector in the list, ensuring that you select the correct platform and architecture for your system, and click the Download link.
  5. Extract the nsapi_redirector.so file, which is located in either the lib/ or the lib64/ directory, into either the IPLANET_CONFIG/lib/ or the IPLANET_CONFIG/lib64/ directory.

Set up the NSAPI Connector:

Note

In these instructions, IPLANET_CONFIG refers to the Oracle iPlanet configuration directory, which is usually /opt/oracle/webserver7/config/. If your Oracle iPlanet configuration directory is different, modify the instructions accordingly.

  1. Disable servlet mappings.

    Open the IPLANET_CONFIG/default.web.xml file and locate the section with the heading Built In Server Mappings. Disable the mappings to the following three servlets, by wrapping them in XML comment characters (<!-- and -->).

    • default
    • invoker
    • jsp

      The following example configuration shows the disabled mappings.

      <!-- ============== Built In Servlet Mappings =============== -->
      <!-- The servlet mappings for the built in servlets defined above. -->
      <!-- The mapping for the default servlet -->
      <!--servlet-mapping>
       <servlet-name>default</servlet-name>
       <url-pattern>/</url-pattern>
      </servlet-mapping-->
      <!-- The mapping for the invoker servlet -->
      <!--servlet-mapping>
       <servlet-name>invoker</servlet-name>
       <url-pattern>/servlet/*</url-pattern>
      </servlet-mapping-->
      <!-- The mapping for the JSP servlet -->
      <!--servlet-mapping>
       <servlet-name>jsp</servlet-name>
       <url-pattern>*.jsp</url-pattern>
      </servlet-mapping-->

      Save and exit the file.

  2. Configure the iPlanet Web Server to load the NSAPI connector module.

    Add the following lines to the end of the IPLANET_CONFIG/magnus.conf file, modifying file paths to suit your configuration. These lines define the location of the nsapi_redirector.so module, as well as the workers.properties file, which lists the workers and their properties.

    Init fn="load-modules" funcs="jk_init,jk_service" shlib="/lib/nsapi_redirector.so" shlib_flags="(global|now)"
    
    Init fn="jk_init" worker_file="IPLANET_CONFIG/connectors/workers.properties" log_level="info" log_file="IPLANET_CONFIG/connectors/nsapi.log" shm_file="IPLANET_CONFIG/connectors/tmp/jk_shm"

    The configuration above is for a 32-bit architecture. If you use 64-bit Solaris, change the string lib/nsapi_redirector.so to lib64/nsapi_redirector.so.

    Save and exit the file.

  3. Configure the NSAPI connector.

    You can configure the NSAPI connector for a basic configuration, with no load balancing, or a load-balancing configuration. Choose one of the following options, after which your configuration will be complete.

22.10.2. Configure the NSAPI Connector to Send Client Requests to JBoss EAP

This task configures the NSAPI connector to redirect client requests to JBoss EAP servers with no load balancing or failover. The redirection is done on a per-deployment (and hence per-URL) basis.

Important

You must have already configured the NSAPI connector before continuing with this task.

Set Up the Basic HTTP Connector
  1. Define the URL paths to redirect to the JBoss EAP servers.

    Note

    In IPLANET_CONFIG/obj.conf, spaces are not allowed at the beginning of a line, except when the line is a continuation of the previous line.

    Edit the IPLANET_CONFIG/obj.conf file. Locate the section which starts with <Object name="default">, and add each URL pattern to match, in the format shown by the example file below. The string jknsapi refers to the HTTP connector which will be defined in the next step. The example shows the use of wildcards for pattern matching: for instance, /images(|/*) matches both /images itself and any path under /images/.

    <Object name="default">
    [...]
    NameTrans fn="assign-name" from="/status" name="jknsapi"
    NameTrans fn="assign-name" from="/images(|/*)" name="jknsapi"
    NameTrans fn="assign-name" from="/css(|/*)" name="jknsapi"
    NameTrans fn="assign-name" from="/nc(|/*)" name="jknsapi"
    NameTrans fn="assign-name" from="/jmx-console(|/*)" name="jknsapi"
    </Object>
  2. Define the worker which serves each path.

    Continue editing the IPLANET_CONFIG/obj.conf file. Directly after the closing </Object> tag of the section you just finished editing, add the following new section:

    <Object name="jknsapi">
    ObjectType fn=force-type type=text/plain
    Service fn="jk_service" worker="worker01" path="/status"
    Service fn="jk_service" worker="worker02" path="/nc(/*)"
    Service fn="jk_service" worker="worker01"
    </Object>

    The example above redirects requests for the URL path /status to the worker called worker01, and all URL paths beneath /nc/ to the worker called worker02. The third line indicates that all URLs assigned to the jknsapi object which are not matched by the previous lines are sent to worker01.

    Save and exit the file.

  3. Define the workers and their attributes.

    Create a file called workers.properties in the IPLANET_CONFIG/connectors/ directory. Paste the following contents into the file, and modify them to suit your environment.

    # An entry that lists all the workers defined
    worker.list=worker01, worker02
    
    # Entries that define the host and port associated with these workers
    worker.worker01.host=127.0.0.1
    worker.worker01.port=8009
    worker.worker01.type=ajp13
    
    worker.worker02.host=127.0.0.100
    worker.worker02.port=8009
    worker.worker02.type=ajp13

    The workers.properties file uses the same syntax as Apache mod_jk.

    Save and exit the file.
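
    Each worker connects to a JBoss EAP server over AJP, so every target server must expose an AJP listener on the configured port (the ajp socket binding defaults to 8009). If a target server does not already define an AJP listener in the undertow subsystem, you can add one with the management CLI. The following is a minimal sketch, assuming the default Undertow server name default-server and the preconfigured ajp socket binding:

    /subsystem=undertow/server=default-server/ajp-listener=ajp:add(socket-binding=ajp)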

  4. Restart the iPlanet Web Server.

    Issue the following commands to restart the iPlanet Web Server.

    IPLANET_CONFIG/../bin/stopserv
    IPLANET_CONFIG/../bin/startserv

The iPlanet Web Server now forwards client requests for the URL paths you have configured to the corresponding deployments on JBoss EAP.

22.10.3. Configure the NSAPI Connector to Balance Client Requests Across Multiple JBoss EAP Servers

This task configures the NSAPI connector to send client requests to JBoss EAP servers in a load-balancing configuration.

Important

You must have already configured the NSAPI connector before continuing with this task.

Configure the Connector for Load Balancing
  1. Define the URL paths to redirect to the JBoss EAP servers.

    Note

    In IPLANET_CONFIG/obj.conf, spaces are not allowed at the beginning of a line, except when the line is a continuation of the previous line.

    Edit the IPLANET_CONFIG/obj.conf file. Locate the section which starts with <Object name="default">, and add each URL pattern to match, in the format shown by the example file below. The string jknsapi refers to the HTTP connector that will be defined in the next step. The example shows the use of wildcards for pattern matching.

    <Object name="default">
    [...]
    NameTrans fn="assign-name" from="/status" name="jknsapi"
    NameTrans fn="assign-name" from="/images(|/*)" name="jknsapi"
    NameTrans fn="assign-name" from="/css(|/*)" name="jknsapi"
    NameTrans fn="assign-name" from="/nc(|/*)" name="jknsapi"
    NameTrans fn="assign-name" from="/jmx-console(|/*)" name="jknsapi"
    NameTrans fn="assign-name" from="/jkmanager/*" name="jknsapi"
    </Object>
  2. Define the worker that serves each path.

    Continue editing the IPLANET_CONFIG/obj.conf file. Directly after the closing tag for the section you modified in the previous step (</Object>), add the following new section and modify it to your needs:

    <Object name="jknsapi">
    ObjectType fn=force-type type=text/plain
    Service fn="jk_service" worker="status" path="/jkmanager(/*)"
    Service fn="jk_service" worker="router"
    </Object>

    This jknsapi object defines the workers used to serve each path that was assigned the name jknsapi in the default object. Everything except URLs matching /jkmanager/* is redirected to the worker called router.

  3. Define the workers and their attributes.

    Create a file called workers.properties in the IPLANET_CONFIG/connectors/ directory. Paste the following contents into the file, and modify them to suit your environment.

    # The advanced router LB worker
    # A list of each worker
    worker.list=router,status
    
    # First JBoss EAP server
    # (worker node) definition.
    # Port 8009 is the standard port for AJP
    #
    
    worker.worker01.port=8009
    worker.worker01.host=127.0.0.1
    worker.worker01.type=ajp13
    worker.worker01.ping_mode=A
    worker.worker01.socket_timeout=10
    worker.worker01.lbfactor=3
    
    # Second JBoss EAP server
    worker.worker02.port=8009
    worker.worker02.host=127.0.0.100
    worker.worker02.type=ajp13
    worker.worker02.ping_mode=A
    worker.worker02.socket_timeout=10
    worker.worker02.lbfactor=1
    
    # Define the load-balancer called "router"
    worker.router.type=lb
    worker.router.balance_workers=worker01,worker02
    
    # Define the status worker
    worker.status.type=status

    The workers.properties file uses the same syntax as Apache mod_jk.

    Save and exit the file.
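
    With the lbfactor values above, worker01 receives roughly three times as many requests as worker02 (about 75% versus 25% of the load). If your applications keep HTTP session state that is not replicated across the cluster, you will usually also want sticky sessions, so that each client is routed back to the worker that created its session. A minimal sketch, using the standard mod_jk sticky_session property on the router worker defined above:

    # Route each client back to the worker that created its session
    worker.router.sticky_session=1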

  4. Restart the iPlanet Web Server.

    Issue the following commands to restart the iPlanet Web Server.

    IPLANET_CONFIG/../bin/stopserv
    IPLANET_CONFIG/../bin/startserv

The iPlanet Web Server now redirects requests that match the URL patterns you have configured to your JBoss EAP servers, balancing the load across them.
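
To verify that the load balancer and the status worker are responding, you can request the status path defined above. A minimal check, assuming the iPlanet Web Server listens on port 80 of localhost:

curl http://localhost/jkmanager/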