Chapter 8. Clustering

This section covers configuring Red Hat Single Sign-On to run in a cluster. There are a number of things you have to do when setting up a cluster, specifically:

  • Pick an operation mode
  • Configure a shared external database
  • Set up a load balancer
  • Supply a private network that supports IP multicast

Picking an operation mode and configuring a shared database have been discussed earlier in this guide. In this chapter we’ll discuss setting up a load balancer and supplying a private network. We’ll also discuss some issues that you need to be aware of when booting up a host in the cluster.

Note

It is possible to cluster Red Hat Single Sign-On without IP Multicast, but this topic is beyond the scope of this guide. For more information, see the JGroups chapter of the JBoss EAP Configuration Guide.

8.2. Clustering Example

Red Hat Single Sign-On comes with an out-of-the-box clustering demo that leverages domain mode. Review the Clustered Domain Example chapter for more details.

8.3. Setting Up a Load Balancer or Proxy

This section discusses a number of things you need to configure before you can put a reverse proxy or load balancer in front of your clustered Red Hat Single Sign-On deployment. It also covers configuring the built-in load balancer that was described in the Clustered Domain Example.

8.3.1. Identifying Client IP Addresses

A few features in Red Hat Single Sign-On rely on the fact that the remote address of the HTTP client connecting to the authentication server is the real IP address of the client machine. Examples include:

  • Event logs - a failed login attempt would be logged with the wrong source IP address
  • SSL required - if SSL required is set to external (the default), SSL is required for all external requests
  • Authentication flows - a custom authentication flow that uses the IP address to, for example, show OTP only for external requests
  • Dynamic Client Registration

This can be problematic when you have a reverse proxy or load balancer in front of your Red Hat Single Sign-On authentication server. The usual setup is that you have a frontend proxy sitting on a public network that load balances and forwards requests to backend Red Hat Single Sign-On server instances located in a private network. There is some extra configuration you have to do in this scenario so that the actual client IP address is forwarded to and processed by the Red Hat Single Sign-On server instances. Specifically:

  • Configure your reverse proxy or load balancer to properly set the X-Forwarded-For and X-Forwarded-Proto HTTP headers.
  • Configure your reverse proxy or load balancer to preserve the original Host HTTP header.
  • Configure the authentication server to read the client’s IP address from the X-Forwarded-For header.

Configuring your proxy to generate the X-Forwarded-For and X-Forwarded-Proto HTTP headers and to preserve the original Host HTTP header is beyond the scope of this guide. Take extra precautions to ensure that the X-Forwarded-For header is set by your proxy. If your proxy isn’t configured correctly, then rogue clients can set this header themselves and trick Red Hat Single Sign-On into thinking the client is connecting from a different IP address than it actually is. This becomes really important if you are doing any black or white listing of IP addresses.
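As a point of orientation only, here is a minimal sketch of such a proxy configuration, assuming Apache HTTP Server with mod_proxy_http and mod_headers loaded and TLS terminated at the proxy; the backend address 192.168.0.5:8080 is just an example. mod_proxy_http adds the X-Forwarded-For header automatically.

ProxyPreserveHost On
# Forward the scheme the client used; mod_proxy_http adds X-Forwarded-For itself
RequestHeader set X-Forwarded-Proto "https"
ProxyPass        "/auth" "http://192.168.0.5:8080/auth"
ProxyPassReverse "/auth" "http://192.168.0.5:8080/auth"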

Beyond the proxy itself, there are a few things you need to configure on the Red Hat Single Sign-On side of things. If your proxy is forwarding requests via the HTTP protocol, then you need to configure Red Hat Single Sign-On to pull the client’s IP address from the X-Forwarded-For header rather than from the network packet. To do this, open up the profile configuration file (standalone.xml, standalone-ha.xml, or domain.xml depending on your operating mode) and look for the urn:jboss:domain:undertow:4.0 XML block.

X-Forwarded-For HTTP Config

<subsystem xmlns="urn:jboss:domain:undertow:4.0">
   <buffer-cache name="default"/>
   <server name="default-server">
      <ajp-listener name="ajp" socket-binding="ajp"/>
      <http-listener name="default" socket-binding="http" redirect-socket="https"
          proxy-address-forwarding="true"/>
      ...
   </server>
   ...
</subsystem>

Add the proxy-address-forwarding attribute to the http-listener element. Set the value to true.
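If you prefer the management CLI (jboss-cli.sh) over editing the file by hand, a sketch of the equivalent change on a standalone server, using the default listener names from the example above, would be:

$ bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=proxy-address-forwarding,value=true)
[standalone@localhost:9990 /] :reload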

If your proxy is using the AJP protocol instead of HTTP to forward requests (for example, Apache HTTPD + mod_cluster), then you have to configure things a little differently. Instead of modifying the http-listener, you need to add a filter to pull this information from the AJP packets.

X-Forwarded-For AJP Config

<subsystem xmlns="urn:jboss:domain:undertow:4.0">
     <buffer-cache name="default"/>
     <server name="default-server">
         <ajp-listener name="ajp" socket-binding="ajp"/>
         <http-listener name="default" socket-binding="http" redirect-socket="https"/>
         <host name="default-host" alias="localhost">
             ...
             <filter-ref name="proxy-peer"/>
         </host>
     </server>
        ...
     <filters>
         ...
         <filter name="proxy-peer"
                 class-name="io.undertow.server.handlers.ProxyPeerAddressHandler"
                 module="io.undertow.core" />
     </filters>
 </subsystem>
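The same filter can be added with the management CLI; the following is a sketch assuming a standalone server and the default server and host names from the example above.

$ bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] /subsystem=undertow/configuration=filter/custom-filter=proxy-peer:add(class-name=io.undertow.server.handlers.ProxyPeerAddressHandler,module=io.undertow.core)
[standalone@localhost:9990 /] /subsystem=undertow/server=default-server/host=default-host/filter-ref=proxy-peer:add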

8.3.2. Enable HTTPS/SSL with a Reverse Proxy

Assuming that your reverse proxy doesn’t use port 8443 for SSL, you also need to configure what port HTTPS traffic is redirected to.

<subsystem xmlns="urn:jboss:domain:undertow:4.0">
    ...
    <http-listener name="default" socket-binding="http"
        proxy-address-forwarding="true" redirect-socket="proxy-https"/>
    ...
</subsystem>

Add the redirect-socket attribute to the http-listener element. The value should be proxy-https, which points to a socket binding you also need to define.

Then add a new socket-binding element to the socket-binding-group element:

<socket-binding-group name="standard-sockets" default-interface="public"
    port-offset="${jboss.socket.binding.port-offset:0}">
    ...
    <socket-binding name="proxy-https" port="443"/>
    ...
</socket-binding-group>
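Both changes can also be made with the management CLI; this is a sketch assuming a standalone server and the listener and binding names used in the examples above.

$ bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] /socket-binding-group=standard-sockets/socket-binding=proxy-https:add(port=443)
[standalone@localhost:9990 /] /subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=redirect-socket,value=proxy-https)
[standalone@localhost:9990 /] :reload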

8.3.3. Verify Configuration

You can verify the reverse proxy or load balancer configuration by opening the path /auth/realms/master/.well-known/openid-configuration through the reverse proxy. For example, if the reverse proxy address is https://acme.com/ then open the URL https://acme.com/auth/realms/master/.well-known/openid-configuration. This will show a JSON document listing a number of endpoints for Red Hat Single Sign-On. Make sure the endpoints start with the address (scheme, domain, and port) of your reverse proxy or load balancer. By doing this you make sure that Red Hat Single Sign-On is using the correct endpoint.
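For example, you can fetch the document from the command line and check one of the listed endpoints; curl is assumed to be available, and authorization_endpoint is just one of the endpoints in the JSON document.

$ curl -s https://acme.com/auth/realms/master/.well-known/openid-configuration | grep '"authorization_endpoint"'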

You should also verify that Red Hat Single Sign-On sees the correct source IP address for requests. To check this, you can try to log in to the admin console with an invalid username and/or password. This should show a warning in the server log similar to this:

08:14:21,287 WARN  XNIO-1 task-45 [org.keycloak.events] type=LOGIN_ERROR, realmId=master, clientId=security-admin-console, userId=8f20d7ba-4974-4811-a695-242c8fbd1bf8, ipAddress=X.X.X.X, error=invalid_user_credentials, auth_method=openid-connect, auth_type=code, redirect_uri=http://localhost:8080/auth/admin/master/console/?redirect_fragment=%2Frealms%2Fmaster%2Fevents-settings, code_id=a3d48b67-a439-4546-b992-e93311d6493e, username=admin

Check that the value of ipAddress is the IP address of the machine you tried to log in from and not the IP address of the reverse proxy or load balancer.

8.3.4. Using the Built-In Load Balancer

This section covers configuring the built-in load balancer that is discussed in the Clustered Domain Example.

The Clustered Domain Example is only designed to run on one machine. To bring up a slave on another host, you’ll need to:

  1. Edit the domain.xml file to point to your new host slave.
  2. Copy the server distribution. You don’t need the domain.xml, host.xml, or host-master.xml files. Nor do you need the standalone/ directory. A sketch of this copy step is shown after this list.
  3. Edit the host-slave.xml file to change the bind addresses used, or override them on the command line.
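For the copy step, something like the following could be used. The distribution directory name rhsso-7.2 and the host name slave-host are assumptions for this sketch; the excluded configuration files live under domain/configuration/ in the distribution.

$ rsync -a \
    --exclude 'domain/configuration/domain.xml' \
    --exclude 'domain/configuration/host.xml' \
    --exclude 'domain/configuration/host-master.xml' \
    --exclude 'standalone/' \
    rhsso-7.2/ slave-host:/opt/rhsso-7.2/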

8.3.4.1. Register a New Host With Load Balancer

Let’s look first at registering the new host slave with the load balancer configuration in domain.xml. Open this file and go to the undertow configuration in the load-balancer profile. Add a new host definition called remote-host3 within the reverse-proxy XML block.

domain.xml reverse-proxy config

<subsystem xmlns="urn:jboss:domain:undertow:4.0">
  ...
  <handlers>
      <reverse-proxy name="lb-handler">
         <host name="host1" outbound-socket-binding="remote-host1" scheme="ajp" path="/" instance-id="myroute1"/>
         <host name="host2" outbound-socket-binding="remote-host2" scheme="ajp" path="/" instance-id="myroute2"/>
         <host name="remote-host3" outbound-socket-binding="remote-host3" scheme="ajp" path="/" instance-id="myroute3"/>
      </reverse-proxy>
  </handlers>
  ...
</subsystem>

The outbound-socket-binding is a logical name pointing to a socket-binding configured later in the domain.xml file. The instance-id attribute must also be unique to the new host, as this value is used by a cookie to enable sticky sessions when load balancing.

Next go down to the load-balancer-sockets socket-binding-group and add the outbound-socket-binding for remote-host3. This new binding needs to point to the host and port of the new host.

domain.xml outbound-socket-binding

<socket-binding-group name="load-balancer-sockets" default-interface="public">
    ...
    <outbound-socket-binding name="remote-host1">
        <remote-destination host="localhost" port="8159"/>
    </outbound-socket-binding>
    <outbound-socket-binding name="remote-host2">
        <remote-destination host="localhost" port="8259"/>
    </outbound-socket-binding>
    <outbound-socket-binding name="remote-host3">
        <remote-destination host="192.168.0.5" port="8259"/>
    </outbound-socket-binding>
</socket-binding-group>
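Both additions can also be scripted with the management CLI against the domain controller; this is a sketch assuming the profile, handler, and socket-binding-group names used in the examples above.

$ bin/jboss-cli.sh --connect
[domain@localhost:9990 /] /profile=load-balancer/subsystem=undertow/configuration=handler/reverse-proxy=lb-handler/host=remote-host3:add(outbound-socket-binding=remote-host3,scheme=ajp,path=/,instance-id=myroute3)
[domain@localhost:9990 /] /socket-binding-group=load-balancer-sockets/remote-destination-outbound-socket-binding=remote-host3:add(host=192.168.0.5,port=8259)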

8.3.4.2. Master Bind Addresses

The next thing you’ll have to do is change the public and management bind addresses for the master host. Either edit the domain.xml file as discussed in the Bind Addresses chapter or specify these bind addresses on the command line as follows:

$ domain.sh --host-config=host-master.xml -Djboss.bind.address=192.168.0.2 -Djboss.bind.address.management=192.168.0.2

8.3.4.3. Host Slave Bind Addresses

Next, you’ll have to change the public, management, and domain controller bind addresses (jboss.domain.master.address). Either edit the host-slave.xml file or specify them on the command line as follows:

$ domain.sh --host-config=host-slave.xml \
    -Djboss.bind.address=192.168.0.5 \
    -Djboss.bind.address.management=192.168.0.5 \
    -Djboss.domain.master.address=192.168.0.2

The values of jboss.bind.address and jboss.bind.address.management pertain to the host slave’s IP address. The value of jboss.domain.master.address needs to be the IP address of the domain controller, which is the management address of the master host.

8.3.5. Configuring Other Load Balancers

See the load balancing section in the JBoss EAP Configuration Guide for information on how to use other software-based load balancers.

8.4. Sticky sessions

A typical cluster deployment consists of a load balancer (reverse proxy) and two or more Red Hat Single Sign-On servers on a private network. For performance purposes, it may be useful if the load balancer forwards all requests related to a particular browser session to the same Red Hat Single Sign-On backend node.

The reason is that Red Hat Single Sign-On uses an Infinispan distributed cache under the covers to save data related to the current authentication session and user session. The Infinispan distributed caches are configured with one owner by default. That means that a particular session is saved on just one cluster node and the other nodes need to look up the session remotely if they want to access it.

For example, if an authentication session with ID 123 is saved in the Infinispan cache on node1, and node2 then needs to look up this session, it has to send a request to node1 over the network to retrieve the particular session entity.

It is beneficial if a particular session entity is always available locally, which can be achieved with the help of sticky sessions. The workflow in a cluster environment with a public frontend load balancer and two backend Red Hat Single Sign-On nodes can look like this:

  • The user sends the initial request to see the Red Hat Single Sign-On login screen.
  • This request is served by the frontend load balancer, which forwards it to some random node (for example, node1). Strictly speaking, the node doesn’t need to be random; it can be chosen according to some other criteria (client IP address, etc.). It all depends on the implementation and configuration of the underlying load balancer (reverse proxy).
  • Red Hat Single Sign-On creates an authentication session with a random ID (for example, 123) and saves it to the Infinispan cache.
  • The Infinispan distributed cache assigns the primary owner of the session based on the hash of the session ID. See the Infinispan documentation for more details around this. Let’s assume that Infinispan assigned node2 to be the owner of this session.
  • Red Hat Single Sign-On creates the cookie AUTH_SESSION_ID with a format like <session-id>.<owner-node-id>. In our example case, it will be 123.node2.
  • The response is returned to the user with the Red Hat Single Sign-On login screen and the AUTH_SESSION_ID cookie set in the browser.

From this point, it is beneficial if the load balancer forwards all subsequent requests to node2, as this is the node that owns the authentication session with ID 123 and hence Infinispan can look up this session locally. After authentication is finished, the authentication session is converted to a user session, which will also be saved on node2 because it has the same ID 123.

Sticky sessions are not mandatory for the cluster setup; however, they are good for performance for the reasons mentioned above. You need to configure your load balancer to use sticky sessions over the AUTH_SESSION_ID cookie. Exactly how to do this depends on your load balancer.

It is recommended on the Red Hat Single Sign-On side to use the system property jboss.node.name during startup, with the value corresponding to the name of your route. For example, -Djboss.node.name=node1 will use node1 to identify the route. This route will be used by the Infinispan caches and will be attached to the AUTH_SESSION_ID cookie when the node is the owner of the particular key. An example of the startup command with this system property can be seen in the Mod Cluster Example.

Typically, the route name should be the same as your backend host name, but it is not required. You can use a different route name, for example if you want to hide the host name of your Red Hat Single Sign-On server inside your private network.
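As an illustration of configuring sticky sessions over the AUTH_SESSION_ID cookie on a load balancer other than the built-in one, the following is a sketch for Apache HTTP Server with mod_proxy_balancer; the backend addresses are assumptions, and the route values must match the jboss.node.name of each backend node.

<Proxy "balancer://rhsso">
    BalancerMember "http://192.168.0.5:8080" route=node1
    BalancerMember "http://192.168.0.6:8080" route=node2
    ProxySet stickysession=AUTH_SESSION_ID
</Proxy>
ProxyPass        "/auth" "balancer://rhsso/auth"
ProxyPassReverse "/auth" "balancer://rhsso/auth"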

8.4.1. Disable adding the route

Some load balancers can be configured to add the route information by themselves instead of relying on the backend Red Hat Single Sign-On node. However, as described above, adding the route by Red Hat Single Sign-On is recommended. This improves performance, since Red Hat Single Sign-On is aware of the node that owns a particular session and can route to that node, which is not necessarily the local node.

If you prefer, you can disable adding route information to the AUTH_SESSION_ID cookie by Red Hat Single Sign-On by adding the following to the RHSSO_HOME/standalone/configuration/standalone-ha.xml file in the Red Hat Single Sign-On subsystem configuration:

<subsystem xmlns="urn:jboss:domain:keycloak-server:1.1">
  ...
    <spi name="stickySessionEncoder">
        <provider name="infinispan" enabled="true">
            <properties>
                <property name="shouldAttachRoute" value="false"/>
            </properties>
        </provider>
    </spi>

</subsystem>

8.4.2. Example cluster setup with mod_cluster

In this example, we will use mod_cluster as the load balancer. One of the key features of mod_cluster is that there is not much configuration on the load balancer side. Instead, it requires support on the backend node side. Backend nodes communicate with the load balancer through a dedicated protocol called MCMP and notify the load balancer about various events (for example, a node joined or left the cluster, a new application was deployed, etc.).

The example setup consists of one JBoss EAP 7.1 load balancer node and two Red Hat Single Sign-On nodes.

The clustering example requires multicast to be enabled on the machine’s loopback network interface. This can be done by running the following commands with root privileges (on Linux):

route add -net 224.0.0.0 netmask 240.0.0.0 dev lo
ifconfig lo multicast
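On newer Linux distributions where the route and ifconfig commands are no longer available, an equivalent using the iproute2 tools might look like this (an assumption for such systems, not part of the original example):

ip route add 224.0.0.0/4 dev lo
ip link set dev lo multicast on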

8.4.2.1. Load Balancer Configuration

Unzip the JBoss EAP 7.1 server somewhere. The assumed location is EAP_LB.

Edit the EAP_LB/standalone/configuration/standalone.xml file. In the undertow subsystem, add the mod_cluster configuration under filters like this:

<subsystem xmlns="urn:jboss:domain:undertow:4.0">
 ...
 <filters>
  ...
  <mod-cluster name="modcluster" advertise-socket-binding="modcluster"
      advertise-frequency="${modcluster.advertise-frequency:2000}"
      management-socket-binding="http" enable-http2="true"/>
 </filters>

and filter-ref under default-host like this:

<host name="default-host" alias="localhost">
    ...
    <filter-ref name="modcluster"/>
</host>

Then, under the socket-binding-group, add this binding:

<socket-binding name="modcluster" port="0"
    multicast-address="${jboss.modcluster.multicast.address:224.0.1.105}"
    multicast-port="23364"/>

Save the file and run the server:

cd $EAP_LB/bin
./standalone.sh

8.4.2.2. Backend node configuration

Unzip the Red Hat Single Sign-On server distribution to some location. The assumed location is RHSSO_NODE1.

Edit RHSSO_NODE1/standalone/configuration/standalone-ha.xml and configure the datasource against the shared database. See the Database chapter for more details.

In the undertow subsystem, add the session-config under the servlet-container element:

<servlet-container name="default">
    <session-cookie name="AUTH_SESSION_ID" http-only="true" />
    ...
</servlet-container>
Warning

Use the session-config only with the mod_cluster load balancer. In other cases, configure sticky sessions over the AUTH_SESSION_ID cookie directly in the load balancer.

Then you can configure proxy-address-forwarding as described in the Load Balancer section. Note that mod_cluster uses the AJP connector by default, so you need to configure that one.

That’s all, as mod_cluster is already configured.

The node name of the Red Hat Single Sign-On server can be detected automatically based on the hostname of the current server. However, for more fine-grained control, it is recommended to use the system property jboss.node.name to specify the node name directly. This is especially useful when you test with two backend nodes on the same physical server, for example. You can run the startup command like this:

cd $RHSSO_NODE1
./standalone.sh -c standalone-ha.xml -Djboss.socket.binding.port-offset=100 -Djboss.node.name=node1

Configure the second backend server in the same way and run it with a different port offset and node name:

cd $RHSSO_NODE2
./standalone.sh -c standalone-ha.xml -Djboss.socket.binding.port-offset=200 -Djboss.node.name=node2

Access the server on http://localhost:8080/auth. Creation of the admin user is possible only from a local address and without load balancer (proxy) access, so you first need to access the backend node directly on http://localhost:8180/auth to create the admin user.

Warning

When using mod_cluster you should always access the cluster through the load balancer and not directly through the backend node.

8.5. Multicast Network Setup

Out of the box clustering support needs IP Multicast. Multicast is a network broadcast protocol. This protocol is used at boot time to discover and join the cluster. It is also used to broadcast messages for the replication and invalidation of distributed caches used by Red Hat Single Sign-On.

The clustering subsystem for Red Hat Single Sign-On runs on the JGroups stack. Out of the box, the bind addresses for clustering are bound to a private network interface with 127.0.0.1 as the default IP address. You have to edit the standalone-ha.xml or domain.xml sections discussed in the Bind Addresses chapter.

private network config

    <interfaces>
        ...
        <interface name="private">
            <inet-address value="${jboss.bind.address.private:127.0.0.1}"/>
        </interface>
    </interfaces>
    <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
        ...
        <socket-binding name="jgroups-mping" interface="private" port="0" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
        <socket-binding name="jgroups-tcp" interface="private" port="7600"/>
        <socket-binding name="jgroups-tcp-fd" interface="private" port="57600"/>
        <socket-binding name="jgroups-udp" interface="private" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
        <socket-binding name="jgroups-udp-fd" interface="private" port="54200"/>
        <socket-binding name="modcluster" port="0" multicast-address="224.0.1.105" multicast-port="23364"/>
        ...
    </socket-binding-group>

Things you’ll want to configure are the jboss.bind.address.private and jboss.default.multicast.address properties, as well as the ports of the services on the clustering stack.
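For example, both properties can be overridden at startup without editing the file; this is a sketch for standalone clustered mode, and the addresses are placeholders for your private network.

$ bin/standalone.sh -c standalone-ha.xml \
    -Djboss.bind.address.private=192.168.0.5 \
    -Djboss.default.multicast.address=230.0.0.5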

Note

It is possible to cluster Red Hat Single Sign-On without IP Multicast, but this topic is beyond the scope of this guide. For more information, see JGroups in the JBoss EAP Configuration Guide.

8.6. Securing Cluster Communication

When cluster nodes are isolated on a private network, access to that private network is required to be able to join the cluster or to view communication in the cluster. In addition, you can also enable authentication and encryption for cluster communication. As long as your private network is secure it is not necessary to enable authentication and encryption. Red Hat Single Sign-On does not send very sensitive information on the cluster in either case.

If you want to enable authentication and encryption for clustering communication see Securing a Cluster in the JBoss EAP Configuration Guide.

8.7. Serialized Cluster Startup

Red Hat Single Sign-On cluster nodes are allowed to boot concurrently. When a Red Hat Single Sign-On server instance boots up it may do some database migration, importing, or first-time initializations. A DB lock is used to prevent start actions from conflicting with one another when cluster nodes boot up concurrently.

By default, the maximum timeout for this lock is 900 seconds. If a node waits on this lock for more than the timeout, it will fail to boot. Typically you won’t need to increase or decrease the default value, but just in case, it’s possible to configure it in the standalone.xml, standalone-ha.xml, or domain.xml file in your distribution. The location of this file depends on your operating mode.

<spi name="dblock">
    <provider name="jpa" enabled="true">
        <properties>
            <property name="lockWaitTimeout" value="900"/>
        </properties>
    </provider>
</spi>

8.8. Booting the Cluster

Booting Red Hat Single Sign-On in a cluster depends on your operating mode.

Standalone Mode

$ bin/standalone.sh --server-config=standalone-ha.xml

Domain Mode

$ bin/domain.sh --host-config=host-master.xml
$ bin/domain.sh --host-config=host-slave.xml

You may need to use additional parameters or system properties. For example, the parameter -b for the binding host or the system property jboss.node.name to specify the name of the route, as described in the Sticky Sessions section.
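For example, a standalone clustered node bound to a specific address and using an explicit route name could be started like this; the address and node name are placeholders.

$ bin/standalone.sh --server-config=standalone-ha.xml -b 192.168.0.5 -Djboss.node.name=node1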

8.9. Troubleshooting

Note that when you run a cluster, you should see a message similar to this in the log of both cluster nodes:

INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-10,shared=udp)
ISPN000094: Received new cluster view: [node1/keycloak|1] (2) [node1/keycloak, node2/keycloak]

If you see just one node mentioned, it’s possible that your cluster hosts are not joined together.

Usually it’s best practice to have your cluster nodes on a private network without a firewall for communication among them. The firewall could be enabled just on the public access point to your network instead. If for some reason you still need to have a firewall enabled on the cluster nodes, you will need to open some ports. The default values are UDP port 55200 and multicast port 45688 with multicast address 230.0.0.4. Note that you may need more ports opened if you want to enable additional features like diagnostics for your JGroups stack. Red Hat Single Sign-On delegates most of the clustering work to Infinispan/JGroups. For more information, see JGroups in the JBoss EAP Configuration Guide.
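If you do need to open these ports on a cluster node, a sketch for a system using firewalld (run with root privileges) might look like the following; adjust the ports if you changed the defaults above.

firewall-cmd --permanent --add-port=55200/udp
firewall-cmd --permanent --add-port=45688/udp
firewall-cmd --reload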