17.8. High Availability

17.8.1. About HornetQ Shared Stores

When using a shared store, the live and backup servers share the same data directory over a shared file system. This includes the paging directory, the journal directory, large messages, and the bindings journal. When failover occurs and the backup server takes over, it loads the persistent storage from the shared file system, and clients can then connect to it.


HornetQ supports two different configurations for shared stores:
  • GFS2 on a SAN, using the ASYNCIO journal type.
  • NFSv4, using either the ASYNCIO or NIO journal type.
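As a sketch, a shared-store live server in the EAP 6 messaging subsystem points all four directories at the shared mount. The paths below and the exact subsystem namespace version are illustrative; the backup server uses the same directory configuration with `<backup>true</backup>`:

```xml
<subsystem xmlns="urn:jboss:domain:messaging:1.4">
    <hornetq-server>
        <!-- Use the shared-store HA mode rather than replication -->
        <shared-store>true</shared-store>
        <journal-type>ASYNCIO</journal-type>
        <!-- All four directories must reside on the shared file system -->
        <paging-directory path="/mnt/shared/paging"/>
        <bindings-directory path="/mnt/shared/bindings"/>
        <journal-directory path="/mnt/shared/journal"/>
        <large-messages-directory path="/mnt/shared/large-messages"/>
        <!-- false on the live server, true on the backup -->
        <backup>false</backup>
    </hornetq-server>
</subsystem>
```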


HornetQ supports NFS under the strict configuration guidelines outlined below.
This form of high availability differs from data replication in that it requires a shared file system accessible by both the live and backup nodes. This is usually a high-performance Storage Area Network (SAN) of some kind.


Do not use NFS mounts to store any shared journal when using NIO (non-blocking I/O), unless you are running Red Hat Enterprise Linux. This restriction is due to the NFS implementation used.
The Red Hat Enterprise Linux NFS implementation supports both direct I/O (opening files with the O_DIRECT flag set), and kernel based asynchronous I/O. With both of these features present, it is possible to use NFS as a shared storage option, under strict configuration rules:
  • HornetQ must be configured to use one of the following journal types: ASYNCIO/AIO or NIO.
  • The Red Hat Enterprise Linux NFS client cache must be disabled.
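As an illustration of the second rule, an `/etc/fstab` entry for the shared store might disable the NFS client cache with the `noac` and `lookupcache=none` mount options and force synchronous writes with `sync`. The server name, export path, and mount point are hypothetical, and the exact option set should be validated against your environment:

```
# NFSv4 mount for the HornetQ shared store; 'sync' forces synchronous
# writes, 'noac' and 'lookupcache=none' disable client-side caching.
nfs-server:/hornetq-store  /mnt/shared  nfs4  sync,noac,lookupcache=none  0 0
```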


The server log should be checked after JBoss EAP 6 is started, to ensure that the native library loaded successfully and that the ASYNCIO journal type is being used. If the native library fails to load, HornetQ gracefully falls back to the NIO journal type, and this is stated in the server log.
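One way to perform this check is to search the server log for journal-related messages after startup. The log path below assumes a default standalone configuration; adjust it to your setup:

```shell
# Show which journal type the server selected at startup.
grep -i 'journal' "${JBOSS_HOME}/standalone/log/server.log"
```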


The native library that implements asynchronous I/O requires that libaio is installed on the Red Hat Enterprise Linux system where JBoss EAP 6 is running.
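On Red Hat Enterprise Linux, the library is available as the `libaio` package and can be installed with the system package manager:

```shell
# Install the asynchronous I/O library required by the native ASYNCIO journal.
yum install libaio
```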


If using NFS under the above stipulations, a highly available NFS configuration is recommended.
The advantage of shared-store high availability is that no replication occurs between the live and backup nodes, so it does not suffer the performance penalty of replication overhead during normal operation.
The disadvantage of shared-store high availability is that it requires a shared file system, and when the backup server activates it must load the journal from the shared store. This can take some time, depending on the amount of data in the store.
Shared-store high availability is recommended when the highest performance during normal operation is required, a fast SAN is available, and a slightly slower failover rate (depending on the amount of data) is acceptable.