Chapter 4. Creating the Environment

4.1. Prerequisites

Prerequisites for creating this reference architecture include a supported Operating System and JDK. Refer to Red Hat documentation for JBoss EAP 7 supported environments.

With minor changes, almost any relational database management system (RDBMS) may be used in lieu of PostgreSQL Database. If PostgreSQL is used, downloading and installing it is also considered a prerequisite for this reference architecture. On a Red Hat Enterprise Linux system, installing PostgreSQL can be as simple as running:

# yum install postgresql-server.i686

4.2. Downloads

Download the attachments to this document. These scripts and files will be used in configuring the reference architecture environment:

https://access.redhat.com/node/2359241/40/0

If you do not have access to the Red Hat customer portal, see the Comments and Feedback section to contact us for alternative methods of access to these files.

Download the following Red Hat JBoss Enterprise Application Platform and Red Hat JBoss Core Services Apache HTTP Server versions from Red Hat’s Customer Support Portal:

Warning

It is strongly recommended to set up clusters on dedicated virtual local area networks (VLAN). If a VLAN is not available, use a version of JBoss EAP 7 that includes the fix for CVE-2016-2141, or download and apply the patch for Red Hat JBoss Enterprise Application Platform 7.0, while also encrypting the communication.

4.3. Installation

4.3.1. Red Hat JBoss Core Services Apache HTTP Server 2.4

Red Hat JBoss Core Services Apache HTTP Server is available for download and installation in both RPM and ZIP form. This reference architecture uses modified scripts to run two instances of Apache HTTP Server. Follow the ZIP installation process, which allows more customization and supports other operating systems.
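
As a minimal sketch, assuming the downloaded archive is named jbcs-httpd24-httpd-2.4.zip and that it extracts to a jbcs-httpd24-2.4 directory (both names vary by version), the ZIP installation could look like:

# unzip jbcs-httpd24-httpd-2.4.zip -d /opt/
# mv /opt/jbcs-httpd24-2.4 /opt/jbcs-httpd

The /opt/jbcs-httpd target matches the paths used in the provided configuration files; adjust the names to match the actual download.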

4.3.2. Red Hat JBoss Enterprise Application Platform 7

Red Hat JBoss EAP 7 does not require any installation steps. The archive file simply needs to be extracted after the download. This reference architecture installs two sets of binaries for EAP 7 on each machine, one for the active domain and another for the passive domain:

# unzip jboss-eap-7.0.0.zip -d /opt/
# mv /opt/jboss-eap-7.0 /opt/jboss-eap-7.0_active
# unzip jboss-eap-7.0.0.zip -d /opt/
# mv /opt/jboss-eap-7.0 /opt/jboss-eap-7.0_passive

4.4. Configuration

4.4.1. Overview

Various other types of configuration may be required for UDP and TCP communication. For example, Linux operating systems typically configure a maximum socket buffer size that is lower than the default JGroups buffer size used by the cluster, and EAP logs a warning when this is the case. It is important to correct any such warnings observed in the EAP logs. On a Linux operating system, the maximum socket buffer sizes may be increased as follows; further details are available in the official documentation.

# sysctl -w net.core.rmem_max=26214400
# sysctl -w net.core.wmem_max=1048576
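
Settings applied with sysctl -w do not survive a reboot. To make them persistent, the same values may also be added to /etc/sysctl.conf (or a file under /etc/sysctl.d/) and reloaded:

# cat >> /etc/sysctl.conf <<'EOF'
net.core.rmem_max = 26214400
net.core.wmem_max = 1048576
EOF
# sysctl -p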

4.4.2. Security-Enhanced Linux

This reference environment has been set up and tested with Security-Enhanced Linux (SELinux) enabled in ENFORCING mode. Once again, please refer to the Red Hat documentation on SELinux for further details on using and configuring this feature. For any other operating system, consult the respective documentation for security and firewall solutions to ensure that maximum security is maintained while the ports required by your application are opened.

When enabled in ENFORCING mode, by default, SELinux prevents Apache web server from establishing network connections. On the machine hosting Apache web server, configure SELinux to allow httpd network connections:

# /usr/sbin/setsebool httpd_can_network_connect 1
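
Note that setsebool without the -P flag does not persist across reboots; to make the change permanent, the -P flag may be added:

# /usr/sbin/setsebool -P httpd_can_network_connect 1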

4.4.3. Ports and Firewall

In the reference environment, several ports are used for intra-node communication. This includes ports 6661 and 6662 on the web servers' mod_cluster module, which are accessed by all three cluster nodes, as well as the 5432 PostgreSQL port. Web clients are routed to the web servers through ports 81 and 82. The EAP cluster nodes also require many different ports for access, including 8080 or 8180 for HTTP access and EJB calls, 8009 and 8109 for AJP, and so on.

This reference architecture uses firewalld, the default Red Hat firewall, to block all network packets by default and only allow configured ports and addresses to communicate. Refer to the Red Hat documentation on firewalld for further details.

Check the status of firewalld on each machine and make sure it is running:

# systemctl status firewalld

This reference environment starts with the default and most restrictive firewall setting (zone=public) and only opens the required ports. For example, open port 81 on the active Apache HTTP Server so that clients can connect:

sudo firewall-cmd --permanent --zone=public --add-rich-rule='rule family=ipv4 source address="10.10.0.1/8" port port=81 protocol=tcp accept'

Using the permanent flag persists the firewall configuration but also requires a reload to have the changes take effect immediately:

# firewall-cmd --reload
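
The rules that are active for the zone can then be verified, for example:

# firewall-cmd --zone=public --list-all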

Please see the appendix on firewalld configuration for the firewall rules used for the active domain in this reference environment.

Note: make sure the network is properly configured for multicast and that the multicast address is verified during the configuration stage of the environment; otherwise, the HA functionality might not work properly.
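
One way to verify multicast connectivity between the cluster nodes is the omping utility, available in the Red Hat Enterprise Linux repositories. As a sketch, assuming the three node IP addresses used later in this reference environment, run the following on each node at roughly the same time and confirm that multicast responses are received:

# yum install omping
# omping 10.19.137.34 10.19.137.35 10.19.137.36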

4.4.4. Red Hat JBoss Core Services Apache HTTP Server 2.4

A freshly installed version of the Apache HTTP Server includes sample configuration under httpd/conf/httpd.conf. This reference architecture envisions running two instances of the web server using the same installed binary. The configuration file has therefore been separated into the provided cluster1.conf and cluster2.conf files, with an index variable that is set to 1 and 2 respectively, allowing a clean separation of the two Apache HTTP Server instances.

The provided cluster1.conf and cluster2.conf files assume that Apache HTTP Server has been unzipped to /opt/jbcs-httpd. To avoid confusion, it may be best to delete the sample httpd.conf file and place these two files in the httpd/conf directory instead. Edit cluster1.conf and cluster2.conf and modify the root directory of the web server installation as appropriate for the directory structure. The ServerRoot directive will have a default value such as:

ServerRoot "/opt/jbcs-httpd/httpd"

Review the cluster configuration files for other relevant file system references. This reference architecture does not use the web server to host any content, but the referenced file locations can be adjusted as appropriate if hosting other content is required. Notice that the two cluster files use a suffix of 1 and 2 to separate the process ID file, listen port, MemManagerFile, mod_cluster port, error log and custom log of the two clusters.
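
As an illustration, the suffixed directives in cluster1.conf might look like the following sketch; the paths and port values here are assumptions, so consult the attached files for the exact configuration:

ServerRoot "/opt/jbcs-httpd/httpd"
PidFile "run/httpd1.pid"
Listen 81
ErrorLog "logs/error_log1"
CustomLog "logs/access_log1" common
MemManagerFile "cache/mod_cluster1"

<VirtualHost *:6661>
    EnableMCPMReceive
</VirtualHost>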

To allow starting and stopping Apache HTTP Server separately for each cluster, modify the OPTIONS variable in the apachectl script to point to the correct configuration file and output to the appropriate error log. The modified apachectl script is provided in the attachments:

OPTIONS="-f /opt/jbcs-httpd/httpd/conf/cluster${CLUSTER_NUM}.conf
						-E /opt/jbcs-httpd/httpd/logs/httpd${CLUSTER_NUM}.log"

It is enough to set the CLUSTER_NUM environment variable to 1 or 2 before running the script, to control the first or second cluster’s load balancer. The provided cluster1.sh and cluster2.sh scripts set this variable:

#!/bin/sh
export CLUSTER_NUM=1
./httpd/sbin/apachectl $@

These scripts can be used in the same way as apachectl, for example, to start the first Apache HTTP Server instance:

# ./cluster1.sh start

4.4.5. PostgreSQL Database setup

After a basic installation of the PostgreSQL Database Server, the first step is to initialize it. On Linux systems where installation has been completed through yum or RPM packages, a service is configured and can be used to initialize the database server:

# service postgresql initdb

Regardless of the operating system, the PostgreSQL database server is controlled through the pg_ctl utility. The -D flag can be used to specify the data directory to be used by the database server:

# pg_ctl -D /usr/local/pgsql/data initdb

By default, the database server may only be listening on localhost. Specify a listen address in the database server configuration file or set it to “*”, so that it listens on all available network interfaces. The configuration file is located under the data directory and in version 9.3 of the product, it is available at /var/lib/pgsql/data/postgresql.conf.

Uncomment and modify the following line:

listen_addresses = '*'

Note

PostgreSQL Database Server typically ships with XA transaction support turned off by default. To enable XA transactions, the max_prepared_transactions property must be uncommented and set to a value equal to or greater than max_connections:

#max_prepared_transactions = 0          # zero disables the feature
                                        # (change requires restart)
# Note:  Increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
# It is not advisable to set max_prepared_transactions nonzero unless you
# actively intend to use prepared transactions.
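
For example, with the PostgreSQL default of max_connections = 100, the line could be uncommented and changed as follows (a sketch; match the value to the actual max_connections setting and restart the database server afterwards):

max_prepared_transactions = 100         # must be >= max_connections for XA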

Another important consideration is the ability of users to access the database from remote servers, and authenticate against it. Java applications typically use password authentication to connect to a database. For a PostgreSQL Database Server to accept authentication from a remote user, the corresponding configuration file needs to be updated to reflect the required access. The configuration file in question is located under the data directory and in version 9.3 of the product, it is available at /var/lib/pgsql/data/pg_hba.conf.

One or more lines need to be added, to permit password authentication from the desired IP addresses. Standard network masks and slash notation may be used. For this reference architecture, a moderately more permissive configuration is used:

host all all 10.19.137.0/24 password

PostgreSQL requires a non-root user to take ownership of the database process; this is typically achieved by creating a postgres user. This user should then be used to start, stop and configure databases. The database server is started using one of the available approaches, for example:

$ /usr/bin/pg_ctl start
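
If the PGDATA environment variable is not set for the postgres user, the data directory and a log file can be passed explicitly; a sketch, assuming the default Red Hat Enterprise Linux data directory:

$ /usr/bin/pg_ctl -D /var/lib/pgsql/data -l /var/lib/pgsql/pgstartup.log start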

To create a database and a user that JBoss EAP instances use to access the database, perform the following:

# su - postgres
$ /usr/bin/psql -U postgres postgres
CREATE USER jboss WITH PASSWORD 'password';
CREATE DATABASE eap7 WITH OWNER jboss;
\q

Making the jboss user the owner of the eap7 database ensures the necessary privileges to create tables and modify data are assigned.

4.4.6. Red Hat JBoss Enterprise Application Platform

This reference architecture includes six distinct installations of JBoss EAP 7. There are three machines, where each includes one node of the primary cluster and one node of the backup cluster. The names node1, node2 and node3 are used to refer to both the machines and the EAP nodes on the machines. The primary cluster will be called the active domain while the redundant backup cluster will be termed the passive domain. This reference architecture selects node1 to host the domain controller of the active domain and node2 for the passive domain.

4.4.6.1. Adding Users

The first important step in configuring the EAP 7 clusters is to add the required users. They are Admin Users, Node Users and Application Users.

1) Admin User

An administrator user is required for each domain. Assuming the user ID of admin and the password of password1! for this admin user:

On node1:

# /opt/jboss-eap-7.0_active/bin/add-user.sh admin password1!

On node2:

# /opt/jboss-eap-7.0_passive/bin/add-user.sh admin password1!

This uses the non-interactive mode of the add-user script, to add management users with a given username and password.

2) Node Users

The next step is to add a user for each node that will connect to the cluster. For the active domain, that means creating two users called node2 and node3 (since node1 hosts the domain controller and does not need to use a password to authenticate against itself), and for the passive domain, two users called node1 and node3 are required. This time, provide no argument to the add-user script and instead follow the interactive setup.

The first step is to specify that it is a management user. The interactive process is as follows:

What type of user do you wish to add?
a) Management User (mgmt-users.properties)
b) Application User (application-users.properties)
(a): a

  • Simply press enter to accept the default selection of a

Enter the details of the new user to add.
Realm (ManagementRealm) :

  • Once again simply press enter to continue

Username : node1

  • Enter the username and press enter (node1, node2 or node3)

Password : password1!

  • Enter password1! as the password, and press enter

Re-enter Password : password1!

  • Enter password1! again to confirm, and press enter

About to add user 'nodeX' for realm 'ManagementRealm'
Is this correct yes/no? yes

  • Type yes and press enter to continue

Is this new user going to be used for one AS process to connect to another AS process?
e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server EJB calls.
yes/no?

  • Type yes and press enter

To represent the user add the following to the server-identities definition <secret value="cGFzc3dvcmQxIQ==" />

  • Copy and paste the provided secret hash into slaves.properties before running the CLI scripts. The host XML file of any server that is not a domain controller needs to provide this password hash, along with the associated username, to connect to the domain controller.

This concludes the setup of required management users to administer the domains and connect the slave machines.

3) Application User

An application user is also required on all 6 nodes to allow a remote Java class to authenticate and invoke a deployed EJB. Creating an application user requires the interactive mode, so run add-user.sh and provide the following responses six times, once for each of the six EAP 7 installations:

What type of user do you wish to add?
a) Management User (mgmt-users.properties)
b) Application User (application-users.properties)
(a): b

  • Enter b and press enter to add an application user

Enter the details of the new user to add.
Realm (ApplicationRealm) :

  • Simply press enter to continue

Username : ejbcaller

  • Enter the username as ejbcaller and press enter

Password : password1!

  • Enter password1! as the password, and press enter

Re-enter Password : password1!

  • Enter password1! again to confirm, and press enter

What roles do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[ ]:

  • Press enter to leave this blank and continue

About to add user 'ejbcaller' for realm 'ApplicationRealm'
Is this correct yes/no? yes

  • Type yes and press enter to continue

…​[confirmation of user being added to files]
Is this new user going to be used for one AS process to connect to another AS process?
e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server EJB calls.
yes/no? no

  • Type no and press enter to complete user setup

At this point, all the required users have been created; move forward to set up the server configuration files using the CLI scripts, before starting the servers.

4.4.6.2. Domain configuration using CLI scripts

After the users are added, run the CLI scripts on the domain controller machines to set up the configuration and deployments (node1 hosts the domain controller of the active domain and node2, the passive domain). The scripts also set up each slave machine’s configuration file (host-slave.xml), which must be manually copied to the other machines.

For both active and passive domains, the CLI domain setup script is similar, just with different values for environment properties.

Below are the files used for the CLI domain setup script:

  • configure.sh: main shell script to set up the whole cluster environment
  • environment.sh: configures the environment variables
  • setup-master.cli: CLI script that configures the main server’s domain.xml and host.xml
  • cluster.properties: property file used by setup-master.cli
  • setup-slave.cli: CLI script that configures the slave servers' host-slave.xml
  • slaves.properties: property file used by setup-slave.cli
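
As an illustration of how these files fit together, configure.sh essentially drives the JBoss CLI with the property files and CLI scripts listed above. A minimal sketch of such an invocation follows; the attached configure.sh contains the exact commands:

# source ./environment.sh
# $JBOSS_HOME/bin/jboss-cli.sh --properties=cluster.properties --file=setup-master.cli
# $JBOSS_HOME/bin/jboss-cli.sh --properties=slaves.properties --file=setup-slave.cli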

Before running for the first time, please update these three files with proper values:

  • environment.sh
  • cluster.properties
  • slaves.properties

After update, execute:

# configure.sh

Note that a configuration backup folder is not mandatory, but it is recommended to make a copy of the $JBOSS_HOME/domain/configuration folder that ships with JBoss EAP 7 before running the script for the first time. The CLI domain setup script makes changes to the default configuration files; keeping a backup folder ensures that the CLI starts from the same default configuration every time, allowing multiple retries.
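
For example, a backup matching the sample CONF_BACKUP_HOME value shown below could be created with:

# cp -a /opt/jboss-eap-7.0/domain/configuration /opt/jboss-eap-7.0/domain/configuration_bck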

Detailed explanations of these three modified files:

1) environment.sh

Sample values:

JBOSS_HOME=/opt/jboss-eap-7.0
CONF_HOME=$JBOSS_HOME/domain/configuration
CONF_BACKUP_HOME=$JBOSS_HOME/domain/configuration_bck

Explanation:

  • JBOSS_HOME: JBoss EAP 7 binary installation location
  • CONF_HOME: Domain configuration folder, which will be the target of the CLI scripts
  • CONF_BACKUP_HOME: The location from which configure.sh tries to restore the default configuration. Make a backup of the configuration folder before running the script for the first time.

2) cluster.properties

Sample values:

lb.host=10.19.137.37
lb.port=6661
master.name=node1
psql.connection.url=jdbc:postgresql://10.19.137.37:5432/eap7
port.offset=0
management.http.port=9990
management.native.port=9999
multicast.address=230.0.0.1
cluster.user=admin
cluster.password=password
share.store.path=/mnt/activemq/sharefs
messaging.port=5545
messaging.backup.port=5546

Explanation:

  • lb.host: Apache HTTP Server load balancer host address
  • lb.port: Apache HTTP Server load balancer (mod_cluster) port
  • master.name: Server name in host.xml for the domain controller (DMC) machine
  • psql.connection.url: JDBC URL for postgres DB
  • port.offset: Server port offset value
  • management.http.port: Listen port for EAP management console
  • management.native.port: Listen port for EAP management tools, including CLI
  • multicast.address: UDP address used across the cluster
  • cluster.user: ActiveMQ cluster user name
  • cluster.password: ActiveMQ cluster password
  • share.store.path: the path to the ActiveMQ shared store location
  • messaging.port: messaging port used in ActiveMQ remote connector/acceptor in default jms server
  • messaging.backup.port: messaging port used in ActiveMQ remote connector/acceptor in backup jms server

3) slaves.properties

Sample values:

slaves=slave1,slave2
slave1_name=node2
slave1_secret="cGFzc3dvcmQxIQ=="
slave1_live_group=2
slave1_backup_group=3
slave2_name=node3
slave2_secret="cGFzc3dvcmQxIQ=="
slave2_live_group=3
slave2_backup_group=1

Explanation:

  • slaves: the property name prefixes for the slave servers; configure.sh loops over the names listed here, so additional servers can be added
  • slaveX_name: this name needs to match the name added using $JBOSS_HOME/bin/add-user.sh on the slave machines
  • slaveX_secret: the secret hash produced when adding the node users, used for one AS process to connect to another AS process
  • slaveX_live_group: Live jms server group number
  • slaveX_backup_group: Backup jms server group number

After running configure.sh, each slave machine’s host-slave.xml file is generated in its own folder under the domain controller machine’s domain/configuration folder:

$JBOSS_HOME/domain/configuration/slaveX/host-slave.xml

These newly generated host-slave.xml files need to be manually copied to each slave machine under $JBOSS_HOME/domain/configuration/
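
For example, to copy the generated file for the first slave to node2 in the active domain (a sketch, assuming the host names and installation paths used in this reference environment):

# scp /opt/jboss-eap-7.0_active/domain/configuration/slave1/host-slave.xml node2:/opt/jboss-eap-7.0_active/domain/configuration/host-slave.xml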

4.5. Startup

4.5.1. Starting Red Hat JBoss Core Services Apache HTTP Server

Log on to the machine where Apache HTTP Server is installed and navigate to the installed directory:

# cd /opt/jbcs-httpd

To start the web server front-ending the active domain:

# ./cluster1.sh start

To start the web server front-ending the passive domain:

# ./cluster2.sh start

To stop the active domain web server:

# ./cluster1.sh stop

To stop the passive domain web server:

# ./cluster2.sh stop

4.5.2. Starting the EAP domains

4.5.2.1. Active Domain startup

To start the active domain, assuming that 10.19.137.34 is the IP address for the node1 machine, 10.19.137.35 for node2 and 10.19.137.36 for node3:

On node1:

# /opt/jboss-eap-7.0_active/bin/domain.sh -b 10.19.137.34 -bprivate=10.19.137.34 -Djboss.bind.address.management=10.19.137.34

Wait until the domain controller on node1 has started, then start node2 and node3:

# /opt/jboss-eap-7.0_active/bin/domain.sh -b 10.19.137.35 -bprivate=10.19.137.35 -Djboss.domain.master.address=10.19.137.34 --host-config=host-slave.xml
# /opt/jboss-eap-7.0_active/bin/domain.sh -b 10.19.137.36 -bprivate=10.19.137.36 -Djboss.domain.master.address=10.19.137.34 --host-config=host-slave.xml

4.5.2.2. Passive Domain startup

Now start the servers in the passive domain with the provided sample configuration. Once again, assuming that 10.19.137.34 is the IP address for the node1 machine, 10.19.137.35 for node2 and 10.19.137.36 for node3:

On node2:

# /opt/jboss-eap-7.0_passive/bin/domain.sh -b 10.19.137.35 -bprivate=10.19.137.35 -Djboss.bind.address.management=10.19.137.35

Wait until the domain controller on node2 has started, then on node1 and node3:

# /opt/jboss-eap-7.0_passive/bin/domain.sh -b 10.19.137.34 -bprivate=10.19.137.34 -Djboss.domain.master.address=10.19.137.35 --host-config=host-slave.xml
# /opt/jboss-eap-7.0_passive/bin/domain.sh -b 10.19.137.36 -bprivate=10.19.137.36 -Djboss.domain.master.address=10.19.137.35 --host-config=host-slave.xml