Chapter 1. Quickly Install MRG Messaging

1.1. The Messaging Server

1.1.1. The Messaging Server

MRG Messaging is an enterprise-grade, tested, and supported messaging server based on the Apache Qpid project. The MRG Messaging Server uses an enterprise-grade version of the Apache Qpid broker to provide messaging services.

1.1.2. Messaging Broker

The messaging broker is the server that stores, forwards, and distributes messages. Red Hat Enterprise Messaging uses the Apache Qpid C++ broker.

1.1.3. Install MRG-M 3 Messaging Server on Red Hat Enterprise Linux 6

  1. If you are using RHN classic management for your system, subscribe your system to the base channel for Red Hat Enterprise Linux 6.
  2. Additionally, subscribe to the available MRG Messaging software channels relevant to your installation and requirements:

    MRG Messaging Software Channels

    Base Channel
    Subscribe to the Additional Services Channels for Red Hat Enterprise Linux 6 / MRG Messaging v.3 (for RHEL-6 Server) channel to enable full MRG Messaging Platform installations.
    High Availability Channel
    Subscribe to the Additional Services Channels for Red Hat Enterprise Linux 6 / RHEL Server High Availability channel to enable High Availability installations.
  3. Install the MRG Messaging server and client using the following commands:

    Note

    If only Messaging Client support is required, go directly to Step 4.
    MRG Messaging Server and Client
    Install the "MRG Messaging" group using the following yum command (as root):
    yum groupinstall "MRG Messaging"
    
    High Availability Support
    If High Availability support is required, install the package using the following yum command (as root):
    yum install qpid-cpp-server-ha
  4. Alternative: Install Messaging Client Support Only

    If only messaging client support is required, install the "Messaging Client Support" group using the following yum command (as root):
    yum groupinstall "Messaging Client Support"
    You do not need to install this group if you have already installed the "MRG Messaging" group, which includes it by default.

    Note

    Both Qpid JMS AMQP 0.10 and 1.0 clients require Java 1.7 to run. Ensure the Java version installed on your system is 1.7 or higher.
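
    The Java requirement in the note above can be verified before using the JMS clients. The following is a minimal sketch of such a check; the version string shown is an illustrative example, not output from your system:

    ```shell
    # Sketch: verify that the installed Java is 1.7 or higher before using
    # the Qpid JMS clients. In practice, obtain the string with:
    #   java -version 2>&1 | head -n 1
    ver="1.7.0_80"                      # illustrative example value
    major=${ver%%.*}                    # "1"
    rest=${ver#*.}
    minor=${rest%%.*}                   # "7"
    if [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 7 ]; }; then
        echo "Java version OK for Qpid JMS"
    else
        echo "Java 1.7 or higher is required" >&2
    fi
    ```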

1.1.4. Upgrade a MRG Messaging 2 Server to MRG Messaging 3

Upgrading from MRG Messaging 2 to 3 requires a number of configuration changes in addition to changing RHN channels and installing packages.
  1. If you are using RHN classic management for your system, subscribe your system to the base channel for Red Hat Enterprise Linux 6.
  2. Remove incompatible components. Run the following command as root:
    yum erase qpid-cpp-server-cluster sesame cumin cumin-messaging python-wallaby
  3. Unsubscribe the system from the MRG v2 channels.
  4. Additionally, subscribe to the available MRG Messaging software channels relevant to your installation and requirements:

    MRG Messaging Software Channels

    Base Channel
    Subscribe to the Additional Services Channels for Red Hat Enterprise Linux 6 / MRG Messaging v.3 (for RHEL-6 Server) channel to enable full MRG Messaging Platform installations.
    High Availability Channel
    Subscribe to the Additional Services Channels for Red Hat Enterprise Linux 6 / RHEL Server High Availability channel to enable High Availability installations.
  5. Update the MRG Messaging server and client using the following commands:

    Note

    If only Messaging Client support is required, go directly to Step 6.
    MRG Messaging Server and Client
    Update the "MRG Messaging" group using the following yum command (as root):
    yum groupinstall "MRG Messaging"
    
    High Availability Support
    If High Availability support is required, update the package using the following yum command (as root):
    yum install qpid-cpp-server-ha
  6. If only messaging client support is required, update the "Messaging Client Support" group using the following yum command (as root):
    yum groupinstall "Messaging Client Support"
    You do not need to update this group if you have already updated the "MRG Messaging" group, which includes it by default.

    Note

    Both Qpid JMS AMQP 0.10 and 1.0 clients require Java 1.7 to run. Ensure the Java version installed on your system is 1.7 or higher.

1.1.5. Linearstore Custom Broker EFP Partitions

MRG-M 3.2 introduces an upgraded directory structure for Empty File Pool (EFP) broker partitions, which allows you to specify unique EFP partitions and their sizes.
This feature allows EFPs to be established on different media, and for queues to be able to choose which partition to use depending on their performance requirements. For example, queues with high throughput and low latency requirements can now be established on more expensive solid state media, while low throughput noncritical queues can be directed to use regular rotating magnetic media.
The new layout allows both the old and new stores to co-exist in mutually exclusive locations in the store directory, which provides the ability to back out of an upgrade if required.

1.1.6. Upgrade a MRG Messaging 3.1 Server to MRG Messaging 3.2

Because of the changes to EFP described in Section 1.1.5, “Linearstore Custom Broker EFP Partitions”, some specific requirements exist when upgrading from MRG-M 3.1 to 3.2.

Procedure 1.1. How to Upgrade MRG Messaging 3.1 to 3.2

  1. Verify that all required software channels are still correctly subscribed to, as described in Section 1.1.3, “Install MRG-M 3 Messaging Server on Red Hat Enterprise Linux 6”.
  2. Stop the server by doing one of the following:
    1. Press Ctrl+C to shut down the server correctly if it was started from the command line.
    2. Run service qpidd stop to stop the service correctly.
  3. Run sudo yum update qpid-cpp-server-ha to upgrade to the latest packages.
  4. Restart the server by running qpidd or service qpidd start, depending on requirements.

    Important

    If you intend to set up custom EFP partitions, complete the steps in Procedure 1.2, “How To Manually Upgrade Linearstore EFP to the New Partitioning Structure” before restarting the server.
If it is not possible to cleanly shut down a MRG-M broker prior to upgrade, the Linearstore EFP files must be manually upgraded to the new structure and linked correctly.
As part of the Linearstore partition changes, a new directory structure exists.

Directory Changes

qls/dat
This directory is now qls/dat2. There is no change other than the directory name.
qls/tpl
This directory is now qls/tpl2.
The journal files previously stored in this directory are now symbolic links; the actual files reside in the qls/pNNN/efp/[size]k/in_use directory in the EFP. This allows the files to be contained within the partition in which the EFP exists.
qls/jrnl
This directory is now qls/jrnl2, and contains the [queue-name] directories.
The [queue-name] directories previously stored in qls/jrnl are now symbolic links; the actual directories reside in the qls/pNNN/efp/[size]k/in_use directory in the EFP. This allows the directories to be contained within the partition in which the EFP exists.
qls/pNNN/efp/[size]k
Directories of this type now contain /in_use and /returned subdirectories, along with the empty files.
pNNN is the broker partition ID, which is set on the command line using the --efp-partition parameter.
[size]k is the size in KiB of the EFP journal files, which is set on the command line using the --efp-file-size parameter.
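
The relationship between the new jrnl2 entries and the EFP in_use directory can be illustrated with a small sketch. Here qls_demo is a scratch directory, not a live store, and all queue and file names are illustrative:

```shell
# Sketch of the new layout relationship: journal files live in the EFP
# partition's in_use directory, and jrnl2/<queue-name> holds symlinks
# to them. qls_demo is a scratch directory; all names are illustrative.
mkdir -p qls_demo/p001/efp/2048k/in_use qls_demo/jrnl2/queue_1
touch qls_demo/p001/efp/2048k/in_use/file_a.jrnl
ln -s "$PWD/qls_demo/p001/efp/2048k/in_use/file_a.jrnl" qls_demo/jrnl2/queue_1/
# The symlink resolves back into the partition that owns the EFP:
readlink qls_demo/jrnl2/queue_1/file_a.jrnl
```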
To ensure data integrity upon live upgrade (where the broker cannot be shut down), the new directory structure will not recover a previous store. You must upgrade the store contents manually after taking precautions to back them up.

Note

It is recommended that customers start with a clean store and recreate queues as needed. Only perform an upgrade if:
  • You have queues that cannot be recreated.
  • There is message data that cannot be expunged before the upgrade.

Example 1.1. Old directory structure

qls
   ├── dat (contains Berkeley DB database files)
   ├── p001
   │  └── efp
   │     └── 2048k (contains empty/returned journal files)
   ├── jrnl
   │  ├── queue_1 (contains in-use journal files belonging to queue_1)
   │  ├── queue_2 (contains in-use journal files belonging to queue_2)
   │  ├── queue_3 (contains in-use journal files belonging to queue_3)
   │  ...
   └── tpl (contains in-use journal files belonging to the TPL)

Possible variations

  • It is possible to use any number of different EFP file sizes, and there may be a number of other directories besides the default of 2048k.
  • It is possible to have a number of different partition directories, but in the old Linearstore, these do not perform any useful function other than providing a separate directory for EFP files. These directories must be named pNNN, where NNN is a 3-digit number. The partition numbers need not be sequential.

Example 1.2. New directory structure

qls
   ├── dat2 (contains Berkeley DB database files)
   ├── p001
   │  └── efp
   │     └── 2048k (contains empty/returned journal files)
   │        ├── in_use (contains in-use journal files)
   │        └── returned (contains files recently returned from being in-use, but not yet processed before being returned to the 2048k directory)
   │
   ├── jrnl2
   │  ├── queue_1 (contains in-use journal files belonging to queue_1)
   │  ├── queue_2 (contains in-use journal files belonging to queue_2)
   │  ├── queue_3 (contains in-use journal files belonging to queue_3)
   │  ...
   └── tpl2 (contains in-use journal files belonging to the TPL)

Note

The database and journal directories are mutually exclusive. It is recommended that the old structure and journals/files be left in place alongside the new structure until the success of the upgrade is confirmed. This also ensures that if the upgrade is rolled back to the previous version, the store will continue to operate using the old directory structure.

Procedure 1.2. How To Manually Upgrade Linearstore EFP to the New Partitioning Structure

  1. Create the new qls/dat2 directory.
    # mkdir dat2
    
  2. Copy the contents of the Berkeley DB database from qls/dat to the new qls/dat2 directory.
    # cp dat/* dat2/
  3. For each EFP directory in qls/pNNN/efp/[size]k, add two additional subdirectories:
    1. in_use
      # mkdir p001/efp/2048k/in_use
      
    2. returned
      # mkdir p001/efp/2048k/returned
      
    By default, there is only one partition (qls/p001) and only one EFP size (2048k).
  4. Create a jrnl2 directory.
    # mkdir jrnl2
    For each directory in the old jrnl directory (each of which is named for an existing queue), create an identically named directory in the new jrnl2 directory.
    # mkdir jrnl2/[queue-name-1]
    # mkdir jrnl2/[queue-name-2]
    ...
    
    You can list the directories present in the old jrnl directory with the following command:
    # ls jrnl
  5. Each journal file must first be copied to the in_use directory of the correct partition and EFP size directory. Then a link must be created to this journal file in the new jrnl2/[queue-name] directory.
    Two pieces of information are needed for every journal file:
    1. Which partition it originated from.
    2. Which EFP file size it uses within that partition.
    The default setting is a single partition number (in directory qls/p001), and a single EFP size of 2048k (which is the approximate size of each journal file). If the old directory structure has only these defaults, then proceed as follows:
    1. For each queue in qls/jrnl, note the journal files present. Once they are moved, it will be difficult to distinguish which journal files are from which queue as other journal files from other queues will also be present.
      # ls -la jrnl/queue-name/*
      
    2. Copy all the journal files from the old queue directory into the partition's 2048k in_use directory.
      # cp jrnl/queue-name/* p001/efp/2048k/in_use/
    3. Finally, create a symbolic link to these files in the new queue directory created in step 4 above. This step requires the names of the files copied in step b above.
      # ln -s /abs_path_to/qls/p001/efp/2048k/in_use/journal_1_file_name.jrnl jrnl2/queue-name/
      # ln -s /abs_path_to/qls/p001/efp/2048k/in_use/journal_2_file_name.jrnl jrnl2/queue-name/
      ...
      

      Note

      When creating a symlink, use an absolute path to the source file.
    4. Repeat the previous steps for each queue in qls/jrnl.
      If more than one partition exists, it is important to know which journal files belong to which partition.
      You can inspect a hexdump of the file header for each journal file to obtain this information. Note the 2-byte value at offset 26 (0x1a):
      # hexdump -Cn 4096 path/to/uuid.jrnl
      00000000  51 4c 53 66 02 00 00 00  1c 62 0c f1 e2 4c 42 0d  |QLSf.....b...LB.|  
      00000010  5a 6b 00 00 00 00 00 00  01 00 01 00 00 00 00 00  |Zk..............|
      00000020  00 02 00 00 00 00 00 00  00 10 00 00 00 00 00 00  |................|
      00000030  34 63 b9 54 00 00 00 00  8e 61 ef 2c 00 00 00 00  |4c.T.....a.,....|
      00000040  2f 00 00 00 00 00 00 00  08 00 54 70 6c 53 74 6f  |/.........TplSto|
      00000050  72 65 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |re..............|
      00000060  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
      
      In the event that there are several size directories in the pNNN/efp/ directory, it is necessary to consider the size of the files being copied in step b above, and ensure that they are copied to the in_use directory of the correct EFP size.

      Example 1.3. More than one size in use in a partition

      qls
       └── jrnl
            ├── queue-1
            │    └──jrnl1_file.jrnl  (size 2101248)
            └── queue-2
                 └──jrnl2_file.jrnl  (size 4198400)
      
      Assuming that both these files belong to partition pNNN, jrnl1_file.jrnl is copied to the new pNNN/efp/2048k/in_use/ directory, and jrnl2_file.jrnl is copied to the new pNNN/efp/4096k/in_use/ directory.
  6. The Transaction Prepared List (TPL) is a special queue which records transaction prepare and commit/abort boundaries for a transaction. In the new store, it is located in a new directory called tpl2.
    1. Create the tpl2 directory:
      # mkdir tpl2
      
    2. Repeat the process described in step 5 above, except that the journal files are located in the tpl directory and the symlinks must be created in the new tpl2 directory:
      1. List current journal files:
        # ls -la tpl
        
      2. Copy the journal files from the tpl directory to the correct pNNN/efp/[size]k/in_use directory, alongside the other files copied as part of step 5 above.
        # cp tpl/* p001/efp/2048k/in_use/
        
      3. Create symbolic links in the new tpl2 directory to these files:
      # ln -s /abs_path_to/qls/p001/efp/2048k/in_use/efp_journal_1_file_name.jrnl tpl2/
        
      4. Repeat the above step for each file copied from tpl.
    See the note in step 5 above if more than one partition and/or more than one EFP size is in use, and make the appropriate adjustments as described there if necessary.
  7. Restore the correct ownership of the qls directory:
    # chown -R qpidd:qpidd /absolute_path_to/qls
    
  8. Restore the SELinux contexts for the qls directory:
    # restorecon -FvvR /abs_path_to/qls
    
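For the default layout (a single p001 partition and a single 2048k EFP size), the manual steps above can be sketched as one script. This is an unofficial sketch, not a supported tool: it is demonstrated here against a scratch store with illustrative queue and file names, and on a real store you should back up the contents and verify each step before relying on it.

```shell
# Sketch of Procedure 1.2 for the default layout: one partition (p001)
# and one EFP size (2048k), demonstrated against a scratch store.
# All queue and file names below are illustrative.
set -e
mkdir -p demo_qls/dat demo_qls/jrnl/queue_1 demo_qls/tpl \
         demo_qls/p001/efp/2048k
touch demo_qls/dat/db.001 demo_qls/jrnl/queue_1/a.jrnl demo_qls/tpl/t.jrnl
cd demo_qls

EFP=p001/efp/2048k
mkdir -p dat2 jrnl2 tpl2 "$EFP/in_use" "$EFP/returned"   # steps 1, 3, 4, 6a
cp dat/* dat2/                                           # step 2

# Steps 4-5: for each queue, copy its journal files into the EFP in_use
# directory and create symlinks to them in jrnl2/<queue-name>.
for qdir in jrnl/*/; do
    q=$(basename "$qdir")
    mkdir -p "jrnl2/$q"
    for f in "$qdir"*.jrnl; do
        cp "$f" "$EFP/in_use/"
        ln -s "$PWD/$EFP/in_use/$(basename "$f")" "jrnl2/$q/"
    done
done

# Step 6: the same treatment for the TPL journal files.
for f in tpl/*.jrnl; do
    cp "$f" "$EFP/in_use/"
    ln -s "$PWD/$EFP/in_use/$(basename "$f")" tpl2/
done
```

On a live store, the loop portion would run in the real qls directory instead of a scratch one, and steps 7 and 8 (ownership and SELinux contexts) must still be completed afterwards. The note in step 5 about multiple partitions or EFP sizes also still applies.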
The upgrade is now complete, and the broker can now be started. To confirm the upgrade, it is suggested that the broker be started with elevated logging, which causes it to print additional messages about the Linearstore recovery process.
If the broker is started on the command line, use the option --log-enable info+ for the first restart; otherwise, change the broker configuration file to use this log level prior to starting the broker as a service.
Once it has been established that all queues and all expected messages have been successfully recovered, the broker may be stopped and the log level returned to its previous or default settings.

1.1.7. Configure the Firewall for Message Broker Traffic

Before installing and configuring the message broker, you must allow incoming connections on the port it will use. The default port for message broker (AMQP) traffic is 5672.
To allow this, the firewall must be altered to permit network traffic on the required port. All steps must be run while logged in to the server as the root user.

Procedure 1.3. Configuring the firewall for message broker traffic

  1. Open the /etc/sysconfig/iptables file in a text editor.
  2. Add an INPUT rule allowing incoming connections on port 5672 to the file. The new rule must appear before any INPUT rules that REJECT traffic.
    -A INPUT -p tcp -m tcp --dport 5672  -j ACCEPT
  3. Save the changes to the /etc/sysconfig/iptables file.
  4. Restart the iptables service for the firewall changes to take effect.
    # service iptables restart
The firewall is now configured to allow incoming connections to the message broker on port 5672.
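
For reference, the edited /etc/sysconfig/iptables file might look like the following sketch. The surrounding rules are illustrative defaults and are not part of this procedure; only the ACCEPT rule for port 5672 is added by it, and it must appear before the REJECT rule:

```
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow incoming message broker (AMQP) traffic; must precede any REJECT rules
-A INPUT -p tcp -m tcp --dport 5672 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
```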