Messaging Installation and Configuration Guide
Install and Configure the Red Hat Enterprise MRG Messaging Server
Red Hat Customer Content Services
MRG 3 Overview
1. The Top Six Differences between MRG Messaging 2 and 3
- The broker and the C++ messaging library (qpid::messaging) now offer AMQP 1.0 support via the Apache Proton library. Note that transactions are not yet available over AMQP 1.0.
- Clustering has been replaced with a new High Availability implementation.
- Queue threshold alerts are now edge-triggered, rather than level-triggered. This improves alert rate limiting.
- The flow-to-disk implementation has been changed to disk-paged queues to use memory more efficiently.
- The ring-strict limit policy has been dropped.
- The messaging journal has been replaced with a new implementation: the dynamically-expanding Linear Store.
Chapter 1. Quickly Install MRG Messaging
1.1. The Messaging Server
1.1.1. The Messaging Server
1.1.2. Messaging Broker
1.1.3. Install MRG-M 3 Messaging Server on Red Hat Enterprise Linux 6
- If you are using RHN classic management for your system, subscribe your system to the base channel for Red Hat Enterprise Linux 6.
- Additionally, subscribe to the available MRG Messaging software channels relevant to your installation and requirements:
MRG Messaging Software Channels
- Base Channel: Subscribe to the "Additional Services Channels for Red Hat Enterprise Linux 6 / MRG Messaging v.3 (for RHEL-6 Server)" channel to enable full MRG Messaging Platform installations.
- High Availability Channel: Subscribe to the "Additional Services Channels for Red Hat Enterprise Linux 6 / RHEL Server High Availability" channel to enable High Availability installations.
- Install the MRG Messaging server and client using the following commands:
Note: If only Messaging Client support is required, go directly to Step 4.

- MRG Messaging Server and Client: Install the "MRG Messaging" group using the following yum command (as root):

  yum groupinstall "MRG Messaging"

- High Availability Support: If High Availability support is required, install the package using the following yum command (as root):

  yum install qpid-cpp-server-ha

Alternative: Install Messaging Client Support Only

If only messaging client support is required, install the "Messaging Client Support" group using the following yum command (as root):

  yum groupinstall "Messaging Client Support"

You do not need to install this group if you have already installed the "MRG Messaging" group; it is included by default.

Note: Both Qpid JMS AMQP 0.10 and 1.0 clients require Java 1.7 to run. Ensure the Java version installed on your system is 1.7 or higher.
1.1.4. Upgrade an MRG Messaging 2 Server to MRG Messaging 3
- If you are using RHN classic management for your system, subscribe your system to the base channel for Red Hat Enterprise Linux 6.
- Remove incompatible components. Run the following command as root:
yum erase qpid-cpp-server-cluster sesame cumin cumin-messaging python-wallaby
- Unsubscribe the system from the MRG v2 channels.
- Additionally, subscribe to the available MRG Messaging software channels relevant to your installation and requirements:
MRG Messaging Software Channels
- Base Channel: Subscribe to the "Additional Services Channels for Red Hat Enterprise Linux 6 / MRG Messaging v.3 (for RHEL-6 Server)" channel to enable full MRG Messaging Platform installations.
- High Availability Channel: Subscribe to the "Additional Services Channels for Red Hat Enterprise Linux 6 / RHEL Server High Availability" channel to enable High Availability installations.
- Update the MRG Messaging server and client using the following commands:
Note: If only Messaging Client support is required, go directly to Step 6.

- MRG Messaging Server and Client: Update the "MRG Messaging" group using the following yum command (as root):

  yum groupinstall "MRG Messaging"

- High Availability Support: If High Availability support is required, update the package using the following yum command (as root):

  yum install qpid-cpp-server-ha

- If only messaging client support is required, update the "Messaging Client Support" group using the following yum command (as root):

  yum groupinstall "Messaging Client Support"

  You do not need to update this group if you have already updated the "MRG Messaging" group; it is included by default.

Note: Both Qpid JMS AMQP 0.10 and 1.0 clients require Java 1.7 to run. Ensure the Java version installed on your system is 1.7 or higher.
1.1.5. Linearstore Custom Broker EFP Partitions
1.1.6. Upgrade an MRG Messaging 3.1 Server to MRG Messaging 3.2
Procedure 1.1. How to Upgrade MRG Messaging 3.1 to 3.2
- Verify that all required software channels described in Section 1.1.3, "Install MRG-M 3 Messaging Server on Red Hat Enterprise Linux 6" are still correctly subscribed.
- Stop the server by doing one of the following:
  - Press Ctrl+C to shut down the server correctly, if it was started from the command line.
  - Run service qpidd stop to stop the service correctly.
- Run sudo yum update qpid-cpp-server-ha to upgrade to the latest packages.
  Important: If you intend to set up custom EFP partitions, complete the steps in Procedure 1.2, "How To Manually Upgrade Linearstore EFP to the New Partitioning Structure" before completing this step.
- Restart the server by running qpidd or service qpidd start, depending on requirements.
Directory Changes

- qls/dat: This directory is now qls/dat2. There is no change other than the directory name.
- qls/tpl: This directory is now qls/tpl2. The journal files previously stored in this directory are now links to journal files. The actual files now reside in the qls/pNNN/efp/[size]k/in_use directory in the EFP. This allows the files to be contained within the partition in which the EFP exists.
- qls/jrnl: This directory is now qls/jrnl2, and contains the [queue-name] directories. The [queue-name] directories previously stored in qls/jrnl are now links to journal directories. The actual directories now reside in the qls/pNNN/efp/[size]k/in_use directory in the EFP. This allows the directories to be contained within the partition in which the EFP exists.
- qls/pNNN/efp/[size]k: Directories of this type now contain an in_use and a returned subdirectory, along with the empty files. pNNN is the broker partition ID, which is set on the command line using the --efp-partition parameter. [size]k is the journal file size in KiB, which is set on the command line using the --efp-file-size parameter.
Note
A manual upgrade of the store structure is only necessary if:
- You have queues that cannot be recreated.
- There is message data that cannot be expunged before the upgrade.
Example 1.1. Old directory structure
qls
├── dat (contains Berkeley DB database files)
├── p001
│ └── efp
│ └── 2048k (contains empty/returned journal files)
├── jrnl
│ ├── queue_1 (contains in-use journal files belonging to queue_1)
│ ├── queue_2 (contains in-use journal files belonging to queue_2)
│ ├── queue_3 (contains in-use journal files belonging to queue_3)
│ ...
└── tpl (contains in-use journal files belonging to the TPL)
Possible variations
- It is possible to use any number of different EFP file sizes, and there may be a number of other directories besides the default of 2048k.
- It is possible to have a number of different partition directories, but in the old Linearstore these do not perform any useful function other than providing a separate directory for EFP files. These directories must be named pNNN, where NNN is a 3-digit number. The partition numbers need not be sequential.
Example 1.2. New directory structure
qls
├── dat2 (contains Berkeley DB database files)
├── p001
│ └── efp
│ └── 2048k (contains empty/returned journal files)
│ ├── in_use (contains in-use journal files)
│ └── returned (contains files recently returned from being in-use, but not yet processed before being returned to the 2048k directory)
│
├── jrnl2
│ ├── queue_1 (contains in-use journal files belonging to queue_1)
│ ├── queue_2 (contains in-use journal files belonging to queue_2)
│ ├── queue_3 (contains in-use journal files belonging to queue_3)
│ ...
└── tpl2 (contains in-use journal files belonging to the TPL)
Procedure 1.2. How To Manually Upgrade Linearstore EFP to the New Partitioning Structure
- Create the new directory qls/dat2:
  # mkdir dat2
- Copy the contents of the Berkeley DB database from qls/dat to the new qls/dat2 directory:
  # cp dat/* dat2/
- For each EFP directory pNNN/efp/[size]k in qls/, add two subdirectories, in_use and returned:
  # mkdir p001/efp/2048k/in_use
  # mkdir p001/efp/2048k/returned
  By default, there is only one partition (qls/p001) and only one EFP size (2048k).
- Create a jrnl2 directory:
  # mkdir jrnl2
  For each directory in the old jrnl directory (each of which is named for an existing queue), create an identically named directory in the new jrnl2 directory:
  # mkdir jrnl2/[queue-name-1]
  # mkdir jrnl2/[queue-name-2]
  ...
  You can list the queue directories that need to be recreated with the following command:
  # ls jrnl
- Each journal file must first be copied to the in_use directory of the correct partition directory, under the correct EFP size directory. Then a link to the journal file must be created in the new jrnl2/[queue-name] directory. Two pieces of information are needed for every journal file:
  - Which partition it originated from.
  - Which size within that partition it is.
  The default setting is a single partition (in directory qls/p001) and a single EFP size of 2048k (which is the approximate size of each journal file). If the old directory structure has only these defaults, proceed as follows:
  - For each queue in qls/jrnl, note the journal files present. Once they are moved, it will be difficult to distinguish which journal files came from which queue, as journal files from other queues will also be present:
    # ls -la jrnl/queue-name/*
  - Copy all the journal files from the old queue directory into the partition's 2048k in_use directory:
    # cp jrnl/queue-name/* p001/efp/2048k/in_use/
  - Finally, create a symbolic link to these files in the new queue directory created in step 3 above. This step requires the names of the files copied in step b. above:
    # ln -s /abs_path_to/qls/p001/efp/2048k/in_use/journal_1_file_name.jrnl jrnl2/queue-name/
    # ln -s /abs_path_to/qls/p001/efp/2048k/in_use/journal_2_file_name.jrnl jrnl2/queue-name/
    ...
    Note: When creating a symlink, use an absolute path to the source file.
- Repeat the previous step for each journal file in each queue. If more than one partition exists, it is important to know which journal files belong to which partition. You can inspect a hexdump of the file header of each journal file to obtain this information. Note the 2-byte value at offset 26 (0x1a):
# hexdump -Cn 4096 path/to/uuid.jrnl
00000000  51 4c 53 66 02 00 00 00  1c 62 0c f1 e2 4c 42 0d  |QLSf.....b...LB.|
00000010  5a 6b 00 00 00 00 00 00  01 00 01 00 00 00 00 00  |Zk..............|
00000020  00 02 00 00 00 00 00 00  00 10 00 00 00 00 00 00  |................|
00000030  34 63 b9 54 00 00 00 00  8e 61 ef 2c 00 00 00 00  |4c.T.....a.,....|
00000040  2f 00 00 00 00 00 00 00  08 00 54 70 6c 53 74 6f  |/.........TplSto|
00000050  72 65 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |re..............|
00000060  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
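The same 2-byte partition value can also be read directly, without scanning a full hexdump. This is a sketch using the standard od utility; the header bytes below are synthesized for illustration, and the unsigned-decimal interpretation assumes a little-endian host. On a real store, point od at the actual .jrnl file instead.

```shell
# Demonstration: synthesize the first 28 bytes of a journal file header whose
# partition ID is 1, then read the 2-byte value at offset 26 (0x1a).
hdr=$(mktemp)
printf 'QLSf' > "$hdr"                       # file magic (illustrative)
dd if=/dev/zero bs=1 count=22 >> "$hdr" 2>/dev/null
printf '\001\000' >> "$hdr"                  # partition ID 1, little-endian
od -An -tu2 -j 26 -N 2 "$hdr"                # prints the partition number: 1
```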
If there are several size directories in the pNNN/efp/ directory, it is necessary to consider the size of each file being copied in step b. above, and to ensure that it is copied to the in_use directory of the correct EFP size.

Example 1.3. More than one size in use in a partition

qls
└── jrnl
    ├── queue-1
    │   └── jrnl1_file.jrnl (size 2101248)
    └── queue-2
        └── jrnl2_file.jrnl (size 4198400)

Assuming that both these files belong to partition pNNN, jrnl1_file.jrnl is copied to the new pNNN/efp/2048k/in_use/ directory, and jrnl2_file.jrnl is copied to the new pNNN/efp/4096k/in_use/ directory.
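The size-to-directory mapping can be computed from the file size. The sketch below assumes each journal file carries a 4 KiB file header in addition to its EFP data size; that assumption is inferred from the sizes in Example 1.3 and is not a documented guarantee:

```shell
# Map journal file sizes in bytes to their EFP size directories, assuming a
# 4 KiB per-file header (inferred from Example 1.3, not documented).
for size in 2101248 4198400; do
  echo "$(( (size - 4096) / 1024 ))k"
done
# prints:
# 2048k
# 4096k
```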
- The Transaction Prepared List (TPL) is a special queue which records transaction prepare and commit/abort boundaries for a transaction. In the new store, it is located in a new directory called tpl2.
  - Create the tpl2 directory:
    # mkdir tpl2
  - Repeat the process described in step 4 above, except that the journal files are located in the tpl directory, and the symlinks must be created in the new tpl2 directory:
    - List the current journal files:
      # ls -la tpl
    - Copy the journal files from the tpl directory to the correct pNNN/efp/[size]k/in_use directory, alongside the other files copied as part of step 4 above:
      # cp tpl/* p001/efp/2048k/in_use/
    - Create symbolic links in the new tpl2 directory to these files:
      # ln -s /abs_path_to/qls/p001/efp/2048k/in_use/efp_journal_1_file_name.jrnl tpl2/
    - Repeat the above step for each file copied from tpl.
  See the note in step 4 above if more than one partition and/or more than one EFP size is in use, and make the appropriate adjustments as described there if necessary.
- Restore the correct ownership of the qls directory:
  # chown -R qpidd:qpidd /absolute_path_to/qls
- Restore the SELinux contexts for the qls directory:
  # restorecon -FvvR /abs_path_to/qls
- Restart the broker. It is recommended to start the broker with --log-enable info+ for the first restart; if the broker runs as a service, change the broker configuration file to use this log level prior to starting it.
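The copy-and-link steps above can be sketched end to end. The following is a self-contained demonstration on a scratch directory, not a supported tool: the layout, queue names, and file names are fabricated for illustration (a real store lives under the broker data directory, for example /var/lib/qpidd/qls, and uses UUID journal file names), and only the default single partition p001 with EFP size 2048k is handled.

```shell
# Demonstrate the Linearstore layout change on a fabricated scratch store.
cd "$(mktemp -d)"
mkdir -p qls/dat qls/jrnl/queue_1 qls/tpl qls/p001/efp/2048k
touch qls/dat/db.bdb qls/jrnl/queue_1/q1-0001.jrnl qls/tpl/tpl-0001.jrnl
cd qls

# New database directory, and the in_use/returned EFP subdirectories.
mkdir dat2 && cp dat/* dat2/
mkdir p001/efp/2048k/in_use p001/efp/2048k/returned

# Copy each queue's journal files into the EFP, then symlink them from jrnl2.
mkdir jrnl2
for q in jrnl/*/; do
  q=$(basename "$q")
  mkdir "jrnl2/$q"
  for f in "jrnl/$q"/*.jrnl; do
    cp "$f" p001/efp/2048k/in_use/
    ln -s "$(pwd)/p001/efp/2048k/in_use/$(basename "$f")" "jrnl2/$q/"
  done
done

# The TPL gets the same treatment, linked from tpl2.
mkdir tpl2
for f in tpl/*.jrnl; do
  cp "$f" p001/efp/2048k/in_use/
  ln -s "$(pwd)/p001/efp/2048k/in_use/$(basename "$f")" tpl2/
done

# On a real store, finish as root:
#   chown -R qpidd:qpidd /absolute_path_to/qls
#   restorecon -FvvR /absolute_path_to/qls
ls -l jrnl2/queue_1 tpl2   # both now hold symlinks into the EFP
```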
1.1.7. Configure the Firewall for Message Broker Traffic
By default, the message broker accepts connections on port 5672. The following procedure opens this port in the firewall, and must be performed as the root user.
Procedure 1.3. Configuring the firewall for message broker traffic
- Open the /etc/sysconfig/iptables file in a text editor.
- Add an INPUT rule allowing incoming connections on port 5672. The new rule must appear before any INPUT rules that REJECT traffic:
  -A INPUT -p tcp -m tcp --dport 5672 -j ACCEPT
- Save the changes to the /etc/sysconfig/iptables file.
- Restart the iptables service for the firewall changes to take effect:
  # service iptables restart
The firewall now permits incoming message broker traffic on port 5672.
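For context, the following illustrative /etc/sysconfig/iptables excerpt shows where the new rule belongs. The surrounding rules are typical defaults and will differ from system to system; only the port 5672 line comes from this procedure:

```
*filter
:INPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 5672 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
```

Because the ACCEPT rule precedes the REJECT rule, broker traffic is admitted.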
1.2. Memory Requirements and Limitations
1.2.1. Memory Allocation Limit (32-bit)
1.2.2. Impact of Transactions on the Journal
1.2.3. Messaging Broker Memory Requirements
Calculate message size
Procedure 1.4. Estimate message size
- Default message header content (such as Java timestamp and message-id): 55 bytes
- Routing Key (for example: a routing key of "testQ" = 5 bytes)
- Java clients add:
  - content-type (for "text/plain" this is 10 bytes)
  - user-id (the user name passed to SASL for authentication; the number of bytes equals the string length)
- Application headers:
  - Application header overhead: 8 bytes
  - For any textual header property: property_name_size + property_value_size + 4 bytes
For example, consider a message sent with the spout example client, such as the following:
./run_example.sh org.apache.qpid.example.Spout -c=1 -b="guest:guest@localhost:5672" -P=property1=value1 -P=property2=value2 "testQ; {create:always}" "123456789"
- 55 bytes for the default size
- 5 bytes for the routing key "testQ"
- 10 bytes for content-type "text/plain"
- 5 bytes for user-id "guest"
- 8 bytes for using application headers
- 9+6+4 bytes for the first property
- 9+6+4 bytes for the second property
- Total header size: 121 bytes
- Total message size: 130 bytes
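The tally above can be reproduced with shell arithmetic, using the sizes from the worked example:

```shell
# Header tally for the spout example: defaults, routing key "testQ",
# content-type "text/plain", user-id "guest", and two header properties.
header=$(( 55 + 5 + 10 + 5 + 8 + (9+6+4) + (9+6+4) ))
body=9   # "123456789"
echo "header=$header total=$((header + body))"
# → header=121 total=130
```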
Procedure 1.5. Determine message size from logs
- Enable trace logging by adding the following to /etc/qpid/qpidd.conf:
  log-enable=trace+:qpid::SessionState::receiverRecord
  log-enable=info+
  log-to-file=/path/to/file.log
  Note that this logging consumes significant disk space; turn it off by removing these lines after the test is performed.
- (Re)start the broker.
- Send a sample message pattern from a qpid client. This sample message pattern should correspond to your normal utilization, so that the message header and body average sizes match your projected real-world use case.
- After the message pattern is sent, search the log file for records such as the following:
2012-10-16 08:56:20 trace guest@QPID.2fa0df51-6131-463e-90cc-45895bea072c: recv cmd 2: header (121 bytes); properties={{MessageProperties: content-length=9; message-id=d096f253-56b9-33df-9673-61c55dcba4ae; content-type=text/plain; user-id=guest; application-headers={property1:V2:6:str16(value1),property2:V2:6:str16(value2)}; }{DeliveryProperties: priority=4; delivery-mode=2; timestamp=1350370580363; exchange=; routing-key=testQ; }}
This example log entry contains both header size (121 bytes in this case) and message body size (9 bytes in this case, as content-length=9).
Message memory utilization on Broker
- A second instance of the message header is kept - one is stored as raw bytes, the other as a map.
- The Message object uses 600 bytes.
- Each message is guarded by three mutexes and a monitor. These require 208 bytes.
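These contributions can be combined into a rough per-message estimate. The sketch below is illustrative only: it approximates the in-memory map copy of the header as the raw header size, which understates the true footprint, and reuses the 121-byte header and 9-byte body from the earlier worked example:

```shell
# Rough per-message broker memory sketch (illustrative; the map copy of the
# header is approximated as the raw header size, an assumption).
header=121 body=9
message_obj=600   # Message object
locks=208         # three mutexes and a monitor
echo "$(( 2*header + body + message_obj + locks )) bytes (approximate)"
# → 1059 bytes (approximate)
```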
1.3. MRG 2 Features - Where Are They Now?
1.3.1. Configuration file changes
The configuration file for the MRG 2 broker was /etc/qpidd.conf. The configuration file for the MRG 3 broker is now located at /etc/qpid/qpidd.conf.
1.3.2. Cluster configuration changes

Clustering has been removed in MRG 3, and the following MRG 2 cluster options are no longer valid:

- cluster-name
- cluster-mechanism
- cluster-url
- cluster-username
- cluster-password
- cluster-cman
- cluster-size
- cluster-clock-interval
- cluster-read-max
1.3.3. Flow-to-disk replacement
The flow-to-disk implementation has been replaced by disk-paged queues. Any flow_to_disk setting must be removed from the configuration file.
1.3.4. Linear Store
1.3.5. Address string and connection options
1.4. Application Migration
1.4.1. API support in MRG 3
The qpid::types and qpid::messaging APIs are supported in MRG 3.
1.4.2. qpid::messaging Message::get/setContentObject()
Use Message::getContentObject() and Message::setContentObject() to access the semantic content of structured AMQP 1.0 messages. These methods allow the body of the message to be accessed or manipulated as a Variant. Using these methods produces the most widely applicable code, as they work for both protocol versions and with map, list, text, and binary messages.
bool Formatter::isMapMsg(qpid::messaging::Message& msg)
{
    return (msg.getContentObject().getType() == qpid::types::VAR_MAP);
}

bool Formatter::isListMsg(qpid::messaging::Message& msg)
{
    return (msg.getContentObject().getType() == qpid::types::VAR_LIST);
}

qpid::types::Variant::Map Formatter::getMsgAsMap(qpid::messaging::Message& msg)
{
    qpid::types::Variant::Map intMap;
    intMap = msg.getContentObject().asMap();
    return intMap;
}

qpid::types::Variant::List Formatter::getMsgAsList(qpid::messaging::Message& msg)
{
    qpid::types::Variant::List intList;
    intList = msg.getContentObject().asList();
    return intList;
}
Message::getContent() and Message::setContent() continue to refer to the raw bytes of the content. The encode() and decode() methods in the API continue to decode map and list messages in the AMQP 0-10 format.
1.4.3. Ambiguous Addresses in AMQP 1.0
If the broker cannot determine from an AMQP 1.0 address whether a queue or a topic is intended, the client receives the error "Ambiguous address, please specify queue or topic as node type".
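One way to avoid the error is to declare the node type explicitly in the address string the client uses; the names below are illustrative:

```
my-queue; {node: {type: queue}}
my-topic; {node: {type: topic}}
```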
Chapter 2. Start the Messaging Broker
2.1. Starting the Broker via command line vs as a service
2.2. Running the Broker at the command line
2.2.1. Start the Broker at the command line
Start the Broker
- By default, the broker is installed in /usr/sbin/. If this directory is not on your path, type the full path to start the broker:
  /usr/sbin/qpidd -t
  You will see output similar to the following when the broker starts:
  [date] [time] info Loaded Module: libbdbstore.so.0
  [date] [time] info Locked data directory: /var/lib/qpidd
  [date] [time] info Management enabled
  [date] [time] info Listening on port 5672
  The -t or --trace option enables debug tracing, printing messages to the terminal.
  Note: The locked data directory /var/lib/qpidd is used for persistence, which is enabled by default.
2.2.2. Stop the Broker when started at the command line
- To stop the broker, press Ctrl+C at the shell prompt. Output similar to the following indicates a clean shutdown:
  [date] [time] notice Shutting down.
  [date] [time] info Unlocked data directory: /var/lib/qpidd
2.3. Running the Broker as a service
2.3.1. Run the Broker as a service
- In production scenarios, Red Hat Enterprise Messaging is usually run as a service. To start the broker as a service, run the following command with root privileges:
  service qpidd start
  The message broker starts with the message:
  Starting Qpid AMQP daemon: [ OK ]
2.3.2. Stop the Broker service
- To check the status of a broker running as a service, use the service qpidd status command. Stop the broker with the service qpidd stop command:
  # service qpidd status
  qpidd (pid PID) is running...
  # service qpidd stop
  Stopping Qpid AMQP daemon: [ OK ]
2.3.3. Configure the Broker service to start automatically when the server is started
To start the broker automatically when the server boots, enable the qpidd service.
- Run the following command as root:
chkconfig qpidd on
2.4. Run multiple Brokers on one machine
2.4.1. Running multiple Brokers
2.4.2. Start multiple Brokers
- Select two available ports, for example 5555 and 5556.
- Start each new broker, using the --data-dir option to specify a separate data directory for each:
  $ qpidd -p 5555 --data-dir /tmp/qpid/store/1
  $ qpidd -p 5556 --data-dir /tmp/qpid/store/2
Chapter 3. Give Yourself (Broker) Options
3.1. Set Broker options at the command line
- This example uses the command-line option -t to start the broker with debug tracing:
  $ /usr/sbin/qpidd -t
3.2. Set Broker options in a configuration file
- Become the root user, and open the /etc/qpid/qpidd.conf file in a text editor.
- This example uses the configuration file to enable debug tracing. The change takes effect the next time the broker is started, and applies to every subsequent session:
  # Configuration file for qpidd
  trace=1
- If you are running the broker as a service, you need to restart the service to reload the configuration options.
# service qpidd restart
Stopping qpidd daemon: [ OK ]
Starting qpidd daemon: [ OK ]
- If you are running the broker from the command-line, start the broker with no command-line options to use the configuration file.
# /usr/sbin/qpidd
[date] [time] info Locked data directory: /var/lib/qpidd
[date] [time] info Management enabled
[date] [time] info Listening on port 5672
3.3. Broker options
3.3.1. Options for running the Broker as a Daemon
Changes: New in MRG 3.
Options for running the broker as a daemon

Option | Description
---|---
-d | Run in the background as a daemon. Log messages from the broker are sent to syslog (/var/log/messages) by default.
-q | Shut down the broker that is currently running.
-c | Check if the daemon is already running. If it is running, return the process ID.
--wait=<seconds> | Wait the specified number of seconds during initialization and shutdown. If the daemon has not successfully completed initialization or shutdown within this time, an error is returned. On shutdown, the daemon waits this period to allow the broker to shut down before reporting success or failure. This option must be used in conjunction with the -d option, or it is ignored.
3.3.2. General Broker options
List of General Broker Command-line Options
- -h: Displays the help message.
- --interface <ipaddr>: Listen on the specified network interface. Can be used multiple times for multiple network interfaces. This option supports IPv4 and IPv6 addresses. You can use an explicit address, or the name of a network adapter (for example, eth0 or em1). If you specify a network adapter name, the broker binds to all addresses bound to that adapter.
- --link-heartbeat-interval <seconds>: The number of seconds to wait for a federated link heartbeat. The default is 120 seconds.
- --link-maintenance-interval <seconds>: The number of seconds backup brokers wait between verifying the link to the primary, reconnecting if required. The default is 2.
- -p <Port_Number>: Instructs the broker to use the specified port. Defaults to port 5672. It is possible to run multiple brokers simultaneously by using different port numbers.
- --paging-dir <directory>: The directory to use for disk-paged queues.
- --socket-fd <fd>: Use an existing socket, specified by its file descriptor. Can be used multiple times for multiple sockets. This is useful when the broker is started by a parent process, for example during testing.
- -t: Enables verbose log messages, for debugging only.
- --tcp-nodelay on|off: Disables the batching of small TCP packets (Nagle's algorithm). This increases throughput, especially in synchronous operations. Set to on by default. You can also set this in the configuration file using QPID_TCP_NODELAY=on|off.
- -v: Displays the installed version.
3.3.3. Logging
By default, log messages are sent to stderr if the broker is run from the command line, or to syslog (/var/log/messages) if the broker is run as a service.
Table 3.1. Logging Options
Option | Description
---|---
-t [--trace] | Enables all logging.
--log-disable RULE | Disables logging for selected levels and components. RULE is of the form LEVEL[+][:PATTERN]. Levels are one of: trace, debug, info, notice, warning, error, critical. This allows uninteresting log messages to be dropped during debugging. Can be used multiple times.
--log-enable RULE (default: notice+) | Enables logging for selected levels and components. RULE is of the form LEVEL[+][:PATTERN]. Levels are one of: trace, debug, info, notice, warning, error, critical. For example, --log-enable warning+ logs all warning, error, and critical messages, and --log-enable debug:framing logs debug messages from the framing namespace. Can be used multiple times.
--log-time yes|no | Include the time in log messages.
--log-level yes|no | Include the severity level in log messages.
--log-source | Include the source file and line in log messages.
--log-thread yes|no | Include the thread ID in log messages.
--log-function yes|no | Include the function signature in log messages.
--log-hires-timestamp yes|no (default: no) | Use high-resolution timestamps in log messages.
--log-category yes|no (default: yes) | Include the category in log messages.
--log-prefix STRING | Prefix to add to all log messages.
--log-to-stderr yes|no | Send logging output to stderr. Enabled by default when run from the command line.
--log-to-stdout yes|no | Send logging output to stdout.
--log-to-file FILE | Send log output to the specified file.
--log-to-syslog yes|no | Send logging output to syslog. Enabled by default when run as a service.
--syslog-name NAME | Specify the name to use in syslog messages. The default is qpidd.
--syslog-facility LOG_XXX | Specify the facility to use in syslog messages. The default is LOG_DAEMON.
3.3.4. Modules
Table 3.2. Options for using modules with the broker
Option | Description
---|---
--load-module MODULENAME | Use the specified module as a plug-in.
--module-dir <DIRECTORY> | Use a different module directory.
--no-module-dir | Ignore module directories.
To see the help text for modules, use the --help option:
# /usr/sbin/qpidd --help
3.3.5. Default Modules
- XML exchange type
- Persistence
- Clustering
3.3.6. Persistence Options
Table 3.3. Journal Options
Option | Default | Description
---|---|---
--store-dir DIR | See description | Store directory location for persistence journals. The default is /var/lib/qpidd when run as a daemon, or ~/.qpidd when run from the command line. This option can be used to override the default location, or the location specified by --data-dir. It is required if --no-data-dir is used.
--truncate yes|no | no | If yes|true|1, truncates the store (discards any existing records). If no|false|0, preserves the existing store files for recovery.
--wcache-page-size N | 32 | Size of the pages in the write page cache, in KiB. Allowable values are powers of two, starting at 4: 4, 8, 16, 32, and so on. Lower values decrease latency at the expense of throughput.
--wcache-num-pages N | 16 | Number of pages in the write page cache. Minimum value: 4.
--tpl-wcache-page-size N | 4 | Size of the pages in the transaction prepared list write page cache, in KiB. Allowable values are powers of two, starting at 4. Lower values decrease latency at the expense of throughput.
--tpl-wcache-num-pages N | 16 | Number of pages in the transaction prepared list write page cache. Minimum value: 4.
--efp-partition N | 1 | Empty File Pool broker partition to use for finding empty journal files. If this option is not specified, the default partition value of 1 is used. This value translates to the broker partition directory p001. To select a partition and journal file size other than the broker default, use qpid-config with the --efp-partition and --efp-file-size options, for example: qpid-config add queue test-queue-5 --durable --efp-partition 5 --efp-file-size 8192. Important: The partition must exist prior to starting the broker.
--efp-file-size N | 2048 | Empty File Pool broker journal file size, in KiB. Must be a multiple of 4 KiB. If this option is not specified, the default file size of 2048 KiB is used. To use the option, see the command example for --efp-partition.
3.3.7. Queue Options
Table 3.4. Queue Options
Option | Default | Description
---|---|---
--queue-purge-interval N | 600 | Specifies the interval, in seconds, at which the broker browses all queues and purges all messages with an expired Time To Live (TTL). Use this option for queues where consumers are consistently behind producers in message processing, to ensure expired messages are not held past their TTL.
3.3.8. Resource Quota Options
Connection limits are controlled by the --max-connections broker option.
Table 3.5. Resource Quota Options
Option | Description | Default Value
---|---|---
--max-connections N | Total concurrent connections to the broker. | 500
--max-negotiate-time N | The time during which initial protocol negotiation must succeed. This prevents resource starvation by badly behaved clients, or by transient network issues that prevent connections from completing. | 500
--session-max-unacked N | The broker sends messages on a session without waiting for acknowledgement, up to this limit (or sooner, if the aggregate link credit for the session is lower). When this limit is reached, the broker waits for acknowledgement from the client before sending more messages. | 5000 (approximately 625 KB per session)
Notes

- --max-connections is a qpid core limit and is enforced whether ACL is enabled or not.
- --max-connections is enforced per broker. In a cluster of N nodes where all brokers set the maximum connections to 20, the total number of allowed connections for the cluster is N*20.
- --session-max-unacked helps control memory use in cases where a large number of sessions are used with AMQP 1.0, which allocates a per-session buffer for unacknowledged message deliveries. It can be used to make each session's buffer smaller if the broker has a large number of sessions and memory overhead is an issue.
ACL-based Quotas
Table 3.6. ACL Command-line Option
Option | Description | Default Value
---|---|---
--acl-file FILE | The policy file to load, read from the data directory. | policy.acl
Table 3.7. ACL-based Resource Quota Options
Option | Description | Default Value
---|---|---
--connection-limit-per-user N | The maximum number of connections allowed per user. 0 implies no limit. | 0
--connection-limit-per-ip N | The maximum number of connections allowed per host IP address. 0 implies no limit. | 0
--max-queues-per-user N | The maximum number of concurrent queues created by an individual user. 0 implies no limit. | 0
Notes

- In a cluster, the actual number of connections may exceed the connection quota value N by one less than the number of member nodes in the cluster. For example, in a 5-node cluster with a limit of 20 connections, the actual number of connections can reach 24 before limiting takes place.
- Cluster connections are checked against the connection limit when they are established. The cluster connection is denied if a free connection is not available. After establishment, however, a cluster connection does not consume a connection.
- Allowed values for N are 0 to 65535.
- These limits are enforced per cluster.
- A value of zero (0) disables that option's limit checking.
- Per-user connections are identified by the authenticated user name.
- Per-IP connections are identified by the <broker-ip><broker-port>-<client-ip><client-port> tuple, which is also the management connection index.
  - With this scheme, host systems may be identified by several names, such as localhost, the IPv4 address 127.0.0.1, or the IPv6 address ::1, and a separate set of connections is allowed for each name.
  - Per-IP connections are counted regardless of the user credentials provided with the connections. An individual user may be allowed 20 connections, but if the client host has a 5-connection limit, then that user may connect from that system only 5 times.
3.3.9. Security Options
- Changes
- New for MRG 3.
Table 3.8. General Broker Options
Security options for running the broker | |
---|---|
--ssl-use-export-policy | Use NSS export policy |
--ssl-cert-password-file <PATH> | Required. Plain-text file containing password to use for accessing certificate database. |
--ssl-cert-name <NAME> | Name of the certificate to use. Default is localhost.localdomain . |
--ssl-cert-db <PATH> | Required. Path to directory containing certificate database. |
--ssl-port <NUMBER> | Port on which to listen for SSL connections. If no port is specified, port 5671 is used. If the SSL port chosen is the same as the port for non-SSL connections (i.e. if the --ssl-port and --port options are the same), both SSL encrypted and unencrypted connections can be established to the same port. However in this configuration there is no support for IPv6. |
--ssl-require-client-authentication |
Require SSL client authentication (i.e. verification of a client certificate) during the SSL handshake. This occurs before SASL authentication, and is independent of SASL.
This option enables the EXTERNAL SASL mechanism for SSL connections. If the client chooses the EXTERNAL mechanism, the client's identity is taken from the validated SSL certificate, using the CN, and appending any DC's to create the domain. For instance, if the certificate contains the properties
CN=bob, DC=acme, DC=com, the client's identity is bob@acme.com.
If the client chooses a different SASL mechanism, the identity taken from the client certificate will be replaced by that negotiated during the SASL handshake.
|
--ssl-sasl-no-dict | Do not accept SASL mechanisms that can be compromised by dictionary attacks. This prevents a weaker mechanism being selected instead of EXTERNAL, which is not vulnerable to dictionary attacks. |
--require-encryption | This will cause qpidd to only accept encrypted connections: clients using EXTERNAL SASL on the SSL port, or GSSAPI on the TCP port. |
--listen-disable PROTOCOL | Disable connections over the specified protocol. For example: --listen-disable tcp disables connections over TCP and forces the broker to only accept connections on the SSL-port. |
3.3.10. Transactions Options
Table 3.9. Options for transactions
Option | Description |
---|---|
--dtx-default-timeout <seconds> | Default: 60 seconds. Journal records for DTX transactions are deleted after the specified number of seconds. This occurs when an external Transaction Manager (TM) prepares a DTX transaction but does not commit or abort it. After the specified number of seconds these entries are considered orphaned and are expunged. |
Chapter 4. Queues
4.1. Message Queue
The broker periodically attempts to purge expired messages from queues, at an interval controlled by the broker option --queue-purge-interval. While this is not a qpid-config option, it is worth understanding that message TTL can be configured, and when the purge attempt is successful the expired messages are removed.
4.2. Create and Configure Queues using qpid-config
The qpid-config command line tool can be used to create and configure queues.
Help for qpid-config is available by running the command with the --help switch:
qpid-config --help
By default, qpid-config runs against the message broker on the current machine. To interact with a message broker on another machine, use the -a or --broker-addr switch. For example:
qpid-config -a server2.testing.domain.com
qpid-config -a user1/secretpassword@server2.testing.domain.com:5772
Queues are created using the qpid-config add queue command. This command takes the name for the new queue as an argument, and optionally queue options.
The following command creates a queue called testqueue1 on the message broker running on the local machine:
qpid-config add queue testqueue1
The following table lists the options available when creating queues with qpid-config:
Table 4.1. Options for qpid-config add queue
Options for qpid-config add queues | |
---|---|
--alternate-exchange exchange-name | Name of the alternate exchange. When the queue is deleted, all remaining messages in this queue are routed to this exchange. Messages rejected by a queue subscriber are also sent to the alternate exchange. |
--durable | The new queue is durable. It will be recreated if the server is restarted, along with any undelivered messages marked as PERSISTENT sent to this queue. |
--file-count integer | The number of files in the queue's persistence journal. Up to a maximum of 64. Attempts to specify more than 64 result in the creation of 64 journal files. |
--file-size integer | File size in pages (64KiB/page). |
--max-queue-size integer | Maximum in-memory queue size in bytes. Note that on 32-bit systems queues will not grow beyond 3GB, regardless of the declared size. |
--max-queue-count integer | Maximum in-memory queue size as a number of messages. |
--limit-policy [none, reject, ring] | Action to take when queue limit is reached. |
--flow-stop-size integer | Turn on sender flow control when the number of queued bytes exceeds this value. |
--flow-resume-size integer | Turn off sender flow control when the number of queued bytes drops below this value. |
--flow-stop-count integer | Turn on sender flow control when the number of queued messages exceeds this value. |
--flow-resume-count | Turn off sender flow control when the number of queued messages drops below this value. |
--group-header | Enable message groups. Specify name of header that holds group identifier. |
--shared-groups | Allow message group consumption across multiple consumers. |
--argument name=value | Specify a key-value pair to add to the queue arguments. This can be used, for example, to specify no-local=true to suppress loopback delivery of self-generated messages. |
Note that exclusive queues cannot be created using qpid-config, as an exclusive queue is only available in the session where it is created.
See Also:
4.3. Memory Allocation Limit (32-bit)
4.4. Exclusive Queues
4.5. Ignore Locally Published Messages
To ignore locally published messages, include the no-local key in the queue declaration as a key:value pair. The value of the key is ignored; the presence of the key is sufficient.
For example, the following command creates such a queue using qpid-config:
qpid-config add queue noloopbackqueue1 --argument no-local=true
4.6. Last Value (LV) Queues
4.6.1. Last Value Queues
4.6.2. Declaring a Last Value Queue
To declare a Last Value Queue, supply the argument qpid.last_value_queue_key when creating the queue.
For example, the following command creates a queue called stock-ticker that uses stock-symbol as the key, using qpid-config:
qpid-config add queue stock-ticker --argument qpid.last_value_queue_key=stock-symbol
- Python
myLastValueQueue = mySession.sender("stock-ticker;{create:always, node:{type:queue, x-declare:{arguments:{'qpid.last_value_queue_key': 'stock-symbol'}}}}")
The key can take string values such as "RHT" and "JAVA", as well as integer values such as 3 and 15.
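The last-value behavior itself can be illustrated without a broker. This pure-Python sketch is illustrative only (the class and property names are invented for the sketch, not broker APIs): the queue keeps at most one message per key, and a new arrival replaces any queued message carrying the same key value:

```python
from collections import OrderedDict

class LastValueQueue:
    """Illustrative model: at most one queued message per key, newest wins."""
    def __init__(self, key_property):
        self.key_property = key_property
        self.messages = OrderedDict()   # key value -> message body

    def send(self, properties, body):
        key = properties[self.key_property]
        # Replace any queued message that carries the same key value.
        self.messages[key] = body

    def fetch_all(self):
        items = list(self.messages.items())
        self.messages.clear()
        return items

q = LastValueQueue("stock-symbol")
q.send({"stock-symbol": "RHT"}, "price=60")
q.send({"stock-symbol": "JAVA"}, "price=4")
q.send({"stock-symbol": "RHT"}, "price=61")   # replaces price=60

result = q.fetch_all()
print(result)
```

A slow consumer of the stock-ticker queue above therefore always sees the latest price per symbol, rather than a backlog of stale updates.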
4.7. Message Groups
4.7.1. Message Groups
4.7.2. Message Group Consumer Requirements
redelivered=True
, and the rest of the group is missing.
4.7.3. Configure a Queue for Message Groups using qpid-config
qpid-config
command creates a queue called "MyMsgQueue", with message grouping enabled and using the header key "GROUP_KEY" to identify message groups.
qpid-config add queue MyMsgQueue --group-header="GROUP_KEY" --shared-groups
4.7.4. Default Group
The broker assigns messages without a group identifier to a default group named qpid.no-group. If a message cannot be assigned to any other group, it is assigned to this group.
4.7.5. Override the Default Group Name
The default group name is qpid.no-group. You can change this default group name by supplying a value for the default-message-group configuration parameter to the broker at start-up. For example, using the command line:
qpidd --default-message-group "EMPTY-GROUP"
4.8. Alternate Exchanges
4.8.1. Rejected and Orphaned Messages
4.8.2. Alternate Exchange
- Messages that are acquired and then rejected by a message consumer (rejected messages).
- Unacknowledged messages in a queue that is deleted (orphaned messages).
- Messages sent to the exchange with a routing key for which there is no matching binding on the exchange.
4.9. Queue Sizing
4.9.1. Controlling Queue Size
Queue size is controlled by setting a maximum size in bytes (qpid.max_size) and a maximum message count (qpid.max_count) for the queue.
qpid.max_size is specified in bytes. qpid.max_count is specified as a number of messages.
For example, the following qpid-config command creates a queue with a maximum in-memory size of 200MB, and a maximum of 5000 messages:
qpid-config add queue my-queue --max-queue-size=204800000 --max-queue-count 5000
In an address string, the qpid.max_count and qpid.max_size directives go inside the arguments of the x-declare of the node. For example, the following address creates the same queue as the qpid-config command above:
- Python
tx = ssn.sender("my-queue; {create: always, node: {x-declare: {'auto-delete': True, arguments:{'qpid.max_count': 5000, 'qpid.max_size': 204800000}}}}")
Note that the qpid.max_count attribute will only be applied if the queue does not exist when this code is executed.
qpid.policy_type
The behavior when a queue reaches these limits is configurable. By default, on non-durable queues the behavior is reject: further attempts to send to the queue result in a TargetCapacityExceeded exception being thrown at the sender.
The behavior is set using the qpid.policy_type option. The possible values are:
- reject: Message publishers receive a TargetCapacityExceeded exception. This is the default behavior for non-durable queues.
- ring: The oldest messages are removed to make room for newer messages.
The following qpid-config command sets the limit policy to ring:
qpid-config add queue my-queue --max-queue-size=204800 --max-queue-count 5000 --limit-policy ring
- Python
tx = ssn.sender("my-queue; {create: always, node: {x-declare: {'auto-delete': True, arguments:{'qpid.max_count': 5000, 'qpid.max_size': 204800, 'qpid.policy_type': 'ring'}}}}")
See Also:
4.9.2. Disk-paged Queues
Disk-paged queues replace the flow-to-disk queue policy with more performant paged queues. Paged queues are backed by a file, and a configurable number of pages of messages are held in memory. Paged queues balance responsive performance (by holding messages in-memory and writing pages of messages rather than individual messages to disk) with load capacity (by allowing the queue to use the file system for additional storage).
Paged queues use memory-mapped files, so the kernel limit on memory-mapped regions may need to be raised. The current limit can be viewed with cat /proc/sys/vm/max_map_count, and set at run-time with:
echo 100000 >/proc/sys/vm/max_map_count
To make the change persistent, add the setting to the /etc/sysctl.conf file.
A paged queue can also be declared durable, which provides persistence for messages that request it.
- A paged queue cannot handle a message larger than the page size, so the queue must be configured with pages at least as big as the largest anticipated message.
- A paged queue cannot also be a LVQ or Priority queue. An exception is thrown by an attempt to create a paged queue with LVQ or Priority specified.
To configure a queue as a paged queue, specify the argument qpid.paging as true when declaring the queue.
The following optional arguments tune paged queue behavior:
- qpid.max_pages_loaded: Controls how many pages are allowed to be held in memory at any given time. Default value is 4.
- qpid.page_factor: Controls the size of the page, as a multiple of the platform-defined page size. Default value is 1. On Linux the platform-defined page size can be examined using the command getconf PAGESIZE. It is typically 4k, depending on your CPU architecture.
The following command line example demonstrates creation of a paged queue:
qpid-config add queue my-paged-queue --argument qpid.paging=True --argument qpid.max_pages_loaded=100 --argument qpid.page_factor=1
- Python
tx = session.sender("my-paged-queue; {create: always, node: {x-declare: {'auto-delete': True, arguments:{'qpid.page_factor': 1, 'qpid.max_pages_loaded': 100, 'qpid.paging': True}}}}")
4.9.3. Detect Overwritten Messages in Ring Queues
Overwritten messages in a ring queue can be detected using the qpid.queue_msg_sequence argument.
The qpid.queue_msg_sequence argument accepts a single string value as its parameter. This string value is added by the broker as a message property on each message that comes through the ring queue, and the property is set to a sequentially incrementing integer value.
Receivers can examine the qpid.queue_msg_sequence property on each message to determine if interim messages have been overwritten in the ring queue, and respond appropriately.
The following example demonstrates the use of qpid.queue_msg_sequence:
- Python
import sys
from qpid.messaging import *
from qpid.datatypes import Serial

conn = Connection.establish("localhost:5672")
ssn = conn.session()

name = "ring-sequence-queue"
key = "my_sequence_key"
addr = "%s; {create:sender, delete:always, node: {x-declare: {arguments: {'qpid.queue_msg_sequence':'%s', 'qpid.policy_type':'ring', 'qpid.max_count':4}}}}" % (name, key)
sender = ssn.sender(addr)

msg = Message()
sender.send(msg)
receiver = ssn.receiver(name)
msg = receiver.fetch(1)
try:
    seqNo = Serial(long(msg.properties[key]))
    if seqNo != 1:
        print "Unexpected sequence number. Should be 1. Received (%s)" % seqNo
    else:
        print "Received message with sequence number 1"
except:
    print "Unable to get key (%s) from message properties" % key

# Test that sequence numbers for ring queues show gaps when queued
# messages are overwritten
msg = Message()
sender.send(msg)
msg = receiver.fetch(1)
seqNo = Serial(long(msg.properties[key]))
print "Received second message with sequence number %s" % seqNo

# Send 5 more messages to overflow the queue
for i in range(5):
    sender.send(msg)
msg = receiver.fetch(1)
seqNo = msg.properties[key]
if seqNo != 3:
    print "Unexpected sequence number. Should be 3. Received (%s) - Message overwritten in ring queue." % seqNo

receiver.close()
ssn.close()
Note that the sequence number will eventually wrap around; use the Serial class from qpid.datatypes to handle the wrapping correctly.
4.9.4. Enforcing Queue Size Limits via ACL
Table 4.2. Queue Size ACL Rules
User Option | ACL Limit Property | Units |
---|---|---|
qpid.max_size | queuemaxsizelowerlimit | bytes |
 | queuemaxsizeupperlimit | bytes |
qpid.max_count | queuemaxcountlowerlimit | messages |
 | queuemaxcountupperlimit | messages |
qpid.max_pages_loaded | pageslowerlimit | pages |
 | pagesupperlimit | pages |
qpid.page_factor | pagefactorlowerlimit | integer (multiple of the platform-defined page size) |
 | pagefactorupperlimit | integer (multiple of the platform-defined page size) |
Example:
# Example of an ACL specifying queue size constraints
# Note: for legibility this acl line has been split into multiple lines.
acl allow bob@QPID create queue name=q6
    queuemaxsizelowerlimit=500000
    queuemaxsizeupperlimit=1000000
    queuemaxcountlowerlimit=200
    queuemaxcountupperlimit=300
- C++
int main(int argc, char** argv) {
    const char* url = argc > 1 ? argv[1] : "amqp:tcp:127.0.0.1:5672";
    const char* address = argc > 2 ? argv[2] :
        "message_queue; "
        "{ create: always, "
        "  node: "
        "  { type: queue, "
        "    x-declare: "
        "    { arguments: "
        "      { qpid.max_count: 101, "
        "        qpid.max_size: 1000000 "
        "      } "
        "    } "
        "  } "
        "}";
    std::string connectionOptions = argc > 3 ? argv[3] : "";

    Connection connection(url, connectionOptions);
    try {
        connection.open();
        Session session = connection.createSession();
        Sender sender = session.createSender(address);
        ...
The same queue limits can be requested with the following qpid-config command:
qpid-config add queue --max-queue-size=1000000 --max-queue-count=101
Under the example ACL rule above, a requested max_count of 101 violates the count limit (it is below queuemaxcountlowerlimit), so the allow rule is returned with a deny decision.
4.9.5. Queue Threshold Alerts (Edge-triggered)
Queue threshold alerts are configured with the following arguments:
- qpid.alert_count_up: upper threshold (messages)
- qpid.alert_size_up: upper threshold (bytes)
- qpid.alert_count_down: lower threshold (messages)
- qpid.alert_size_down: lower threshold (bytes)
The broker-wide default threshold ratio can be set using the --default-event-threshold-ratio command line option; otherwise it defaults to 80%.
There are two different events:
- Threshold crossed increasing: The increasing event is raised when the queue depth goes from (upper-threshold - 1) to upper-threshold and the increasing event flag is not already set. When an increasing event occurs the increasing event flag is set. The increasing event flag must be cleared (by a decreasing event) before further increasing events will be raised. This prevents multiple retriggering of this event by fluctuation of queue depth around the upper threshold.
- Threshold crossed decreasing: The decreasing event is raised when the increasing event flag is set and the queue depth goes from (lower-threshold + 1) to lower-threshold. The decreasing event clears the increasing event flag, allowing further increasing events to be triggered and preventing multiple retriggering of this event by fluctuation of queue depth around the lower threshold.
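The edge-triggered behavior amounts to a one-bit latch. The following pure-Python sketch is illustrative only (it is not broker code; the function name and event labels are invented for the sketch): at most one upward event fires until the depth falls back to the lower threshold.

```python
def threshold_events(depths, upper, lower):
    """Return edge-triggered events for a sequence of queue depths."""
    events, latched = [], False
    prev = 0
    for depth in depths:
        if not latched and prev < upper <= depth:
            # Crossing the upper threshold sets the latch.
            events.append(("crossed_upward", depth))
            latched = True
        elif latched and prev > lower >= depth:
            # Falling to the lower threshold clears the latch.
            events.append(("crossed_downward", depth))
            latched = False
        prev = depth
    return events

# Depth fluctuates around the upper threshold (3): only one upward
# event fires until the depth drops to the lower threshold (1).
print(threshold_events([1, 2, 3, 2, 3, 2, 1, 3], upper=3, lower=1))
```

Note how the second touch of depth 3 produces no event, which is exactly the rate limiting described above; a level-triggered scheme would have fired twice.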
qmf.default.topic/agent.ind.event.org_apache_qpid_broker.queueThresholdCrossedUpward.#
qmf.default.topic/agent.ind.event.org_apache_qpid_broker.queueThresholdCrossedDownward.#
qmf::org::apache::qpid::broker::EventQueueThresholdCrossedUpward(name, count, size)
qmf::org::apache::qpid::broker::EventQueueThresholdCrossedDownward(name, count, size)
Window One
- Python
import sys
from qpid.messaging import *

conn = Connection.establish("localhost:5672")
session = conn.session()
rcv = session.receiver("qmf.default.topic/agent.ind.event.org_apache_qpid_broker.queueThresholdCrossedUpward.#")

while True:
    event = rcv.fetch()
    print "Threshold exceeded on queue %s" % event.content[0]["_values"]["qName"]
    print "at a depth of %s messages, %s bytes" % (
        event.content[0]["_values"]["msgDepth"],
        event.content[0]["_values"]["byteDepth"])
    session.acknowledge()
Window Two
- Python
import sys
from qpid.messaging import *

connection = Connection.establish("localhost:5672")
session = connection.session()
rcv = session.receiver("threshold-queue; {create:always, node:{x-declare:{auto-delete:True, arguments:{'qpid.alert_count_down':1,'qpid.alert_count_up':3}}}}")
snd = session.sender("threshold-queue")

snd.send("Message1")
snd.send("Message2")
snd.send("Message3")

rcv.fetch()
rcv.fetch()
rcv.fetch()
4.10. Deleting Queues
4.10.1. Delete a Queue with qpid-config
The following qpid-config command deletes an empty queue:
qpid-config del queue queue-name
To delete a queue that contains messages, use the --force switch:
qpid-config del queue queue-name --force
4.10.2. Automatically Deleted Queues
The temporary queues created by the qpid-config utility to receive information from the message broker are an example of this pattern.
A queue declared with auto-delete is deleted by the broker after the last consumer has released its subscription to the queue. After the auto-delete queue is created, it becomes eligible for deletion as soon as a consumer subscribes to the queue. When the number of consumers subscribed to the queue reaches zero, the queue is deleted.
- Python
responsequeue = session.receiver('my-response-queue; {create:always, node:{x-declare:{auto-delete:True}}}')
Note
The queue in this example is bound to the default exchange: a pre-configured nameless direct exchange.
A custom timeout can be configured to provide a grace period before the deletion occurs.
Note
If qpid.auto_delete_timeout:0 is specified, the parameter has no effect: setting the parameter to 0 turns off the delayed auto-delete function.
- Python
responsequeue = session.receiver("my-response-queue; {create:always, node:{x-declare:{auto-delete:True, arguments:{'qpid.auto_delete_timeout':120}}}}")
- Python
testqueue = session.sender("my-test-queue; {create:always, node:{x-declare:{auto-delete:True}}}")
testqueuehandle = session.receiver("my-test-queue")
.....
connection.close()  # testqueuehandle is now released
A queue can be declared both exclusive and auto-delete; such queues are deleted by the broker when the session that declared the queue ends, since that session is the only possible subscriber.
4.10.3. Queue Deletion Checks
- If ACL is enabled, the broker will check that the user who initiated the deletion has permission to do so.
- If the ifEmpty flag is passed, the broker will raise an exception if the queue is not empty.
- If the ifUnused flag is passed, the broker will raise an exception if the queue has subscribers.
- If the queue is exclusive, the broker will check that the user who initiated the deletion owns the queue.
4.11. Producer Flow Control
4.11.1. Flow Control
Queues with a limit policy of ring do not have queue flow thresholds enabled; these queues deal with reaching capacity through the ring mechanism. All other queues with limits have two threshold values that are set by the broker when the queue is created:
- flow_stop_threshold: the queue resource utilization level that enables flow control when exceeded. Once crossed, the queue is considered in danger of overflow, and the broker will cease acknowledging sent messages to induce producer flow control. Note that either queue size or message count capacity utilization can trigger this.
- flow_resume_threshold: the queue resource utilization level that disables flow control when dropped below. Once crossed, the queue is no longer considered in danger of overflow, and the broker again acknowledges sent messages. Note that once triggered by either, both queue size and message count must fall below this threshold before producer flow control is deactivated.
For example, if a queue has a qpid.max_size of 204800 (200KB) and a flow_stop_threshold of 80, then the broker will initiate producer flow control if the queue reaches 80% of 204800, or 163840 bytes of enqueued messages.
When the queue depth falls below the flow_resume_threshold, producer flow control is stopped. Setting the flow_resume_threshold above the flow_stop_threshold has the obvious consequence of locking producer flow control on, so this configuration should be avoided.
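The threshold arithmetic is easy to verify in a few lines. This pure-Python sketch (the function name and percentages are illustrative, not broker options) computes the byte levels at which flow control engages and releases for a given qpid.max_size:

```python
def flow_levels(max_size, stop_pct, resume_pct):
    """Byte levels at which producer flow control starts and stops."""
    # resume above stop would latch flow control permanently on.
    assert resume_pct <= stop_pct, "flow_resume must not exceed flow_stop"
    return max_size * stop_pct // 100, max_size * resume_pct // 100

# 200KB queue, stop at 80%, resume at 70% (illustrative values).
stop_bytes, resume_bytes = flow_levels(204800, stop_pct=80, resume_pct=70)
print(stop_bytes)    # flow control engages at 163840 bytes
print(resume_bytes)  # flow control releases below 143360 bytes
```

The gap between the two levels provides hysteresis: producers are not toggled on and off by small fluctuations around a single limit.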
4.11.2. Queue Flow State
The flow control state of a queue is visible in the flowState boolean of the queue's QMF management object. When this is true, flow control is active.
The management object also contains a counter, flowStoppedCount, that increments each time flow control becomes active for the queue.
4.11.3. Broker Default Flow Thresholds
The broker-wide default flow thresholds are set with the following broker options:
- --default-flow-stop-threshold: flow control activated at this percentage of capacity (size or count)
- --default-flow-resume-threshold: flow control deactivated at this percentage of capacity (size or count)
For example:
qpidd --default-flow-stop-threshold=90 --default-flow-resume-threshold=75
4.11.4. Disable Broker-wide Default Flow Thresholds
To disable the broker-wide default flow thresholds, set both values to 100:
qpidd --default-flow-stop-threshold=100 --default-flow-resume-threshold=100
4.11.5. Per-Queue Flow Thresholds
Flow thresholds can be set on individual queues using the following queue arguments:
- qpid.flow_stop_size: integer flow stop threshold value in bytes.
- qpid.flow_resume_size: integer flow resume threshold value in bytes.
- qpid.flow_stop_count: integer flow stop threshold value as a message count.
- qpid.flow_resume_count: integer flow resume threshold value as a message count.
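As with the other queue arguments in this chapter, these values can be supplied in an address string. The sketch below only builds the address text; the queue name and threshold values are illustrative, and the resulting string would be passed to session.sender() as in the earlier examples:

```python
# Illustrative per-queue flow control thresholds.
args = {
    "qpid.flow_stop_count": 3000,
    "qpid.flow_resume_count": 2000,
    "qpid.flow_stop_size": 2097152,    # bytes
    "qpid.flow_resume_size": 1048576,  # bytes
}
arg_text = ", ".join("'%s': %s" % (k, v) for k, v in sorted(args.items()))
addr = ("my-flow-queue; {create: always, node: "
        "{x-declare: {arguments: {%s}}}}" % arg_text)
print(addr)
```

Note that the resume values sit below their corresponding stop values, as required to avoid latching flow control on.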
Chapter 5. Reliably Deliver Messages with Persistence
5.1. Persistent Messages
In C++, call Message.setDurable(true) to mark a message as persistent.
5.2. Durable Queues and Guaranteed Delivery
5.2.1. Configure persistence stores
notice Journal "TplStore": Created
Important
The --store-dir option specifies the directory used for the persistence store and any configuration information. The default directory is /var/lib/qpidd when qpidd is run as a service, or ~/.qpidd when qpidd is run from the command line. If --store-dir is not specified, a subdirectory is created within the directory identified by --data-dir; if --store-dir is not specified and --no-data-dir is specified, an error is raised.
Important
Exception: Data directory is locked by another process.
5.2.2. Durable Queues
5.2.3. Create a durable queue using qpid-config
Use the --durable option with qpid-config add queue to create a durable queue. For example:
qpid-config add queue --durable durablequeue
5.2.4. Mark a message as persistent
To mark a message as persistent, set its delivery mode to PERSISTENT. For instance, in C++, the following code makes a message persistent:
message.getDeliveryProperties().setDeliveryMode(PERSISTENT);
Table 5.1. Persistent Message and Durable Queue Disk States
Message and queue combination | Disk state |
---|---|
A persistent message AND durable queue | Written to disk |
A persistent message AND non-durable queue | Not written to disk |
A non-persistent message AND non-durable queue | Not written to disk |
A non-persistent message AND durable queue | Not written to disk |
5.2.5. Durable Message State After Restart
When the broker restarts, it sets the redelivered flag on all recovered persistent messages, because it cannot determine whether they were delivered before the restart. Consumers should therefore treat the redelivered flag as a suggestion.
5.3. Message Journal
5.3.1. Journal Description
The Linear Store journal grows dynamically, so the fixed journal geometry options of the legacy store (file-size, num-jfiles) no longer apply. Journal files are drawn from an Empty File Pool (EFP), which uses a default file size of 2MB per file.
The --store-dir option specifies where the store is located. The broker creates a "qls" (Qpid linear store) directory under the specified store dir, where it locates the Empty File Pool, the db4 database and the journals. If a specific --store-dir is not specified, the directory specified by --data-dir will be used, otherwise the default location is used.
5.3.2. Configuring the Journal
See Also:
Chapter 6. Increase Message Throughput with Performance Tuning
6.1. Run the JMS Client with real-time Java
- The client must be run on a realtime operating system, and supported by your realtime java vendor. Red Hat supports only Sun and IBM implementations.
- Place the realtime .jar files provided by your vendor in the classpath.
- Set the following JVM argument:
-Dqpid.thread_factory="org.apache.qpid.thread.RealtimeThreadFactory"
This ensures that the JMS client uses javax.realtime.RealtimeThread instead of java.lang.Thread. Optionally, the priority of the threads can be set using: -Dqpid.rt_thread_priority=30
By default, the priority is set at 20. - Based on your workload, the JVM will need to be tuned to achieve the best results. Refer to your vendor's JVM tuning guide for more information.
6.2. qpid-latency-test
qpid-latency-test
is a command-line utility for measuring latency. It is supplied as part of the qpid-cpp-client-devel
package.
qpid-latency-test
provides statistics on the performance of your Messaging Server. You can compare the results of qpid-latency-test
with the performance of your application to determine whether your application or the Messaging Server is a performance bottleneck.
Running qpid-latency-test --help provides further information on running the utility.
6.3. Infiniband
6.3.1. Using Infiniband
6.3.2. Prerequisites for using Infiniband
- The kernel driver and the user space driver for your Infiniband hardware must both be installed.
- Allocate lockable memory for Infiniband. By default, the operating system can swap out all user memory. Infiniband requires lockable memory, which cannot be swapped out. Each connection requires 8 megabytes (8192 KB) of lockable memory. To allocate lockable memory, edit /etc/security/limits.conf to set the limit, which is the maximum amount of lockable memory that a given process can allocate.
- The Infiniband interface must be configured to allow IP over Infiniband. This is used for RDMA connection management.
6.3.3. Configure Infiniband on the Messaging Server
Prerequisites
- The package qpid-cpp-server-rdma must be installed for Qpid to use RDMA.
- The RDMA plugin, rdma.so, must be present in the plugins directory.
Procedure 6.1. Configure Infiniband on the Messaging Server
Allocate lockable memory for Infiniband
Edit /etc/security/limits.conf to allocate lockable memory for Infiniband. For example, if the user running the server is qpidd, and you wish to support 64 connections (64 * 8192 KB = 524288 KB), add these entries:
qpidd soft memlock 524288
qpidd hard memlock 524288
6.3.4. Configure Infiniband on a Messaging Client
Prerequisites
- The package qpid-cpp-client-rdma must be installed.
Procedure 6.2. Configure Infiniband on a Messaging Client
Allocate lockable memory for Infiniband
Edit /etc/security/limits.conf to allocate lockable memory. To set a limit for all users, for example supporting 4 connections (4 * 8192 KB = 32768 KB), add this entry:
* soft memlock 32768
If you want to set a limit for a particular user, use the user name for that user when setting the limits:
andrew soft memlock 32768
Chapter 7. Logging
7.1. Logging in C++
- Use QPID_LOG_ENABLE to set the level of logging you are interested in (trace, debug, info, notice, warning, error, or critical): export QPID_LOG_ENABLE="warning+"
- The qpidd broker and C++ clients use QPID_LOG_OUTPUT to determine where logging output should be sent. This is either a file name or the special values stderr, stdout, or syslog: export QPID_LOG_TO_FILE="/tmp/myclient.out"
- From a Windows command prompt, use the following command format to set the environment variables:
set QPID_LOG_ENABLE=warning+ set QPID_LOG_TO_FILE=D:\tmp\myclient.out
7.2. Change Broker Logging Verbosity
- Changes
- New content - added February 2013.
To change broker logging verbosity, use the --log-enable option with the syntax:
--log-enable LEVEL[+][:PATTERN]
To set this in the broker configuration file (/etc/qpid/qpidd.conf by default), use the line:
log-enable=LEVEL[+][:PATTERN]
Notes
- LEVEL is one of: trace, debug, info, notice, warning, error, critical.
- The "+" means log the given severity and any higher severity (without the plus, logging of the given severity only will be enabled).
- PATTERN is the scope of the logging change.
- The string in PATTERN is matched against the fully-qualified name of the C++ function containing the logging statement.
- To see the fully-qualified name of the C++ function containing the logging statement, either check the source code or add the log-function=yes option to the qpid configuration to force the broker to log it.
- For example, --log-enable debug+:ha matches everything in the qpid::ha module, while --log-enable debug+:broker::Queue::consumeNextMessage enables logging of one particular method only (the consumeNextMessage method in the given namespace in this example).
- PATTERN is often set to the module one needs to debug, such as acl, amqp_0_10, broker, ha, management or store.
- The option can be used multiple times.
- Be aware that having just one option like "log-enable=debug+:ha" enables debug logs of ha information, but does not produce any other logs; to add more verbose logging, also add the default value: log-enable=info+
7.3. Change Broker Logging Time Resolution
Procedure 7.1. Change Resolution Logging on a Running Broker
- Edit the file /etc/qpid/qpidd.conf and add the following:
log-time=1
log-enable=info+
log-to-file=/var/lib/qpidd/771830.log
- Launch
qpid-tool
. - Now that you are running qpid-tool, issue the following command:
list broker
You might have to do this a few times before receiving an answer. It can take a while to get all the info from the broker.
You might have to do this a few times before receiving an answer. It can take a while to get all the info from the broker. - When you see a response similar to this:
114 14:03:39 - amqp-broker
Use the number (in this example "114") to refer to the broker. - Issue the following command, substituting the appropriate number for your broker:
call 114 setLogHiresTimestamp 1
- Now look at the log file
/var/lib/qpidd/771830.log
and verify that it has started using highres time stamps. You might need to do something to get the broker to log a few more lines, for example start anotherqpid-tool
. - To return the logging to the normal resolution, issue the following command in
qpid-tool
:call 114 setLogHiresTimestamp 0
- Now look at the log file again, and verify that it has stopped using high-resolution timestamps.
7.4. Tracking Object Lifecycles
The [Model] log category tracks the creation, destruction, and major state changes of Connection, Session, and Subscription objects, and of Exchange, Queue, and Binding objects.
At the debug log level are log entries that mirror the corresponding management events. Debug level statements include user names, remote host information, and other references using the user-specified names for the referenced objects.
At the trace log level are log entries that track the construction and destruction of managed resources. Trace level statements identify the objects using the internal management keys. The trace statement for each deleted object includes the management statistics for that object.
Enabling the Model log
- Use the switch:
--log-enable trace+:Model
to receive both flavors of log. - Use the switch:
--log-enable debug+:Model
for a less verbose log.
Managed Objects in the logs
The managed object types that appear in the Model log are Connection, Queue, Exchange, Binding, and Subscription.
The examples below show, for each object type, the management event alongside the corresponding debug and trace log entries. The events were captured using qpid-printevents.
1. Connection
event: Fri Jul 13 17:46:23 2012 org.apache.qpid.broker:clientConnect rhost=[::1]:5672-[::1]:34383 user=anonymous
debug: 2012-07-13 13:46:23 [Model] debug Create connection. user:anonymous rhost:[::1]:5672-[::1]:34383
trace: 2012-07-13 13:46:23 [Model] trace Mgmt create connection. id:[::1]:5672-[::1]:34383
event: Fri Jul 13 17:46:23 2012 org.apache.qpid.broker:clientDisconnect rhost=[::1]:5672-[::1]:34383 user=anonymous
debug: 2012-07-13 13:46:23 [Model] debug Delete connection. user:anonymous rhost:[::1]:5672-[::1]:34383
trace: 2012-07-13 13:46:29 [Model] trace Mgmt delete connection. id:[::1]:5672-[::1]:34383 Statistics: {bytesFromClient:1451, bytesToClient:892, closing:False, framesFromClient:25, framesToClient:21, msgsFromClient:1, msgsToClient:1}
2. Session
event: TBD
debug: TBD
trace: 2012-07-13 13:46:09 [Model] trace Mgmt create session. id:18f52c22-efc5-4c2f-bd09-902d2a02b948:0
event: TBD
debug: TBD
trace: 2012-07-13 13:47:13 [Model] trace Mgmt delete session. id:18f52c22-efc5-4c2f-bd09-902d2a02b948:0 Statistics: {TxnCommits:0, TxnCount:0, TxnRejects:0, TxnStarts:0, clientCredit:0, unackedMessages:0}
3. Exchange
event: Fri Jul 13 17:46:34 2012 org.apache.qpid.broker:exchangeDeclare disp=created exName=myE exType=topic durable=False args={} autoDel=False rhost=[::1]:5672-[::1]:34384 altEx= user=anonymous
debug: 2012-07-13 13:46:34 [Model] debug Create exchange. name:myE user:anonymous rhost:[::1]:5672-[::1]:34384 type:topic alternateExchange: durable:F
trace: 2012-07-13 13:46:34 [Model] trace Mgmt create exchange. id:myE
event: Fri Jul 13 18:19:33 2012 org.apache.qpid.broker:exchangeDelete exName=myE rhost=[::1]:5672-[::1]:37199 user=anonymous
debug: 2012-07-13 14:19:33 [Model] debug Delete exchange. name:myE user:anonymous rhost:[::1]:5672-[::1]:37199
trace: 2012-07-13 14:19:42 [Model] trace Mgmt delete exchange. id:myE Statistics: {bindingCount:0, bindingCountHigh:0, bindingCountLow:0, byteDrops:0, byteReceives:0, byteRoutes:0, msgDrops:0, msgReceives:0, msgRoutes:0, producerCount:0, producerCountHigh:0, producerCountLow:0}
4. Queue
event: Fri Jul 13 18:19:35 2012 org.apache.qpid.broker:queueDeclare disp=created durable=False args={} qName=myQ autoDel=False rhost=[::1]:5672-[::1]:37200 altEx= excl=False user=anonymous
debug: 2012-07-13 14:19:35 [Model] debug Create queue. name:myQ user:anonymous rhost:[::1]:5672-[::1]:37200 durable:F owner:0 autodelete:F alternateExchange:
trace: 2012-07-13 14:19:35 [Model] trace Mgmt create queue. id:myQ
event: Fri Jul 13 18:19:37 2012 org.apache.qpid.broker:queueDelete user=anonymous qName=myQ rhost=[::1]:5672-[::1]:37201
debug: 2012-07-13 14:19:37 [Model] debug Delete queue. name:myQ user:anonymous rhost:[::1]:5672-[::1]:37201
trace: 2012-07-13 14:19:42 [Model] trace Mgmt delete queue. id:myQ Statistics: {acquires:0, bindingCount:0, bindingCountHigh:0, bindingCountLow:0, byteDepth:0, byteFtdDepth:0, byteFtdDequeues:0, byteFtdEnqueues:0, bytePersistDequeues:0, bytePersistEnqueues:0, byteTotalDequeues:0, byteTotalEnqueues:0, byteTxnDequeues:0, byteTxnEnqueues:0, consumerCount:0, consumerCountHigh:0, consumerCountLow:0, discardsLvq:0, discardsOverflow:0, discardsPurge:0, discardsRing:0, discardsSubscriber:0, discardsTtl:0, flowStopped:False, flowStoppedCount:0, messageLatencyAvg:0, messageLatencyCount:0, messageLatencyMax:0, messageLatencyMin:0, msgDepth:0, msgFtdDepth:0, msgFtdDequeues:0, msgFtdEnqueues:0, msgPersistDequeues:0, msgPersistEnqueues:0, msgTotalDequeues:0, msgTotalEnqueues:0, msgTxnDequeues:0, msgTxnEnqueues:0, releases:0, reroutes:0, unackedMessages:0, unackedMessagesHigh:0, unackedMessagesLow:0}
5. Binding
event: Fri Jul 13 17:46:45 2012 org.apache.qpid.broker:bind exName=myE args={} qName=myQ user=anonymous key=myKey rhost=[::1]:5672-[::1]:34385
debug: 2012-07-13 13:46:45 [Model] debug Create binding. exchange:myE queue:myQ key:myKey user:anonymous rhost:[::1]:5672-[::1]:34385
trace: 2012-07-13 13:46:23 [Model] trace Mgmt create binding. id:org.apache.qpid.broker:exchange:,org.apache.qpid.broker:queue:myQ,myQ
event: Fri Jul 13 17:47:06 2012 org.apache.qpid.broker:unbind user=anonymous exName=myE qName=myQ key=myKey rhost=[::1]:5672-[::1]:34386
debug: 2012-07-13 13:47:06 [Model] debug Delete binding. exchange:myE queue:myQ key:myKey user:anonymous rhost:[::1]:5672-[::1]:34386
trace: 2012-07-13 13:47:09 [Model] trace Mgmt delete binding. id:org.apache.qpid.broker:exchange:myE,org.apache.qpid.broker:queue:myQ,myKey Statistics: {msgMatched:0}
6. Subscription
event: Fri Jul 13 18:19:28 2012 org.apache.qpid.broker:subscribe dest=0 args={} qName=b78b1818-7a20-4341-a253-76216b40ab4a:0.0 user=anonymous excl=False rhost=[::1]:5672-[::1]:37198
debug: 2012-07-13 14:19:28 [Model] debug Create subscription. queue:b78b1818-7a20-4341-a253-76216b40ab4a:0.0 destination:0 user:anonymous rhost:[::1]:5672-[::1]:37198 exclusive:F
trace: 2012-07-13 14:19:28 [Model] trace Mgmt create subscription. id:org.apache.qpid.broker:session:b78b1818-7a20-4341-a253-76216b40ab4a:0,org.apache.qpid.broker:queue:b78b1818-7a20-4341-a253-76216b40ab4a:0.0,0
event: Fri Jul 13 18:19:28 2012 org.apache.qpid.broker:unsubscribe dest=0 rhost=[::1]:5672-[::1]:37198 user=anonymous
debug: 2012-07-13 14:19:28 [Model] debug Delete subscription. destination:0 user:anonymous rhost:[::1]:5672-[::1]:37198
trace: 2012-07-13 14:19:32 [Model] trace Mgmt delete subscription. id:org.apache.qpid.broker:session:b78b1818-7a20-4341-a253-76216b40ab4a:0,org.apache.qpid.broker:queue:b78b1818-7a20-4341-a253-76216b40ab4a:0.0,0 Statistics: {delivered:1}
Chapter 8. Secure Your Connections and Resources
8.1. Simple Authentication and Security Layer - SASL
8.1.1. SASL - Simple Authentication and Security Layer
8.1.2. SASL Support in Windows Clients
Windows clients support the ANONYMOUS, PLAIN, and EXTERNAL authentication mechanisms.
8.1.3. SASL Mechanisms
- Changes
- Updated April 2013.
The available SASL mechanisms are controlled by the file /etc/sasl2/qpidd.conf on the broker. To narrow the allowed mechanisms to a smaller subset, edit this file and remove mechanisms.
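For reference, the broker's SASL configuration typically looks like the following sketch. The exact contents shipped with your installation may differ; mech_list is the line to edit when narrowing the allowed mechanisms:

```
# /etc/sasl2/qpidd.conf (illustrative sketch)
pwcheck_method: auxprop
auxprop_plugin: sasldb
sasldb_path: /var/lib/qpidd/qpidd.sasldb
# Remove entries from this list to disable mechanisms:
mech_list: DIGEST-MD5 PLAIN ANONYMOUS
```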
Important
SASL Mechanisms
- ANONYMOUS
- Clients are able to connect anonymously. Note that when the broker is started with auth=no, authentication is disabled. The PLAIN and ANONYMOUS authentication mechanisms are available as identification mechanisms, but they have no authentication value.
- PLAIN
- Passwords are passed in plain text between the client and the broker. This is not a secure mechanism, and should be used in development environments only. If PLAIN is used in production, it should only be used over SSL connections, where the SSL encryption of the transport protects the password. Note that when the broker is started with auth=no, authentication is disabled. The PLAIN and ANONYMOUS authentication mechanisms are available as identification mechanisms, but they have no authentication value.
- DIGEST-MD5
- MD5 hashed password exchange using HTTP headers. This is a medium strength security protocol.
- CRAM-MD5
- A challenge-response protocol using MD5 hashing.
- KERBEROS/GSSAPI
- The Generic Security Service Application Program Interface (GSSAPI) is a framework that allows for the connection of different security providers. By far the most frequently used is Kerberos. GSSAPI security provides centralized management of security, including single sign-on, opaque token exchange, and transport security.
- EXTERNAL
- EXTERNAL SASL authentication uses an SSL-encrypted connection between the client and the server. The client presents a certificate to encrypt the connection, and this certificate contains both the cryptographic key for the connection and the identity of the client.
8.1.4. SASL Mechanisms and Packages
The following table lists the cyrus-sasl-* package(s) that need to be installed on the server for each authentication mechanism.
Table 8.1.
| Method | Package | /etc/sasl2/qpidd.conf entry |
|---|---|---|
| ANONYMOUS | - | - |
| PLAIN | cyrus-sasl-plain | mech_list: PLAIN |
| DIGEST-MD5 | cyrus-sasl-md5 | mech_list: DIGEST-MD5 |
| CRAM-MD5 | cyrus-sasl-md5 | mech_list: CRAM-MD5 |
| KERBEROS/GSSAPI | cyrus-sasl-gssapi | mech_list: GSSAPI |
| EXTERNAL | - | mech_list: EXTERNAL |
8.1.5. Configure SASL using a Local Password File
Procedure 8.1. Configure SASL using a Local Password File
- Add new users to the database by using the saslpasswd2 command. The user ID for authentication and ACL authorization uses the form user-id@domain. Ensure that the correct realm has been set for the broker. This can be done by editing the configuration file or using the -u option. The default realm for the broker is QPID.
# saslpasswd2 -f /var/lib/qpidd/qpidd.sasldb -u QPID new_user_name
- Existing user accounts can be listed by using the sasldblistusers2 command with the -f option:
# sasldblistusers2 -f /var/lib/qpidd/qpidd.sasldb
Note
The user database at /var/lib/qpidd/qpidd.sasldb is readable only by the qpidd user. If you start the broker from a user other than the qpidd user, you will need to either modify the configuration file or turn authentication off. If you delete and recreate this file, make sure the qpidd user has read permissions, or authentication attempts will fail.
- To switch authentication on or off, use the --auth yes|no option when you start the broker:
# /usr/sbin/qpidd --auth yes
# /usr/sbin/qpidd --auth no
You can also set authentication on or off by adding the appropriate line to the /etc/qpid/qpidd.conf configuration file:
auth=no
auth=yes
The SASL configuration file is /etc/sasl2/qpidd.conf for Red Hat Enterprise Linux.
8.1.6. Configure SASL with ACL
- To start using the ACL, specify the path and filename using the --acl-file option. The filename should have a .acl extension:
$ qpidd --acl-file ./aclfilename.acl
- Optionally, you can limit the number of active connections per user with the --connection-limit-per-user and --connection-limit-per-ip options. These limits can only be enforced if the --acl-file option is specified.
- You can now view the file with the cat command and edit it in your preferred text editor. If the path and filename are not found, qpidd will fail to start.
8.1.7. Configure Kerberos 5
Note
- Install the Kerberos workstation software and Cyrus SASL GSSAPI on each machine that runs a qpidd broker or a qpidd messaging client:
$ sudo yum install cyrus-sasl-gssapi krb5-workstation
- Change the mech_list line in /etc/sasl2/qpidd.conf to:
mech_list: GSSAPI
- Add the following lines to /etc/qpid/qpidd.conf:
auth=yes
realm=QPID
- Register the Qpid broker in the Kerberos database. Traditionally, a Kerberos principal is divided into three parts: the primary, the instance, and the realm. A typical Kerberos V5 principal has the format primary/instance@REALM. For a broker, the primary is qpidd, the instance is the fully qualified domain name, and the REALM is the Kerberos domain realm. By default, this realm is QPID, but a different realm can be specified in qpidd.conf per the following example.
realm=EXAMPLE.COM
For instance, if the fully qualified domain name is dublduck.example.com and the Kerberos domain realm is EXAMPLE.COM, then the principal name is qpidd/dublduck.example.com@EXAMPLE.COM.
FQDN=`hostname --fqdn`
REALM="EXAMPLE.COM"
kadmin -r $REALM -q "addprinc -randkey -clearpolicy qpidd/$FQDN"
Now create a Kerberos keytab file for the broker. The broker must have read access to the keytab file. The following script creates a keytab file and allows the broker read access:
QPIDD_GROUP="qpidd"
kadmin -r $REALM -q "ktadd -k /etc/qpidd.keytab qpidd/$FQDN@$REALM"
chmod g+r /etc/qpidd.keytab
chgrp $QPIDD_GROUP /etc/qpidd.keytab
The default location for the keytab file is /etc/krb5.keytab. If a different keytab file is used, the KRB5_KTNAME environment variable must contain the name of the file, as the following example shows.
export KRB5_KTNAME=/etc/qpidd.keytab
If this is correctly configured, you can now enable Kerberos support on the broker by setting the auth and realm options in /etc/qpid/qpidd.conf:
# /etc/qpid/qpidd.conf
auth=yes
realm=EXAMPLE.COM
Restart the broker to activate these settings.
Restart the broker to activate these settings. - Make sure that each Qpid user is registered in the Kerberos database, and that Kerberos is correctly configured on the client machine. The Qpid user is the account from which a Qpid messaging client is run. If it is correctly configured, the following command should succeed:
$ kinit user@REALM.COM
Additional configuration for Java JMS clients
Java JMS clients require a few additional steps.
- The Java JVM must be run with the following arguments:
- -Djavax.security.auth.useSubjectCredsOnly=false
- Forces the SASL GSSAPI client to obtain the Kerberos credentials explicitly instead of obtaining from the "subject" that owns the current thread.
- -Djava.security.auth.login.config=myjas.conf
- Specifies the JAAS configuration file. Here is a sample JAAS configuration file:
com.sun.security.jgss.initiate {
    com.sun.security.auth.module.Krb5LoginModule required useTicketCache=true;
};
- -Dsun.security.krb5.debug=true
- Enables detailed debug information for troubleshooting.
- The client Connection URL must specify the following Kerberos-specific broker properties:
- sasl_mechs must be set to GSSAPI.
- sasl_protocol must be set to the principal for the qpidd broker, e.g. qpidd.
- sasl_server must be set to the host for the SASL server, e.g. sasl.com.
Here is a sample connection URL for a Kerberos connection:
amqp://guest@clientid/testpath?brokerlist='tcp://localhost:5672?sasl_mechs='GSSAPI'&sasl_protocol='qpidd'&sasl_server='<server-host-name>''
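The principal naming convention described above reduces to a simple rule. The following Python sketch (an illustration only, not part of any Qpid tool) shows how the broker principal is composed:

```python
def broker_principal(fqdn, realm="QPID"):
    # Kerberos principal for a qpidd broker: primary/instance@REALM,
    # where the primary is always "qpidd" and the instance is the
    # broker host's fully qualified domain name.
    return "qpidd/%s@%s" % (fqdn, realm)

print(broker_principal("dublduck.example.com", "EXAMPLE.COM"))
# qpidd/dublduck.example.com@EXAMPLE.COM
```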
8.2. Configuring TLS/SSL
8.2.1. Encryption Using SSL
SSL support in qpidd is provided by Mozilla's Network Security Services Library (NSS).
8.2.2. A Note on Installing Client Certificates
8.2.3. Enable SSL on the Broker
- Changes
- Updated April 2013.
- You will need a certificate that has been signed by a Certification Authority (CA). This certificate will also need to be trusted by your client. If you require client authentication in addition to server authentication, the client's certificate will also need to be signed by a CA and trusted by the broker. The certificate database is created and managed by the Mozilla Network Security Services (NSS) certutil tool. Information on this utility can be found on the Mozilla website, including tutorials on setting up and testing SSL connections. The certificate database will generally be password protected. The safest way to specify the password is to place it in a protected file, use the password file when creating the database, and specify the password file with the ssl-cert-password-file option when starting the broker. The following script shows how to create a certificate database using certutil:
mkdir ${CERT_DIR}
certutil -N -d ${CERT_DIR} -f ${CERT_PW_FILE}
certutil -S -d ${CERT_DIR} -n ${NICKNAME} -s "CN=${NICKNAME}" -t "CT,," -x -f ${CERT_PW_FILE} -z /usr/bin/certutil
When starting the broker, set ssl-cert-password-file to the value of ${CERT_PW_FILE}, set ssl-cert-db to the value of ${CERT_DIR}, and set ssl-cert-name to the value of ${NICKNAME}.
- The following SSL options can be used when starting the broker:
--ssl-use-export-policy
- Use NSS export policy. When this option is specified, the server will conform with US export restrictions on encryption using the NSS export policy. When it is not specified, the server will use the domestic policy. Refer to the Mozilla SSL Export Policy Functions documentation for more details.
--ssl-cert-password-file PATH
- Required. Plain-text file containing the password to use for accessing the certificate database.
--ssl-cert-db PATH
- Required. Path to the directory containing the certificate database.
--ssl-cert-name NAME
- Name of the certificate to use. Default is localhost.localdomain.
--ssl-port NUMBER
- Port on which to listen for SSL connections. If no port is specified, port 5671 is used. If the SSL port chosen is the same as the port for non-SSL connections (i.e. if the --ssl-port and --port options are the same), both SSL encrypted and unencrypted connections can be established to the same port. However, in this configuration there is no support for IPv6.
--ssl-require-client-authentication
- Require SSL client authentication (i.e. verification of a client certificate) during the SSL handshake. This occurs before SASL authentication, and is independent of SASL. This option enables the EXTERNAL SASL mechanism for SSL connections. If the client chooses the EXTERNAL mechanism, the client's identity is taken from the validated SSL certificate, using the CN, and appending any DCs to create the domain. For instance, if the certificate contains the properties CN=bob, DC=acme, DC=com, the client's identity is bob@acme.com. If the client chooses a different SASL mechanism, the identity taken from the client certificate will be replaced by that negotiated during the SASL handshake.
--ssl-sasl-no-dict
- Do not accept SASL mechanisms that can be compromised by dictionary attacks. This prevents a weaker mechanism being selected instead of EXTERNAL, which is not vulnerable to dictionary attacks.
--require-encryption
- This will cause qpidd to accept only encrypted connections: clients using EXTERNAL SASL on the SSL port, or GSSAPI on the TCP port.
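The EXTERNAL identity derivation described above (the certificate CN with any DC components appended as a domain) can be sketched in Python. This is an illustration of the rule only, not broker code:

```python
def external_identity(cn, dcs):
    # Build the SASL EXTERNAL identity from an SSL certificate's
    # CN and DC properties, e.g. CN=bob, DC=acme, DC=com -> bob@acme.com.
    if not dcs:
        return cn
    return "%s@%s" % (cn, ".".join(dcs))

print(external_identity("bob", ["acme", "com"]))
# bob@acme.com
```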
8.2.4. Export an SSL Certificate for Clients
pk12util -o <p12exportfile> -n <certname> -d <certdir> -w <p12filepwfile>
openssl pkcs12 -in <p12exportfile> -out <clcertname> -nodes -clcerts -passin pass:<p12pw>
For further information on the openssl command, see man openssl.
8.2.5. Enable SSL on Windows
Procedure 8.2. Create SSL certificates on the broker
- Execute the following commands on the broker to export a certificate:
# cd /var/lib/qpidd
# mkdir qpid_nss_db
# cd qpid_nss_db
# ls
# echo password > ssl_pw_file
# cat ssl_pw_file
password
# certutil -S -d . -n qrootCA -s "CN=qrootCA" -t "CT,," -x -m 1000 -v 120 -f ssl_pw_file
# certutil -S -n "fully-qualified-server-name.com" -s "CN=fully-qualified-server-name.com" -c qrootCA -t ",," -m 1001 -v 120 -d . -f ssl_pw_file
# certutil -S -n client -s "CN=client" -t ",," -m 1005 -v 120 -c qrootCA -d . -f ssl_pw_file
# pk12util -d . -o client.p12 -n client
Enter Password or Pin for "NSS Certificate DB":
Enter Password or Pin for "NSS Certificate DB":
Enter password for PKCS12 file:
Re-enter password:
pk12util: PKCS12 EXPORT SUCCESSFUL
# openssl pkcs12 -in client.p12 -out client.pem -nodes -clcerts
Enter Import Password:
MAC verified OK
- Verify that the files exist:
# ls
cert8.db  client.p12  client.pem  key3.db  secmod.db  ssl_pw_file
Procedure 8.3. Copy the qpid_nss_db
folder to other broker machines and set qpidd
as its owner
- Execute the following commands on the other brokers to copy the files from the first broker:
# scp -r qpid_nss_db root@other-broker.com:/var/lib/qpidd
# chown -R qpidd:qpidd qpid_nss_db
- Verify the files and their permissions:
# ll
total 89896
-rw-r-----. 1 qpidd qpidd        0 Jul 16 06:27 lock
-rw-r--r--. 1 qpidd qpidd 91989014 Nov  1 06:52 qpidd.log
-rw-------. 1 qpidd qpidd    12288 Oct  7 05:32 qpidd.sasldb
drwxr-xr-x. 2 qpidd qpidd     4096 Nov  6 04:32 qpid_nss_db
-rw-r-----. 1 qpidd qpidd       37 Jul 16 06:27 systemId
Procedure 8.4. Modify broker configuration file
- Edit the broker configuration file /etc/qpid/qpidd.conf:
ssl-require-client-authentication=no
log-to-file=/var/lib/qpidd/qpidd.log
ssl-port=5671
log-enable=info+
ssl-cert-password-file=/var/lib/qpidd/qpid_nss_db/ssl_pw_file
ssl-cert-name=fully-qualified-server-name.com
auth=no
ssl-cert-db=/var/lib/qpidd/qpid_nss_db
Procedure 8.5. Start the broker
- Start the broker and verify that it is listening on the SSL port:
# service qpidd restart
Stopping Qpid AMQP daemon:    [  OK  ]
Starting Qpid AMQP daemon:    [  OK  ]
# netstat -nap | grep qpidd
tcp   0   0 0.0.0.0:5671   0.0.0.0:*   LISTEN   25184/qpidd
tcp   0   0 0.0.0.0:5672   0.0.0.0:*   LISTEN   25184/qpidd
tcp   0   0 :::5671        :::*        LISTEN   25184/qpidd
tcp   0   0 :::5672        :::*        LISTEN   25184/qpidd
Procedure 8.6. Create a folder to export onto Windows machines
- Execute the following instructions to:
- Create a folder to export onto Windows machines
- Create a new password file in .txt format
- Export certification authority certificate to .cer format
- Export client certificate to .pfx format
# mkdir windir
# echo password2 > windir/win_pw_file.txt
# cat windir/win_pw_file.txt
password2
# certutil -L -d qpid_nss_db -n qrootCA -f ssl_pw_file -a > windir/qrootCA.cer
# pk12util -d qpid_nss_db -n client -k qpid_nss_db/ssl_pw_file -w windir/win_pw_file.txt -o windir/client.pfx
pk12util: PKCS12 EXPORT SUCCESSFUL
- Verify that the files exist:
# ls windir
client.pfx  qrootCA.cer  win_pw_file.txt
Procedure 8.7. Copy files to Windows machine
- Copy the
windir
folder onto the Windows machine.
The following procedure, which installs the certificate on the Windows machine, has two options: using the GUI, or using the command line.
Procedure 8.8. Install Certification Authority - GUI
- On the Windows machine, run
mmc
- Click File / Add/Remove Snap-in...
- Select Certificates -> Add -> Computer account -> Local computer -> Finish -> OK
- In the console unpack Certificates (Local Computer)
- Right click on Trusted Root Certification Authorities, and select All Tasks/Import...
- Set the path to the
qrootCA.cer
file, select Trusted Root Certification Authorities certificate store, confirm the action and save the console settings.
Procedure 8.9. Install Certification Authority - Command-line
- Execute the following command to import the certificate at the command-line:
certmgr.exe -add -c C:\windir\qrootca.cer -s -r localMachine root
Procedure 8.10. Test connection
- Execute the following at the command line to test the connection (no environment variables need to be set):
C:\qpid_VS2008\bin\Release>spout.exe --broker broker-server.com:5671 --connection-options {transport:ssl} "amq.topic"
You can install the certificate in the Windows machine certificate store, or specify it via environment variables.
Procedure 8.11. Install Certificate in Windows Certificate Store
Import the client.pfx file into the Current User/Personal certificate store:
- Run
mmc
- Click File / Add/Remove Snap-in...
- Select Certificates -> Add> -> My user account -> Finish -> OK
- In the console unpack Certificates - Current User
- Right click on Personal.
- Select All Tasks / Import.
- Set the path to the client.pfx file and click Next.
- Type the password from win_pw_file.txt (password2 in our case).
- Choose the Personal certificate store and save the console settings.
- Modify the broker configuration to require client authentication, and restart the broker.
- Set up environment variables:
>set QPID_SSL_CERT_STORE=My
>set QPID_SSL_CERT_NAME=client
- Test it by sending a message:
>C:\qpid_VS2008\bin\Release>spout.exe --broker broker-server.com:5671 --connection-options {transport:ssl,sasl-mechanisms:EXTERNAL} amq.topic
Procedure 8.12. Specify Certificate via Environment
- Set up environment variables on the Windows machine:
>set QPID_SSL_CERT_FILENAME=<path_to_the_client.pfx>
>set QPID_SSL_CERT_PASSWORD_FILE=<path_to_the_win_pw_file.txt>
>set QPID_SSL_CERT_NAME=client
For example:
C:\qpid_VS2008\bin\Release>set QPID_SSL_CERT_FILENAME=C:\windir\client.pfx
C:\qpid_VS2008\bin\Release>set QPID_SSL_CERT_PASSWORD_FILE=C:\windir\win_pw_file.txt
C:\qpid_VS2008\bin\Release>set QPID_SSL_CERT_NAME=client
- Test it by sending a message:
C:\qpid_VS2008\bin\Release>spout.exe --broker broker-server.com:5671 --connection-options {transport:ssl,sasl-mechanisms:EXTERNAL} amq.topic
8.2.6. Enable SSL in C++ Clients
Table 8.2. SSL Client Environment Variables for C++ clients
| Environment Variable | Description |
|---|---|
| QPID_SSL_USE_EXPORT_POLICY | Use NSS export policy |
| QPID_SSL_CERT_PASSWORD_FILE PATH | File containing the password to use for accessing the certificate database |
| QPID_SSL_CERT_DB PATH | Path to the directory containing the certificate database |
| QPID_SSL_CERT_NAME NAME | Name of the certificate to use. When SSL client authentication is enabled, a certificate name should normally be provided. |
Set QPID_SSL_CERT_DB to the full pathname of the directory containing the certificate database. If a connection uses SSL client authentication, the client's password is also needed; the password should be placed in a protected file, and the QPID_SSL_CERT_PASSWORD_FILE variable should be set to the location of the file containing this password.
To enable SSL on a client connection, set the transport connection option to ssl.
See Also:
8.2.7. Enable SSL in Java Clients
- For both server and client authentication, import the trusted CA to your trust store and keystore and generate keys for them. Create a certificate request using the generated keys and then create a certificate using the request. You can then import the signed certificate into your keystore. Pass the following arguments to the Java JVM when starting your client:
-Djavax.net.ssl.keyStore=/home/bob/ssl_test/keystore.jks
-Djavax.net.ssl.keyStorePassword=password
-Djavax.net.ssl.trustStore=/home/bob/ssl_test/certstore.jks
-Djavax.net.ssl.trustStorePassword=password
- For server side authentication only, import the trusted CA to your trust store and pass the following arguments to the Java JVM when starting your client:
-Djavax.net.ssl.trustStore=/home/bob/ssl_test/certstore.jks
-Djavax.net.ssl.trustStorePassword=password
- Java clients must use the SSL option in the connection URL to enable SSL encryption, per the following example.
amqp://username:password@clientid/test?brokerlist='tcp://localhost:5672?ssl='true''
- If you need to debug problems in an SSL connection, enable Java's SSL debugging by passing the argument
-Djavax.net.debug=ssl
to the Java JVM when starting your client.
See Also:
8.2.8. Enable SSL in Python Clients
- Use a URL of the form amqps://<host>:<port>, where host is the broker's hostname and port is the SSL port (usually 5671), or
- Set the 'transport' attribute of the connection to "ssl".
The Python client supports the EXTERNAL SASL mechanism for authentication.
- The Python client has an optional parameter ssl_trustfile (see Python SSL Parameters). When this parameter is specified, trust store validation of the certificate is performed.
- The Python client matches the server's SSL certificate against the connection hostname when the optional parameter ssl_trustfile is supplied.
- When using the EXTERNAL SASL mechanism for authentication, you must provide the client name in the connection string, and it must match the identity of the SSL certificate. The connection will fail if the client name is missing from the connection string, or if the client name does not match the identity of the SSL certificate.
The QPID Python client accepts the following SSL-related configuration parameters:
ssl_certfile
- The path to a file that contains the PEM-formatted certificate used to identify the local side of the connection (the client). This is needed if the server requires client-side authentication.
ssl_keyfile
- In some cases the client's private key is stored in the same file as the certificate (i.e. ssl_certfile). If the ssl_certfile does not contain the client's private key, this parameter must be set to the path to a file containing the private key in PEM file format.
ssl_skip_hostname_check
- When set to true, verification of the connection hostname against the server certificate is skipped.
ssl_trustfile
- The path to a PEM-formatted file containing a chain of trusted Certificate Authority (CA) certificates. These certificates are used to authenticate the remote server.
These parameters are passed as arguments to the qpid.Connection() object when it is constructed. For example:
Connection("amqps://client@127.0.0.1:5671", ssl_certfile="/path/to/certfile", ssl_keyfile="/path/to/keyfile")
See Also:
8.3. Authorization
8.3.1. Access Control List (ACL)
8.3.2. Default ACL File
In previous releases, the default ACL file was /etc/qpidd.acl.
The default ACL file is now /etc/qpid/qpidd.acl. Unmodified existing installations will continue to use the previous ACL file and location, while any new installations will use the new default location and file.
8.3.3. Load an Access Control List (ACL)
Use the --acl-file option to load the access control list. The filename should have a .acl extension:
$ qpidd --acl-file ./aclfilename.acl
8.3.4. Reloading the ACL
The ACL can be reloaded at runtime using qpid-tool or from program code.
Reload the ACL using qpid-tool
Run qpid-tool with an account that has sufficient privileges to reload the ACL.
- Start qpid-tool:
$ qpid-tool admin/mysecretpassword@mybroker:5672
Management Tool for QPID
qpid:
- Check the ACL list to obtain the object ID:
qpid: list acl
Object Summary:
ID   Created   Destroyed  Index
=================================
103  12:57:41  -          116
- Optionally, you can examine the ACL:
qpid: show 103
Object of type: org.apache.qpid.acl:acl:_data(23510fc1-dc51-a952-39c2-e18475c1677e)
Attribute              103
=================================================
brokerRef              116
policyFile             /tmp/reload.acl
enforcingAcl           True
transferAcl            False
lastAclLoad            Tue Oct 30 12:57:41 2012
maxConnectionsPerIp    0
maxConnectionsPerUser  0
maxQueuesPerUser       0
aclDenyCount           0
connectionDenyCount    0
queueQuotaDenyCount    0
- To reload the ACL, call the reload method of the ACL object:
qpid: call 103 reloadACLFile
qpid: OK (0) - {}
Reload ACL from program code
- Python
import qmf.console
qmf = qmf.console.Session()
qmf_broker = qmf.addBroker('localhost:5672')
acl = qmf.getObjects(_class="acl")[0]
result = acl.reloadACLFile()
print result
8.3.5. Writing an Access Control List
- The user ID in the ACL file is of the form <user-id>@<domain>. The domain is configured via the SASL configuration for the broker; the domain/realm for qpidd is set using --realm and defaults to 'QPID'.
- Each line in an ACL file grants or denies specific rights to a user.
- If the last line in an ACL file is acl deny all all, the ACL uses deny mode, and only those rights that are explicitly allowed are granted:
acl allow user@QPID all all
acl deny all all
On this server, deny mode is the default. user@QPID can perform any action, but nobody else can.
- If the last line in an ACL file is acl allow all all, the ACL uses allow mode, and all rights are granted except those that are explicitly denied:
acl deny user@QPID all all
acl allow all all
On this server, allow mode is the default. The ACL denies user@QPID all permissions, but allows everyone else to perform any action.
- ACL processing ends when one of the following lines is encountered:
acl allow all all
acl deny all all
Any lines after one of these statements will be ignored:
acl allow all all
acl deny user@QPID all all # This line is ignored !!!
- ACL syntax allows fine-grained access rights for specific actions:
acl allow carlt@QPID create exchange name=carl.*
acl allow fred@QPID create all
acl allow all consume queue
acl allow all bind exchange
acl deny all all
- An ACL file can define user groups, and assign permissions to them:
group admin ted@QPID martin@QPID
acl allow admin create all
acl deny all all
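The deny-mode and allow-mode behaviour above follows from top-down, first-match rule processing. A minimal sketch of that evaluation in Python (an illustration only; the broker's real matcher also handles groups, objects, and properties):

```python
def acl_check(rules, user, action):
    # Rules are evaluated from the top of the file down;
    # the first matching rule wins. 'all' matches anything.
    for permission, rule_user, rule_action in rules:
        if rule_user in (user, "all") and rule_action in (action, "all"):
            return permission
    # Implicit trailing "acl deny all all".
    return "deny"

# Deny mode: only explicitly allowed rights are granted.
deny_mode = [("allow", "user@QPID", "all"), ("deny", "all", "all")]
print(acl_check(deny_mode, "user@QPID", "create"))   # allow
print(acl_check(deny_mode, "other@QPID", "create"))  # deny
```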
8.3.6. ACL Syntax
acl permission {<group-name>|<user-name>|"all"} {action|"all"} [object|"all"] [property=<property-value>]
- The default (anonymous) exchange is identified using name=amq.default.
- A line starting with the # character is considered a comment and is ignored.
- Empty lines and lines that contain only whitespace are ignored.
- All tokens are case sensitive: name1 is not the same as Name1, and create is not the same as CREATE.
- Group lists can be extended to the following line by terminating the line with the \ character.
- Additional whitespace (more than one whitespace character) between and after tokens is ignored. Group and ACL definitions must start with either group or acl, with no preceding whitespace.
- All ACL rules are limited to a single line.
- Rules are interpreted from the top of the file down until a match is obtained, at which point processing stops.
- The keyword all matches all individuals, groups and actions.
- The last line of the file, whether present or not, is assumed to be acl deny all all. If this line is present in the file, all lines below it are ignored.
- Names and group names may contain only a-z, A-Z, 0-9, - and _.
- Rules must be preceded by any group definitions they use. Any name not defined as a group is assumed to be that of an individual.
- Qpid fails to start if the ACL file is not valid.
- ACL rules can be reloaded at runtime by calling a QMF method.
See Also:
8.3.7. ACL Definition Reference
The following tables describe the valid values for permission, action, object, and property in an ACL rules file.
Table 8.3. ACL Rules: permission
| Permission | Description |
|---|---|
| allow | Allow the action |
| allow-log | Allow the action and log the action in the event log |
| deny | Deny the action |
| deny-log | Deny the action and log the action in the event log |
Table 8.4. ACL Rules: action
| Action | Description |
|---|---|
| consume | Applied when subscriptions are created |
| publish | Applied on a per-message basis on publish message transfers; this rule consumes the most resources |
| create | Applied when an object is created, such as bindings, queues, exchanges, links |
| access | Applied when an object is read or accessed |
| bind | Applied when objects are bound together |
| unbind | Applied when objects are unbound |
| delete | Applied when objects are deleted |
| purge | Similar to delete, but the action is performed on more than one object |
| update | Applied when an object is updated |
Table 8.5. ACL Rules: object
| Object | Description |
|---|---|
| queue | A queue |
| exchange | An exchange |
| broker | The broker |
| link | A federation or inter-broker link |
| method | Management or agent or broker method |
Table 8.6. ACL Rules: property

| Property | Description |
|---|---|
| `name` | String. Object name, such as a queue name or exchange name. |
| `durable` | Boolean. Indicates the object is durable. |
| `routingkey` | String. Specifies the routing key. |
| `autodelete` | Boolean. Indicates whether or not the object gets deleted when the connection is closed. |
| `exclusive` | Boolean. Indicates the presence of an exclusive flag. |
| `type` | String. Type of object, such as topic, fanout, or xml. |
| `alternate` | String. Name of the alternate exchange. |
| `queuename` | String. Name of the queue (used only when the object is something other than `queue`). |
| `schemapackage` | String. QMF schema package name. |
| `schemaclass` | String. QMF schema class name. |
| `policytype` | String. The limit policy for a queue. Only used in rules for queue creation. |
| `maxqueuesize` | Integer. The largest value of the maximum queue size (in bytes) with which a queue is allowed to be created. Only used in rules for queue creation. |
| `maxqueuecount` | Integer. The largest value of the maximum queue depth (in messages) with which a queue is allowed to be created. Only used in rules for queue creation. |
8.3.8. Enforcing Queue Size Limits via ACL
Table 8.7. Queue Size ACL Rules

| User Option | ACL Limit Property | Units |
|---|---|---|
| `qpid.max_size` | `queuemaxsizelowerlimit` | bytes |
| | `queuemaxsizeupperlimit` | bytes |
| `qpid.max_count` | `queuemaxcountlowerlimit` | messages |
| | `queuemaxcountupperlimit` | messages |
| `qpid.max_pages_loaded` | `pageslowerlimit` | pages |
| | `pagesupperlimit` | pages |
| `qpid.page_factor` | `pagefactorlowerlimit` | integer (multiple of the platform-defined page size) |
| | `pagefactorupperlimit` | integer (multiple of the platform-defined page size) |
Example:

```
# Example of ACL specifying queue size constraints
# Note: for legibility this acl line has been split into multiple lines.
acl allow bob@QPID create queue name=q6
    queuemaxsizelowerlimit=500000
    queuemaxsizeupperlimit=1000000
    queuemaxcountlowerlimit=200
    queuemaxcountupperlimit=300
```
C++:

```cpp
int main(int argc, char** argv) {
    const char* url = argc > 1 ? argv[1] : "amqp:tcp:127.0.0.1:5672";
    const char* address = argc > 2 ? argv[2] :
        "message_queue; "
        "{ create: always, "
        "  node: "
        "  { type: queue, "
        "    x-declare: "
        "    { arguments: "
        "      { qpid.max_count: 101, "
        "        qpid.max_size: 1000000 "
        "      } "
        "    } "
        "  } "
        "}";
    std::string connectionOptions = argc > 3 ? argv[3] : "";

    Connection connection(url, connectionOptions);
    try {
        connection.open();
        Session session = connection.createSession();
        Sender sender = session.createSender(address);
        ...
```
The same limits can be requested with the `qpid-config` command:

```
qpid-config add queue --max-queue-size=1000000 --max-queue-count=101
```

If the queue option `max_count` is 101 then the limit is violated (it is below the lower limit of 200 in the example rule above) and the allow rule is returned with a deny decision.
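The lower/upper bound check implied here can be sketched as follows. This is an illustrative model of the decision, not broker code; the `check_create` helper and the rule dictionary keys are hypothetical names for the example.

```python
# Sketch of the queue-size ACL check: a creation request's
# qpid.max_size / qpid.max_count must fall within the matching rule's
# lower/upper limits, otherwise the allow rule yields a deny decision.

def check_create(rule, max_size, max_count):
    if not (rule["sizelower"] <= max_size <= rule["sizeupper"]):
        return "deny"
    if not (rule["countlower"] <= max_count <= rule["countupper"]):
        return "deny"
    return "allow"

# Limits from the bob@QPID example rule above.
bob_rule = {"sizelower": 500000, "sizeupper": 1000000,
            "countlower": 200, "countupper": 300}

print(check_create(bob_rule, 600000, 250))  # allow
print(check_create(bob_rule, 600000, 101))  # deny: 101 is below the lower limit of 200
```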
8.3.9. Resource Quota Options
The total number of concurrent connections to the broker can be limited with the `--max-connections` broker option.
Table 8.8. Resource Quota Options

| Option | Description | Default Value |
|---|---|---|
| `--max-connections N` | Total concurrent connections to the broker. | 500 |
| `--max-negotiate-time N` | The time during which initial protocol negotiation must succeed. This prevents resource starvation by badly behaved clients or transient network issues that prevent connections from completing. | 500 |
Notes
- `--max-connections` is a qpid core limit and is enforced whether ACL is enabled or not.
- `--max-connections` is enforced per broker. In a cluster of N nodes where all brokers set the maximum connections to 20, the total number of allowed connections for the cluster will be N*20.
ACL-based Quotas

Table 8.9. ACL Command-line Option

| Option | Description | Default Value |
|---|---|---|
| `--acl-file FILE` | The policy file to load, read from the data directory. | `policy.acl` |
Table 8.10. ACL-based Resource Quota Options

| Option | Description | Default Value |
|---|---|---|
| `--connection-limit-per-user N` | The maximum number of connections allowed per user. 0 implies no limit. | 0 |
| `--connection-limit-per-ip N` | The maximum number of connections allowed per host IP address. 0 implies no limit. | 0 |
| `--max-queues-per-user N` | Total concurrent queues created by an individual user. | 0 |
Notes
- In a cluster system the actual number of connections may exceed the connection quota value N by one less than the number of member nodes in the cluster. For example: in a 5-node cluster, with a limit of 20 connections, the actual number of connections can reach 24 before limiting takes place.
- Cluster connections are checked against the connection limit when they are established. The cluster connection is denied if a free connection is not available. After establishment, however, a cluster connection does not consume a connection.
- Allowed values for N are 0..65535.
- These limits are enforced per cluster.
- A value of zero (0) disables that option's limit checking.
- Per-user connections are identified by the authenticated user name.
- Per-IP connections are identified by the `<broker-ip><broker-port>-<client-ip><client-port>` tuple, which is also the management connection index.
  - With this scheme, host systems may be identified by several names such as `localhost` IPv4, `127.0.0.1` IPv4, or `::1` IPv6, and a separate set of connections is allowed for each name.
  - Per-IP connections are counted regardless of the user credentials provided with the connections. An individual user may be allowed 20 connections, but if the client host has a 5-connection limit then that user may connect from that system only 5 times.
8.3.10. Per-user Resource Quotas
The per-user ACL rule syntax is:

```
quota connections|queues value <group-name-list>|<user-name-list> [ <group-name-list>|<user-name-list> ]
```

Connection quotas work in conjunction with the command line switch `--connection-limit-per-user N` to limit users to some number of concurrent connections.
- If the command line switch `--connection-limit-per-user` is absent and there are no `quota connections` rules in the ACL file, then connection limits are not enforced.
- If the command line switch `--connection-limit-per-user` is present, then it assigns an initial value for the pseudo-user `all`.
- If the ACL file specifies a quota for the pseudo-user `all`, then that value is applied to all users who are otherwise unnamed in the ACL file.
- Connection quotas for users are registered in order as the rule file is processed. A user may be assigned any number of connection quota values but only the final value is retained and enforced.
- Connection quotas for groups are applied as connection quotas for each individual user in the group at the time the `quota connections` line is processed.
- Quota values range from 0 to 65530. A value of zero (0) denies connections.

Queue quotas work in conjunction with the command line switch `--max-queues-per-user N` to limit users to some number of concurrent queues.
- If the command line switch `--max-queues-per-user` is absent and there are no `quota queues` rules in the ACL file, then queue limits are not enforced.
- If the command line switch `--max-queues-per-user` is present, then it assigns an initial value for the pseudo-user `all`.
- If the ACL file specifies a quota for the pseudo-user `all`, then that value is applied to all users who are otherwise unnamed in the ACL file.
- Queue quotas for users are registered in order as the rule file is processed. A user may be assigned any number of queue quota values but only the final value is retained and enforced.
- Queue quotas for groups are applied as queue quotas for each individual user in the group at the time the `quota queues` line is processed.
- Quota values range from 0 to 65530. A value of zero (0) denies queue creation actions.
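The registration behaviour above (rules applied in file order, last value per user wins, group rules expanded to members, pseudo-user `all` as the fallback) can be sketched as follows. The data shapes and the `register_quotas` helper are assumptions made for the example, not broker internals.

```python
# Sketch of quota-rule registration: rules are applied in file order,
# the last value assigned to a user wins, and a group rule expands to
# each member of the group at the time the rule is processed.

def register_quotas(rules, groups, default_all=0):
    quotas = {"all": default_all}        # pseudo-user 'all'
    for names, value in rules:           # e.g. (["admins"], 10)
        for name in names:
            for user in groups.get(name, [name]):
                quotas[user] = value     # later rules override earlier ones
    return quotas

groups = {"admins": ["alice", "bob"]}
rules = [
    (["admins"], 10),   # quota connections 10 admins
    (["bob"], 2),       # quota connections 2 bob
    (["all"], 5),       # quota connections 5 all
]
q = register_quotas(rules, groups)
print(q["alice"])  # 10
print(q["bob"])    # 2  (final value retained)
print(q["all"])    # 5  (applies to users otherwise unnamed in the file)
```

A value of 0 registered this way would deny the action entirely, matching the rule above that zero disables connections or queue creation.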
8.3.11. Connection Limits by Hostname

```
acl allow user create connection host=host1
acl allow user create connection host=host1,host2
acl deny user create connection host=all
```

- `host=host1` specifies a single host. With a single host the name may resolve to multiple TCP/IP addresses. For example, localhost resolves to both 127.0.0.1 and ::1, and possibly many other addresses. A connection from any of the addresses associated with this host matches the rule and the connection is allowed or denied accordingly.
- `host=host1,host2` specifies a range of TCP/IP addresses. With a host range each host must resolve to a single TCP/IP address and the second address must be numerically larger than the first. A connection from any host where host1 <= host <= host2 matches the rule and the connection is allowed or denied accordingly.
- `host=all` specifies all TCP/IP addresses. A connection from any host matches the rule and the connection is allowed or denied accordingly.

Connection rules fall into three categories:
- User = all, host != all: These define global rules and are applied before any specific user rules. These rules may be used to reject connections before any AMQP protocol is run and before any user names have been negotiated.
- User != all, host = any legal host or 'all': These define user rules. These rules are applied after the global rules and after the AMQP protocol has negotiated user identities.
- User = all, host = all: This rule defines what to do if no other rule matches. The default value is "ALLOW". Only one rule of this type may be defined.
Example 8.1. Connection Limits by Host Name

```
group admins alice bob chuck
group Company1 c1_usera c1_userb
group Company2 c2_userx c2_usery c2_userz
acl allow admins   create connection host=localhost
acl allow admins   create connection host=10.0.0.0,10.255.255.255
acl allow admins   create connection host=192.168.0.0,192.168.255.255
acl allow admins   create connection host=[fc00::],[fc00::ff]
acl allow Company1 create connection host=company1.com
acl deny  Company1 create connection host=all
acl allow Company2 create connection host=company2.com
acl deny  Company2 create connection host=all
```

The same effect can also be expressed with a single trailing deny rule:

```
group admins alice bob chuck
group Company1 c1_usera c1_userb
group Company2 c2_userx c2_usery c2_userz
acl allow admins   create connection host=localhost
acl allow admins   create connection host=10.0.0.0,10.255.255.255
acl allow admins   create connection host=192.168.0.0,192.168.255.255
acl allow admins   create connection host=[fc00::],[fc00::ff]
acl allow Company1 create connection host=company1.com
acl allow Company2 create connection host=company2.com
acl deny  all      create connection host=all
```
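The numeric host-range comparison used by these rules can be sketched with Python's standard `ipaddress` module. This is a simplified illustration of the check described above, not the broker's implementation; `in_range` is a hypothetical helper name.

```python
# Sketch of the host-range check: a range host1,host2 matches any
# client address numerically between the two endpoints (inclusive).
import ipaddress

def in_range(client_ip, low, high):
    c = ipaddress.ip_address(client_ip)
    return ipaddress.ip_address(low) <= c <= ipaddress.ip_address(high)

# Ranges from Example 8.1 above:
print(in_range("10.1.2.3", "10.0.0.0", "10.255.255.255"))        # True
print(in_range("192.167.0.1", "192.168.0.0", "192.168.255.255")) # False
print(in_range("fc00::42", "fc00::", "fc00::ff"))                # True
```

Note that, as the rules require, both endpoints must be concrete addresses of the same family; a DNS name that resolves to multiple addresses cannot be used as a range endpoint.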
8.3.12. Routing Key Wildcards
Several ACL rules match against the `routingkey` property. These rules include:
- bind exchange <name> routingkey=X
- unbind exchange <name> routingkey=X
- publish exchange <name> routingkey=X

The `routingkey` property is now matched using the same logic as the Topic Exchange match. This allows administrators to express user limits in flexible terms that map to the namespace where `routingkey` values are used.
Wildcard matching and Topic Exchanges
In a binding key, `#` matches any number of period-separated terms, and `*` matches a single term.

For example, a binding key of `#.news` will match messages with subjects such as `usa.news` and `germany.europe.news`, while a binding key of `*.news` will match messages with the subject `usa.news`, but not `germany.europe.news`.
Example:

```
acl allow-log uHash1@COMPANY publish exchange name=X routingkey=a.#.b
acl deny all all
```
The following table shows the result when `uHash1@COMPANY` publishes to exchange X:

Table 8.11.

| routingkey in publish to exchange X | result |
|---|---|
| `a.b` | allow-log |
| `a.x.b` | allow-log |
| `a..x.y.zz.b` | allow-log |
| `a.b.` | deny |
| `q.x.b` | deny |
8.3.13. Routing Key Wildcard Examples
In the following table, 'X' indicates that the message with the given header will not be routed given an ACL rule that allows messages with the specified routing key. 'Routed' indicates that the message with the given header will be routed by an ACL rule that allows messages with the specified routing key.
Table 8.12. Routing Keys, Message Headers, and Resultant Routing.

| Message Headers \ Routing Keys | a.# | #.e | a.#.e | a.#.c.#.e | #.c.# | # |
|---|---|---|---|---|---|---|
| ax | X | X | X | X | X | Routed |
| a.x | Routed | X | X | X | X | Routed |
| ex | X | X | X | X | X | Routed |
| e.x | X | X | X | X | X | Routed |
| ae | X | X | X | X | X | Routed |
| a.e | Routed | Routed | Routed | X | X | Routed |
| a..e | Routed | Routed | Routed | X | X | Routed |
| a.x.e | Routed | Routed | Routed | X | X | Routed |
| a.c.e | Routed | Routed | Routed | Routed | Routed | Routed |
| a..c..e | Routed | Routed | Routed | Routed | Routed | Routed |
| a.b.c.d.e | Routed | Routed | Routed | Routed | Routed | Routed |
| a.b.x.c.d.y.e | Routed | Routed | Routed | Routed | Routed | Routed |
| a.# | Routed | X | X | X | X | Routed |
| #.e | X | Routed | X | X | X | Routed |
| a.#.e | Routed | Routed | Routed | X | X | Routed |
| a.#.c.#.e | Routed | Routed | Routed | Routed | Routed | Routed |
| #.c.# | X | X | X | X | Routed | Routed |
| # | X | X | X | X | X | Routed |
|
8.3.14. User Name and Domain Name Symbol Substitution
The following table shows how the user name `bob.user@QPID.COM` has its substitution keywords expanded.

Table 8.13.

| Keyword | Expansion |
|---|---|
| `${userdomain}` | bob_user_QPID_COM |
| `${user}` | bob_user |
| `${domain}` | QPID_COM |
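The expansions in Table 8.13 can be reproduced with a short sketch. The sanitisation rule (replace `.` and `@` with `_`) is inferred from the table rather than stated in the text, so treat it as an assumption; `expansions` is a hypothetical helper name.

```python
# Sketch of keyword expansion for ACL symbol substitution: the
# authenticated name is split at '@', and '.' and '@' are replaced
# with '_' (rule inferred from Table 8.13).

def expansions(authid):
    user, _, domain = authid.partition("@")
    clean = lambda s: s.replace(".", "_").replace("@", "_")
    return {
        "${user}": clean(user),
        "${domain}": clean(domain),
        "${userdomain}": clean(authid),
    }

e = expansions("bob.user@QPID.COM")
print(e["${user}"])        # bob_user
print(e["${domain}"])      # QPID_COM
print(e["${userdomain}"])  # bob_user_QPID_COM
```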
Using Symbol Substitution and Wildcards in Routing Keys
The `*` symbol can be used as a wildcard match for any number of characters in a single field in a routing key. For example:

```
acl allow user_group publish exchange name=users routingkey=${user}-delivery-*
```

Symbol substitution can also be combined with the `#` wildcard symbol in routing keys, for example:

```
acl allow user_group bind exchange name=${user}-work2 routingkey=news.#.${user}
```
ACL Matching of Wildcards in Routing Keys
ACL processing matches `${userdomain}` before matching either `${user}` or `${domain}`. In most circumstances ACL processing treats `${user}_${domain}` and `${userdomain}` as equivalent, and the two forms may be used interchangeably. The exception is rules that specify wildcards within routing keys. In this case the combination `${user}_${domain}` will never match, and the form `${userdomain}` should be used. For example, the rule:

```
acl allow all publish exchange name=X routingkey=${user}_${domain}.c
```

will never match; it should instead be written with the routing key `${userdomain}.c`.
ACL Symbol Substitution Example

```
#
# Create primary queue and exchange:
acl allow all create queue name=${user}-work alternate=${user}-work2
acl deny all create queue name=${user}-work alternate=*
acl allow all create queue name=${user}-work
acl allow all create exchange name=${user}-work alternate=${user}-work2
acl deny all create exchange name=${user}-work alternate=*
acl allow all create exchange name=${user}-work
#
# Create backup queue and exchange
#
acl deny all create queue name=${user}-work2 alternate=*
acl allow all create queue name=${user}-work2
acl deny all create exchange name=${user}-work2 alternate=*
acl allow all create exchange name=${user}-work2
#
# Bind/unbind primary exchange
#
acl allow all bind exchange name=${user}-work routingkey=${user} queuename=${user}-work
acl allow all unbind exchange name=${user}-work routingkey=${user} queuename=${user}-work
#
# Bind/unbind backup exchange
#
acl allow all bind exchange name=${user}-work2 routingkey=${user} queuename=${user}-work2
acl allow all unbind exchange name=${user}-work2 routingkey=${user} queuename=${user}-work2
#
# deny mode
#
acl deny all all
```
8.3.15. ACL Definition Examples

```
group admin ted@QPID martin@QPID
group user-consume martin@QPID ted@QPID
group group2 kim@QPID user-consume rob@QPID
group publisher group2 \
      tom@QPID andrew@QPID debbie@QPID
```

```
acl allow carlt@QPID create exchange name=carl.*
acl allow rob@QPID create queue
acl allow guest@QPID bind exchange name=amq.topic routingkey=stocks.rht.#
acl allow user-consume create queue name=tmp.*
acl allow publisher publish all durable=false
acl allow publisher create queue name=RequestQueue
acl allow consumer consume queue durable=true
acl allow fred@QPID create all
acl allow bob@QPID all queue
acl allow admin all
acl allow all consume queue
acl allow all bind exchange
acl deny all all
```

The last line, `acl deny all all`, denies all authorizations that have not been specifically granted. This is the default, but it is useful to include it explicitly on the last line for the sake of clarity. If you want to grant all rights by default, you can specify `acl allow all all` in the last line instead.

The following rules deny, and log, attempts by the user `guest` to access QMF management methods that could cause security breaches:

```
group allUsers guest@QPID ....
acl deny-log allUsers create link
acl deny-log allUsers access method name=connect
acl deny-log allUsers access method name=echo
acl allow all all
```
Chapter 9. High Availability
9.1. Clustering (High Availability)
9.1.1. Changes to Clustering in MRG 3
MRG 3 replaces the `cluster` module with the new `ha` module. This module provides active-passive clustering functionality for high availability.

The `cluster` module in MRG 2 was active-active: clients could connect to any broker in the cluster. The new `ha` module is active-passive. Exactly one broker acts as primary; the other brokers act as backups. Only the primary accepts client connections. If a client attempts to connect to a backup broker, the connection is aborted and the client fails over until it connects to the primary.

The `ha` module also supports a virtual IP address. Clients can be configured with a single IP address that is automatically routed to the primary broker. This is the recommended configuration.

In MRG 2, a clustered broker would only utilize a single CPU thread. Some users worked around this by running multiple clustered brokers on a single machine, to utilize the multiple cores.
9.1.2. Active-Passive Messaging Clusters
The HA module uses the cluster resource manager, `rgmanager`, to detect failures, choose the new primary and handle network partitions.
9.1.3. Avoiding Message Loss
9.1.4. HA Broker States
- Joining
  Initial status of a new broker that has not yet connected to the primary.
- Catch-up
  A backup broker that is connected to the primary and catching up on queues and messages.
- Ready
  A backup broker that is fully caught-up and ready to take over as primary.
- Recovering
  The newly-promoted primary, waiting for backups to connect and catch up.
- Active
  The active primary broker with all backups connected and caught-up.
9.1.5. Limitations in HA in MRG 3
- HA replication is limited to 65434 queues.
- Manual reallocation of the `qpidd-primary` service cannot be done to a node where the qpid broker is not in the ready state (that is, it is stopped, or in the catch-up or joining state).
- Failback with cluster ordered failover-domains (`ordered=1` in `cluster.conf`) can cause an infinite failover loop under certain conditions. To avoid this, use cluster ordered failover-domains with `nofailback=1` specified in `cluster.conf`.
- Local transactional changes are replicated atomically. If the primary crashes during a local transaction, no data is lost. Distributed transactions are not yet supported by HA Cluster.
- Configuration changes (creating or deleting queues, exchanges and bindings) are replicated asynchronously. Management tools used to make changes will consider the change complete when it is complete on the primary; however, it may not yet be replicated to all the backups.
- Federation links to the primary will not fail over correctly. Federated links from the primary will be lost in fail-over; they will not be re-connected to the new primary. It is possible to work around this by replacing the qpidd-primary start-up script with a script that re-creates federation links when the primary is promoted.
9.1.6. Broker HA Options
Options for the `qpid-ha` Broker Utility
- ha-cluster yes|no
  Set to "yes" to have the broker join a cluster.
- ha-queue-replication yes|no
  Enable replication of specific queues without joining a cluster.
- ha-brokers-url URL
  The URL used by cluster brokers to connect to each other. The URL must contain a comma-separated list of the broker addresses, rather than a virtual IP address. The full format of the URL is given by this grammar:

```
url       = ["amqp:"][ user ["/" password] "@" ] addr ("," addr)*
addr      = tcp_addr / rdma_addr / ssl_addr / ...
tcp_addr  = ["tcp:"] host [":" port]
rdma_addr = "rdma:" host [":" port]
ssl_addr  = "ssl:" host [":" port]
```

- ha-public-url URL
  This option is only needed for backwards compatibility if you have been using the `amq.failover` exchange. This exchange is now obsolete; it is recommended to use a virtual IP address instead. If set, this URL is advertized by the `amq.failover` exchange and overrides the broker option `known-hosts-url`.
- ha-replicate VALUE
  Specifies whether queues and exchanges are replicated by default. VALUE is one of: `none`, `configuration`, `all`.
- ha-username USER, ha-password PASS, ha-mechanism MECHANISM
  Authentication settings used by HA brokers to connect to each other. If you are using authorization then this user must have all permissions.
- ha-backup-timeout SECONDS
  Maximum time that a recovering primary will wait for an expected backup to connect and become ready. Values specified as SECONDS can be a fraction of a second, e.g. "0.1" for a tenth of a second. They can also have an explicit unit, e.g. 10s (seconds), 10ms (milliseconds), 10us (microseconds), 10ns (nanoseconds).
- link-maintenance-interval SECONDS
  HA uses federation links to connect from backup to primary. Backup brokers check the link to the primary on this interval and re-connect if need be. The default is 2 seconds. It can be set lower for faster failover (e.g. 0.1 seconds); setting it too low will result in excessive link-checking on the backups.
- link-heartbeat-interval SECONDS
  The number of seconds to wait for a federated link heartbeat, and the timeout for broker status checks. By default this is 120 seconds. Provide a lower value (for example, 10 seconds) to enable faster failover detection in an HA scenario. If the value is set too low, a slow broker may be considered failed and will be killed. If no heartbeat is received for twice this interval, the primary will consider that backup dead (e.g. if the backup is hung or partitioned). It may take up to this interval for rgmanager to detect a hung or partitioned broker. The primary may take up to twice this interval to detect a hung or partitioned backup. Clients sending messages may also be delayed during this time.

At a minimum, a cluster broker must set `ha-cluster` and `ha-brokers-url`.
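The `ha-brokers-url` grammar above can be illustrated with a small parser sketch. This is a simplified model for the common cases only: it ignores the optional user/password prefix, does not handle bracketed IPv6 hosts, and assumes the standard AMQP port 5672 as the default when no port is given. The `parse_brokers_url` helper is a hypothetical name, not a broker API.

```python
# Sketch parser for the simple cases of the ha-brokers-url grammar:
#   url  = ["amqp:"] addr ("," addr)*
#   addr = [("tcp"|"rdma"|"ssl") ":"] host [":" port]
import re

ADDR = re.compile(r"^(?:(tcp|rdma|ssl):)?([^:]+)(?::(\d+))?$")

def parse_brokers_url(url):
    if url.startswith("amqp:"):
        url = url[len("amqp:"):]
    addrs = []
    for part in url.split(","):
        m = ADDR.match(part.strip())
        proto, host, port = m.group(1) or "tcp", m.group(2), m.group(3)
        addrs.append((proto, host, int(port) if port else 5672))
    return addrs

print(parse_brokers_url("amqp:node1:5672,ssl:node2,node3"))
# [('tcp', 'node1', 5672), ('ssl', 'node2', 5672), ('tcp', 'node3', 5672)]
```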
9.1.7. Firewall Configuration for Clustering
Table 9.1. Ports Used by Clustered Systems
Port | Protocol | Component |
---|---|---|
5404 | UDP | cman |
5405 | UDP | cman |
5405 | TCP | luci |
8084 | TCP | luci |
11111 | TCP | ricci |
14567 | TCP | gnbd |
16851 | TCP | modclusterd |
21064 | TCP | dlm |
50006 | TCP | ccsd |
50007 | UDP | ccsd |
50008 | TCP | ccsd |
50009 | TCP | ccsd |
The following `iptables` commands, when run with root privileges, will configure the system to allow communication on these ports.

```
iptables -I INPUT -p udp -m udp --dport 5405 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 5405 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 8084 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 11111 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 14567 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 16851 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 21064 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 50006 -j ACCEPT
iptables -I INPUT -p udp -m udp --dport 50007 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 50008 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 50009 -j ACCEPT
service iptables save
service iptables restart
```
9.1.8. ACL Requirements for Clustering
When the broker runs with `auth=yes`, all federation links are disallowed by default. The following ACL rule is required to allow the federation used by HA Clustering:

```
acl allow <ha-username> all all
```
9.1.9. Cluster Resource Manager (rgmanager)
The HA module is designed to work with a cluster resource manager; in MRG 3 this is `rgmanager`.

The resource manager starts a `qpidd` broker on each node in the cluster. The resource manager then promotes one of the brokers to be the primary. The other brokers connect to the primary as backups, using the URL provided in the `ha-brokers-url` configuration option.
9.1.10. Install HA Cluster Components
Procedure 9.1. Qpidd HA Component Installation Steps
- Open a terminal and switch to the superuser account.
- Run `yum install qpid-cpp-server-ha` to install all required components.
Procedure 9.2. Red Hat Linux HA Cluster Components Installation Steps
- Subscribe the system to the "RHEL Server High Availability" channel.
- Open a terminal and switch to the superuser account.
- Run `yum install -y rgmanager ccs` to install all required components.
to install all required components. - Disable the Network Manager before starting HA Clustering. HA Clustering will not work correctly with Network Manager started or enabled
# chkconfig NetworkManager off
- Activate rgmanager, cman and ricci services.
# chkconfig rgmanager on # chkconfig cman on # chkconfig ricci on
- Deactivate the qpidd service.
# chkconfig qpidd off
Theqpidd
service must be off inchkconfig
because rgmanager will start and stop qpidd. If the normal system init process also attempts to start and stop qpidd it can cause rgmanager to lose track of qpidd processes.If qpidd is not turned off,clustat
shows a qpidd service to be stopped when in fact there is a qpidd process running. In this situation, the qpidd log shows errors similar to this:critical Unexpected error: Daemon startup failed: Cannot lock /var/lib/qpidd/lock: Resource temporarily unavailable
9.1.11. Virtual IP Addresses
See Also:
9.1.12. Configure HA Cluster
This section describes how to configure `cman` and `rgmanager` to create an active-passive, hot-standby qpidd HA cluster. For further information on the underlying clustering technologies `cman` and `rgmanager`, refer to the Red Hat Enterprise Linux Cluster Administration Guide.

Edit the `/etc/cluster/cluster.conf` file to configure `cman` and `rgmanager`.
Note
The broker management option `mgmt-enable` must not be set to "no".

Note
The `ccs` tool provides a high-level, user-friendly mechanism to configure the `cluster.conf` file, and is the recommended method for configuring a cluster. Refer to the Red Hat Enterprise Linux Cluster Administration Guide for more information on using the `ccs` tool.

The following procedure uses `ccs` to create an example cluster of 3 nodes named `node1`, `node2` and `node3`. Run the following as the `root` user:
- Start the `ricci` service:

```
service ricci start
```

- If you have not previously set the `ricci` password, set it now:

```
passwd ricci
```

- Create a new cluster:

```
ccs -h localhost --createcluster qpid-test
```

- Add three nodes:

```
ccs -h localhost --addnode node1.example.com
ccs -h localhost --addnode node2.example.com
ccs -h localhost --addnode node3.example.com
```

- Add a `failoverdomain` for each:

```
ccs -h localhost --addfailoverdomain node1-domain restricted
ccs -h localhost --addfailoverdomain node2-domain restricted
ccs -h localhost --addfailoverdomain node3-domain restricted
```

- Add a `failoverdomainnode` for each:

```
ccs -h localhost --addfailoverdomainnode node1-domain node1.example.com
ccs -h localhost --addfailoverdomainnode node2-domain node2.example.com
ccs -h localhost --addfailoverdomainnode node3-domain node3.example.com
```

- Add the scripts:

```
ccs -h localhost --addresource script name=qpidd file=/etc/init.d/qpidd
ccs -h localhost --addresource script name=qpidd-primary file=/etc/init.d/qpidd-primary
```

- Add the Virtual IP Address:

```
ccs -h localhost --addresource ip address=20.0.20.200 monitor_link=1
```

- Add the `qpidd` service for each node. It should be restarted if it fails:

```
ccs -h localhost --addservice node1-qpidd-service domain=node1-domain recovery=restart
ccs -h localhost --addsubservice node1-qpidd-service script ref=qpidd
ccs -h localhost --addservice node2-qpidd-service domain=node2-domain recovery=restart
ccs -h localhost --addsubservice node2-qpidd-service script ref=qpidd
ccs -h localhost --addservice node3-qpidd-service domain=node3-domain recovery=restart
ccs -h localhost --addsubservice node3-qpidd-service script ref=qpidd
```

- Add the primary `qpidd` service. It only runs on a single node at a time, and can run on any node:

```
ccs -h localhost --addservice qpidd-primary-service recovery=relocate autostart=1 exclusive=0
ccs -h localhost --addsubservice qpidd-primary-service script ref=qpidd-primary
ccs -h localhost --addsubservice qpidd-primary-service ip ref=20.0.20.200
```
The following is an example `/etc/cluster/cluster.conf` file produced by the previous steps:

```
<?xml version="1.0"?>
<!--
This is an example of a cluster.conf file to run qpidd HA under rgmanager.
This example configures a 3 node cluster, with nodes named node1, node2
and node3.

NOTE: fencing is not shown, you must configure fencing appropriately for
your cluster.
-->
<cluster name="qpid-test" config_version="18">
  <!-- The cluster has 3 nodes. Each has a unique nodeid and one vote
       for quorum. -->
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1"/>
    <clusternode name="node2.example.com" nodeid="2"/>
    <clusternode name="node3.example.com" nodeid="3"/>
  </clusternodes>
  <!-- Resource Manager configuration. -->
  <rm>
    <!-- There is a failoverdomain for each node containing just that node.
         This specifies that the qpidd service should always run on each
         node. -->
    <failoverdomains>
      <failoverdomain name="node1-domain" restricted="1">
        <failoverdomainnode name="node1.example.com"/>
      </failoverdomain>
      <failoverdomain name="node2-domain" restricted="1">
        <failoverdomainnode name="node2.example.com"/>
      </failoverdomain>
      <failoverdomain name="node3-domain" restricted="1">
        <failoverdomainnode name="node3.example.com"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <!-- This script starts a qpidd broker acting as a backup. -->
      <script file="/etc/init.d/qpidd" name="qpidd"/>
      <!-- This script promotes the qpidd broker on this node to primary. -->
      <script file="/etc/init.d/qpidd-primary" name="qpidd-primary"/>
      <!-- This is a virtual IP address on a separate network for client
           traffic. -->
      <ip address="20.0.20.200" monitor_link="1"/>
    </resources>
    <!-- There is a qpidd service on each node, it should be restarted
         if it fails. -->
    <service name="node1-qpidd-service" domain="node1-domain" recovery="restart">
      <script ref="qpidd"/>
    </service>
    <service name="node2-qpidd-service" domain="node2-domain" recovery="restart">
      <script ref="qpidd"/>
    </service>
    <service name="node3-qpidd-service" domain="node3-domain" recovery="restart">
      <script ref="qpidd"/>
    </service>
    <!-- There should always be a single qpidd-primary service, it can run
         on any node. -->
    <service name="qpidd-primary-service" autostart="1" exclusive="0" recovery="relocate">
      <script ref="qpidd-primary"/>
      <!-- The primary has the IP addresses for brokers and clients to
           connect. -->
      <ip ref="20.0.20.200"/>
    </service>
  </rm>
</cluster>
```
There is a `failoverdomain` for each node, containing just that one node. This specifies that the qpidd service always runs on every node.

The `resources` section defines the `qpidd` script used to start the `qpidd` service. It also defines the `qpidd-primary` script, which does not actually start a new service; rather, it promotes the existing `qpidd` broker to primary status. The `qpidd-primary` script is installed by the `qpid-cpp-server-ha` package.

The `qpidd.conf` file on each node should contain these lines:

```
ha-cluster = yes
ha-public-url = 20.0.20.200
ha-brokers-url = 20.0.10.1, 20.0.10.2, 20.0.10.3
```

This tells the brokers the addresses they use to connect to each other (`ha-brokers-url`), and the virtual IP address for the cluster, which clients should connect to: 20.0.20.200.
The `service` section defines 3 `qpidd` services, one for each node. Each service is in a restricted fail-over domain containing just that node, and has the `restart` recovery policy. This means that rgmanager will run `qpidd` on each node, restarting it if it fails.

There is a single `qpidd-primary-service` using the `qpidd-primary` script. It is not restricted to a domain and has the `relocate` recovery policy. This means `rgmanager` will start `qpidd-primary` on one of the nodes when the cluster starts, and will relocate it to another node if the original node fails. Running the `qpidd-primary` script does not start a new broker process; it promotes the existing broker to become the primary.
9.1.13. Shutting Down qpidd on a HA Node
Both the per-node `qpidd` service and the re-locatable `qpidd-primary` service are implemented by the same `qpidd` daemon.

As a result, stopping the `qpidd` service will not stop a `qpidd` daemon that is acting as primary, and stopping the `qpidd-primary` service will not stop a `qpidd` process that is acting as backup.

To shut down a node, stop both the `qpidd` service and relocate the primary:

```
clusvcadm -d somenode-qpidd-service
clusvcadm -r qpidd-primary-service
```

This shuts down the `qpidd` daemon on that node. It also prevents the primary service from relocating back to the node, because the `qpidd` service is no longer running at that location.
9.1.14. Start and Stop HA Cluster
To start the HA Cluster on a node:

```
ccs [-h host] --start
```

To stop the HA Cluster on a node:

```
ccs [-h host] --stop
```

To start the HA Cluster on all configured nodes:

```
ccs [-h host] --startall
```

To stop the HA Cluster on all configured nodes:

```
ccs [-h host] --stopall
```
9.1.15. Configure Clustering to use a non-privileged (non-root) user
    # diff -u /etc/rc.d/init.d/qpidd.orig /etc/rc.d/init.d/qpidd
    --- /etc/rc.d/init.d/qpidd.orig 2014-01-15 19:06:19.000000000 +0100
    +++ /etc/rc.d/init.d/qpidd      2014-02-07 16:02:47.136001472 +0100
    @@ -38,6 +38,9 @@
     prog=qpidd
     lockfile=/var/lock/subsys/$prog
     pidfile=/var/run/qpidd.pid
    +
    +CFG_DIR=/var/lib/qpidd
    +QPIDD_OPTIONS="--config ${CFG_DIR}/qpidd.conf --client-config ${CFG_DIR}/qpidc.conf"

     # Source configuration
     if [ -f /etc/sysconfig/$prog ] ; then

With these modifications to /etc/rc.d/init.d/qpidd, the configuration files for the broker are read from the /var/lib/qpidd directory, rather than from /etc/qpid as they are by default.
9.1.16. Broker Administration Tools and HA
qpid-ha allows you to view and change HA configuration settings.

The tools qpid-config, qpid-route and qpid-stat will connect to a backup if you pass the flag --ha-admin on the command line.
9.1.17. Controlling replication of queues and exchanges
- all - Replicate everything automatically: queues, exchanges, bindings and messages.
- configuration - Replicate the existence of queues, exchanges and bindings, but don't replicate messages.
- none - Don't replicate anything. This is the default.

You can override the default for a particular queue or exchange by setting the qpid.replicate argument when creating the queue or exchange. It takes the same values as ha-replicate.
This can be done using the qpid-config management tool like this:

    qpid-config add queue myqueue --replicate all

or in an address when creating the queue:

    "myqueue;{create:always,node:{x-declare:{arguments:{'qpid.replicate':all}}}}"

Exchanges created automatically by the broker are never replicated: the standard exchanges (amq.direct, amq.topic, amq.fanout and amq.match) and the management exchanges (qpid.management, qmf.default.direct and qmf.default.topic).
9.1.18. Client Connection and Fail-over
If ha-public-url contains multiple addresses, the client will try them all in rotation. If it is a virtual IP address, the client will retry on the same address until reconnected. There are two possible configurations:

- The URL contains a single virtual IP address that is assigned to the primary broker by the resource manager. This is the recommended configuration.
- The URL contains multiple addresses, one for each broker in the cluster.

Suppose your cluster has 3 nodes: node1, node2 and node3, all using the default AMQP port, and you are not using a virtual IP address. To connect a client you need to specify the address(es) and set the reconnect property to true. The following sub-sections show how to connect each type of client.
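The rotation behaviour described above can be modelled in a few lines of plain Python. This is an illustrative model, not the real client library: connect stands in for the actual transport call, and the attempt limit is an arbitrary choice for the sketch.

```python
import itertools

def connect_with_failover(addresses, connect, max_attempts=9):
    """Try each broker address in rotation until one accepts the connection."""
    for attempt, addr in enumerate(itertools.cycle(addresses)):
        if attempt >= max_attempts:
            raise ConnectionError("no broker reachable")
        try:
            return connect(addr)   # connect is a stand-in for the real transport call
        except ConnectionError:
            continue               # the primary may have moved; try the next address

# Example: only node3 is currently primary and accepting connections.
up = {"node3"}

def fake_connect(addr):
    if addr in up:
        return addr
    raise ConnectionError(addr)

conn = connect_with_failover(["node1", "node2", "node3"], fake_connect)
# conn is "node3": node1 and node2 were tried first and refused
```

The real clients add back-off and heartbeat detection on top of this basic rotation, as the following sub-sections show.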
With the C++ client, you specify multiple cluster addresses in a single URL. You also need to specify the connection option reconnect to be true. For example:

    qpid::messaging::Connection c("node1,node2,node3","{reconnect:true}");

Heartbeats are disabled by default. You can enable them by specifying a heartbeat interval (in seconds) for the connection via the heartbeat option. For example:

    qpid::messaging::Connection c("node1,node2,node3","{reconnect:true,heartbeat:10}");
With the Python client, you specify reconnect=True and a list of host:port addresses as reconnect_urls when calling Connection.establish or Connection.open:

    connection = qpid.messaging.Connection.establish("node1", reconnect=True, reconnect_urls=["node1", "node2", "node3"])

Heartbeats are disabled by default. You can enable them by specifying a heartbeat interval (in seconds) for the connection via the heartbeat option. For example:

    connection = qpid.messaging.Connection.establish("node1", reconnect=True, reconnect_urls=["node1", "node2", "node3"], heartbeat=10)
In Java JMS clients, client fail-over is handled automatically if it is enabled in the connection. You can configure a connection to use fail-over using the failover property:

    connectionfactory.qpidConnectionfactory = amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672'&failover='failover_exchange'

Fail-over Modes

- failover_exchange - If the connection fails, fail over to any other broker in the cluster.
- roundrobin - If the connection fails, fail over to one of the brokers specified in the brokerlist.
- singlebroker - Fail-over is not supported; the connection is to a single broker only.

In a JMS client, heartbeats are set using the idle_timeout property, which is an integer corresponding to the heartbeat period in seconds. For instance, the following line from a JNDI properties file sets the heartbeat time-out to 3 seconds:

    connectionfactory.qpidConnectionfactory = amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672',idle_timeout=3
9.1.19. Security
Note
Unless you disable authentication with auth=no in your configuration, you must set the options below and you must have an ACL file with at least the entry described below.

Table 9.2. HA Security Options

Option | Description |
---|---|
ha-username USER | User name for HA brokers. Note this must not include the @QPID suffix. |
ha-password PASS | Password for HA brokers. |
ha-mechanism MECHANISM | Mechanism for HA brokers. Any mechanism you enable for broker-to-broker communication can also be used by a client, so do not use ANONYMOUS in a secure environment. |

If you set ha-username=USER, your ACL file must contain:

    acl allow USER@QPID all all

This gives the HA user the full permissions that broker-to-broker replication requires.
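Putting the options together, a minimal secured configuration might look like the sketch below. The user name and mechanism shown (qpid_ha_user, DIGEST-MD5) are assumptions for illustration; substitute values appropriate to your environment.

```
# qpidd.conf (sketch)
ha-username = qpid_ha_user
ha-password = secret
ha-mechanism = DIGEST-MD5

# ACL file (sketch): grant the HA user full access
acl allow qpid_ha_user@QPID all all
```

The same ha-username, ha-password and ha-mechanism values must be configured on every broker in the cluster, since any broker can become primary and accept connections from the others.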
9.1.20. HA Clustering and Persistence
9.1.21. Queue Replication and HA
The HA module supports individual queue replication, even if the brokers are not in a clustered environment. The original queue is used as normal, while the replica queue is updated automatically as messages are added to or removed from the original queue.

The HA module must be loaded on both the original and replica brokers, which is done automatically by default. On the broker containing the replica queue, the ha-queue-replication=yes configuration option must be specified. This option is not required for brokers that are part of a clustered environment, because the option is loaded automatically.

Important

When replicating a stand-alone queue, the HA module does not enforce restricted access to the replica queue (as it does in the case of a cluster). The application must ensure the replica is not used until it has been disconnected from the original.

Example 9.1. Replicate a Queue Between Nodes

Suppose myqueue is a queue on node1.

To create a replica of myqueue on node2, run the following command:

    qpid-config --broker=node2 add queue --start-replica node1 myqueue

If the queue myqueue already exists on the replica broker, run the following command to start replication from the original queue:

    qpid-ha replicate -b node2 node1 myqueue
9.2. Cluster management
9.2.1. Cluster Management using qpid-ha
qpid-ha is a command-line utility that allows you to view information on a cluster and its brokers, disconnect a client connection, shut down a broker in a cluster, or shut down the entire cluster. It accepts a command and options.

qpid-ha has the following commands and parameters:
Commands
- status
- Print HA status. Reports whether the specified broker is acting as a primary (active) or a backup (ready). With the --all option it lists the status of the whole cluster.

  Examples:

      # qpid-ha status
      ready
      # qpid-ha status --all
      192.168.6.60:5672 ready
      192.168.6.61:5672 active
      192.168.6.62:5672 ready
- ping
- Check if the broker is alive and responding.
- query
- Print HA configuration and status. The following information is returned:
  - broker status: primary (active) or backup (ready)
  - list of HA broker URLs
  - public (virtual) HA URL
  - replication status

  Example:

      # qpid-ha query
      Status: ready
      Brokers URL: amqp:tcp:192.168.6.60:5672,tcp:192.168.6.61:5672,tcp:192.168.6.62:5672
      Public URL: amqp:tcp:192.168.6.251:5672
      Replicate: all
- replicate
- Set up replication from <queue> on <remote-broker> to <queue> on the current broker.
Parameters
- --broker=BROKER
- The address of qpidd broker. The syntax is shown below:
[username/password@] hostname | ip-address [:port]
- --sasl-mechanism=SASL_MECH
- SASL mechanism for authentication (e.g. EXTERNAL, ANONYMOUS, PLAIN, CRAM-MD5, DIGEST-MD5, GSSAPI). SASL automatically picks the most secure available mechanism - use this option to override.
- --ssl-certificate=SSL_CERT
- Client SSL certificate (PEM Format).
- --config=CONFIG
- Connect to the local qpidd by reading its configuration file (/etc/qpid/qpidd.conf, for example).
- --timeout=SECONDS
- Give up if the broker does not respond within the timeout. 0 means wait forever. The default is 10.0.
- --ssl-key=KEY
- Client SSL private key (PEM Format).
- --help-all
- Outputs all of the above commands and parameters.
You can display all of the above commands and parameters with the --help-all option:

    $ qpid-ha --help-all
9.3. Cluster Troubleshooting
9.3.1. Troubleshooting Cluster configuration
If SASL or ACL settings are misconfigured, the broker logs contain messages like the following:

    info SASL: Authentication failed: SASL(-13): user not found: Password verification failed
    warning Client closed connection with 320: User anonymous@QPID federation connection denied. Systems with authentication enabled must specify ACL create link rules.
    warning Client closed connection with 320: ACL denied anonymous@QPID creating a federation link.
Procedure 9.3. Troubleshooting Cluster SASL Configuration
- Set the HA security configuration and ACL file as described in Section 9.1.19, “Security”.
- Once the cluster is running and the primary is promoted, run qpid-ha status --all to ensure that the brokers are running as one cluster.
9.3.2. Slow Recovery Times
<rm status_poll_interval="1">
- status_poll_interval is the interval in seconds at which the resource manager checks the status of managed services. This affects how quickly the manager detects failed services.

<ip address="20.0.20.200" monitor_link="yes" sleeptime="0"/>
- This is the virtual IP address for client traffic. monitor_link="yes" means monitor the health of the NIC used for the VIP. sleeptime="0" means don't delay when failing over the VIP to a new address.
link-maintenance-interval=0.1
- The interval in seconds at which back-up brokers check the link to the primary and re-connect if required. This value defaults to 2. The value can be set lower for a faster failover (for example, 0.1).

  Note

  Setting the value too low will result in excessive link-checking activity on the brokers.

link-heartbeat-interval=5
- Heartbeat interval for federation links. The HA cluster uses federation links between the primary and each backup. The primary can take up to twice the heartbeat interval to detect a failed backup. When a sender sends a message, the primary waits for all backups to acknowledge before acknowledging to the sender, so a disconnected backup may cause the primary to block senders until it is detected via heartbeat.

  This interval is also used as the timeout for broker status checks by rgmanager, so it may take up to this interval for rgmanager to detect a hung broker. The default is 120 seconds. This may be too high for many production scenarios where availability and response time are important. However, if set too low, a slow-to-respond broker may be restarted by rgmanager under network congestion or heavy load.
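As a sketch, a deployment tuned for faster failure detection than the defaults might combine the settings discussed above. The values shown are illustrative trade-offs, not recommendations; validate them under your own load before production use.

```
# qpidd.conf (sketch): faster link checks and shorter heartbeats
link-maintenance-interval = 0.1
link-heartbeat-interval = 5
```

The corresponding knob on the resource-manager side is the poll interval in cluster.conf, for example <rm status_poll_interval="1">, which bounds how quickly rgmanager notices a failed service.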
9.3.3. Total Cluster Failure
- standalone: not part of a HA cluster.
- joining: newly started backup, not yet joined to the cluster.
- catch-up: backup has connected to the primary and is downloading queues, messages etc.
- ready: backup is connected and actively replicating from the primary; it is ready to take over.
- recovering: newly promoted to primary, waiting for backups to catch up before serving clients. Only a single primary broker can be recovering at a time.
- active: serving clients. Only a single primary broker can be active at a time.
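The states above follow a simple progression for each broker. As an illustrative model (not actual broker code), the normal transitions can be written down and checked:

```python
# Illustrative model of the normal HA broker state progression; not broker code.
TRANSITIONS = {
    "standalone": {"joining"},     # broker restarted with HA configuration
    "joining":    {"catch-up"},    # backup connects to the primary
    "catch-up":   {"ready"},       # queues and messages downloaded
    "ready":      {"recovering"},  # promoted when the primary fails
    "recovering": {"active"},      # all backups have caught up
    "active":     set(),           # serving clients
}

def can_transition(src, dst):
    """Return True if dst is a normal next state after src in this model."""
    return dst in TRANSITIONS.get(src, set())

assert can_transition("ready", "recovering")
assert not can_transition("joining", "active")  # a backup must catch up first
```

This model makes the failure mode described below easy to see: after a total cluster failure every broker restarts in joining or catch-up, and none of them is in the ready state that promotion to recovering requires.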
After a total cluster failure, all brokers are in joining or catch-up mode. rgmanager tries to promote a new primary but cannot find any candidates and so gives up. clustat will show that the qpidd services are running but the qpidd-primary service has stopped, something like this:
Table 9.3.
Service Name | Owner (Last) | State |
---|---|---|
service:mrg33-qpidd-service | 20.0.10.33 | started |
service:mrg34-qpidd-service | 20.0.10.34 | started |
service:mrg35-qpidd-service | 20.0.10.35 | started |
service:qpidd-primary-service | (20.0.10.33) | stopped |
You can confirm the state of each broker by running qpid-ha status --all.

To restore the cluster, either:

- In luci:<your-cluster>:Nodes click reboot to restart the entire cluster,
- or stop and restart the cluster with ccs --stopall; ccs --startall.

Alternatively, in luci:<your-cluster>:Service Groups:

- select all the qpidd (not primary) services, click restart,
- then select the qpidd-primary service, click restart,
- or stop the primary and qpidd services with clusvcadm, then restart them (primary last).

A new primary is promoted and the cluster is functional. All non-persistent data from before the failure is lost.
9.3.4. Fencing and Network Partitions
Chapter 10. Broker Federation
10.1. Broker Federation
10.2. Broker Federation Use Cases
- Geography: Customer requests can be routed to a processing location close to the customer.
- Service Type: High value customers can be routed to more responsive servers.
- Load balancing: Routing among brokers can be changed dynamically to account for changes in actual or anticipated load.
- High Availability: Routing can be changed to a new broker if an existing broker becomes unavailable.
- WAN Connectivity: Federated routes can connect disparate locations across a wide area network, while clients connect to brokers on their own local area network. Each broker can provide persistent queues that can hold messages even if there are gaps in WAN connectivity.
- Functional Organization: The flow of messages among software subsystems can be configured to mirror the logical structure of a distributed application.
- Replicated Exchanges: High-function exchanges like the XML exchange can be replicated to scale performance.
- Interdepartmental Workflow: The flow of messages among brokers can be configured to mirror interdepartmental workflow at an organization.
10.3. Broker Federation Overview
10.3.1. Message Routes
- Queue routes
- Exchange routes
- Dynamic exchange routes
10.3.2. Queue Routes
10.3.3. Exchange Routes
10.3.4. Dynamic Exchange Routes
10.3.5. Federation Topologies
10.3.6. Federation Among High Availability Clusters
10.4. Configuring Broker Federation
10.4.1. The qpid-route Utility
qpid-route is a command line utility used to configure federated networks of brokers and to view the status and topology of networks. It can be used to configure routes among any brokers that qpid-route can connect to.
10.4.2. qpid-route Syntax
The syntax of qpid-route is as follows:

    qpid-route [OPTIONS] dynamic add <dest-broker> <src-broker> <exchange>
    qpid-route [OPTIONS] dynamic del <dest-broker> <src-broker> <exchange>
    qpid-route [OPTIONS] route add <dest-broker> <src-broker> <exchange> <routing-key>
    qpid-route [OPTIONS] route del <dest-broker> <src-broker> <exchange> <routing-key>
    qpid-route [OPTIONS] queue add <dest-broker> <src-broker> <dest-exchange> <src-queue>
    qpid-route [OPTIONS] queue del <dest-broker> <src-broker> <dest-exchange> <src-queue>
    qpid-route [OPTIONS] route list [<broker>]
    qpid-route [OPTIONS] route flush [<broker>]
    qpid-route [OPTIONS] route map [<broker>]
    qpid-route [OPTIONS] link add <dest-broker> <src-broker>
    qpid-route [OPTIONS] link del <dest-broker> <src-broker>
    qpid-route [OPTIONS] link list [<dest-broker>]

The syntax for broker, dest-broker, and src-broker is as follows:

    [username/password@] hostname | ip-address [:<port>]

For example: localhost, 10.1.1.7:10000, broker-host:10000, guest/guest@localhost.
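The broker address syntax can be parsed with a short helper. This is an illustrative sketch, not part of qpid-route itself, and it does not handle IPv6 literals:

```python
def parse_broker(addr, default_port=5672):
    """Parse '[username/password@]host[:port]' into (user, password, host, port)."""
    user = password = None
    if "@" in addr:
        creds, addr = addr.split("@", 1)       # split credentials from host part
        user, _, password = creds.partition("/")
    host, _, port = addr.partition(":")        # optional :port suffix
    return user, password, host, int(port) if port else default_port

assert parse_broker("localhost") == (None, None, "localhost", 5672)
assert parse_broker("10.1.1.7:10000") == (None, None, "10.1.1.7", 10000)
assert parse_broker("guest/guest@localhost") == ("guest", "guest", "localhost", 5672)
```

The assertions mirror the example addresses above: a bare hostname defaults to the standard AMQP port 5672, and credentials are optional.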
10.4.3. qpid-route Options
- Changes
- New for MRG-M 3.
Table 10.1. qpid-route options to manage federation
Option | Description |
---|---|
-v | Verbose output. |
-q | Quiet output, will not print duplicate warnings. |
-d | Make the route durable. |
-e | Delete link after deleting the last route on the link. |
--timeout N | Maximum time to wait when qpid-route connects to a broker, in seconds. Default is 10 seconds. |
--ack N | Acknowledge transfers of routed messages in batches of N. Default is 0 (no acknowledgments). Setting to 1 or greater enables acknowledgments; when using acknowledgments, values of N greater than 1 can significantly improve performance, especially if there is significant network latency between the two brokers. |
--credit N | Specifies a finite credit to use with acknowledgements. By default, credit is 0 and credit flow control is disabled. Backpressure can be tuned by means of an explicit --credit argument. |
-s [ --src-local ] | Configure the route in the source broker (create a push route). |
-t <transport> [ --transport <transport>] | Transport protocol to be used for the route. |
--client-sasl-mechanism <mech> | SASL mechanism for authentication when the client connects to the destination broker (for example: EXTERNAL, ANONYMOUS, PLAIN, CRAM-MD, DIGEST-MD5, GSSAPI). |
10.4.4. Create and Delete Queue Routes
- To create and delete queue routes, use the following syntax:

      qpid-route [OPTIONS] queue add <dest-broker> <src-broker> <dest-exchange> <src-queue>
      qpid-route [OPTIONS] queue del <dest-broker> <src-broker> <dest-exchange> <src-queue>
- For example, use the following command to create a queue route that routes all messages from the queue named public on the source broker localhost:10002 to the amq.fanout exchange on the destination broker localhost:10001:

      $ qpid-route queue add localhost:10001 localhost:10002 amq.fanout public
- Optionally, specify the -d option to persist the queue route. The queue route will be restored if one or both of the brokers is restarted:

      $ qpid-route -d queue add localhost:10001 localhost:10002 amq.fanout public
- The del command takes the same arguments as the add command. Use the following command to delete the queue route described above:

      $ qpid-route queue del localhost:10001 localhost:10002 amq.fanout public
10.4.5. Create and Delete Exchange Routes
- To create and delete exchange routes, use the following syntax:

      qpid-route [OPTIONS] route add <dest-broker> <src-broker> <exchange> <routing-key>
      qpid-route [OPTIONS] route del <dest-broker> <src-broker> <exchange> <routing-key>
      qpid-route [OPTIONS] route flush [<broker>]
- For example, use the following command to create an exchange route that routes messages matching the binding key global.# from the amq.topic exchange on the source broker localhost:10002 to the amq.topic exchange on the destination broker localhost:10001:

      $ qpid-route route add localhost:10001 localhost:10002 amq.topic global.#
- In many applications, messages published to the destination exchange must also be routed to the source exchange. Create a second exchange route, reversing the roles of the two exchanges:

      $ qpid-route route add localhost:10002 localhost:10001 amq.topic global.#
- Specify the -d option to persist the exchange route. The exchange route will be restored if one or both of the brokers is restarted:

      $ qpid-route -d route add localhost:10001 localhost:10002 amq.topic global.#
- The del command takes the same arguments as the add command. Use the following command to delete the first exchange route described above:

      $ qpid-route route del localhost:10001 localhost:10002 amq.topic global.#
10.4.6. Delete All Routes for a Broker
- Use the flush command to delete all routes for a given broker:

      qpid-route [OPTIONS] route flush [<broker>]

  For example, use the following command to delete all routes for the broker localhost:10001:

      $ qpid-route route flush localhost:10001
10.4.7. Create and Delete Dynamic Exchange Routes
- To create and delete dynamic exchange routes, use the following syntax:

      qpid-route [OPTIONS] dynamic add <dest-broker> <src-broker> <exchange>
      qpid-route [OPTIONS] dynamic del <dest-broker> <src-broker> <exchange>
- Create a new topic exchange on each of two brokers:

      $ qpid-config -a localhost:10003 add exchange topic fed.topic
      $ qpid-config -a localhost:10004 add exchange topic fed.topic
- Create a dynamic exchange route that routes messages from the fed.topic exchange on the source broker localhost:10004 to the fed.topic exchange on the destination broker localhost:10003:

      $ qpid-route dynamic add localhost:10003 localhost:10004 fed.topic

  Internally, this creates a private autodelete queue on the source broker, and binds that queue to the fed.topic exchange on the source broker, using each binding associated with the fed.topic exchange on the destination broker.
exchange on the destination broker. - In many applications, messages published to the destination exchange must also be routed to the source exchange. Create a second dynamic exchange route, reversing the roles of the two exchanges:
$ qpid-route dynamic add localhost:10004 localhost:10003 fed.topic
- Specify the -d option to persist the exchange route. The exchange route will be restored if one or both of the brokers is restarted:

      $ qpid-route -d dynamic add localhost:10004 localhost:10003 fed.topic

  When an exchange route is durable, the private queue used to store messages for the route on the source exchange is also durable. If the connection between the brokers is lost, messages for the destination exchange continue to accumulate until it can be restored.
del
command takes the same arguments as theadd
command. Delete the first exchange route described above:$ qpid-route dynamic del localhost:10004 localhost:10003 fed.topic
Internally, this deletes the bindings on the source exchange for the private queues associated with the message route.
10.4.8. View Routes
Procedure 10.1. Using the route list command
- Create the following two routes:

      $ qpid-route dynamic add localhost:10003 localhost:10004 fed.topic
      $ qpid-route dynamic add localhost:10004 localhost:10003 fed.topic
- Use the route list command to show the routes associated with the broker:

      $ qpid-route route list localhost:10003
      localhost:10003 localhost:10004 fed.topic <dynamic>

  Note that this shows only one of the two routes created, namely the route for which localhost:10003 is a destination.
is a destination. - To view the route for which
localhost:10004
is a destination, runroute list
onlocalhost:10004
:$ qpid-route route list localhost:10004 localhost:10004 localhost:10003 fed.topic <dynamic>
Procedure 10.2. Using the route map command
- The route map command shows all routes associated with a broker, and recursively displays all routes for brokers involved in federation relationships with the given broker. For example, run the route map command for the two brokers configured above:

      $ qpid-route route map localhost:10003

      Finding Linked Brokers:
          localhost:10003... Ok
          localhost:10004... Ok

      Dynamic Routes:
        Exchange fed.topic:
          localhost:10004 <=> localhost:10003

      Static Routes:
        none found

  Note that the two dynamic exchange links are displayed as though they were one bidirectional link. The route map command is helpful for larger, more complex networks.
command is helpful for larger, more complex networks. - Configure a network with 16 dynamic exchange routes:
qpid-route dynamic add localhost:10001 localhost:10002 fed.topic qpid-route dynamic add localhost:10002 localhost:10001 fed.topic qpid-route dynamic add localhost:10003 localhost:10002 fed.topic qpid-route dynamic add localhost:10002 localhost:10003 fed.topic qpid-route dynamic add localhost:10004 localhost:10002 fed.topic qpid-route dynamic add localhost:10002 localhost:10004 fed.topic qpid-route dynamic add localhost:10002 localhost:10005 fed.topic qpid-route dynamic add localhost:10005 localhost:10002 fed.topic qpid-route dynamic add localhost:10005 localhost:10006 fed.topic qpid-route dynamic add localhost:10006 localhost:10005 fed.topic qpid-route dynamic add localhost:10006 localhost:10007 fed.topic qpid-route dynamic add localhost:10007 localhost:10006 fed.topic qpid-route dynamic add localhost:10006 localhost:10008 fed.topic qpid-route dynamic add localhost:10008 localhost:10006 fed.topic
- Use the route map command starting with any one broker to see the entire network:

      $ qpid-route route map localhost:10001

      Finding Linked Brokers:
          localhost:10001... Ok
          localhost:10002... Ok
          localhost:10003... Ok
          localhost:10004... Ok
          localhost:10005... Ok
          localhost:10006... Ok
          localhost:10007... Ok
          localhost:10008... Ok

      Dynamic Routes:
        Exchange fed.topic:
          localhost:10002 <=> localhost:10001
          localhost:10003 <=> localhost:10002
          localhost:10004 <=> localhost:10002
          localhost:10005 <=> localhost:10002
          localhost:10006 <=> localhost:10005
          localhost:10007 <=> localhost:10006
          localhost:10008 <=> localhost:10006

      Static Routes:
        none found
10.4.9. Resilient Connections
10.4.10. View Resilient Connections
The link list command can be used to show the resilient connections for a broker:

    $ qpid-route link list localhost:10001

    Host            Port    Transport Durable  State             Last Error
    =============================================================================
    localhost       10002   tcp          N     Operational
    localhost       10003   tcp          N     Operational
    localhost       10009   tcp          N     Waiting           Connection refused
In this output, Last Error contains the string representation of the last connection error received for the connection. State represents the state of the connection, and may be one of the following values:
Table 10.2. State values in $ qpid-route link list
Option | Description |
---|---|
Waiting | Waiting before attempting to reconnect. |
Connecting | Attempting to establish the connection. |
Operational | The connection has been established and can be used. |
Failed | The connection failed and will not retry (usually because authentication failed). |
Closed | The connection has been closed and will soon be deleted. |
Passive | If a cluster is federated to another cluster, only one of the nodes has an actual connection to the remote node. Other nodes in the cluster have a passive connection. |
10.4.11. Broker Federation Limitations Between 2.x and 3.x
A 2.x version of qpid-route must be provisioned to ensure backwards compatibility with 2.x brokers. This is due to differences in argument count mismatch handling in 2.x brokers. If qpid-route attempts to create a link where the source broker uses a newer version, compatibility will break, because 2.x versions of qpid-route do not include the improved argument count handling.
Chapter 11. Qpid JCA Adapter
11.1. JCA Adapter
11.2. Qpid JCA Adapter
11.3. Install the Qpid JCA Adapter
On Red Hat Enterprise Linux, the Qpid JCA Adapter is provided by the qpid-jca and qpid-jca-xarecovery packages. These RPM packages are included with the default MRG Messaging installation.

On other platforms, the adapter is provided by the JCA Adapter <JCA-VERSION> and JCA Adapter <JCA-VERSION> detached signature packages. These ZIP files can be obtained from the Downloads section of the MRG Messaging v. 2 (for non-Linux platforms) channel on the Red Hat Network.
11.4. Qpid JCA Adapter Configuration
11.4.1. Per-Application Server Configuration Information
README-<server-platform>.txt
.
See Also:
11.4.2. JCA Adapter ra.xml Configuration
The ra.xml file contains configuration parameters for the JCA Adapter. In some application server environments this file is edited directly to change configuration; in other environments (such as JBoss EAP 5) it is overridden by configuration in a *-ds.xml file.

The following properties in the ra.xml file can be configured or overridden:
Table 11.1. ResourceAdapter Properties
Parameter | Description | Default Value |
---|---|---|
ClientId | Client ID for the connection | client_id |
SetupAttempts | Number of setup attempts before failing | 5 |
SetupInterval | Interval between setup attempts in milliseconds | 5000 |
UseLocalTx | Use local transactions rather than XA | false |
Host | Broker host | localhost |
Port | Broker port | 5672 |
Path | Virtual Path for Connection Factory | test |
ConnectionURL | Connection URL | amqp://anonymous:passwd@client/test?brokerlist='tcp://localhost?sasl_mechs='PLAIN'' |
UseConnectionPerHandler | Use a JMS Connection per MessageHandler | true |
Table 11.2. Outbound ResourceAdapter Properties
Parameter | Description | Default Value |
---|---|---|
SessionDefaultType | Default session type | javax.jms.Queue |
UseTryLock | Specify lock timeout in seconds | 0 |
UseLocalTx | Use local transactions rather than XA | false |
ClientId | Client ID for the connection | client_id |
ConnectionURL | Connection URL | |
Host | Broker host | localhost |
Port | Broker port | 5672 |
Path | Virtual Path for Connection Factory | test |
Additionally, the ra.xml file contains configuration for the Inbound Resource Adapter and Administered Objects (AdminObject). The configuration for these two differs from the Resource Adapter and Outbound Resource Adapter configuration: the latter set values for configuration parameters in the ra.xml file, while the Inbound Resource Adapter and AdminObject define configuration parameters but do not set them. It is the responsibility of the Administered Object mbean definition to set the properties defined in ra.xml.
Table 11.3. AdminObject Properties
AdminObject Class | Property |
---|---|
org.apache.qpid.ra.admin.QpidQueue | DestinationAddress |
org.apache.qpid.ra.admin.QpidTopic | DestinationAddress |
org.apache.qpid.ra.admin.QpidConnectionFactoryProxy | ConnectionURL |
See Also:
11.4.3. Transaction Support
The Qpid JCA Adapter supports three transaction types: XA, LocalTransactions and NoTransaction.
11.4.4. Transaction Limitations
- The Qpid C++ broker does not support the use of XA within the context of clustered brokers. If you are running a cluster, you must configure the adapter to use LocalTransactions.
- XARecovery is currently not implemented. In case of a system failure, incomplete (or in doubt) transactions must be manually resolved by an administrator or other qualified personnel.
11.5. Deploying the Qpid JCA Adapter on JBoss EAP 5
11.5.1. Deploy the Qpid JCA adapter on JBoss EAP 5
Procedure 11.1. To deploy the Qpid JCA adapter for JBoss EAP
- Locate the qpid-ra-<version>.rar file. It is a zip archive which contains the resource adapter, the Qpid Java client .jar files and the META-INF directory.
- Copy the qpid-ra-<version>.rar file to your JBoss deploy directory. The JBoss deploy directory is JBOSS_ROOT/server/<server-name>/deploy, where JBOSS_ROOT denotes the root directory of your JBoss installation and <server-name> denotes the name of your deployment server.
- A successful adapter installation is accompanied by the following message:

      INFO [QpidResourceAdapter] Qpid resource adaptor started

At this point, the adapter is deployed and ready for configuration.
11.5.2. JCA Configuration on JBoss EAP 5
11.5.2.1. JCA Adapter Configuration File
- Changes
- Updated April 2013.
On JBoss EAP 5, the Qpid JCA adapter is configured with a *-ds.xml file. The Qpid JCA adapter has a global ra.xml file, per the JCA specification, but the default set of values in this file is almost always overridden via the *-ds.xml configuration file.

The ResourceAdapter configuration provides generic properties for inbound and outbound connectivity. However, these properties can be overridden when deploying ManagedConnectionFactories and inbound activations using the standard JBoss configuration artifacts: the *-ds.xml file and the MDB ActivationSpec. A sample *-ds.xml file, qpid-jca-ds.xml, is located in the directory /usr/share/doc/qpid-jca-<VERSION>/example/conf/.

The directory /usr/share/qpid-jca contains the general README.txt file, which provides a detailed description of all the properties associated with the Qpid JCA Adapter.
11.5.2.2. ConnectionFactory Configuration
11.5.2.2.1. ConnectionFactory
11.5.2.2.2. ConnectionFactory Configuration in EAP 5
On EAP 5, the ConnectionFactory is configured with a *-ds.xml file. A sample file (qpid-jca-ds.xml) is provided with your distribution. This file can be modified to suit your development or deployment needs.
11.5.2.2.3. XAConnectionFactory Example
    <tx-connection-factory>
      <jndi-name>QpidJMSXA</jndi-name>
      <xa-transaction/>
      <rar-name>qpid-ra-<ra-version>.rar</rar-name>
      <connection-definition>org.apache.qpid.ra.QpidRAConnectionFactory</connection-definition>
      <config-property name="ConnectionURL">amqp://guest:guest@/test?brokerlist='tcp://localhost:5672?sasl_mechs='ANONYMOUS''</config-property>
      <max-pool-size>20</max-pool-size>
    </tx-connection-factory>
- The
QpidJMSXA
connection factory defines an XA-capable ManagedConnectionFactory. - You must insert your particular ra version for the
rar-name
property. - The
jndi-name
andConnectionURL
properties can be modified to suit your environment.
This ConnectionFactory will be bound into JNDI with the following syntax:

    java:<jndi-name>

In this case:

    java:QpidJMSXA
11.5.2.2.4. Local ConnectionFactory Example
ConnectionFactory
portion of the sample file for local transactions:
    <tx-connection-factory>
      <jndi-name>QpidJMS</jndi-name>
      <rar-name>qpid-ra-<ra-version>.rar</rar-name>
      <local-transaction/>
      <config-property name="useLocalTx" type="java.lang.Boolean">true</config-property>
      <config-property name="ConnectionURL">amqp://anonymous:@client/test?brokerlist='tcp://localhost:5672?sasl_mechs='ANONYMOUS''</config-property>
      <connection-definition>org.apache.qpid.ra.QpidRAConnectionFactory</connection-definition>
      <max-pool-size>20</max-pool-size>
    </tx-connection-factory>
- The QpidJMS connection factory defines a non-XA ConnectionFactory. Typically this is used as a specialized ConnectionFactory where XA is not desired, or if you are running with a clustered Qpid broker configuration that does not currently support XA.
- You must insert your particular resource adapter version for the rar-name property.
- The jndi-name and ConnectionURL properties can be modified to suit your environment.
The ConnectionFactory will be bound into the Java Naming and Directory Interface (JNDI) with the following syntax:
java:<jndi-name>
For example:
java:QpidJMS
11.5.2.3. Administered Object Configuration
11.5.2.3.1. Administered Objects in EAP 5
In JBoss EAP 5, administered objects are configured in the *-ds.xml file alongside your ConnectionFactory configurations. The sample qpid-jca-ds.xml file provides examples of such objects: a JMS queue, a JMS topic, and a connection factory.
11.5.2.3.2. JMS Queue Administered Object Example
Changes: Updated April 2013.
<mbean code="org.jboss.resource.deployment.AdminObject" name="qpid.jca:name=HelloQueue">
  <attribute name="JNDIName">HelloQueue</attribute>
  <depends optional-attribute-name="RARName">jboss.jca:service=RARDeployment,name='qpid-ra-<ra-version>.rar'</depends>
  <attribute name="Type">org.apache.qpid.ra.admin.QpidQueue</attribute>
  <attribute name="Properties">DestinationAddress=amq.direct</attribute>
</mbean>
The above XML defines a JMS Queue which is bound into JNDI as:
HelloQueue
The DestinationAddress property can be customized for your environment. Refer to the Qpid Java client documentation for specific configuration options.
11.5.2.3.3. JMS Topic Administered Object Example
<mbean code="org.jboss.resource.deployment.AdminObject" name="qpid.jca:name=HelloTopic">
  <attribute name="JNDIName">HelloTopic</attribute>
  <depends optional-attribute-name="RARName">jboss.jca:service=RARDeployment,name='qpid-ra-<ra-version>.rar'</depends>
  <attribute name="Type">org.apache.qpid.ra.admin.QpidTopic</attribute>
  <attribute name="Properties">DestinationAddress=amq.topic</attribute>
</mbean>
The above XML defines a JMS Topic which is bound into JNDI as:
HelloTopic
The DestinationAddress property can be customized for your environment. Refer to the Qpid Java client documentation for specific configuration options.
11.5.2.3.4. ConnectionFactory Administered Object Example
<mbean code="org.jboss.resource.deployment.AdminObject" name="qpid.jca:name=QpidConnectionFactory">
  <attribute name="JNDIName">QpidConnectionFactory</attribute>
  <depends optional-attribute-name="RARName">jboss.jca:service=RARDeployment,name='qpid-ra-<ra-version>.rar'</depends>
  <attribute name="Type">javax.jms.ConnectionFactory</attribute>
  <attribute name="Properties">ConnectionURL=amqp://anonymous:@client/test?brokerlist='tcp://localhost:5672?sasl_mechs='ANONYMOUS''</attribute>
</mbean>
The above XML defines a ConnectionFactory that can be used for JBoss EAP 5 and also other external clients. Typically, this connection factory is used by standalone or 'thin' clients which do not require an application server. This object is bound into the JBoss EAP 5 JNDI tree as:
QpidConnectionFactory
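The ConnectionURL format used throughout these examples follows the shape amqp://[user[:password]]@[clientid]/virtualhost?brokerlist='...'. The following Python sketch is illustrative only (it is not part of the Qpid client library); it decomposes a ConnectionURL of that shape into its main parts, which can be useful when checking a configuration:

```python
def parse_connection_url(url):
    """Split a Qpid ConnectionURL into its main parts (illustrative sketch only)."""
    assert url.startswith("amqp://")
    rest = url[len("amqp://"):]
    credentials, _, rest = rest.partition("@")       # user:password before '@'
    user, _, password = credentials.partition(":")
    location, _, query = rest.partition("?")         # clientid/virtualhost before '?'
    clientid, _, virtualhost = location.partition("/")
    brokerlist = ""
    if query.startswith("brokerlist='"):
        brokerlist = query[len("brokerlist='"):-1]   # drop the closing quote
    return {"user": user, "password": password, "clientid": clientid,
            "virtualhost": virtualhost, "brokerlist": brokerlist}

url = "amqp://anonymous:@client/test?brokerlist='tcp://localhost:5672?sasl_mechs='ANONYMOUS''"
parts = parse_connection_url(url)
print(parts["user"], parts["clientid"], parts["virtualhost"])
print(parts["brokerlist"])
```

Note that the brokerlist value may itself contain a '?' and quoted options, so only the first '?' in the URL separates the location from the query portion.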
11.6. Deploying the Qpid JCA Adapter on JBoss EAP 6
11.6.1. Deploy the Qpid JCA Adapter on JBoss EAP 6
Procedure 11.2. To deploy the Qpid JCA adapter for JBoss EAP
- Locate the qpid-ra-<version>.rar file. It is a zip archive that contains the resource adapter, the Qpid Java client .jar files, and the META-INF directory.
- Copy the qpid-ra-<version>.rar file to your JBoss deployment directory, JBOSS_ROOT/<server-config>/deployments, where JBOSS_ROOT is the root directory of your JBoss installation and <server-config> is the name of your deployment server configuration.
11.6.2. JCA Configuration on JBoss EAP 6
11.6.2.1. JCA Adapter Configuration Files in JBoss EAP 6
The JCA adapter is configured in one of the following files in JBOSS_ROOT/<server-config>/configuration:
- <server-config>-full.xml
- <server-config>-full-ha.xml
- <server-config>.xml
11.6.2.2. Replace the Default Messaging Provider with the Qpid JCA Adapter
<subsystem xmlns="urn:jboss:domain:ejb3:1.2">
  <session-bean>
    <stateless>
      <bean-instance-pool-ref pool-name="slsb-strict-max-pool"/>
    </stateless>
    <stateful default-access-timeout="5000" cache-ref="simple"/>
    <singleton default-access-timeout="5000"/>
  </session-bean>
  <mdb>
    <resource-adapter-ref resource-adapter-name="qpid-ra-<rar-version>.rar"/>
    <bean-instance-pool-ref pool-name="mdb-strict-max-pool"/>
  </mdb>
  <pools>
    <bean-instance-pools>
      <strict-max-pool name="slsb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/>
      <strict-max-pool name="mdb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/>
    </bean-instance-pools>
  </pools>
  <caches>
    <cache name="simple" aliases="NoPassivationCache"/>
    <cache name="passivating" passivation-store-ref="file" aliases="SimpleStatefulCache"/>
  </caches>
  <passivation-stores>
    <file-passivation-store name="file"/>
  </passivation-stores>
  <async thread-pool-name="default"/>
  <timer-service thread-pool-name="default">
    <data-store path="timer-service-data" relative-to="jboss.server.data.dir"/>
  </timer-service>
  <remote connector-ref="remoting-connector" thread-pool-name="default"/>
  <thread-pools>
    <thread-pool name="default">
      <max-threads count="10"/>
      <keepalive-time time="100" unit="milliseconds"/>
    </thread-pool>
  </thread-pools>
</subsystem>
The <mdb> element is where the Qpid resource adapter replaces the default messaging provider:
<mdb>
  <resource-adapter-ref resource-adapter-name="qpid-ra-<rar-version>.rar"/>
  <bean-instance-pool-ref pool-name="mdb-strict-max-pool"/>
</mdb>
11.6.2.3. Configuration Methods
- Directly edit the existing configuration file.
- Copy the existing configuration file, edit the copy, then start the server using the new configuration file with the command:
JBOSS_HOME/bin/standalone.sh -c your-modified-config.xml
11.6.2.4. Example Minimal EAP 6 Configuration
<subsystem xmlns="urn:jboss:domain:resource-adapters:1.0">
  <resource-adapters>
    <resource-adapter>
      <archive>qpid-ra-<rar-version>.rar</archive>
      <transaction-support>XATransaction</transaction-support>
      <config-property name="connectionURL">amqp://anonymous:passwd@client/test?brokerlist='tcp://localhost?sasl_mechs='PLAIN''</config-property>
      <config-property name="TransactionManagerLocatorClass">org.apache.qpid.ra.tm.JBoss7TransactionManagerLocator</config-property>
      <config-property name="TransactionManagerLocatorMethod">getTm</config-property>
      <connection-definitions>
        <connection-definition class-name="org.apache.qpid.ra.QpidRAManagedConnectionFactory" jndi-name="QpidJMSXA" pool-name="QpidJMSXA">
          <config-property name="connectionURL">amqp://anonymous:passwd@client/test?brokerlist='tcp://localhost?sasl_mechs='PLAIN''</config-property>
          <config-property name="SessionDefaultType">javax.jms.Queue</config-property>
        </connection-definition>
      </connection-definitions>
      <admin-objects>
        <admin-object class-name="org.apache.qpid.ra.admin.QpidTopicImpl" jndi-name="java:jboss/exported/GoodByeTopic" use-java-context="false" pool-name="GoodByeTopic">
          <config-property name="DestinationAddress">amq.topic/hello.Topic</config-property>
        </admin-object>
        <admin-object class-name="org.apache.qpid.ra.admin.QpidQueueImpl" jndi-name="java:jboss/exported/HelloQueue" use-java-context="false" pool-name="HelloQueue">
          <config-property name="DestinationAddress">hello.Queue;{create:always, node:{type:queue, x-declare:{auto-delete:true}}}</config-property>
        </admin-object>
      </admin-objects>
    </resource-adapter>
  </resource-adapters>
</subsystem>
11.6.2.5. Further Resources
Chapter 12. Management Tools and Consoles
12.1. Command-line utilities
12.1.1. Command-line Management utilities
Table 12.1. Command-line Management utilities
| Utility | Description |
|---|---|
| qpid-config | Display and configure exchanges, queues, and bindings in the broker |
| qpid-tool | Access configuration, statistics, and control within the broker |
| qpid-queue-stats | Monitor the size and enqueue/dequeue rates of queues in a broker |
| qpid-ha | Configure and view clusters |
| qpid-route | Configure federated routes among brokers |
| qpid-stat | Display details and statistics for various broker objects |
| qpid-printevents | Subscribe to events from a broker and print details of raised events to the console |
12.1.2. Using qpid-config
- View the full list of commands by running the qpid-config --help command from the shell prompt:
$ qpid-config --help
Usage:  qpid-config [OPTIONS]
        qpid-config [OPTIONS] exchanges [filter-string]
        qpid-config [OPTIONS] queues [filter-string]
        qpid-config [OPTIONS] add exchange <type> <name> [AddExchangeOptions]
        qpid-config [OPTIONS] del exchange <name>
...[output truncated]...
- View a summary of all exchanges and queues by running qpid-config without options:
$ qpid-config
Total Exchanges: 6
    topic: 2
    headers: 1
    fanout: 1
    direct: 2
Total Queues: 7
    durable: 0
    non-durable: 7
- List information on all existing queues by using the queues command:
$ qpid-config queues
Queue Name                                 Attributes
=================================================================
my-queue                                   --durable
qmfc-v2-hb-localhost.localdomain.20293.1   auto-del excl --limit-policy=ring
qmfc-v2-localhost.localdomain.20293.1      auto-del excl
qmfc-v2-ui-localhost.localdomain.20293.1   auto-del excl --limit-policy=ring
reply-localhost.localdomain.20293.1        auto-del excl
topic-localhost.localdomain.20293.1        auto-del excl --limit-policy=ring
- Add new queues with the add queue command and the name of the queue to create:
$ qpid-config add queue queue_name
- To delete a queue, use the del queue command with the name of the queue to remove:
$ qpid-config del queue queue_name
- List information on all existing exchanges with the exchanges command. Add the -r option ("recursive") to also see binding information:
$ qpid-config -r exchanges
Exchange '' (direct)
    bind pub_start => pub_start
    bind pub_done => pub_done
    bind sub_ready => sub_ready
    bind sub_done => sub_done
    bind perftest0 => perftest0
    bind mgmt-3206ff16-fb29-4a30-82ea-e76f50dd7d15 => mgmt-3206ff16-fb29-4a30-82ea-e76f50dd7d15
    bind repl-3206ff16-fb29-4a30-82ea-e76f50dd7d15 => repl-3206ff16-fb29-4a30-82ea-e76f50dd7d15
Exchange 'amq.direct' (direct)
    bind repl-3206ff16-fb29-4a30-82ea-e76f50dd7d15 => repl-3206ff16-fb29-4a30-82ea-e76f50dd7d15
    bind repl-df06c7a6-4ce7-426a-9f66-da91a2a6a837 => repl-df06c7a6-4ce7-426a-9f66-da91a2a6a837
    bind repl-c55915c2-2fda-43ee-9410-b1c1cbb3e4ae => repl-c55915c2-2fda-43ee-9410-b1c1cbb3e4ae
Exchange 'amq.topic' (topic)
Exchange 'amq.fanout' (fanout)
Exchange 'amq.match' (headers)
Exchange 'qpid.management' (topic)
    bind mgmt.# => mgmt-3206ff16-fb29-4a30-82ea-e76f50dd7d15
- Add new exchanges with the add exchange command. Specify the type (direct, topic, or fanout) along with the name of the exchange to create. You can also add the --durable option to make the exchange durable:
$ qpid-config add exchange direct exchange_name --durable
- To delete an exchange, use the del exchange command with the name of the exchange to remove:
$ qpid-config del exchange exchange_name
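The three exchange types named above (direct, topic, and fanout) differ only in how a message's routing key is matched against binding keys. The following Python sketch is a toy model of those matching rules, not the broker's implementation; the topic rule here supports only the '*' (exactly one word) and '#' (zero or more words) wildcards over dot-separated keys:

```python
def direct_match(binding_key, routing_key):
    # direct exchange: exact match between binding key and routing key
    return binding_key == routing_key

def fanout_match(binding_key, routing_key):
    # fanout exchange: every bound queue receives every message
    return True

def topic_match(binding_key, routing_key):
    # topic exchange: '*' matches one word, '#' matches zero or more words
    def match(pattern, words):
        if not pattern:
            return not words
        head, rest = pattern[0], pattern[1:]
        if head == "#":
            return any(match(rest, words[i:]) for i in range(len(words) + 1))
        if words and (head == "*" or head == words[0]):
            return match(rest, words[1:])
        return False
    return match(binding_key.split("."), routing_key.split("."))

print(topic_match("usa.#", "usa.news.tech"))   # True: '#' matches 'news.tech'
print(topic_match("usa.*", "usa.news.tech"))   # False: '*' matches only one word
```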
12.1.3. Using qpid-tool
- qpid-tool creates a connection to a broker, and commands are run within the tool rather than at the shell prompt itself. To create the connection, run qpid-tool at the shell prompt with the name or IP address of the machine running the broker you wish to view. You can also append a TCP port number with a : character:
$ qpid-tool localhost
Management Tool for QPID
qpid:
- If the connection is successful, qpid-tool displays a qpid: prompt. Type help at this prompt to see the full list of commands:
qpid: help
Management Tool for QPID
Commands:
    list                  - Print summary of existing objects by class
    list <className>      - Print list of objects of the specified class
    list <className> all  - Print contents of all objects of specified class
...[output truncated]...
qpid-tool uses the word "objects" to refer to queues, exchanges, brokers, and other such entities. To view a list of all existing objects, type list at the prompt:
# qpid-tool
Management Tool for QPID
qpid: list
Summary of Objects by Type:
    Package                 Class         Active  Deleted
    =======================================================
    org.apache.qpid.broker  exchange      8       0
    org.apache.qpid.broker  broker        1       0
    org.apache.qpid.broker  binding       16      12
    org.apache.qpid.broker  session       2       1
    org.apache.qpid.broker  connection    2       1
    org.apache.qpid.broker  vhost         1       0
    org.apache.qpid.broker  queue         6       5
    org.apache.qpid.broker  system        1       0
    org.apache.qpid.broker  subscription  6       5
- You can choose which objects to list by also specifying a class:
qpid: list system
Object Summary:
    ID   Created   Destroyed  Index
    ========================================================================
    167  07:34:13  -          UUID('b3e2610e-5420-49ca-8306-dca812db647f')
- To view details of an object class, use the schema command and specify the class:
qpid: schema queue
Schema for class 'qpid.queue':
    Element                Type          Unit         Access      Description
    ===================================================================================================================
    vhostRef               reference                  ReadCreate  index
    name                   short-string               ReadCreate  index
    durable                boolean                    ReadCreate
    autoDelete             boolean                    ReadCreate
    exclusive              boolean                    ReadCreate
    arguments              field-table                ReadOnly    Arguments supplied in queue.declare
    storeRef               reference                  ReadOnly    Reference to persistent queue (if durable)
    msgTotalEnqueues       uint64        message                  Total messages enqueued
    msgTotalDequeues       uint64        message                  Total messages dequeued
    msgTxnEnqueues         uint64        message                  Transactional messages enqueued
    msgTxnDequeues         uint64        message                  Transactional messages dequeued
    msgPersistEnqueues     uint64        message                  Persistent messages enqueued
    msgPersistDequeues     uint64        message                  Persistent messages dequeued
    msgDepth               uint32        message                  Current size of queue in messages
    msgDepthHigh           uint32        message                  Current size of queue in messages (High)
    msgDepthLow            uint32        message                  Current size of queue in messages (Low)
    byteTotalEnqueues      uint64        octet                    Total messages enqueued
    byteTotalDequeues      uint64        octet                    Total messages dequeued
    byteTxnEnqueues        uint64        octet                    Transactional messages enqueued
    byteTxnDequeues        uint64        octet                    Transactional messages dequeued
    bytePersistEnqueues    uint64        octet                    Persistent messages enqueued
    bytePersistDequeues    uint64        octet                    Persistent messages dequeued
    byteDepth              uint32        octet                    Current size of queue in bytes
    byteDepthHigh          uint32        octet                    Current size of queue in bytes (High)
    byteDepthLow           uint32        octet                    Current size of queue in bytes (Low)
    enqueueTxnStarts       uint64        transaction              Total enqueue transactions started
    enqueueTxnCommits      uint64        transaction              Total enqueue transactions committed
    enqueueTxnRejects      uint64        transaction              Total enqueue transactions rejected
    enqueueTxnCount        uint32        transaction              Current pending enqueue transactions
    enqueueTxnCountHigh    uint32        transaction              Current pending enqueue transactions (High)
    enqueueTxnCountLow     uint32        transaction              Current pending enqueue transactions (Low)
    dequeueTxnStarts       uint64        transaction              Total dequeue transactions started
    dequeueTxnCommits      uint64        transaction              Total dequeue transactions committed
    dequeueTxnRejects      uint64        transaction              Total dequeue transactions rejected
    dequeueTxnCount        uint32        transaction              Current pending dequeue transactions
    dequeueTxnCountHigh    uint32        transaction              Current pending dequeue transactions (High)
    dequeueTxnCountLow     uint32        transaction              Current pending dequeue transactions (Low)
    consumers              uint32        consumer                 Current consumers on queue
    consumersHigh          uint32        consumer                 Current consumers on queue (High)
    consumersLow           uint32        consumer                 Current consumers on queue (Low)
    bindings               uint32        binding                  Current bindings
    bindingsHigh           uint32        binding                  Current bindings (High)
    bindingsLow            uint32        binding                  Current bindings (Low)
    unackedMessages        uint32        message                  Messages consumed but not yet acked
    unackedMessagesHigh    uint32        message                  Messages consumed but not yet acked (High)
    unackedMessagesLow     uint32        message                  Messages consumed but not yet acked (Low)
    messageLatencySamples  delta-time    nanosecond               Broker latency through this queue (Samples)
    messageLatencyMin      delta-time    nanosecond               Broker latency through this queue (Min)
    messageLatencyMax      delta-time    nanosecond               Broker latency through this queue (Max)
    messageLatencyAverage  delta-time    nanosecond               Broker latency through this queue (Average)
- To exit the tool and return to the shell, type quit at the prompt:
qpid: quit
Exiting...
12.1.4. Using qpid-queue-stats
Run qpid-queue-stats to launch the tool. By default it monitors the queues on the local broker:
Queue Name                Sec    Depth  Enq Rate  Deq Rate
========================================================================================
message_queue             10.00  11224  0.00      54.01
qmfc-v2-ui-radhe.26001.1  10.00  0      0.10      0.10
topic-radhe.26001.1       10.00  0      0.20      0.20
message_queue             10.01  9430   0.00      179.29
qmfc-v2-ui-radhe.26001.1  10.01  0      0.10      0.10
topic-radhe.26001.1       10.01  0      0.20      0.20
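The per-interval rates in output like the above are simple differences between successive samples of the queue counters, divided by the sample interval. The following is a hedged sketch of that computation (not the tool's actual code), using hypothetical counter values:

```python
def rates(prev, curr):
    """Compute enqueue/dequeue rates in msgs/sec from two samples.

    Each sample is (time_seconds, total_enqueues, total_dequeues);
    this is an illustrative model, not qpid-queue-stats itself."""
    dt = curr[0] - prev[0]
    enq_rate = (curr[1] - prev[1]) / dt
    deq_rate = (curr[2] - prev[2]) / dt
    return round(enq_rate, 2), round(deq_rate, 2)

# hypothetical samples taken 10 seconds apart
print(rates((0.0, 0, 0), (10.0, 0, 540.1)))  # → (0.0, 54.01)
```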
To monitor the queues on another server, use the -a switch and provide a remote server address, and optionally the remote port and authentication credentials.
For example, to monitor the queues on the broker at 192.168.1.145, issue the command:
qpid-queue-stats -a 192.168.1.145
To monitor the queues on the broker at broker1.mydomain.com:
qpid-queue-stats -a broker1.mydomain.com
To monitor the queues on broker1.mydomain.com, where the broker is running on port 8888:
qpid-queue-stats -a broker1.mydomain.com:8888
To monitor the queues on the broker at 192.168.1.145, which requires authentication:
qpid-queue-stats -a username/password@192.168.1.145
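The -a argument in the examples above follows the form [username/password@]host[:port]. As an illustration only (this helper is hypothetical and not part of qpid-queue-stats), such an address can be decomposed like this, assuming the default AMQP port 5672 when none is given:

```python
def parse_broker_address(addr, default_port=5672):
    """Split '[username/password@]host[:port]' (hypothetical helper)."""
    creds, sep, hostport = addr.rpartition("@")
    username = password = None
    if sep:
        username, _, password = creds.partition("/")
    host, _, port = hostport.partition(":")
    return username, password, host, int(port) if port else default_port

print(parse_broker_address("username/password@192.168.1.145"))
# → ('username', 'password', '192.168.1.145', 5672)
print(parse_broker_address("broker1.mydomain.com:8888"))
# → (None, None, 'broker1.mydomain.com', 8888)
```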
Appendix A. Exchange and Queue Declaration Arguments
A.1. Exchange and Queue Argument Reference
Changes
- qpid.last_value_queue and qpid.last_value_queue_no_browse deprecated and removed.
- qpid.msg_sequence queue argument replaced by qpid.queue_msg_sequence.
- ring_strict and flow_to_disk are no longer valid qpid.policy_type values.
- qpid.persist_last_node deprecated and removed.
Exchange options
qpid.exclusive-binding (bool)
- Ensures that a given binding key is associated with only one queue.
qpid.ive (bool)
- If set to "true", the exchange is an initial value exchange, which differs from other exchanges in only one way: the last message sent to the exchange is cached, and if a new queue is bound to the exchange, the exchange attempts to route this message to the queue if the message matches the binding criteria. This allows a new queue to use the last received message as an initial value.
qpid.msg_sequence (bool)
- If set to "true", the exchange inserts a sequence number named "qpid.msg_sequence" into the message headers of each message. The type of this sequence number is int64. The sequence number for the first message routed from the exchange is 1; it is incremented sequentially for each subsequent message. The sequence number is reset to 1 when the Qpid broker is restarted.
qpid.sequence_counter (int64)
- Start qpid.msg_sequence counting at the given number.
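The qpid.msg_sequence and qpid.sequence_counter behavior described above can be modeled in a few lines. This is a hedged sketch of the semantics only (the broker implements this internally in C++), under the assumption that the first routed message receives the configured starting value:

```python
class SequencingExchange:
    """Toy model of an exchange declared with qpid.msg_sequence=true."""

    def __init__(self, sequence_counter=1):
        # qpid.sequence_counter sets where counting starts (default 1);
        # a broker restart would reset this to 1
        self._next = sequence_counter

    def route(self, message_headers):
        # stamp the header the broker would insert, then advance the counter
        message_headers["qpid.msg_sequence"] = self._next
        self._next += 1
        return message_headers

ex = SequencingExchange()
print(ex.route({})["qpid.msg_sequence"])  # → 1 for the first routed message
print(ex.route({})["qpid.msg_sequence"])  # → 2 for the next
```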
Queue options
no-local (bool)
- Specifies that the queue should discard any messages enqueued by sessions on the same connection as the one that declares the queue.
qpid.alert_count (uint32_t)
- If the queue message count goes above this value, an alert is sent.
qpid.alert_repeat_gap (int64_t)
- Controls the minimum interval between alerts, in seconds. The default value is 60 seconds.
qpid.alert_size (int64_t)
- If the queue size in bytes goes above this value, an alert is sent.
qpid.auto_delete_timeout (bool)
- If a queue is configured to be automatically deleted, it is deleted after the number of seconds specified here.
qpid.browse-only (bool)
- All users of the queue are forced to browse. Limit queue size with ring, LVQ, or TTL. Note that this argument name uses a hyphen rather than an underscore.
qpid.file_count (int)
- Set the number of files in the persistence journal for the queue. The default value is 8.
qpid.file_size (int64)
- Set the number of pages in each journal file (each page is 64KB). The default value is 24.
qpid.flow_resume_count (uint32_t)
- Flow resume threshold value as a message count.
qpid.flow_resume_size (uint64_t)
- Flow resume threshold value in bytes.
qpid.flow_stop_count (uint32_t)
- Flow stop threshold value as a message count.
qpid.flow_stop_size (uint64_t)
- Flow stop threshold value in bytes.
qpid.last_value_queue_key (string)
- Defines the key to use for a last value queue.
qpid.max_count (uint32_t)
- The maximum number of messages that a queue can contain before the action dictated by the policy_type is taken.
qpid.max_size (uint64_t)
- The maximum byte size of message data that a queue can contain before the action dictated by the policy_type is taken.
qpid.policy_type (string)
- Sets the default behavior for controlling queue size. Valid values are reject and ring.
qpid.priorities (size_t)
- The number of distinct priority levels recognized by the queue (up to a maximum of 10). The default value is 1 level.
qpid.queue_msg_sequence (string)
- Causes a custom header with the specified name to be added to enqueued messages. This header is automatically populated with a sequence number.
qpid.trace.exclude (string)
- Do not send messages that include one of the given (comma-separated) trace ids.
qpid.trace.id (string)
- Adds the given trace id to the application header "x-qpid.trace" in messages sent from the queue.
x-qpid-maximum-message-count
- An alias for qpid.alert_count.
x-qpid-maximum-message-size
- An alias for qpid.alert_size.
x-qpid-minimum-alert-repeat-gap
- An alias for qpid.alert_repeat_gap.
x-qpid-priorities
- An alias for qpid.priorities.
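The interaction between qpid.max_count and qpid.policy_type described above can be sketched as follows. This is an illustrative model, not the broker's code: with the reject policy, an enqueue beyond the limit fails, while with the ring policy, the oldest message is discarded to make room for the new one:

```python
from collections import deque

class BoundedQueue:
    """Toy model of qpid.max_count combined with qpid.policy_type."""

    def __init__(self, max_count, policy_type="reject"):
        self.max_count = max_count
        self.policy_type = policy_type   # 'reject' or 'ring'
        self.messages = deque()

    def enqueue(self, msg):
        if len(self.messages) >= self.max_count:
            if self.policy_type == "reject":
                # reject: refuse the new message when the limit is reached
                raise RuntimeError("resource-limit-exceeded: queue is full")
            elif self.policy_type == "ring":
                # ring: drop the oldest message to make room
                self.messages.popleft()
        self.messages.append(msg)

q = BoundedQueue(max_count=2, policy_type="ring")
for m in ("a", "b", "c"):
    q.enqueue(m)
print(list(q.messages))  # → ['b', 'c']: 'a' was discarded by the ring policy
```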
Appendix B. OpenSSL Certificate Reference
B.1. Reference of Certificates
This reference of openssl commands assumes familiarity with SSL. For more background information on SSL, refer to the OpenSSL documentation at www.openssl.org.
Generating Certificates
Procedure B.1. Create a Private Key
- Use this command to generate a 1024-bit RSA private key with file encryption. If the key file is encrypted, the password is needed every time an application accesses the private key:
# openssl genrsa -des3 -out mykey.pem 1024
Use this command to generate a key without file encryption:
# openssl genrsa -out mykey.pem 1024
Procedure B.2. Create a Self-Signed Certificate
- The -nodes option causes the key to be stored without encryption. OpenSSL will prompt for the values needed to create the certificate:
# openssl req -x509 -nodes -days 7 -newkey rsa:1024 -keyout mykey.pem -out mycert.pem
- The -subj option can be used to specify values and avoid the interactive prompts, for example:
# openssl req -x509 -nodes -days 7 -subj '/C=US/ST=NC/L=Raleigh/CN=www.redhat.com' -newkey rsa:1024 -keyout mykey.pem -out mycert.pem
- The -new and -key options generate a certificate using an existing key instead of generating a new one:
# openssl req -x509 -nodes -days 7 -new -key mykey.pem -out mycert.pem
Create a Certificate Signing Request
# openssl req -new -key mykey.pem -out myreq.pem
Create Your Own Certificate Authority
- Create a self-signed certificate for the CA, as described in Procedure B.2, “Create a Self-Signed Certificate”.
- OpenSSL needs the following files set up for the CA to sign certificates. On a Red Hat Enterprise Linux system with a fresh OpenSSL installation using a default configuration, set up the following files:
- Set the path for the CA certificate file as /etc/pki/CA/cacert.pem.
- Set the path for the CA private key file as /etc/pki/CA/private/cakey.pem.
- Create a zero-length index file at /etc/pki/CA/index.txt.
- Create a file containing an initial serial number (for example, 01) at /etc/pki/CA/serial.
- The following steps must be performed on RHEL 5:
  - Create the directory where new certificates will be stored: /etc/pki/CA/newcerts.
  - Change to the certificate directory: cd /etc/pki/tls/certs.
- The following command signs a CSR using the CA:
# openssl ca -notext -out mynewcert.pem -infiles myreq.pem
Install a Certificate
- For OpenSSL to recognize a certificate, a hash-based symbolic link must be generated in the certs directory. /etc/pki/tls is the parent of the certs directory in Red Hat Enterprise Linux's version of OpenSSL. Use the openssl version -d command to check the parent directory:
# openssl version -d
OPENSSLDIR: "/etc/pki/tls"
- Create the required symbolic link for a certificate using the following command:
# ln -s certfile `openssl x509 -noout -hash -in certfile`.0
It is possible for more than one certificate to have the same hash value. If this is the case, change the suffix on the link name to a higher number. For example:
# ln -s certfile `openssl x509 -noout -hash -in certfile`.4
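The suffix-increment rule above (use .0 unless that link name is taken, then .1, and so on) can be expressed as a small helper. This function is illustrative only; OpenSSL's own c_rehash utility performs the equivalent bookkeeping. It picks the first free link name for a given subject hash from a set of existing file names:

```python
def next_link_name(subject_hash, existing_names):
    """Return '<hash>.N' for the first suffix N not already in use (illustrative)."""
    n = 0
    while "%s.%d" % (subject_hash, n) in existing_names:
        n += 1
    return "%s.%d" % (subject_hash, n)

# With no collision, the conventional '.0' suffix is chosen
# ('a1b2c3d4' is a made-up subject hash):
print(next_link_name("a1b2c3d4", set()))  # → a1b2c3d4.0
# If '.0' through '.3' are taken by other certificates with the same hash:
print(next_link_name("a1b2c3d4", {"a1b2c3d4.%d" % i for i in range(4)}))  # → a1b2c3d4.4
```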
Examine Values in a Certificate
# openssl x509 -text -in mycert.pem
Exporting a Certificate from NSS into PEM Format
- This command exports a certificate with a specified nickname from an NSS database:
# certutil -d . -L -n "Some Cert" -a > somecert.pem
- These commands can be used together to export certificates and private keys from an NSS database and convert them to PEM format. They produce a file containing the client certificate, the certificate of its CA, and the private key:
# pk12util -d . -n "Some Cert" -o somecert.pk12
# openssl pkcs12 -in somecert.pk12 -out tmckay.pem
See the documentation for the openssl pkcs12 command for options that limit the content of the PEM output file.
Appendix C. Revision History
| Revision | Date | Author |
|---|---|---|
| 3.2.0-7 | June 2017 | Susan Jay |
| 3.2.0-6 | Fri Oct 16 2015 | Scott Mumford |
| 3.2.0-5 | Thu Oct 8 2015 | Scott Mumford |
| 3.2.0-3 | Tue Sep 29 2015 | Scott Mumford |
| 3.2.0-1 | Tue Jul 14 2015 | Jared Morgan |
| 3.1.0-5 | Wed Apr 01 2015 | Jared Morgan |
| 3.0.0-1 | Tue Sep 23 2014 | Jared Morgan |