9.7. Installing and Configuring MCollective on Node Hosts
The broker host uses MCollective to communicate with node hosts. MCollective on the node host must be configured so that the node host (Host 2) can communicate with the broker service on Host 1.
In a production environment, two or more messaging hosts would typically be configured on machines separate from the broker to provide high availability. This means that if one messaging host fails, the broker and node hosts can still communicate.
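In that case, the MCollective configuration shown in the following procedure simply lists each messaging host in its ActiveMQ connector pool. A minimal sketch, assuming two hypothetical messaging hosts named msg1.example.com and msg2.example.com, uses the standard pool settings of the MCollective activemq connector:
plugin.activemq.pool.size = 2
plugin.activemq.pool.1.host = msg1.example.com
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = marionette
plugin.activemq.pool.2.host = msg2.example.com
plugin.activemq.pool.2.port = 61613
plugin.activemq.pool.2.user = mcollective
plugin.activemq.pool.2.password = marionette
The single-host configuration used in this procedure is the same, with pool.size set to 1 and only one pool member defined.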
Procedure 9.5. To Install and Configure MCollective on the Node Host:
- Install all required packages for MCollective on Host 2 with the following command:
# yum install openshift-origin-msg-node-mcollective
- Replace the contents of the /opt/rh/ruby193/root/etc/mcollective/server.cfg file with the following configuration. Remember to change the setting for plugin.activemq.pool.1.host from broker.example.com to the host name of Host 1. Use the same password for the MCollective user specified in the /etc/activemq/activemq.xml file on Host 1 (see the example after this procedure). Use the same password for the plugin.psk parameter, and the same numbers for the heartbeat parameters specified in the /opt/rh/ruby193/root/etc/mcollective/client.cfg file on Host 1:
main_collective = mcollective
collectives = mcollective
libdir = /opt/rh/ruby193/root/usr/libexec/mcollective
logfile = /var/log/openshift/node/ruby193-mcollective.log
loglevel = debug

daemonize = 1
direct_addressing = 0

# Plugins
securityprovider = psk
plugin.psk = asimplething

connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = broker.example.com
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = mcollective
plugin.activemq.pool.1.password = marionette
plugin.activemq.heartbeat_interval = 30
plugin.activemq.max_hbread_fails = 2
plugin.activemq.max_hbrlck_fails = 2

# Node should retry connecting to ActiveMQ forever
plugin.activemq.max_reconnect_attempts = 0
plugin.activemq.initial_reconnect_delay = 0.1
plugin.activemq.max_reconnect_delay = 4.0

# Facts
factsource = yaml
plugin.yaml = /opt/rh/ruby193/root/etc/mcollective/facts.yaml
- Configure the ruby193-mcollective service to start on boot:
# chkconfig ruby193-mcollective on
- Start the ruby193-mcollective service immediately:
# service ruby193-mcollective start
Note
If you use the kickstart or bash script, the configure_mcollective_for_activemq_on_node function performs these steps.
- Run the following command on the broker host (Host 1) to verify that Host 1 recognizes Host 2:
# oo-mco ping
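If the configuration is correct, the node host responds to the ping with its host name. A common cause of failure is a mismatch between the credentials in server.cfg and those defined for ActiveMQ on Host 1. As a point of reference, the corresponding stanza in the /etc/activemq/activemq.xml file on Host 1 typically resembles the following sketch; the exact contents depend on how ActiveMQ was configured earlier:
<simpleAuthenticationPlugin>
  <users>
    <!-- Must match plugin.activemq.pool.1.user and plugin.activemq.pool.1.password in server.cfg -->
    <authenticationUser username="mcollective" password="marionette" groups="mcollective,everyone"/>
  </users>
</simpleAuthenticationPlugin>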
9.7.1. Facter
Facter, as used by MCollective, is an important architectural component of OpenShift Enterprise. Facter is a script that compiles the /opt/rh/ruby193/root/etc/mcollective/facts.yaml file, which lists the facts of interest about a node host for inspection using MCollective. Visit www.puppetlabs.com for more information about how Facter is used with MCollective. There is no central registry of node hosts; instead, any node host listening with MCollective advertises its capabilities as compiled by Facter.
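As an illustration only, facts.yaml is a flat file of key-value pairs; the fact names and values below are assumptions and vary by installation and OpenShift Enterprise version:
# Illustrative excerpt; actual facts and values differ per node host
node_profile: small
district_uuid: NONE
district_active: false
active_capacity: '0.0'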
The broker host uses the facts.yaml file to determine the capabilities of all node hosts. To find a host for a particular gear, the broker host issues a filtered search that includes or excludes node hosts based on entries in the facts.yaml file.
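You can perform a similar fact-based selection manually from the broker host using MCollective's standard fact filter; the fact name node_profile below is an assumption about the facts your node hosts report:
# oo-mco ping --with-fact node_profile=small
# oo-mco facts node_profile
The first command pings only the node hosts whose facts.yaml reports the given profile; the second reports the values of a single fact across all node hosts.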
The Facter script runs on the node host at one-minute intervals by means of the /etc/cron.minutely/openshift-facts cron job file, which you can modify. You can also run the script manually to regenerate the facts.yaml file and inspect it immediately.
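A quick way to do this, assuming the default paths shown above, is to run the cron script by hand and then view the generated file:
# /etc/cron.minutely/openshift-facts
# cat /opt/rh/ruby193/root/etc/mcollective/facts.yaml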