Puppet Deployment Guide
Installing and Configuring OpenShift Enterprise Using Puppet
Red Hat OpenShift Documentation Team
Abstract
Chapter 1. Introduction to OpenShift Enterprise
Chapter 2. Introduction to Puppet
Puppet manifests describe the desired state of a system using Puppet's declarative language and use the .pp file extension.
Note
Chapter 3. Introduction to OpenShift Enterprise Puppet Deployments
Set the ose_version parameter in a host's Puppet manifest to the OpenShift Enterprise release version. For example, the line in the manifest would be:
ose_version => '2.2',
- broker - Installs the broker and Management Console applications.
- node - Installs the node component and cartridges.
- msgserver - Installs an ActiveMQ message broker.
- datastore - Installs MongoDB (not sharded/replicated).
- nameserver - Installs a BIND DNS server configured with a TSIG key for dynamic updates.
Note
The following deployment options are not currently supported for OpenShift Enterprise:

- Collocation of the broker and node roles.
- Using keepalived and HAProxy for broker high availability (HA) load balancing.
- Avahi and Route53 DNS plug-ins.
- The 10gen-mms-agent, jbossas, and phpmyadmin cartridges (not distributed with OpenShift Enterprise).
Chapter 4. System Prerequisites
4.1. Installing Puppet
After installing Puppet, upgrade the puppetlabs-stdlib module to version 4.3.2 or later with the following command:

# puppet module upgrade puppetlabs-stdlib --version '>=4.3.2'

4.2. Configuring Repositories
The openshift_origin Puppet module does not configure yum repositories for OpenShift Enterprise. Before running Puppet, you must configure the appropriate subscriptions using either the Red Hat Subscription Manager (RHSM) or RHN Classic subscription method, or manually ensure that the appropriate yum repositories for each host role are available.
If you are using RHSM or RHN Classic and configuring a host to have the broker, msgserver, or datastore role, ensure that the Red Hat OpenShift Enterprise 2.2 Infrastructure channel is enabled using your chosen subscription method. See the "Configuring Broker Host Entitlements" [6] section in the OpenShift Enterprise Deployment Guide for details.
Also ensure that yum priorities and exclude directives are set appropriately by following the oo-admin-yum-validator tool instructions in the "Configuring Yum on Broker Hosts" [7] section that follows in the same guide.
If you are configuring a host to have only the nameserver role, only the Red Hat Enterprise Linux 6 Server base channel is required.
If you are using RHSM or RHN Classic and configuring a host to have the node role, ensure that the Red Hat OpenShift Enterprise 2.2 Application Node channel is enabled using your chosen subscription method. If you intend to install any premium cartridges, ensure the host has access to any relevant add-on subscriptions as well. See the "Configuring Node Host Entitlements" [8] section in the OpenShift Enterprise Deployment Guide for details.
Also ensure that yum priorities and exclude directives are set appropriately by following the oo-admin-yum-validator tool instructions in the "Configuring Yum on Node Hosts" [9] section that follows in the same guide.
4.3. Installing the OpenShift Origin Puppet Module
Install the openshift_origin Puppet module from the Puppet Forge with the following command:

# puppet module install openshift/openshift_origin

Alternatively, clone the puppet-openshift_origin repository onto the target system with the following:

# git clone https://github.com/openshift/puppet-openshift_origin.git /etc/puppet/modules/openshift_origin

4.4. Generating a BIND TSIG Key
Procedure 4.1. To Generate a BIND TSIG Key:
1. The dnssec-keygen command, provided by the bind package, can be used to generate a TSIG key. Install the bind package on a host, if required:

   # yum install bind

   Note: The bind package is available in the Red Hat Enterprise Linux 6 Server base channel.

2. Configure the $domain environment variable to simplify the process in the following step, replacing Cloud_Domain with the domain name to suit your environment:

   # domain=Cloud_Domain

3. Generate a TSIG key for your chosen cloud domain:

   # dnssec-keygen -a HMAC-MD5 -b 512 -n USER -r /dev/urandom -K /var/named $domain
   # cat /var/named/K$domain.*.key | awk '{print $8}'

   The format of the TSIG key returned by the last command should resemble CNk+wjszKi9da9nL/1gkMY7H+GuUng==. This key is set in the bind_key Puppet parameter in later sections.

4. If you want your OpenShift Enterprise hosts to be in a separate domain from the zone used for applications hosted on OpenShift Enterprise, you can create a second TSIG key at this time as well:

   # infra_domain=Infrastructure_Domain
   # dnssec-keygen -a HMAC-MD5 -b 512 -n USER -r /dev/urandom -K /var/named $infra_domain
   # cat /var/named/K$infra_domain.*.key | awk '{print $8}'

   This key can be set in the dns_infrastructure_key Puppet parameter in later sections, if the dns_infrastructure_zone parameter is set.
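As a quick sanity check (an illustrative step, not part of the official procedure), you can confirm that an extracted secret decodes as valid base64 before placing it in a Puppet parameter. The sample value below resembles the format shown above; your generated key will differ:

```shell
# Sanity-check that a TSIG secret decodes as valid base64 (sample value only).
key='CNk+wjszKi9da9nL/1gkMY7H+GuUng=='
if printf '%s' "$key" | base64 -d > /dev/null 2>&1; then
  result="valid"
else
  result="invalid"
fi
echo "key base64 check: $result"
```

A secret that fails this check was probably truncated or mangled while being copied.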
4.5. Updating the Host Name
Procedure 4.2. Updating the Host Name
1. Update the host name in the /etc/sysconfig/network file:

   NETWORKING=yes
   HOSTNAME=New_Hostname

2. Also update the host name using the hostname command:

   # hostname New_Hostname
Chapter 5. Puppet Configuration and Deployment
Each host's deployment is driven by a Puppet manifest declaring the module's main class (openshift_origin) that tells Puppet which OpenShift Enterprise components to install and configure on the host. If you are new to Puppet, you can learn more about how this works in the Puppet Labs® documentation [10].
This guide assumes the manifest file is named configure_ose.pp on each given host. For a comprehensive list of the installation parameters for OpenShift Enterprise you can specify with Puppet manifests, see Chapter 8, Puppet Parameters.
Important
Ensure the ose_version parameter is set to 2.2 in each host's Puppet manifest to enable OpenShift Enterprise support with the module.
After creating a Puppet manifest to your specifications for a given host, you can begin the deployment process by running the Puppet utility on the host and specifying the manifest file:
# puppet apply --verbose configure_ose.pp

Chapter 6. Example Puppet Configurations
6.1. Configuring One Broker Host and One Node Host
Example 6.1. Broker Host Configuration
class { 'openshift_origin' :
roles => ['msgserver','datastore','nameserver','broker'],
ose_version => '2.2',
# Hostname values
broker_hostname => 'broker.openshift.local',
datastore_hostname => 'broker.openshift.local',
nameserver_hostname => 'broker.openshift.local',
msgserver_hostname => 'broker.openshift.local',
node_hostname => 'node.openshift.local',
# IP address values
broker_ip_addr => '10.10.10.24',
nameserver_ip_addr => '10.10.10.24',
node_ip_addr => '10.10.10.27',
conf_node_external_eth_dev => 'eth0',
# You must pre-configure your RPM sources
install_method => 'none',
# OpenShift Config
domain => 'example.com',
conf_valid_gear_sizes => 'small,medium,large',
conf_default_gear_capabilities => 'small,medium',
conf_default_gear_size => 'small',
openshift_user1 => 'demo',
openshift_password1 => 'IZPmHrdxOgqjqB0TMNDGQ',
# Datastore Config
mongodb_port => 27017,
mongodb_replicasets => false,
mongodb_broker_user => 'openshift',
mongodb_broker_password => 'brFZGRCiOlmAqrMbj0OYgg',
mongodb_admin_user => 'admin',
mongodb_admin_password => 'BbMsrtPxsmSi5SY1zerN5A',
# MsgServer config
msgserver_cluster => false,
mcollective_user => 'mcollective',
mcollective_password => 'eLMRLsAcytKAJmuYOPE6Q',
# DNS Config
dns_infrastructure_zone => 'openshift.local',
dns_infrastructure_names =>
[
{ hostname => 'broker.openshift.local',
ipaddr => '10.10.10.24'
},
{ hostname => 'node.openshift.local',
ipaddr => '10.10.10.27'
}
],
bind_key => 'yV9qIn/KuCqvnu7SNtRKU3oZQMMxF1ET/GjkXt5pf5JBcHSKY8tqRagiocCbUX56GOM/iuP//D0TteLc3f1N2g==',
dns_infrastructure_key => 'UjCNCJgnqJPx6dFaQcWVwDjpEAGQY4Sc2H/llwJ6Rt+0iN8CP0Bm5j5pZsvvhZq7mxx7/MdTBBMWJIA9/yLQYg==',
}
Example 6.2. Node Host Configuration
class { 'openshift_origin' :
roles => ['node'],
ose_version => '2.2',
# Hostname values
broker_hostname => 'broker.openshift.local',
datastore_hostname => 'broker.openshift.local',
msgserver_hostname => 'broker.openshift.local',
nameserver_hostname => 'broker.openshift.local',
node_hostname => 'node.openshift.local',
# IP Address values
broker_ip_addr => '10.10.10.24',
nameserver_ip_addr => '10.10.10.24',
node_ip_addr => '10.10.10.27',
conf_node_external_eth_dev => 'eth0',
# You must pre-configure your RPM sources
install_method => 'none',
# OpenShift Config
domain => 'example.com',
openshift_user1 => 'demo',
openshift_password1 => 'IZPmHrdxOgqjqB0TMNDGQ',
conf_valid_gear_sizes => 'small,medium,large',
conf_default_gear_capabilities => 'small,medium',
conf_default_gear_size => 'small',
# Datastore config
mongodb_port => 27017,
mongodb_replicasets => false,
mongodb_broker_user => 'openshift',
mongodb_broker_password => 'brFZGRCiOlmAqrMbj0OYgg',
mongodb_admin_user => 'admin',
mongodb_admin_password => 'BbMsrtPxsmSi5SY1zerN5A',
# MsgServer Config
mcollective_user => 'mcollective',
mcollective_password => 'eLMRLsAcytKAJmuYOPE6Q',
# DNS Config
dns_infrastructure_zone => 'openshift.local',
bind_key => 'yV9qIn/KuCqvnu7SNtRKU3oZQMMxF1ET/GjkXt5pf5JBcHSKY8tqRagiocCbUX56GOM/iuP//D0TteLc3f1N2g==',
dns_infrastructure_key => 'UjCNCJgnqJPx6dFaQcWVwDjpEAGQY4Sc2H/llwJ6Rt+0iN8CP0Bm5j5pZsvvhZq7mxx7/MdTBBMWJIA9/yLQYg==',
}
6.2. Configuring High Availability Deployments
The broker, msgserver, and datastore roles can be deployed in high availability (HA) configurations.
6.2.1. Configuring a High Availability Broker
Example 6.3. Non-broker Host Configuration
class { 'openshift_origin' :
# Other settings as appropriate per above examples
...
# Use the *virtual* broker info for these values on hosts that are not Brokers
broker_hostname => 'virtbroker.openshift.local',
broker_ip_addr => '10.10.20.250',
...
}
Example 6.4. Broker Host Configuration
class { 'openshift_origin' :
# Other settings as appropriate per above examples
...
# Use the actual target host info for these values on hosts that are Brokers
broker_hostname => <target_hostname>,
broker_ip_addr => <target_ip_addr>,
# Provide the cluster info
broker_virtual_hostname => 'virtbroker.openshift.local',
broker_virtual_ip_address => '10.10.20.250',
...
}
Example 6.5. Name Server Configuration
The following additional settings are required on the host with the nameserver role:
class { 'openshift_origin' :
# Other settings as appropriate per above examples
...
# Use the *virtual* broker info for these values
broker_virtual_hostname => 'virtbroker.openshift.local',
broker_virtual_ip_address => '10.10.20.250',
# Additionally if you are using this nameserver to serve the domain for
# OpenShift host systems, include the virtual host info in the infrastructure
# list:
dns_infrastructure_names =>
[
...
{ hostname => 'virtbroker.openshift.local',
ipaddr => '10.10.20.250'
},
],
...
}

6.2.2. Configuring a High Availability Datastore
The Puppet module can configure multiple datastore instances as a MongoDB replica set. If you choose to use an HA datastore, you must provide at least three datastore hosts and the total number of hosts must be odd.
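The sizing rule can be sketched as a quick shell check (illustrative only; the member addresses are placeholders):

```shell
# HA datastore rule: at least three replica set members, and an odd total.
set -- 10.10.20.30:27017 10.10.20.31:27017 10.10.20.32:27017
count=$#
if [ "$count" -ge 3 ] && [ $((count % 2)) -eq 1 ]; then
  echo "valid replica set size: $count"
else
  echo "invalid replica set size: $count"
fi
```

An odd member count lets the replica set always elect a primary by majority vote.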
Each host with the broker role must have the following additional information:
Example 6.6. Broker Host Configuration
class { 'openshift_origin' :
# Other settings as appropriate per above examples
...
# Include the MongoDB replica set information for Brokers
mongodb_replicasets => true,
mongodb_replicasets_members => ['10.10.20.30:27017','10.10.20.31:27017','10.10.20.32:27017'],
}
Each host with the datastore role must have the following information:
Example 6.7. Datastore Configuration
class { 'openshift_origin' :
# Other settings as appropriate per above examples
...
# Set the datastore_hostname value to the current datastore host's name
datastore_hostname => <this_datastore_hostname>,
# Include the MongoDB replica set information
mongodb_replicasets => true,
mongodb_replicasets_members => ['10.10.20.30:27017','10.10.20.31:27017','10.10.20.32:27017'],
# One and only one of the datastore hosts will be the primary
mongodb_replica_primary => true|false,
# All datastore hosts will know the primary's IP address and a
# common replica key value
mongodb_replica_primary_ip_addr => <primary_datastore_ip_addr>,
mongodb_key => <replica_key_value>,
}
6.2.3. Configuring a High Availability Message Server
Example 6.8. Message Server Configuration
class { 'openshift_origin' :
# Other settings as appropriate per above examples
...
# Set the msgserver_hostname to the current msgserver host
msgserver_hostname => <this_msgserver_hostname>,
# Set the shared password that the cluster members will use
msgserver_password => <shared_cluster_password>,
# Specify the hostnames of all of the cluster members.
msgserver_cluster => true,
msgserver_cluster_members => ['msgserver1.openshift.local','msgserver2.openshift.local','msgserver3.openshift.local'],
}
Each host with the broker or node role must have the following information as well, even if they are not message server hosts:
Example 6.9. Broker and Node Configuration
class { 'openshift_origin' :
# Other settings as appropriate per above examples
...
# Specify the hostnames of the msgserver cluster members.
msgserver_cluster => true,
msgserver_cluster_members => ['msgserver1.openshift.local','msgserver2.openshift.local','msgserver3.openshift.local'],
}
6.3. Configuring a Complete Environment
- One host with the nameserver role.
- Two hosts with the broker role.
- Two hosts with the msgserver role.
- Three hosts with the datastore role.
- Multiple hosts with the node role.
This example environment is deployed on Amazon EC2, where each node host's public IP address is available from the ec2_public_ipv4 fact.
Recommended Order of Deployment:
- Name server host.
- Message server hosts.
- Datastore hosts, ensuring the primary host is fully deployed before adding additional nodes to your replica set.
- Broker hosts.
- Node hosts.
$bind_key = 'oZmVeXEiAi3foJ5GPG/11aaliaw1Wm7hccODfqBDfKRluO8bUfHK08mFMxpBnSW2bNJb+567Mc2sOwWyg7a1AA=='
$broker_virtual_hostname = 'ose-broker.example.com'
$install_method = 'none'
$mongodb_replicasets = true
$mongodb_replicasets_members = ["ose-mongo01.example.com:27017","ose-mongo02.example.com:27017","ose-mongo03.example.com:27017"]
$msgserver_cluster = true
$msgserver_cluster_members = ["ose-amq01.example.com","ose-amq02.example.com"]
$nameserver_ip_addr = '10.3.8.18'
$nameserver_hostname = 'ose-ns01.example.com'
$ose_version = '2.2'
$register_host_with_nameserver = true
Example 6.10. Name Server Configuration
node /^ose-ns/ {
class { 'openshift_origin':
roles => ['nameserver'],
bind_key => $bind_key,
install_method => $install_method,
nameserver_ip_addr => $nameserver_ip_addr,
nameserver_hostname => $nameserver_hostname,
ose_version => $ose_version,
register_host_with_nameserver => $register_host_with_nameserver,
}
}
Example 6.11. Message Server Configuration (ActiveMQ)
node /^ose-amq/ {
class { 'openshift_origin':
roles => ['msgserver'],
msgserver_cluster => $msgserver_cluster,
msgserver_cluster_members => $msgserver_cluster_members,
msgserver_hostname => $::fqdn,
msgserver_routing => $msgserver_routing,
msgserver_routing_password => $msgserver_routing_password,
# all node classes get these params
bind_key => $bind_key,
install_method => $install_method,
nameserver_ip_addr => $nameserver_ip_addr,
nameserver_hostname => $nameserver_hostname,
ose_version => $ose_version,
register_host_with_nameserver => $register_host_with_nameserver,
}
}
Example 6.12. Datastore Configuration
In this example, the MongoDB replica set primary is the datastore node ose-mongo01.example.com.
node /^ose-mongo/ {
class { 'openshift_origin':
roles => ['datastore'],
datastore1_ip_addr => '10.3.9.214',
datastore2_ip_addr => '10.3.9.219',
datastore3_ip_addr => '10.3.9.221',
mongodb_replica_primary_ip_addr => '10.3.9.214',
mongodb_replicasets => $mongodb_replicasets,
mongodb_replicasets_members => $mongodb_replicasets_members,
mongodb_replica_primary => $::fqdn ? {
'ose-mongo01.example.com' => true,
default => false,
},
datastore_hostname => $::fqdn,
# all node classes get these params
bind_key => $bind_key,
install_method => $install_method,
nameserver_ip_addr => $nameserver_ip_addr,
nameserver_hostname => $nameserver_hostname,
ose_version => $ose_version,
register_host_with_nameserver => $register_host_with_nameserver,
}
}

Example 6.13. Node Configuration
The node host configuration sets node_ip_addr using the public IP address so that all applications on these gears are publicly accessible.
node /^ose-small-node/ {
class { 'openshift_origin':
roles => ['node'],
broker_hostname => $::fqdn,
node_hostname => $::fqdn,
node_ip_addr => $::ec2_public_ipv4,
node_frontend_plugins => ["apache-mod-rewrite","nodejs-websocket"],
install_cartridges => ["cron","diy","haproxy","mongodb","nodejs","perl","php","postgresql","python","ruby","jenkins","jenkins-client","mysql"],
msgserver_cluster => $msgserver_cluster,
msgserver_cluster_members => $msgserver_cluster_members,
# all node classes get these params
bind_key => $bind_key,
install_method => $install_method,
nameserver_ip_addr => $nameserver_ip_addr,
nameserver_hostname => $nameserver_hostname,
ose_version => $ose_version,
register_host_with_nameserver => $register_host_with_nameserver,
}
}

Example 6.14. Broker Configuration
node /^ose-broker/ {
class { 'openshift_origin':
roles => ['broker'],
broker_cluster_members => ["ose-broker01.example.com", "ose-broker02.example.com"],
broker_hostname => $::fqdn,
broker_virtual_hostname => $broker_virtual_hostname,
msgserver_cluster => $msgserver_cluster,
msgserver_cluster_members => $msgserver_cluster_members,
conf_broker_session_secret => 'secret',
conf_console_session_secret => 'secret',
mongodb_replicasets => $mongodb_replicasets,
mongodb_replicasets_members => $mongodb_replicasets_members,
# all node classes get these params
bind_key => $bind_key,
install_method => $install_method,
nameserver_ip_addr => $nameserver_ip_addr,
nameserver_hostname => $nameserver_hostname,
ose_version => $ose_version,
register_host_with_nameserver => $register_host_with_nameserver,
}
}
Chapter 7. Manual Post-Deployment Tasks
1. Set up DNS entries for hosts. If you installed BIND using the Puppet module, then any other components installed with the module on the same host received DNS entries. Other hosts must all be defined manually, including at least your node hosts. The oo-register-dns command on a broker host may prove useful for this.

2. Copy the rsync public key to enable moving gears. The broker rsync public key must go on nodes, but it is difficult to script the task generically. Nodes should not have password-less access to brokers to copy the .pub key, so this must be performed manually on each node host:

   # scp root@broker:/etc/openshift/rsync_id_rsa.pub /root/.ssh/

   (The above step will ask for the root password of the broker machine.)

   # cat /root/.ssh/rsync_id_rsa.pub >> /root/.ssh/authorized_keys
   # rm /root/.ssh/rsync_id_rsa.pub

   If you skip this step, each gear move requires typing root passwords for each of the node hosts involved.

3. Copy SSH host keys between the node hosts. All node hosts must identify with the same host keys, so that when gears are moved between hosts, ssh and git do not give developers spurious warnings about the host keys changing. Copy /etc/ssh/ssh_* from one node host to all of the rest. Alternatively, if using the same image for all hosts, keep the keys from the image.

4. Create districts. Nodes must belong to a district in order to work properly. Adding a node to a district after the node already has hosted applications running on it is very difficult, so it is important to do this during the initial deployment. For a discussion of what districts are, see the OpenShift Enterprise Administration Guide [12]. On a broker host, run the following command to define a new district:

   # oo-admin-ctl-district -c create -n District_Name -p Gear_Profile

   To perform a blanket assignment of all nodes to a district, run:

   # oo-admin-ctl-district -c add-node -n District_Name -a

   Otherwise, add nodes one at a time with:

   # oo-admin-ctl-district -c add-node -n District_Name -i Node_Hostname

5. Import cartridge manifests. Run the following command on a broker host to import the cartridge manifests for all cartridges installed on nodes:

   # oo-admin-ctl-cartridge -c import-profile --activate --obsolete

   This registers the cartridges with the broker and makes them available to developers for new hosted applications.
Note
Chapter 8. Puppet Parameters
Note
Choose from the following roles to be configured on the host.
- broker - Installs the broker and Management Console applications.
- node - Installs the node component and cartridges.
- msgserver - Installs an ActiveMQ message broker.
- datastore - Installs MongoDB (not sharded/replicated).
- nameserver - Installs a BIND DNS server configured with a TSIG key for dynamic updates.
['broker','node','msgserver','datastore','nameserver']
This sets the method for providing packages to the installation process. Currently, the only supported option for OpenShift Enterprise is none, meaning installation sources must already be set up when the module executes (for example, using RHSM or RHN Classic).
The network domain, or cloud domain, under which applications and hosts will be placed.
'example.com'
These parameters supply the FQDN of the hosts containing the respective components. Used for configuring the host’s name at installation and for configuring the broker application to reach the required services.
The default for each is the role name prefixed to the domain (for example, broker.example.com), except nameserver=ns1.example.com.
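The default naming pattern can be sketched as follows (illustrative only; example.com stands in for your cloud domain):

```shell
# Default hostname for each role is the role name under the cloud domain,
# with the nameserver defaulting to ns1 (per this guide).
domain="example.com"
defaults=$(
  for role in broker datastore msgserver node; do
    echo "${role}_hostname => ${role}.${domain}"
  done
  echo "nameserver_hostname => ns1.${domain}"
)
echo "$defaults"
```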
Note
IP addresses of the first three MongoDB servers in a replica set. Add datastoreX_ip_addr parameters for larger clusters.
undef
IP of a name server instance or current IP if installing on this node. This is used by every host to configure its primary name server.
When the name server is remote, use this to specify the key for updates. This is the Key: field from the .private key file generated by the dnssec-keygen command. This field is required on all node hosts.
When using a BIND key, use this algorithm for the BIND key.
'HMAC-MD5'
When the name server is remote, this Kerberos keytab together with a Kerberos principal can be used instead of the dnssec key for updates.
When the name server is remote, this Kerberos principal together with a Kerberos keytab can be used instead of the dnssec key for updates.
This and the aws_secret_key parameter are Amazon AWS security credentials. The aws_access_key_id is a string which identifies an access credential.
This is the secret portion of Amazon AWS security credentials indicated by the aws_access_key_id parameter.
This is the ID string for an AWS Hosted zone which will contain the OpenShift Enterprise application records.
List of upstream DNS servers to use when installing a nameserver on this node.
['8.8.8.8']
This is used by node hosts to record their broker. It is also the default for the name server IP if none is given.
The virtual IP address that will front-end the broker cluster.
undef
The host name that represents the broker API cluster. This name is associated with the broker_virtual_ip_address parameter and added to BIND for DNS resolution.
'changeme'
This is used by node hosts to advertise a public IP address, if different from the one on the host's NIC.
The following resource limits must be the same within a given district.
- node_profile - This is the specific node's gear profile. Default: 'small'
- node_quota_files - The max number of files allowed in each gear. Default: '80000'
- node_quota_blocks - The max storage capacity allowed in each gear (1 block = 1024 bytes). Default: '1048576'
- node_max_active_gears - This is used for limiting or guiding gear placement. For no overcommit, this must be: (Total System Memory - 1G) / memory_limit_in_bytes. Default: '100'
- node_no_overcommit_active - This enforces the node_max_active_gears parameter in a more stringent manner than normal. However, it also adds overhead to gear creation, so it should only be set to true when required, for example, in the case of enforcing single tenancy on a node. Default: false
- node_limits_nproc - The max number of processes. Default: '250'
- node_tc_max_bandwidth - Total bandwidth allowed for all gears, in mbit/sec. Default: '800'
- node_tc_user_share - Bandwidth allotted to one user, in mbit/sec. Default: '2'
- node_cpu_shares - The CPU share percentage for each gear. Default: '128'
- node_cpu_cfs_quota_us - Default: '100000'
- node_memory_limit_in_bytes - Gear memory limit in bytes. Default: '536870912' (512 MB)
- node_memsw_limit_in_bytes - Gear max memory limit including swap (512 MB + 100 MB swap). Default: '641728512'
- node_memory_oom_control - Kill processes when hitting out of memory. Default: '1'
- node_throttle_cpu_shares - The CPU share percentage each gear receives at throttle. Default: '128'
- node_throttle_cpu_cfs_quota_us - Default: '30000'
- node_throttle_apply_period - Default: 120
- node_throttle_apply_percent - Default: '30'
- node_throttle_restore_percent - Default: '70'
- node_boosted_cpu_cfs_quota_us - Default: '200000'
- node_boosted_cpu_shares - The CPU share percentage each gear receives while boosted. Default: '30000'
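The no-overcommit formula for node_max_active_gears works out as follows (a sketch with a hypothetical 16 GB node and the default 512 MB gear memory limit):

```shell
# (Total System Memory - 1G) / memory_limit_in_bytes, for a hypothetical 16 GB node.
total_mem=$(( 16 * 1024 * 1024 * 1024 ))
memory_limit_in_bytes=536870912   # default node_memory_limit_in_bytes (512 MB)
node_max_active_gears=$(( (total_mem - 1024 * 1024 * 1024) / memory_limit_in_bytes ))
echo "node_max_active_gears => $node_max_active_gears"   # 30
```

Reserving 1 GB for the operating system leaves 15 GB, or thirty 512 MB gears, with no memory overcommit.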
Enabling this configures NTP. It is important that the time be synchronized across hosts because MCollective messages have a TTL of 60 seconds and may be dropped if the clocks are too far out of sync. However, NTP is not necessary if the clock will be kept in sync by some other means.
true
If the configure_ntp parameter is set to true (default), this parameter allows users to specify an array of NTP servers used for clock synchronization.
['time.apple.com iburst', 'pool.ntp.org iburst', 'clock.redhat.com iburst']
Note
Append iburst after every NTP server definition to speed up the initial synchronization.
Set to true to cluster ActiveMQ for high availability and scalability of OpenShift Enterprise message queues.
false
An array of ActiveMQ server host names. Required when the msgserver_cluster parameter is set to true.
undef
An array of ActiveMQ server host names. Required when the msgserver_cluster parameter is set to true.
$msgserver_cluster_members
Password used by ActiveMQ's amquser. The amquser is used to authenticate ActiveMQ inter-cluster communication. Only used when the msgserver_cluster parameter is set to true.
'changeme'
This is the password for the admin user for the ActiveMQ Admin Console, which is not needed by OpenShift Enterprise, but might be useful in troubleshooting.
This is the user and password shared between broker and node for communicating over the MCollective topic channels in ActiveMQ. Must be the same on all broker and node hosts.
'mcollective' / 'marionette'
This is the user name and password of the administrative user that will be created in the MongoDB datastore. These credentials are not used by this module or by OpenShift Enterprise, but an administrative user must be added to MongoDB in order for it to enforce authentication.
'admin' / 'mongopass'
Note
CONF_NO_DATASTORE_AUTH_FOR_LOCALHOST is enabled.
This is the user name and password of the normal user that will be created for the broker to connect to the MongoDB datastore. The broker application’s MongoDB plug-in is also configured with these values.
'openshift' / 'mongopass'
This is the name of the database in MongoDB in which the broker will store data.
'openshift_broker'
The TCP port used for MongoDB to listen on.
'27017'
Enables or disables MongoDB replica sets for database high availability.
false
The MongoDB replica set name when the mongodb_replicasets parameter is set to true.
'openshift'
Set the host as the primary with true or secondary with false. Must be set on one and only one host within the mongodb_replicasets_members array.
undef
The IP address of the primary host within the MongoDB replica set.
undef
An array of [host:port] of replica set hosts.
undef
The file containing the mongodb_key used to authenticate MongoDB replica set members.
'/etc/mongodb.keyfile'
The key used by members of a MongoDB replica set to authenticate one another.
'changeme'
This user and password are entered in the /etc/openshift/htpasswd file as a test user. Red Hat recommends removing the user after installation or using a different authentication method.
'demo' / 'changeme'
Salt and private keys used when generating secure authentication tokens for application-to-broker communication. Requests like scale up or down and Jenkins builds use these authentication tokens. This value must be the same on all broker nodes.
Relative path to the product logo URL.
If the ose_version parameter is undefined, the default is /assets/logo-origin.svg. If the ose_version parameter is defined, the default is /assets/logo-enterprise-horizontal.svg.
OpenShift instance name.
If the ose_version parameter is undefined, the default is OpenShift Origin. If the ose_version parameter is defined, the default is OpenShift Enterprise.
This setting is applied on a per-scalable-application basis. When set to true, OpenShift Enterprise allows multiple instances of the HAProxy gear for a given scalable application to be established on the same node. Otherwise, on a per-scalable-application basis, a maximum of one HAProxy gear can be created for every node in the deployment. The latter is the default behavior, which protects scalable applications from single points of failure at the node level.
false
Session secrets used to encode cookies used by the broker and Management Console applications. These values must be the same on all broker nodes.
undef
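One way to generate suitable random values for these secrets (an illustrative approach; the module does not mandate any particular generator):

```shell
# Generate a 64-character hex secret; reuse the same value on every broker host.
session_secret=$(openssl rand -hex 32)
echo "$session_secret"
```

Generate the value once and paste it into the manifest for every broker host, since mismatched secrets break session handling across the cluster.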
List of all gear sizes that will be used in this OpenShift Enterprise installation.
['small']
Default gear size if one is not specified.
'small'
List of all gear sizes that newly created users will be able to create.
['small']
Default max number of domains a user is allowed to use.
'10'
Default max number of gears a user is allowed to use.
'100'
DNS plug-in used by the broker to register application DNS entries. Only one option is supported with OpenShift Enterprise:
- nsupdate - An nsupdate-based plug-in. Supports TSIG and GSS-TSIG based authentication. Uses the bind_key parameter for TSIG and the bind_krb_keytab and bind_krb_principal parameters for GSS-TSIG.
'nsupdate'
Authentication setup for users of the OpenShift service. Options:
- mongo - Stores user names and passwords in MongoDB.
- kerberos - Kerberos-based authentication. Uses the broker_krb_service_name, broker_krb_auth_realms, and broker_krb_keytab parameters.
- htpasswd - Stores user names and passwords in the /etc/openshift/htpasswd file.
- ldap - LDAP-based authentication. Uses the broker_ldap_uri parameter.
'htpasswd'
The KrbServiceName value for a mod_auth_kerb configuration.
The KrbAuthRealms value for a mod_auth_kerb configuration.
The Krb5KeyTab value of mod_auth_kerb is not configurable. The keytab is expected to be at /var/www/openshift/broker/httpd/conf.d/http.keytab.
The URI to the LDAP server, for example:
ldap://ldap.example.com:389/ou=People,dc=my-domain,dc=com?uid?sub?(objectClass=*)
LDAP DN (Distinguished name) of the user to bind to the directory with. For example:
cn=administrator,cn=Users,dc=domain,dc=com
Password of the bind user set in the broker_ldap_bind_dn parameter.
The kernel.shmmax sysctl setting for the /etc/sysctl.conf file.
- shmmax = shmall * PAGE_SIZE
- PAGE_SIZE = getconf PAGE_SIZE
- shmall = cat /proc/sys/kernel/shmall
Avoid setting shmmax to a value higher than 80% of total available RAM on the system (expressed in bytes).
kernel.shmmax = 68719476736
The kernel.shmall sysctl setting for the /etc/sysctl.conf file. Defaults to 2097152. This should be set to at least ceil(shmmax/PAGE_SIZE).
kernel.shmall = 4294967296
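The relationship between the two settings can be checked with shell arithmetic (a sketch assuming a 4096-byte page size; on a real host, query it with getconf PAGE_SIZE):

```shell
# shmall should be at least ceil(shmmax / PAGE_SIZE).
PAGE_SIZE=4096                   # assumed here; use: getconf PAGE_SIZE
shmmax=68719476736               # example kernel.shmmax value from above
shmall=$(( (shmmax + PAGE_SIZE - 1) / PAGE_SIZE ))
echo "kernel.shmall = $shmall"   # 16777216
```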
Specify the container type to use on the node. Currently, the selinux plug-in is the default and only supported option for OpenShift Enterprise.
'selinux'
Specify one or more plug-ins to use to register HTTP and WebSocket connections for applications. Options:
- apache-vhost - A Virtual Host-based plug-in for HTTP and HTTPS. Suited for installations with less application create and delete activity. Easier to customize. If apache-mod-rewrite is also selected, apache-vhost is ignored.
- nodejs-websocket - A WebSocket proxy listening on ports 8000 and 8443.
- haproxy-sni-proxy - A TLS proxy using SNI routing on ports 2303 through 2308.
- apache-mod-rewrite - Deprecated in OpenShift Enterprise 2.2. A mod_rewrite-based plug-in for HTTP and HTTPS requests. Suited for installations with many create, delete, and scale actions. Cannot be used at the same time as the apache-vhost plug-in.
['apache-vhost','nodejs-websocket']
List of user names who have UIDs in the range of OpenShift Enterprise gears but must be excluded from gear setups.
[]
External facing network device. Used for routing and traffic control setup.
'eth0'
Public and private keys used for gears on the default domain. Both values must be defined or default self-signed keys will be generated.
Name of supplementary UNIX group to add a gear to.
Enable or disable the OpenShift Enterprise node Watchman service.
true
Number of restarts to attempt before waiting RETRY_PERIOD.
'3'
Number of seconds to wait before accepting another gear restart.
'300'
Number of seconds to wait before resetting retries.
'28800'
Number of seconds a gear must remain inconsistent with its state before Watchman attempts to reset state.
'900'
Wait at least this number of seconds since last check before checking gear state on the node. Use this to reduce the impact of Watchman’s GearStatePlugin on the system.
'0'
Define a custom MOTD to be displayed to users who connect to their gears directly. If undefined, uses the default MOTD included with the node package.
undef
Set development mode and extra logging.
false
Install a Getty shell which displays DNS, IP, and login information. Used for the all-in-one VM installation.
Set up DNS entries for this host in a locally-installed BIND DNS instance.
false
The name of a zone to create which will contain OpenShift Enterprise infrastructure hosts. If this is unset, then no infrastructure zone or other artifacts will be created.
''
A dnssec symmetric key which grants update access to the infrastructure zone resource records. This is ignored unless the dns_infrastructure_zone parameter is set.
''
When using a BIND key, use this algorithm for the infrastructure BIND key. This is ignored unless the dns_infrastructure_zone parameter is set.
'HMAC-MD5'
An array of hashes containing host name and IP address pairs to populate the infrastructure zone. This is ignored unless the dns_infrastructure_zone parameter is set.
Host names may be relative to the dns_infrastructure_zone parameter; matching FQDNs are placed in the dns_infrastructure_zone. Host names anchored with a trailing dot (.) are added verbatim.
$dns_infrastructure_names = [
  {hostname => "broker1", ipaddr => "10.0.0.1"},
  {hostname => "data1", ipaddr => "10.0.0.2"},
  {hostname => "message1", ipaddr => "10.0.0.3"},
  {hostname => "node1", ipaddr => "10.0.0.11"},
  {hostname => "node2", ipaddr => "10.0.0.12"},
  {hostname => "node3", ipaddr => "10.0.0.13"},
]

[]
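The naming rule can be sketched as follows (illustrative values; a trailing dot anchors a name so it is used verbatim, otherwise the name is placed under the infrastructure zone):

```shell
zone="example.com"
resolved=$(
  for name in broker1 node1 external.example.org. ; do
    case "$name" in
      *.) echo "${name%?}" ;;          # anchored with a dot: added verbatim
      *)  echo "${name}.${zone}" ;;    # relative: placed under the zone
    esac
  done
)
echo "$resolved"
```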
Indicate whether or not this module configures the firewall for you.
List of cartridges to be installed on the node. Options:
- cron
- diy
- haproxy
- mongodb
- nodejs
- perl
- php
- postgresql
- python
- ruby
- jenkins
- jenkins-client
- mysql
- jbossews
- jbosseap (requires add-on subscription)
['cron','diy','haproxy','mongodb','nodejs','perl','php','postgresql','python','ruby','jenkins','jenkins-client','mysql']
Indicate whether or not this module will configure the resolv.conf file and network for you.
true
Set this to the X.Y release version (for example, 2.2) of OpenShift Enterprise to ensure an OpenShift Enterprise supported configuration is used.
See the README_OSE.asciidoc distributed with the openshift_origin Puppet module for more details.
undef
Set this to true to allow OpenShift Enterprise unsupported configurations. Only appropriate for proof of concept environments.
This parameter only takes effect when the ose_version parameter is set.
false
Appendix A. Revision History
| Revision History | | |
|---|---|---|
| Revision 1.0-2 | Tue Mar 10 2015 | Timothy Poitras |
| Revision 1.0-1 | Tue Nov 11 2014 | Alex Dellapenta |
| Revision 1.0-0 | Mon Nov 10 2014 | Alex Dellapenta |