Administration Guide
A Guide to OpenShift Enterprise Operation and Administration
Red Hat OpenShift Documentation Team
Abstract
- Platform administration
- User administration
- Cartridge management
- Resource monitoring and management
- Monitoring with the administration console
- Command reference for broker and node hosts
Chapter 1. Introduction to OpenShift Enterprise
1.1. What's New in Current Release
1.2. Upgrading OpenShift Enterprise
When applying certain asynchronous errata updates, you must restart the openshift-broker service. This step regenerates the bundler utility's Gemfile.lock file and allows the broker application and related administrative commands to use the updated gems. See the latest OpenShift Enterprise Deployment Guide at https://access.redhat.com/site/documentation for instructions on how to apply asynchronous errata updates.
Upgrades from previous versions of OpenShift Enterprise are performed using the ose-upgrade tool. See the OpenShift Enterprise Deployment Guide for detailed upgrade instructions.
1.3. Migrating from RHN Classic to RHSM
Systems registered with RHN Classic can be migrated to Red Hat Subscription Management (RHSM) using the subscription-manager CLI. The migration tools are contained in the subscription-manager-migration package. An additional package, subscription-manager-migration-data, is required to map the RHN Classic channels to Red Hat Subscription Management product certificates.
Procedure 1.1. To Migrate from RHN Classic to RHSM:
- Use the oo-admin-yum-validator validation tool to verify that the system's yum configuration for the current subscription method is valid for the installed OpenShift Enterprise version and components. Use the -o option for the version and the -r option for the components.
Example 1.1. Verifying a Host With the Validation Tool
The following example is for an OpenShift Enterprise 2.2 broker host:
# oo-admin-yum-validator -o 2.2 -r broker
If run without options, the validation tool attempts to detect the installed version and components. If any problems are reported, fix them manually or use the validation tool's --fix or --fix-all options to attempt to fix them automatically. Additional details on running the validation tool can be found in the oo-admin-yum-validator man page.
- Install the migration tool packages:
# yum install subscription-manager-migration subscription-manager-migration-data
- Use the rhn-migrate-classic-to-rhsm tool to initiate the migration. This tool has many options available, including registering to on-premise services and manually selecting subscriptions. If run without options, this tool migrates the system profile, registers the system with Red Hat Subscription Management, and automatically attaches the system to the best-matched subscriptions:
# rhn-migrate-classic-to-rhsm
Consult the Red Hat Subscription Management - Migrating from RHN Classic guide or the rhn-migrate-classic-to-rhsm man page for details on additional options that may be relevant to your organization and environment.
Note
A known issue, which will be fixed in Red Hat Enterprise Linux 6.6, prevents the migration tool from automatically enabling the required channels on OpenShift Enterprise 2.1 systems. You can work around this issue by using the migration tool with the --force and --no-auto options; this continues registering the system to Red Hat Subscription Management, but does not automatically attach a subscription. Once the migration is complete, manually attach the desired OpenShift Enterprise subscription using the subscription-manager tool:
# subscription-manager attach --pool Pool_ID
- After the migration completes, use the subscription-manager tool to list information about the migration, including the previous system ID:
Example 1.2. Listing Migration Information
# subscription-manager facts --list | grep migr
migration.classic_system_id: 09876
migration.migrated_from: rhn_hosted_classic
migration.migration_date: 2012-09-14T14:55:29.280519
- Use the oo-admin-yum-validator validation tool again to verify that the system's yum configuration is still valid under the new subscription method, and correct any issues that are reported.
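After the migration, you can optionally confirm the new registration before re-running the validator; a minimal sketch using standard subscription-manager commands, with the -o and -r values reusing the broker host example from step 1:
# subscription-manager status
# subscription-manager list --consumed
# oo-admin-yum-validator -o 2.2 -r broker --fix-all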
Chapter 2. Platform Administration
2.1. Changing the Front-end HTTP Configuration for Existing Deployments
The Apache Virtual Hosts front-end HTTP proxy is the default for new deployments. If your nodes are currently using the previous default, the Apache mod_rewrite plug-in, you can use the following procedure to change the front-end configuration of your existing deployment. See the OpenShift Enterprise Deployment Guide for more information on the available front-end HTTP server plug-ins.
Procedure 2.1. To Change the Front-end HTTP Configuration on an Existing Deployment:
- To prevent the broker from making any changes to the front end during this procedure, stop the ruby193-mcollective service on the node host:
# service ruby193-mcollective stop
Then set the following environment variable to prevent each front-end change from restarting the httpd service:
# export APACHE_HTTPD_DO_NOT_RELOAD=1
- Back up the existing front-end configuration. You will use this backup to restore the complete state of the front end after the process is complete. Replace filename with your desired backup storage location:
# oo-frontend-plugin-modify --save > filename
- Delete the existing front-end configuration:
# oo-frontend-plugin-modify --delete
- Remove and install the front-end plug-in packages as necessary:
# yum remove rubygem-openshift-origin-frontend-apache-mod-rewrite
# yum -y install rubygem-openshift-origin-frontend-apache-vhost
- Replicate any Apache customizations reliant on the old plug-in onto the new plug-in, then restart the httpd service:
# service httpd restart
- Change the OPENSHIFT_FRONTEND_HTTP_PLUGINS value in the /etc/openshift/node.conf file from openshift-origin-frontend-apache-mod-rewrite to openshift-origin-frontend-apache-vhost:
OPENSHIFT_FRONTEND_HTTP_PLUGINS="openshift-origin-frontend-apache-vhost"
- Unset the environment variable so that the httpd service restarts as normal after any front-end changes:
# export APACHE_HTTPD_DO_NOT_RELOAD=""
- Restart the MCollective service:
# service ruby193-mcollective restart
- Restore the HTTP front-end configuration from the backup you created in step one:
# oo-frontend-plugin-modify --restore < filename
2.2. Enabling User Login Normalization
User login normalization allows different forms of the same login to map to a single OpenShift Enterprise user account. For example, with the lowercase method, a user logging in as JDoe is authenticated using the configured authentication method, then the login is normalized as jdoe by the broker to access the jdoe user account on OpenShift Enterprise. When normalization is not enabled, a user logging in as JDoe is authenticated using the configured authentication method and accesses the JDoe user account on OpenShift Enterprise, while a user logging in as jdoe ultimately accesses a separate jdoe user account.
Table 2.1. Available Default User Login Normalization Methods
Method | Function |
---|---|
strip | Removes any additional spaces on either side of the login. |
lowercase | Changes all characters to lowercase. For example: JDoe --> jdoe |
remove_domain | Removes a domain suffix. For example: jdoe@example.com --> jdoe |
To enable user login normalization, edit the /etc/openshift/broker.conf file on the broker host and provide one or more methods in the NORMALIZE_USERNAME_METHOD parameter using a comma-separated list:
Example 2.1. Setting User Login Normalization Methods
NORMALIZE_USERNAME_METHOD="lowercase,remove_domain"
Restart the broker service for the changes to take effect:
# service openshift-broker restart
2.3. Allowing Multiple HAProxies on a Node Host
The ALLOW_MULTIPLE_HAPROXY_ON_NODE setting, located in the /etc/openshift/broker.conf file, is set to false by default. In production environments, Red Hat recommends leaving this setting at its default. If two or more HAProxies for a single application reside on the same node host, the front-end Apache maps the DNS name or alias to one HAProxy gear and not to the remaining HAProxy gears. If, for example, you have only one node host and wish to enable scalability, changing the ALLOW_MULTIPLE_HAPROXY_ON_NODE setting to true allows multiple HAProxy gears for the same application to reside on the same node host.
Procedure 2.2. To Allow Multiple HAProxies on a Single Node:
- Open the /etc/openshift/broker.conf file on the broker host and set the ALLOW_MULTIPLE_HAPROXY_ON_NODE value to true:
ALLOW_MULTIPLE_HAPROXY_ON_NODE="true"
- Restart the openshift-broker service:
# service openshift-broker restart
2.4. Enabling Support for High-Availability Applications
Procedure 2.3. To Enable Support for High-Availability Applications:
- To allow scalable applications to become highly available using the configured external router, edit the /etc/openshift/broker.conf file on the broker host and set the ALLOW_HA_APPLICATIONS parameter to "true":
ALLOW_HA_APPLICATIONS="true"
Note that this parameter controls whether high-availability applications are allowed in general, but does not adjust user account capabilities. User account capabilities are discussed in a later step.
- A scaled application that is not highly available uses the following URL form:
http://${APP_NAME}-${DOMAIN_NAME}.${CLOUD_DOMAIN}
When high availability is enabled, HAProxy instances are deployed in multiple gears of the application, which are spread across multiple node hosts. In order to load balance user requests, a high-availability application requires a new high-availability DNS name that points to the external routing layer rather than directly to the application head gear. The routing layer then forwards requests directly to the application's HAProxy instances, which distribute them to the framework gears. In order to create DNS entries for high-availability applications that point to the routing layer, OpenShift Enterprise adds either a prefix or suffix, or both, to the regular application name:
http://${HA_DNS_PREFIX}${APP_NAME}-${DOMAIN_NAME}${HA_DNS_SUFFIX}.${CLOUD_DOMAIN}
To change the prefix or suffix used in the high-availability URL, you can modify the HA_DNS_PREFIX or HA_DNS_SUFFIX parameters (see the worked example after this procedure):
HA_DNS_PREFIX="ha-"
HA_DNS_SUFFIX=""
If you modify the HA_DNS_PREFIX parameter and are using the sample routing daemon, ensure this parameter and the HA_DNS_PREFIX parameter in the /etc/openshift/routing-daemon.conf file are set to the same value.
- DNS entries for high-availability applications can either be managed by OpenShift Enterprise or externally. By default, the MANAGE_HA_DNS parameter is set to "false", which means the entries must be created externally; failure to do so could prevent the application from receiving traffic through the external routing layer. To allow OpenShift Enterprise to create and delete these entries when applications are created and deleted, set the MANAGE_HA_DNS parameter to "true":
MANAGE_HA_DNS="true"
Then set the ROUTER_HOSTNAME parameter to the public hostname of the external routing layer, which the DNS entries for high-availability applications point to. Note that the routing layer host must be resolvable by the broker:
ROUTER_HOSTNAME="www.example.com"
- For developers to enable high-availability support with their scalable applications, they must have the HA allowed capability enabled on their account. By default, the DEFAULT_ALLOW_HA parameter is set to "false", which means user accounts are created with the HA allowed capability initially disabled. To have this capability enabled by default for new user accounts, set DEFAULT_ALLOW_HA to "true":
DEFAULT_ALLOW_HA="true"
You can also adjust the HA allowed capability per user account using the oo-admin-ctl-user command with the --allowha option:
# oo-admin-ctl-user -l user --allowha true
- To make any changes made to the /etc/openshift/broker.conf file take effect, restart the broker service:
# service openshift-broker restart
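As a worked example of the URL forms above, assume an application named myapp in the domain dev, a CLOUD_DOMAIN of example.com, the default HA_DNS_PREFIX of "ha-", and an empty HA_DNS_SUFFIX (all names here are illustrative):
http://myapp-dev.example.com
http://ha-myapp-dev.example.com
The first URL resolves directly to the application's head gear, while the second resolves to the external routing layer.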
2.5. Creating Environment Variables on Node Hosts
You can create environment variables that apply to all gears on a node host using the /etc/openshift/env directory. By creating a file in the /etc/openshift/env directory on a node host, an environment variable is created with the same name as the file name, with the value set to the contents of the file.
Note that environment variables set using the /etc/openshift/env directory are only set for gear users, and not for system services or other users on the node host. For example, the MCollective service does not have access to these settings during the gear and cartridge creation process.
Application developers can use the rhc env set command to override any environment variables set in the /etc/openshift/env directory.
Procedure 2.4. Creating Environment Variables on a Node Host
- Create a new file in the /etc/openshift/env directory on each node host where you want the environment variable set. For example, to allow applications to use an external database, set an external database environment variable EXT_DB_CONNECTION_URL with the value mysql://host.example.com:3306/ (a verification sketch follows this procedure):
# echo mysql://host.example.com:3306/ > /etc/openshift/env/EXT_DB_CONNECTION_URL
- To make the changes take effect for existing applications, ask affected application owners to restart their applications by running the following commands:
$ rhc app stop -a appname
$ rhc app start -a appname
Alternatively, you can restart all gears on affected node hosts. The downtime caused by restarting all gears is minimal, usually a few seconds:
# oo-admin-ctl-gears restartall
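To confirm the variable is in place, you can check both the node host file and a gear's environment; a minimal sketch reusing the EXT_DB_CONNECTION_URL example above:
# cat /etc/openshift/env/EXT_DB_CONNECTION_URL
Then, from inside a gear, for example over an SSH session to the application:
$ echo $EXT_DB_CONNECTION_URL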
2.6. Controlling Direct SSL Connections to Gears
You can allow application cartridges with SSL_TO_GEAR enabled to be exposed for direct connections using the PROXY_PORTS parameter. However, this requires setting up an external router.
Set the SSL_ENDPOINT setting in the /etc/openshift/broker.conf file to one of the following options to control access to cartridges that specify direct connections to gears:
allow
- If the cartridge being added to a new application specifies direct SSL connections to gears, configure the appropriate SSL routing. This is the default option.
deny
- If the cartridge being added to a new application specifies direct SSL connections to gears, do not allow the application to be created.
force
- If the cartridge being added to a new application specifies direct SSL connections to gears, set up the appropriate SSL routing. If the cartridge being added to a new application does not specify direct SSL connections to gears, do not allow the application to be created.
# Whether cartridges that specify direct SSL connection to the gear
# are allowed, denied or forced.
SSL_ENDPOINT="allow"
# SSL_ENDPOINT="deny"
# SSL_ENDPOINT="force"
2.7. Setting Gear Supplementary Groups
Edit the GEAR_SUPL_GRPS setting in the /etc/openshift/node.conf file to designate additional groups for the gears on that node. Note that you must create a group using standard system commands before you can add it to GEAR_SUPL_GRPS (a short sketch follows the note below). Separate multiple groups with commas:
GEAR_SUPL_GRPS="my_group,another_group"
Note
The root and wheel groups cannot be used as values for GEAR_SUPL_GRPS.
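A minimal sketch of the full flow, using an illustrative group name:
# groupadd my_group
Then set GEAR_SUPL_GRPS="my_group" in the /etc/openshift/node.conf file as shown above. Gears created on the node are then placed in this supplementary group in addition to their own UNIX group.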
2.8. Banning IP Addresses That Overload Applications
Procedure 2.5. To Ban an IP Address:
- Run the following command to view a CNAME to the node host where the application's gear is located:
# dig appname-domain.example.com
- On the node host identified in the previous step, check the application's Apache logs for unusual activity. For example, a high frequency of accesses (3 to 5 per second) from the same IP address in the access_log file may indicate abuse; a one-liner for spotting such addresses follows this procedure:
# tail -f /var/lib/openshift/appUUID/appname/logs/*
- Ban the offending IP addresses by placing them in iptables, running the following command for each IP address:
# iptables -A INPUT -s IP_address -j DROP
- If you are using a configuration management system, configure it appropriately to ban the offending IP addresses. For non-managed configurations, save your new iptables rules:
# service iptables save
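The following one-liner, assuming a combined-style log format with the client address in the first field, tallies requests per IP address to help identify the offenders mentioned in step 2:
# awk '{print $1}' /var/lib/openshift/appUUID/appname/logs/access_log* | sort | uniq -c | sort -rn | head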
2.9. Enabling Maintenance Mode
Procedure 2.6. To Enable Maintenance Mode:
- Enable maintenance mode using the ENABLE_MAINTENANCE_MODE setting in the /etc/openshift/broker.conf file on the broker host:
ENABLE_MAINTENANCE_MODE="true"
- Define the location of the notification message using the MAINTENANCE_NOTIFICATION_FILE setting:
MAINTENANCE_NOTIFICATION_FILE="/etc/openshift/outage_notification.txt"
- Create or edit the file defined in the MAINTENANCE_NOTIFICATION_FILE setting to contain the desired notification message seen by developers while the broker is in maintenance mode (see the sketch after this procedure).
- Restart the broker service:
# service openshift-broker restart
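A minimal sketch of creating the notification file from step 3 (the message text is illustrative):
# echo "OpenShift Enterprise is undergoing scheduled maintenance. Application changes are temporarily disabled." > /etc/openshift/outage_notification.txt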
2.10. Backup and Recovery
2.10.1. Backing Up Broker Host Files
- Backup Strategies for MongoDB Systems - http://docs.mongodb.org/manual/administration/backups/
The broker host stores its MongoDB data in the /var/lib/mongodb directory, which can be used as a potential mount point for fault tolerance or as backup storage.
2.10.2. Backing Up Node Host Files
Use standard system backup tools, such as tar or cpio, to perform this backup; a tar-based sketch follows the list below. Red Hat recommends backing up the following node host files and directories:
/opt/rh/ruby193/root/etc/mcollective
/etc/passwd
/var/lib/openshift
/etc/openshift
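A minimal tar-based sketch of such a backup, assuming archives are written to a mounted /backup volume (the destination path is illustrative):
# tar czf /backup/node-$(hostname)-$(date +%F).tar.gz --selinux --xattrs /opt/rh/ruby193/root/etc/mcollective /etc/passwd /var/lib/openshift /etc/openshift
The --selinux and --xattrs options preserve the SELinux contexts and extended attributes that gear recovery depends on.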
Important
Backing up the /var/lib/openshift directory is paramount to recovering a node host, including head gears of scaled applications, which contain data that cannot be recreated. If this directory is recoverable, then it is possible to recreate a node from the existing data. Red Hat recommends this directory be backed up on a separate volume from the root file system, preferably on a Storage Area Network.
Even though applications on OpenShift Enterprise are stateless by default, developers can also use persistent storage for stateful applications by placing files in their $OPENSHIFT_DATA_DIR directory. See the OpenShift Enterprise User Guide for more information.
For stateless applications, you can write cron scripts to clean up these hosts. For stateful applications, Red Hat recommends keeping the state on a separate shared storage volume. This ensures the quick recovery of a node host in the event of a failure.
2.10.3. Recovering Failed Node Hosts
Important
Recovery of a failed node host is only possible if the gear data is intact, particularly the /var/lib/openshift directory. See Section 2.10.2, “Backing Up Node Host Files” for more information.
A node host can only be recovered if the /var/lib/openshift gear directory had fault tolerance and can be restored. SELinux contexts must be preserved with the gear directory in order for recovery to succeed. Note this scenario rarely occurs, especially when node hosts are virtual machines in a fault-tolerant infrastructure rather than physical machines. Note that scaled applications cannot be recovered onto a node host with a different IP address than the original node host.
Procedure 2.7. To Recover a Failed Node Host:
- Create a node host with the same host name and IP address as the one that failed.
- The host name DNS A record can be adjusted if the IP address must be different. However, note that the application CNAME and database records all point to the host name and cannot be easily changed.
- Ensure the ruby193-mcollective service is not running on the new node host:
# service ruby193-mcollective stop
- Copy all the configuration files in the /etc/openshift directory from the failed node host to the new node host and ensure that the gear profile is the same.
- Attach and mount the backup to /var/lib/openshift, ensuring the usrquota mount option is used:
# echo "/dev/path/to/backup/partition /var/lib/openshift/ ext4 defaults,usrquota 0 0" >> /etc/fstab
# mount -a
- Reinstate quotas on the /var/lib/openshift directory:
# quotacheck -cmug /var/lib/openshift
# restorecon /var/lib/openshift/aquota.user
# quotaon /var/lib/openshift
- Run the oo-admin-regenerate-gear-metadata tool, available starting in OpenShift Enterprise 2.1.6, on the new node host to replace and recover the failed gear data. This tool browses each existing gear on the gear data volume, ensures it has the correct entries in certain files, and, if necessary, performs any fixes:
# oo-admin-regenerate-gear-metadata
This script attempts to regenerate gear entries for:
  * /etc/passwd
  * /etc/shadow
  * /etc/group
  * /etc/cgrules.conf
  * /etc/cgconfig.conf
  * /etc/security/limits.d
Proceed? [yes/NO]: yes
The oo-admin-regenerate-gear-metadata tool will not make any changes unless it notices any missing entries. Note that this tool can be added to a node host deployment script.
Alternatively, if you are using OpenShift Enterprise 2.1.5 or earlier, replace the /etc/passwd file on the new node host with the content from the original, failed node host. If this backup file was lost, see Section 2.10.4, “Recreating /etc/passwd Entries” for instructions on recreating the /etc/passwd file.
- When the oo-admin-regenerate-gear-metadata tool completes, it runs the oo-accept-node command and reports the output:
Running oo-accept-node to check node consistency...
...
FAIL: user 54fe156faf1c09b9a900006f does not have quotas imposed. This can be addressed by running: oo-devel-node set-quota --with-container-uuid 54fe156faf1c09b9a900006f --blocks 2097152 --inodes 80000
If there are any quota errors, run the suggested quota command, then run the oo-accept-node command again to ensure the problem has been resolved:
# oo-devel-node set-quota --with-container-uuid 54fe156faf1c09b9a900006f --blocks 2097152 --inodes 80000
# oo-accept-node
- Reboot the new node host to activate all changes, start the gears, and allow MCollective and other services to run.
2.10.4. Recreating /etc/passwd Entries
You can recreate the /etc/passwd entries for all gears if this backup file was lost.
Note
Starting with OpenShift Enterprise 2.1.6, you can instead run the oo-admin-regenerate-gear-metadata tool on a node host to replace and recover the failed gear data, including /etc/passwd entries.
Procedure 2.8. To Recreate /etc/passwd Entries:
- Get a list of UUIDs from the directories in /var/lib/openshift.
- For each UUID, ensure the UNIX UID and GID values correspond to the group ID of the /var/lib/openshift/UUID directory. See the fourth value in the output from the following command:
# ls -d -n /var/lib/openshift/UUID
- Create the corresponding entries in /etc/passwd, using another node's /etc/passwd file for reference (a sketch of a typical entry follows this procedure).
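A sketch of what a recreated entry might look like, assuming a gear UUID of c13aca229215491693202f6ffca1f84a with UID and GID 1001 (the GECOS field and login shell shown here are typical for OpenShift Enterprise gears, but should be confirmed against a working node's /etc/passwd):
c13aca229215491693202f6ffca1f84a:x:1001:1001:OpenShift guest:/var/lib/openshift/c13aca229215491693202f6ffca1f84a:/usr/bin/oo-trap-user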
2.11. Component Timeout Value Locations
You may need to increase timeout values for various OpenShift Enterprise components in situations such as the following:
- When a custom cartridge is taking a long time to be added to a gear.
- When network latency is forcing requests to take longer than usual.
- When a high load on the system is causing actions to take longer than usual.
Table 2.2. Timeout Information for Various Components
Type | Location | File | Directive |
---|---|---|---|
MCollective | Broker | /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective.conf | MCOLLECTIVE_TIMEOUT=240 |
MCollective | Node | /opt/rh/ruby193/root/usr/libexec/mcollective/mcollective/agent/openshift.ddl | :timeout => 360 |
MCollective Client | Broker | /opt/rh/ruby193/root/etc/mcollective/client.cfg | plugin.activemq.heartbeat_interval = 30 |
Node Discovery | Broker | /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective.conf | MCOLLECTIVE_DISCTIMEOUT=5 |
Facts | Broker | /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective.conf | MCOLLECTIVE_FACT_TIMEOUT=10 |
Facts | Node | /opt/rh/ruby193/root/usr/libexec/mcollective/mcollective/agent/rpcutil.rb | :timeout => 10 |
Apache | Broker | /etc/httpd/conf.d/000002_openshift_origin_broker_proxy.conf | ProxyTimeout 300 |
Apache | Node | /etc/httpd/conf.d/000001_openshift_origin_node.conf | ProxyTimeout 300 |
RHC | Client | ~/.openshift/express.conf | timeout=300 |
Background Thread | Broker | /etc/openshift/console.conf | BACKGROUND_REQUEST_TIMEOUT=30 |
Warning
Changes to the /opt/rh/ruby193/root/usr/libexec/mcollective/mcollective/agent/openshift.ddl and /opt/rh/ruby193/root/usr/libexec/mcollective/mcollective/agent/rpcutil.rb files are unsupported and may be erased by a yum update.
- MCollective
- The MCollective timeout is configured on the broker, and is used for MCollective messages being sent from the broker to the node. If the message is lost after it is sent, or the node takes longer than expected to complete a request, this timeout will be hit.
- MCollective Client
- The MCollective client timeout is used to ensure that you have a valid and active connection to your messaging broker. Lowering the defined amount causes a quicker switch to a redundant system in the event of a failure.
- Node Discovery
- The node discovery timeout represents the allowed amount of time a node takes to acknowledge itself in the environment, instead of broadcasting to all nodes. This method of discovery is generally used in non-direct calls to the nodes, for example when an application is created, when some administration commands are used, and when some SSH key operations are performed.
- Facts
- The Facts timeout is configured on both the broker and node, and determines the allowed amount of time for a fact to be gathered from a node through MCollective. For example, when an application is created, the node profile fact helps determine which node will perform the action. Facts are gathered often, so this timeout is short.
- Apache
- The Apache timeout is configured on the broker and node, and represents the timeout of proxy requests. This affects most requests, as they go through a proxy on both the broker and on the node. The ProxyTimeout on the broker affects requests to the broker API and rhc. If the timeout is exceeded due to lengthy requests, the client will receive an uninformative HTTP 502 error, even though the request may have succeeded. The ProxyTimeout on a node affects requests to hosted applications.
- RHC
- The rhc timeout represents the allowed amount of time that the client tools will wait for a request to be completed before ceasing the attempt. This only has to be configured on the client where rhc is run. If an action is taking longer to complete than expected, this timeout will be hit.
- Background Thread
- The background thread timeout is found on the broker, and determines how long the console waits for requests to the broker to complete before ceasing the attempt. This communication is impacted by the number of applications, domains, and gears an application developer has access to, as well as the locations of the datacenters that make up the OpenShift Enterprise deployment.
2.12. Enabling Network Isolation for Gears
By default, applications running in gears can bind to localhost as well as IP addresses belonging to other gears on the node, allowing users access to unprotected network resources running in another user's gear. To prevent this, starting with OpenShift Enterprise 2.2 the oo-gear-firewall command is invoked by default at installation when using the oo-install installation utility or the installation scripts. It must be invoked explicitly on each node host during manual installations.
Note
The oo-gear-firewall command is available in OpenShift Enterprise 2.1 starting with release 2.1.9.
The oo-gear-firewall command configures nodes with firewall rules using the iptables command and SELinux policies using the semanage command to prevent gears from binding or connecting on IP addresses that belong to other gears.
The oo-gear-firewall command creates static sets of rules and policies to isolate all possible gears in the range. The UID range must be the same across all hosts in a gear profile. By default, the range used by the oo-gear-firewall command is taken from existing district settings if known, or 1000 through 6999 if unknown. The tool can be re-run to apply rules and policies for an updated UID range if the range is changed later.
# oo-gear-firewall -i enable -s enable
# oo-gear-firewall -i enable -s enable -b District_Beginning_UID -e District_Ending_UID
Chapter 3. User Administration
3.1. Creating a User
The oo-admin-ctl-user command can be used with the -c or --create option to create new user accounts for the OpenShift Enterprise environment. The command creates a user record in MongoDB and, when used with different options, allows different capabilities to be set for specific users, overriding the default settings in the /etc/openshift/broker.conf file.
The oo-admin-ctl-user command does not set up authentication credentials. OpenShift Enterprise allows you to choose from a variety of authentication mechanisms and separates the concept of the user record that it stores in MongoDB from the user credentials that are stored by your chosen authentication mechanism. See the OpenShift Enterprise Deployment Guide [1] for more information on configuring user authentication on the broker.
# oo-admin-ctl-user -c -l Username
# oo-admin-ctl-user -c -f File_Name
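For the -f form, the file is a plain-text list of logins; a hypothetical example, assuming one login per line:
# cat File_Name
jdoe
asmith
bjones
# oo-admin-ctl-user -c -f File_Name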
3.2. Removing User Applications
Use the oo-admin-ctl-app command to remove a user's application.
Procedure 3.1. To Remove a User Application:
- Stop the application by running the following command on the broker host:
# oo-admin-ctl-app -l username -a appname -c stop
- Delete the application:
# oo-admin-ctl-app -l username -a appname -c destroy
- If the standard stop and destroy commands fail, you can force-stop and force-destroy the application. The force- commands do not wait for the proper shutdown sequence, so they should only be used if the standard commands fail:
# oo-admin-ctl-app -l username -a appname -c force-stop
# oo-admin-ctl-app -l username -a appname -c force-destroy
3.3. Removing User Data
Procedure 3.2. To Remove User Data:
- Prevent the user from creating more gears by running the following command on the broker host:
# oo-admin-ctl-user -l username --setmaxgears 0
- Retrieve the user's domain and application names:
# oo-admin-ctl-domain -l username | egrep -i '^name:|^Namespace:'
- Remove the user's applications by running the following commands for each application found in the previous step:
# oo-admin-ctl-app -l username -a app1 -c stop
# oo-admin-ctl-app -l username -a app1 -c destroy
Use the force-destroy parameter to remove particularly troublesome applications:
# oo-admin-ctl-app -l username -a app1 -c force-destroy
- Delete the user's domain:
# oo-admin-ctl-domain -l username -c delete -n testdomain
To later restore the user's ability to create gears, set the maximum number of gears to the desired value. Note that use of the --setmaxgears option may be restricted based on the user's configuration settings:
# oo-admin-ctl-user -l username --setmaxgears 5
3.4. Removing a User
Use the oo-admin-ctl-domain command to remove a user from an OpenShift Enterprise environment:
# oo-admin-ctl-domain -l username -c delete
Note
The oo-admin-ctl-domain command deletes the user from the OpenShift Enterprise datastore, but does not delete user credentials stored on external databases such as LDAP or Kerberos.
3.5. Enabling Users to Add a Kerberos Principal SSH Key
The VALID_SSH_KEY_TYPES option in the /etc/openshift/broker.conf file contains a list of supported SSH key types. If VALID_SSH_KEY_TYPES is unspecified, all supported types are allowed.
If the k5login_directory option is used in the /etc/krb5.conf file, ensure SSHD can read the specified directory. For SELinux, the default context might need to be modified, as in the following example:
$ semanage fcontext -a -t krb5_home_t "/Path/To/File(/.*)?"
$ restorecon -R -v /Path/To/File
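For example, to restrict logins to RSA keys and Kerberos principals, the setting might look like the following (check the commented defaults in your broker.conf for the exact type names supported by your release):
VALID_SSH_KEY_TYPES="ssh-rsa,krb5-principal"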
3.6. Setting Default Maximum Number of Domains per User
Edit the DEFAULT_MAX_DOMAINS setting in the /etc/openshift/broker.conf file on the broker host to configure the default maximum number of domains that can be created per user:
DEFAULT_MAX_DOMAINS="5"
3.7. Managing Custom Domain Aliases
By default, developers cannot create custom domain aliases such as app.example.com or my-app.example.com for an application that was created in the cloud domain example.com. This restriction prevents confusion or possible name collisions.
Enabling the ALLOW_ALIAS_IN_DOMAIN setting in the /etc/openshift/broker.conf file on the broker host allows developers to create aliases within the cloud domain, provided the alias does not take the form <name>-<name>.<cloud-domain>. Aliases taking this standard form of application names are rejected to prevent conflicts. For example, while a developer could now create the alias app.example.com for an application that was created in the cloud domain example.com, they still could not create the alias my-app.example.com because it takes the standard form.
Important
When the ALLOW_ALIAS_IN_DOMAIN setting is enabled, only standard name collisions are prevented. Collisions with high-availability application names are not prevented, which, should they occur on the same node host, could result in traffic being routed to the wrong gear on the node host. OpenShift Enterprise still does not create a DNS entry for the alias; that is an external step.
Procedure 3.3. To Allow Custom Domain Aliases in the Cloud Domain:
- Edit the /etc/openshift/broker.conf file on the broker host and set the ALLOW_ALIAS_IN_DOMAIN setting to "true":
ALLOW_ALIAS_IN_DOMAIN="true"
- Restart the broker service:
# service openshift-broker restart
3.8. Determining Gear Ownership
You can list the contents of the /var/lib/openshift/.httpd.d/ directory to view the operational directories for gears. These directories have the format UUID_domain_appname. For example, the following command shows a gear with an application named chess in the domain games:
Example 3.1. Listing the Contents of the /var/lib/openshift/.httpd.d/ Directory
# ls /var/lib/openshift/.httpd.d/
c13aca229215491693202f6ffca1f84a_games_chess
Chapter 4. Team and Global Team Management
Table 4.1. Teams and Global Teams
Team Types | Owner | Use | Conditions |
---|---|---|---|
Team | Developer | To collaborate on an application. | Each team name must be unique within a domain. |
Global team | Administrator | To reuse existing group definitions for user management, such as LDAP groups. | Each global team name must be unique. |
4.1. Setting the Maximum Number of Teams for Specific Users
# oo-admin-ctl-user -l username --setmaxteams No_of_Teams
The default maximum number of teams a user can create is 0. Edit the DEFAULT_MAX_TEAMS setting located in the /etc/openshift/broker.conf file to change the default for any new users created after the setting has been modified. Restart the broker service for the changes to take effect.
4.2. Creating Global Teams and Synchronizing with LDAP Groups
Note
See the oo-admin-ctl-team command man pages for detailed descriptions of each command shown in the following instructions.
Procedure 4.1. To Synchronize a Global Team with LDAP Groups:
- Create an LDAP configuration file in the /etc/openshift/ directory. This file specifies how your instance will connect to the LDAP server and query for LDAP groups and group membership.
- Create one or more global teams. If you are not using LDAP groups, then the --maps-to option can be specified as anything:
# oo-admin-ctl-team -c create --name Team_Name --maps-to cn=all,ou=Groups,dc=example,dc=com
Alternatively, you can create a global team straight from LDAP groups using the --groups option. In this case, you must indicate your LDAP configuration file and the LDAP groups to create the global team from:
# oo-admin-ctl-team --config-file /etc/openshift/File_Name.yml -c create --groups Group_Name1,Group_Name2
Example 4.1. Sample LDAP Configuration File
Host: server.example.com
Port: 389
Get-Group:
  Base: dc=example,dc=com
  Filter: (cn=<group_cn>)
Get-Group-Users:
  Base: <group_dn>
  Attributes: [member]
Get-User:
  Base: dc=example,dc=com
  Filter: (uid=<user_id>)
  Attributes: [emailAddress]
Openshift-Username: emailAddress
Example 4.2. Sample Active Directory Based LDAP Configuration File
Host: server.example.com
Port: 389
Username: CN=username.gen,OU=Generics,OU=Company Users,DC=company,DC=com
Password: xxxxxxxxxxxxxx
#get group entry so we can map team to the group distinguished name
Get-Group:
  Base: dc=example,dc=com
  Filter: (cn=<group_cn>)
#get all the users in the group
Get-Group-Users:
  Base: <group_dn>
  Filter: (memberOf=<group_dn>)
  Attributes: [emailaddress]
Openshift-Username: emailaddress
- Next, synchronize global team membership with LDAP:
# oo-admin-ctl-team --config-file /etc/openshift/File_Name.yml -c sync --create-new-users --remove-old-users
This step can be performed in a cron job in order to regularly synchronize OpenShift Enterprise with LDAP (see the sketch after this procedure).
Alternatively, use a sync file to synchronize global team membership with LDAP with the following command:
# oo-admin-ctl-team --config-file /etc/openshift/File_Name.yml -c sync-to-file --out-file teams.sync --create-new-users --remove-old-users
This command creates a file you can modify to suit your requirements. The format is the entity to act upon, an action, then the user names. The following example sync file adds users to an OpenShift Enterprise instance, then adds them as members to the team named "myteam".
Example 4.3. Synchronizing Global Team Membership with a Sync File
USER|ADD|user1
...
USER|ADD|user100
MEMBER|ADD|myteam|user1,...,user100
Alternatively, create this file from any source and sync team members from the specified file with the following command:
# oo-admin-ctl-team -c sync-from-file --in-file teams.sync
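A minimal cron sketch for the regular synchronization mentioned in step 3, reusing the configuration file path from the examples above (the schedule and file location are illustrative):
# cat /etc/cron.d/openshift-team-sync
0 2 * * * root oo-admin-ctl-team --config-file /etc/openshift/File_Name.yml -c sync --create-new-users --remove-old-users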
4.2.1. Encrypting an LDAP Global Team Connection
You can encrypt the global team LDAP connection using simple_tls by specifying an Encryption setting in the .yml file. This encrypts any communication between the LDAP client and server and is only intended for instances where the LDAP server is a trusted source. simple_tls encryption establishes SSL/TLS encryption with the LDAP server before any LDAP protocol data is exchanged, meaning that no validation of the LDAP server's SSL certificate is performed. Therefore, no errors are reported if the SSL certificate of the client is not trusted. If you have communication errors, see your LDAP server administrator.
Edit the /etc/openshift/File_Name.yml file and replace its contents with the following:
Host: server.example.com
Port: 636
Encryption: simple_tls
Get-Group:
  Base: dc=example,dc=com
  Filter: (cn=<group_cn>)
Get-Group-Users:
  Base: <group_dn>
  Attributes: [member]
Get-User:
  Base: dc=example,dc=com
  Filter: (uid=<user_id>)
  Attributes: [emailAddress]
Openshift-Username: emailAddress
Note that an LDAP server cannot handle both plain and simple_tls connections on the same port.
4.2.2. Enabling Global Team Visibility
Set the following variable in the /etc/openshift/broker.conf file to "true":
DEFAULT_VIEW_GLOBAL_TEAMS = "true"
# service openshift-broker restart
All new developer accounts that are created in the future will have the ability to search and view global teams.
Enable the ability to view and search global teams for existing accounts with the following command:
$ oo-admin-ctl-user -l username --allowviewglobalteams true
Disable this capability by changing the --allowviewglobalteams option to false.
Chapter 5. Cartridge Management
5.1. Managing Cartridges on Broker Hosts
Important
The cartridge management method described in this section, which uses the oo-admin-ctl-cartridge command, is only applicable to OpenShift Enterprise 2.1 and later.
To better understand cartridge management on broker hosts, including required tasks such as importing and activating cartridges, it is important to note the distinction between software versions and cartridge versions in cartridge manifests.
The software version in a manifest identifies the version of the primary software provided by the cartridge, for example Ruby 1.8 with the cartridge name ruby-1.8, and Ruby 1.9 with the cartridge name ruby-1.9.
The cartridge version identifies a revision of the cartridge itself. For example, if an update advances a cartridge from version 0.0.17 to 0.0.18, the ruby-1.8 cartridge name would therefore have two cartridge versions (0.0.17 and 0.0.18), and the ruby-1.9 cartridge would also have two cartridge versions (0.0.17 and 0.0.18).
After manifests have been imported on the broker, you can designate cartridges as either active or inactive. The active cartridge represents the cartridge, based on an imported manifest, that is made available to developers for creating new applications or adding to existing applications. Any inactive cartridges cannot be deployed as new cartridges by developers. Cartridges can be activated automatically when importing the latest manifests from nodes or activated and deactivated manually at any time.
5.1.1. Importing, Activating, and Deactivating Cartridges
Import, activate, and deactivate cartridges on the broker host using the oo-admin-ctl-cartridge command. Running the oo-admin-ctl-cartridge command with the -c import-profile option imports the latest manifests for all cartridges installed on a randomly selected node for each gear profile. Importing the latest manifests includes manifests for both newly installed cartridges as well as newly updated cartridges that may have older manifests that were previously imported. Add the --activate option to activate the imported cartridges at the same time:
# oo-admin-ctl-cartridge -c import-profile --activate
To import and activate a single cartridge manifest from a URL:
# oo-admin-ctl-cartridge -c import --url URL_to_Cartridge_Manifest --activate
After manifests have been imported, you can activate and deactivate cartridges manually using their cartridge name. Running the oo-admin-ctl-cartridge command with the -c list option lists all currently imported cartridges and the timestamp of each import. Active cartridges are identified with an asterisk.
Example 5.1. Listing Imported Cartridges
# oo-admin-ctl-cartridge -c list
* cron-1.4 plugin Cron 1.4 2014/06/16 22:09:55 UTC
* jenkins-client-1 plugin Jenkins Client 2014/06/16 22:09:55 UTC
mongodb-2.4 service MongoDB 2.4 2014/06/16 22:09:55 UTC
* mysql-5.1 service MySQL 5.1 2014/06/16 22:09:55 UTC
* mysql-5.5 service MySQL 5.5 2014/06/16 22:09:55 UTC
ruby-1.8 web Ruby 1.8 2014/06/16 22:09:55 UTC
* ruby-1.9 web Ruby 1.9 2014/06/16 22:09:55 UTC
* haproxy-1.4 web_proxy Web Load Balancer 2014/06/16 22:09:55 UTC
# oo-admin-ctl-cartridge -c activate --name Cart_Name1,Cart_Name2,Cart_Name3
# oo-admin-ctl-cartridge -c deactivate --name Cart_Name1,Cart_Name2,Cart_Name3
Whenever a new manifest is imported, a record is created in the MongoDB datastore noting the cartridge name, the timestamp of the import, and a unique cartridge ID. Cartridge IDs are alphanumeric strings used to identify a cartridge based on an imported manifest and timestamp of the import. Therefore, a single cartridge name can be associated with multiple cartridge IDs.
For more information on these options, see the man page for oo-admin-ctl-cartridge.
5.1.2. Migrating and Upgrading Existing Applications to Active Cartridges
Existing applications using inactive cartridges continue to use the inactive versions when adding new gears, for example, during scaling operations. Run the following command on the broker host to allow these applications to instead use the currently active cartridges, if active versions are available, when adding new gears:
# oo-admin-ctl-cartridge -c migrate
Note
If the command fails with an exit code of 2, wait a few minutes for all applications to finish using the cartridges, then run the command again until it completes successfully.
You can use the oo-admin-upgrade command on the broker host to upgrade existing application gears that are currently using inactive cartridges to instead use active cartridges. The most common scenario that requires this cartridge upgrade process is when applying certain asynchronous errata updates. See the OpenShift Enterprise Deployment Guide for instructions on running the oo-admin-upgrade command when applying these types of errata updates.
The oo-admin-upgrade command can also be used to upgrade existing application gears that are using inactive versions of custom, community, and downloadable cartridges. See Section 5.3, “Upgrading Custom and Community Cartridges” for more information.
5.1.3. Removing Unused Inactive Cartridges
Remove unused inactive cartridges by running the oo-admin-ctl-cartridge command with the -c clean option on the broker. This command returns a list of the unused inactive cartridges that were removed, but also lists any inactive cartridges that were not removed because they were still in use by an application. Inactive cartridges that were not removed are shown on lines starting with a # symbol; the number of applications still using the cartridge is shown at the end of the same line.
Example 5.2. Listing Imported Cartridges And Removing Unused Inactive Cartridges
# oo-admin-ctl-cartridge -c list
* cron-1.4 plugin Cron 1.4 2014/06/16 22:09:55 UTC
* jenkins-client-1 plugin Jenkins Client 2014/06/16 22:09:55 UTC
mongodb-2.4 service MongoDB 2.4 2014/06/16 22:09:55 UTC
* mysql-5.1 service MySQL 5.1 2014/06/16 22:09:55 UTC
* mysql-5.5 service MySQL 5.5 2014/06/16 22:09:55 UTC
ruby-1.8 web Ruby 1.8 2014/06/16 22:09:55 UTC
* ruby-1.9 web Ruby 1.9 2014/06/16 22:09:55 UTC
* haproxy-1.4 web_proxy Web Load Balancer 2014/06/16 22:09:55 UTC
# oo-admin-ctl-cartridge -c clean
Deleting all unused cartridges from the broker ...
539f6b336892dff17900000f ruby-1.8
# 539f6b336892dff179000012 mongodb-2.4 1
In the example above, the mongodb-2.4 and ruby-1.8 cartridges were both inactive. The ruby-1.8 cartridge was successfully removed; however, the mongodb-2.4 cartridge was not, because it was still in use by one application. Listing the imported cartridges again confirms the removal of only the ruby-1.8 cartridge:
Example 5.3. Listing Imported Cartridges After Removing Unused Inactive Cartridges
# oo-admin-ctl-cartridge -c list
* cron-1.4 plugin Cron 1.4 2014/06/16 22:09:55 UTC
* jenkins-client-1 plugin Jenkins Client 2014/06/16 22:09:55 UTC
mongodb-2.4 service MongoDB 2.4 2014/06/16 22:09:55 UTC
* mysql-5.1 service MySQL 5.1 2014/06/16 22:09:55 UTC
* mysql-5.5 service MySQL 5.5 2014/06/16 22:09:55 UTC
* ruby-1.9 web Ruby 1.9 2014/06/16 22:09:55 UTC
* haproxy-1.4 web_proxy Web Load Balancer 2014/06/16 22:09:55 UTC
5.2. Installing and Removing Custom and Community Cartridges
Table 5.1. Cartridge Types
Type | Description | Red Hat Supported? |
---|---|---|
Standard cartridges | These cartridges are shipped with OpenShift Enterprise. | Yes. Requires base OpenShift Enterprise entitlement. |
Premium cartridges | These cartridges are shipped with OpenShift Enterprise. | Yes. Requires premium add-on OpenShift Enterprise entitlement. |
Custom cartridges | These cartridges are developed by users and can be based on other cartridges. See the OpenShift Enterprise Cartridge Specification Guide for more information on creating custom cartridges. | No. |
Community cartridges | These cartridges are contributed by the community. See the OpenShift Origin Index at http://origin.ly to browse and search for many community cartridges. | No. |
Partner cartridges | These cartridges are developed by third-party partners. | No, but can possibly be directly supported by the third-party developer. |
Note
Custom and community cartridges are installed locally on your OpenShift Enterprise deployment and appear as cartridge options for developers when using the Management Console or client tools. However, installing custom or community cartridges locally as an administrator is not to be confused with developers using downloadable cartridges, which are custom or community cartridges that are hosted externally. See the OpenShift Enterprise User Guide for more information on developers using downloadable cartridges in applications:
To use custom or community cartridges in any release of OpenShift Enterprise 2, you must install the cartridges from a source directory using the oo-admin-cartridge command on each node host. In OpenShift Enterprise 2.1 and later, you must then import the newly installed cartridge manifests on the broker using the oo-admin-ctl-cartridge command before the cartridges are usable in applications.
Procedure 5.1. To Install Custom or Community Cartridges:
- Run the following command on each node host, specifying the source directory of the custom or community cartridge to install:
# oo-admin-cartridge --action install --source /path/to/cartridge/
- Verify that the list of installed cartridges on each node host is updated with the newly added custom or community cartridge:
Example 5.4. Listing Installed Cartridges
# oo-admin-cartridge --list
(redhat, jenkins-client, 1.4, 0.0.1)
(redhat, haproxy, 1.4, 0.0.1)
(redhat, jenkins, 1.4, 0.0.1)
(redhat, mock, 0.1, 0.0.1)
(redhat, tomcat, 8.0, 0.0.1)
(redhat, cron, 1.4, 0.0.1)
(redhat, php, 5.3, 0.0.1)
(myvendor, mycart, 1.1, 0.0.1)
(redhat, ruby, 1.9, 0.0.1)
(redhat, perl, 5.10, 0.0.1)
(redhat, diy, 0.1, 0.0.1)
(redhat, mysql, 5.1, 0.2.0)
This command displays the vendor name, cartridge name, software version, and cartridge version of each installed cartridge.
- Restart the MCollective service on each node host:
# service ruby193-mcollective restart
- Update the cartridge lists on the broker. For releases prior to OpenShift Enterprise 2.1, run the following command on the broker host to clear the broker cache and, if installed, the Management Console cache:
# oo-admin-broker-cache --clear --console
For OpenShift Enterprise 2.1 and later, run the following commands on the broker host to import and activate the latest cartridges from the nodes and, if installed, clear the Management Console cache:
# oo-admin-ctl-cartridge -c import-profile --activate
# oo-admin-console-cache --clear
You can also use the oo-admin-cartridge command to remove cartridges from the cartridge repositories on a node host. Cartridges should only be removed from cartridge repositories after they are no longer in use by any existing applications. When removing a cartridge, ensure the same cartridge is removed from each node host.
Procedure 5.2. To Remove Custom and Community Cartridges:
- For OpenShift Enterprise 2.1 and later, deactivate the cartridge to be removed by running the following command on the broker host:
# oo-admin-ctl-cartridge -c deactivate --name Cart_Name
Deactivating the cartridge ensures it can no longer be used by developers in new applications or as add-on cartridges to existing applications. This step is not applicable for releases prior to OpenShift Enterprise 2.1.
- List the installed cartridges by running the following command on each node host:
# oo-admin-cartridge --list
Identify in the output the cartridge name, software version, and cartridge version of the cartridge to be removed.
- Remove the cartridge from the cartridge repository by running the following command on each node host with the cartridge information identified in the previous step:
# oo-admin-cartridge --action erase --name Cart_Name --version Software_Version_Number --cartridge_version Cart_Version_Number
- Update the relevant cartridge lists. For releases prior to OpenShift Enterprise 2.1, clear the cache for the broker and, if installed, the Management Console by running the following command on the broker host:
# oo-admin-broker-cache --clear --console
For OpenShift Enterprise 2.1 and later, clear the cache for only the Management Console, if installed, by running the following command on the broker host:
# oo-admin-console-cache --clear
5.3. Upgrading Custom and Community Cartridges
The oo-admin-upgrade command on the broker host provides the command line interface for the upgrade system and can upgrade all the gears in an OpenShift Enterprise environment, all the gears on a node, or a single gear. This command queries the OpenShift Enterprise broker to determine the locations of the gears to migrate and uses MCollective calls to trigger the upgrade for a gear.
Upgrade Process Overview
- Load the gear upgrade extension, if configured.
- Inspect the gear state.
- Run the gear extension's pre-upgrade script, if it exists.
- Compute the upgrade itinerary for the gear.
- If the itinerary contains an incompatible upgrade, stop the gear.
- Upgrade the cartridges in the gear according to the itinerary.
- Run the gear extension's post-upgrade script, if it exists.
- If the itinerary contains an incompatible upgrade, restart and validate the gear.
- Clean up after the upgrade by deleting pre-upgrade state and upgrade metadata.
The oo-admin-upgrade command can perform the following tasks, as described by the oo-admin-upgrade help command:
oo-admin-upgrade archive
- Archives existing upgrade data in order to begin a completely new upgrade attempt.
oo-admin-upgrade help <task>
- List available tasks or describe the designated task and its options.
oo-admin-upgrade upgrade-gear --app-name=<app_name>
- Upgrades only the specified gear.
oo-admin-upgrade upgrade-node --version=<version>
- Upgrades all gears on one or all nodes.
Important
Do not use the oo-admin-upgrade upgrade-from-file task. The help output of the oo-admin-upgrade command does list upgrade-from-file as a valid task; however, it is not meant for direct use by an administrator and can invalidate an upgrade process.
5.4. Adding QuickStarts to the Management Console
Edit the /etc/openshift/quickstarts.json file on the broker host and add entries for one or more QuickStart configurations. The following shows the basic format of a /etc/openshift/quickstarts.json file with two QuickStarts using some common parameters:
[
  {"quickstart": {
    "id":"QuickStart1_ID",
    "name":"QuickStart1_Name",
    "website":"QuickStart1_Website",
    "initial_git_url":"QuickStart1_Location_URL",
    "cartridges":["Cart_Name"],
    "summary":"QuickStart1_Description",
    "tags":["Tags"],
    "admin_tags":["Tags"]
  }},
  {"quickstart": {
    "id":"QuickStart2_ID",
    "name":"QuickStart2_Name",
    "website":"QuickStart2_Website",
    "initial_git_url":"QuickStart2_Location_URL",
    "cartridges":["Cart_Name"],
    "summary":"QuickStart2_Description",
    "tags":["Tags"],
    "admin_tags":["Tags"]
  }}
]
"cartridges"
parameter of a QuickStart configuration are available to developers in your OpenShift Enterprise instance. These can be cartridges local to your instance or downloadable cartridges. If the web framework cartridge required by a QuickStart is unavailable, developers are unable to create applications using the QuickStart, even if the QuickStart appears as an option in the Management Console. See the OpenShift Enterprise Deployment Guide for information on installing cartridges:
For example, the following shows a /etc/openshift/quickstarts.json entry for a Django QuickStart, which requires the python-2.7 cartridge:
Example 5.5. /etc/openshift/quickstarts.json File with a Django QuickStart Entry
[
  {"quickstart": {
    "id":"2",
    "name":"Django",
    "website":"https://www.djangoproject.com/",
    "initial_git_url":"git://github.com/openshift/django-example.git",
    "cartridges":["python-2.7"],
    "summary":"A high-level Python web framework that encourages rapid development and clean, pragmatic design. Administrator user name and password are written to $OPENSHIFT_DATA_DIR/CREDENTIALS.",
    "tags":["python","django","framework"],
    "admin_tags":[]
  }}
]
After modifying the /etc/openshift/quickstarts.json file, clear the Management Console cache to ensure the QuickStart appears immediately for developers. For releases prior to OpenShift Enterprise 2.1, run the following command on the broker host:
# oo-admin-broker-cache --clear --console
For OpenShift Enterprise 2.1 and later, run the following command on the broker host:
# oo-admin-console-cache --clear
5.5. Disabling Downloadable Cartridges
The DOWNLOAD_CARTRIDGES_ENABLED setting, located in the /etc/openshift/broker.conf file, is set to true by default. Set it to false to disable the ability to use downloadable cartridges.
Procedure 5.3. To Disable Downloadable Cartridges:
- Open the /etc/openshift/broker.conf file on the broker host and set the DOWNLOAD_CARTRIDGES_ENABLED value to false:
DOWNLOAD_CARTRIDGES_ENABLED="false"
- Restart the openshift-broker service:
# service openshift-broker restart
5.6. Disabling Obsolete Cartridges
Procedure 5.4. To Disable Obsolete Cartridges:
- Ensure the ALLOW_OBSOLETE_CARTRIDGES parameter in the /etc/openshift/broker.conf file on the broker host is set to false:
ALLOW_OBSOLETE_CARTRIDGES="false"
- Add the Obsolete: true parameter to the /usr/libexec/openshift/cartridges/Cart_Name/metadata/manifest.yml file on each node host for any cartridge being marked obsolete:
Obsolete: true
- Restart the MCollective service on each node host:
# service ruby193-mcollective restart
- Update the cartridge lists on the broker. For releases prior to OpenShift Enterprise 2.1, run the following command on the broker host to clear the broker cache and, if installed, the Management Console cache:
# oo-admin-broker-cache --clear --console
For OpenShift Enterprise 2.1 and later, run the following commands on the broker host to import the latest cartridge manifests from the nodes and, if installed, clear the Management Console cache:
# oo-admin-ctl-cartridge -c import-profile
# oo-admin-console-cache --clear
- Restart the broker service:
# service openshift-broker restart
Chapter 6. Resource Management
6.1. Adding or Modifying Gear Profiles
Adding or modifying a gear profile in an OpenShift Enterprise deployment requires three main tasks:
- Define the new gear profile on the node host.
- Update the list of valid gear sizes on the broker host.
- Grant users access to the new gear size.
Procedure 6.1. To Define a New Gear Profile:
The default gear profile installed on a node host is small. Edit the /etc/openshift/resource_limits.conf file on the node host to define a new gear profile.
Note
Example resource_limits.conf files based on other gear profile and host type configurations are included in the /etc/openshift/ directory on nodes. For example, files for medium and large example profiles are included, as well as an xpaas profile for use on nodes hosting xPaaS cartridges. These files are available as a reference or can be used to copy over the existing /etc/openshift/resource_limits.conf file.
- Edit the /etc/openshift/resource_limits.conf file on the node host and modify its parameters to your desired specifications. See the file's commented lines for information on available parameters (a sketch of commonly adjusted parameters follows this procedure).
- Modify the node_profile parameter to set a new name for the gear profile, if desired.
- Restart the ruby193-mcollective service on the node host:
# service ruby193-mcollective restart
- If Traffic Control is enabled in the /etc/openshift/node.conf file, run the following command to apply any bandwidth setting changes:
# oo-admin-ctl-tc restart
- If gears already exist on the node host, run the following commands to ensure the resource limits for the new gear profile are applied to the existing gears:
# oo-cgroup-enable --with-all-containers
# oo-pam-enable --with-all-containers
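As an illustration, the following excerpt shows the kind of lines such an edit might change; the node_profile and max_active_gears parameters appear elsewhere in this guide, while the quota values are illustrative, so rely on the commented defaults in resource_limits.conf for the authoritative list:
node_profile=medium
max_active_gears=50
quota_files=80000
quota_blocks=2097152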
Procedure 6.2. To Update the List of Valid Gear Sizes:
- Edit the /etc/openshift/broker.conf file on the broker host and modify the comma-separated list in the VALID_GEAR_SIZES parameter to include the new gear profile.
- Consider adding the new gear profile to the comma-separated list in the DEFAULT_GEAR_CAPABILITIES parameter as well, which determines the default available gear sizes for new users.
- Restart the broker service:
# service openshift-broker restart
- For existing users, you must grant their accounts access to the new gear size before they can create gears of that size. Run the following command on the broker host for the relevant user name and gear size:
# oo-admin-ctl-user -l Username --addgearsize Gear_Size
- See Section 6.3.2, “Creating and Populating Districts” for more information on how to create and populate a district using the new gear profile; districts are required for gear deployment.
6.2. Capacity Planning and Districts
6.2.1. Hierarchy of OpenShift Enterprise Entities
Table 6.1. OpenShift Enterprise Container Hierarchy
Entity | Description |
---|---|
Gears | Gears are at the bottom of the hierarchy, and contain instances of one or more cartridges. |
Nodes | Nodes contain gears. Each gear UUID has a local UNIX user UID on the node host with storage and processes constrained by various mechanisms. |
Districts | When used, districts contain a set of nodes, including the gears that reside on them. |
Node profiles | Node profiles are at the top of the hierarchy, and are also referred to as gear profiles or gear sizes. They are conceptually similar to a label attached to a set of nodes. Node profiles are assigned to districts, and all nodes in a district must have that node profile. Nodes or districts can only contain gears for one node profile. |
Applications | Applications contain one or more gears, which currently must all have the same node profile. Application gears can span multiple nodes in multiple districts. However, no mechanism exists for placing gears on specific nodes or districts. |
6.2.2. Purpose of Districts
6.2.3. Gear Capacity Planning
6.2.3.1. Gear Capacity Planning for Nodes
Use the max_active_gears parameter in the /etc/openshift/resource_limits.conf file to specify the maximum number of active gears allowed per node. By default, this value is set to 100, but most administrators will need to modify this value over time. Stopped or idled gears do not count toward this limit; a node can have any number of inactive gears, constrained only by storage. However, starting inactive gears after the max_active_gears limit has been reached may exceed the limit, which cannot be prevented or corrected. Reaching the limit exempts the node from future gear placement by the broker.
One approach to setting the max_active_gears limit on nodes is to consider the resource most likely to be exhausted first (typically RAM) and divide the amount of available resource by the resource limit per gear. For example, consider a node with 7.5 GB of RAM available and gears constrained to 0.5 GB of RAM:
Example 6.1. Example max_active_gears Calculation
max_active_gears = 7.5 GB / 0.5 GB = 15 gears
Modifying the max_active_gears parameter after installation is harmless. Consider beginning with conservative limits and adjusting after empirical evidence of usage becomes available. It is easier to add more active gears than to move them away.
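As a quick sanity check, this division can be scripted with shell arithmetic. The following is a minimal sketch only; the RAM figures are the example values above, and the variable names are illustrative:
# Estimate max_active_gears from available RAM (example values)
node_ram_mb=7680   # 7.5 GB of RAM available for gears
gear_ram_mb=512    # 0.5 GB memory limit per gear
echo "max_active_gears = $(( node_ram_mb / gear_ram_mb ))"   # prints 15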
6.2.3.2. Gear Capacity Planning for Districts
D = district capacity (6000)
G = total number of gears per node
C = node capacity (max_active_gears)
A = percentage of gears that are active
N = number of nodes per district

G = C * 100 / A
N = D / G = 6000 * A / (100 * C)
Example 6.2. Example Nodes per District Calculation
If 10 percent of gears are expected to be active and max_active_gears is 50, calculate the following:
N = 6000 * 10 / (100 * 50) = 12 nodes per district (round down if needed)
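The same formulas can be checked with shell arithmetic. A minimal sketch using the example values above; integer division rounds down, matching the guidance:
C=50   # node capacity (max_active_gears)
A=10   # percentage of gears expected to be active
G=$(( C * 100 / A ))            # total gears per node: 500
N=$(( 6000 * A / (100 * C) ))   # nodes per district: 12
echo "gears per node: $G, nodes per district: $N"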
6.3. Managing Districts
6.3.1. Enabling Districts
The following parameters in the /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective.conf file on the broker host enable and enforce district use, all of which are set to true by default:
DISTRICTS_ENABLED=true
NODE_PROFILE_ENABLED=true
DISTRICTS_REQUIRE_FOR_APP_CREATE=true
Note
You can disable district use by setting these parameters to false and restarting the openshift-broker service.
Leaving the default value of true for the DISTRICTS_REQUIRE_FOR_APP_CREATE parameter prevents gear placement if no district exists with capacity for the chosen gear profile, therefore preventing the use of node hosts that are outside of districts. Setting the value to false and restarting the openshift-broker service enables immediate use of node hosts without having to understand or implement districts. While this immediate usage may be helpful in an evaluation setting, it is neither desirable nor recommended in a production setting, because gears would be placed on node hosts that are not yet in districts, and nodes cannot be placed in a district once they are hosting gears.
6.3.2. Creating and Populating Districts
Use the oo-admin-ctl-district command on the broker host to administer districts.
Note
The following procedure uses the default gear profile, which is set in the /etc/openshift/broker.conf file on the broker host. For information on how to change the default gear profile, see Section 6.1, “Adding or Modifying Gear Profiles”.
Procedure 6.3. To Create and Populate Districts:
- Create a district using the following command:
  # oo-admin-ctl-district -c create -n District_Name -p Gear_Profile
- Add a node to the district using the following command:
  # oo-admin-ctl-district -c add-node -n District_Name -i Node_Hostname
Alternatively, add multiple nodes at once by supplying a comma-separated list of node hostnames with the -i option, or use the --available option to add all undistricted nodes of the specified size:
# oo-admin-ctl-district -c add-node -n District_Name -p Gear_Profile -i Node_Hostname1,Node_Hostname2
The following examples use the small gear profile to create a district named small_district, then add the node host node1.example.com to the new district:
Example 6.3. Creating a District Named small_district:
# oo-admin-ctl-district -c create -n small_district -p small
Successfully created district: 7521a7801686477f8409e74f67b693f4
{"_id"=>"53443b8b87704f23db000001",
"active_servers_size"=>1,
"available_capacity"=>6000,
"available_uids"=>"<6000 uids hidden>",
"created_at"=>2014-04-08 18:10:19 UTC,
"gear_size"=>"small",
"max_capacity"=>6000,
"max_uid"=>6999,
"name"=>"default-small-0",
"servers"=> [],
"updated_at"=>2014-04-08 18:10:19 UTC,
"uuid"=>"53443b8b87704f23db000001"}
Example 6.4. Adding node1.example.com to the District:
# oo-admin-ctl-district -c add-node -n small_district -i node1.example.com
Success!
{"_id"=>"53443b8b87704f23db000001",
"active_servers_size"=>1,
"available_capacity"=>6000,
"available_uids"=>"<6000 uids hidden>",
"created_at"=>2014-04-08 18:10:19 UTC,
"gear_size"=>"small",
"max_capacity"=>6000,
"max_uid"=>6999,
"name"=>"default-small-0",
"servers"=>
[{"_id"=>"53443bbc87704f49bd000001",
"active"=>true,
"name"=>"node1.example.com",
"unresponsive"=>false}],
"updated_at"=>2014-04-08 18:10:19 UTC,
"uuid"=>"53443b8b87704f23db000001"}
Important
The node identity, node1.example.com in the above example, is the node's host name as configured on that server, which could be different from the PUBLIC_HOSTNAME configured in the /etc/openshift/node.conf file on the node. CNAME records use the PUBLIC_HOSTNAME parameter, which must resolve to the host through DNS; the host name could be something completely different and might not resolve in DNS at all.
6.3.3. Viewing District Information
To view all districts, run the oo-admin-ctl-district command, or use the -n option with the district's name to view a single district.
Example 6.5. Viewing All Districts
# oo-admin-ctl-district
{ ...
"uuid"=>"7521a7801686477f8409e74f67b693f4",
...}
Example 6.6. Viewing a Single District
# oo-admin-ctl-district -n small_district
During district creation, the broker creates a new document in its MongoDB database. Run the following command to view these documents inside of the openshift_broker database, replacing the login credentials with those from the /etc/openshift/broker.conf file, if needed:
# mongo -u openshift -p password openshift_broker
From the mongo shell, you can run commands against the broker database. Run the following command to list all of the available collections in the openshift_broker database:
> db.getCollectionNames()
The output lists the districts collection:
[ "applications", "auth_user", "cloud_users", "districts", "domains", "locks", "system.indexes", "system.users", "usage", "usage_records" ]
Query the districts collection to verify the creation of your districts. District information is output in JSON format:
> db.districts.find()
Exit the mongo shell using the exit command:
> exit
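You can also run such queries non-interactively with the mongo client's --eval option, which is convenient for scripting; this minimal sketch assumes the same credentials shown above:
# Print the first district document without entering the interactive shell
mongo -u openshift -p password openshift_broker --eval 'printjson(db.districts.findOne())'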
6.3.4. Viewing Capacity Statistics
To view capacity statistics, run the following command on the broker host:
# oo-stats
See the --help option for script arguments. By default, this tool summarizes district and profile gear usage in a human-readable format, and it can produce several computer-readable formats for use by automation or monitoring.
6.3.5. Moving Gears Between Nodes
The oo-admin-move command moves a gear from one node to another. Note that moving gears requires an rsync_id_rsa private key in the broker host's /etc/openshift/ directory and a matching public key in each node host's /root/.ssh/authorized_keys file, as explained in the OpenShift Enterprise Deployment Guide at https://access.redhat.com/site/documentation.
Run the oo-admin-move command on the broker host to move a gear from one node to another:
Example 6.7. Moving a Gear from One Node to Another
# oo-admin-move --gear_uuid 3baf79139b0b449d90303464dfa8dd6f -i node2.example.com
URL: http://app3-username.example.com
Login: username
App UUID: 3baf79139b0b449d90303464dfa8dd6f
Gear UUID: 3baf79139b0b449d90303464dfa8dd6f
DEBUG: Source district uuid: NONE
DEBUG: Destination district uuid: NONE
[...]
DEBUG: Starting cartridge 'ruby-1.8' in 'app3' after move on node2.example.com
DEBUG: Fixing DNS and mongo for gear 'app3' after move
DEBUG: Changing server identity of 'app3' from 'node1.example.com' to 'node2.example.com'
DEBUG: The gear's node profile changed from medium to small
DEBUG: Deconfiguring old app 'app3' on node1.example.com after move
Successfully moved 'app3' with gear uuid '3baf79139b0b449d90303464dfa8dd6f' from 'node1.example.com' to 'node2.example.com'
6.3.6. Removing Nodes from Districts
Use the oo-admin-ctl-district and oo-admin-move commands in combination to remove the gears from the node host, and then remove the host from its district.
Procedure 6.4. To Remove Nodes from Districts:
In the following example, the district small_district has two node hosts, node1.example.com and node2.example.com. The second node host, node2.example.com, has a high number of idle gears.
- Run the following commands and fix any problems that are found. This prevents future problems caused by moving a broken gear. On the broker host, run:
  # oo-admin-chk
  On the node hosts, run:
  # oo-accept-node
- Deactivate the node you want to remove to prevent applications from being created on or moved to the node. Existing gears continue running. On the broker host, run:
  # oo-admin-ctl-district -c deactivate-node -n small_district -i node2.example.com
- Move all the gears from node2.example.com to node1.example.com by repeating the following command on the broker host for each gear on node2.example.com:
  # oo-admin-move --gear_uuid UUID -i node1.example.com
- Remove node2.example.com from the district:
  # oo-admin-ctl-district -c remove-node -n small_district -i node2.example.com
6.3.7. Removing Districts
Procedure 6.5. To Remove Districts:
- On the broker host, set the district's capacity to 0:
  # oo-admin-ctl-district -c remove-capacity -n district_name -s 6000
- Remove all the node hosts from the district you want to delete by running the following commands for each node:
  # oo-admin-ctl-district -c deactivate-node -i node_hostname
  # oo-admin-ctl-district -c remove-node -n district_name -i node_hostname
- Delete the empty district:
  # oo-admin-ctl-district -c destroy -n district_name
6.4. Managing Regions and Zones
Note
Gears can still be moved with the oo-admin-move tool if the move is between districted nodes and all gears in the application remain in a single region.
6.4.1. Creating a Region with Zones
Use the oo-admin-ctl-region tool to create, list, or destroy regions and to add or remove zones within a given region.
Procedure 6.6. To Create a Region with Zones:
- Create a new region. Region names can include alphanumeric characters, underscores, hyphens, and dots:
  # oo-admin-ctl-region -c create -r region_name
- Add zones to the region. Zone names can include alphanumeric characters, underscores, hyphens, and dots:
  # oo-admin-ctl-region -c add-zone -r region_name -z zone_name
- Verify the new region and zones:
  # oo-admin-ctl-region -c list -r region_name
6.4.2. Tagging a Node with a Region and Zone
Use the oo-admin-ctl-district tool to tag nodes in districts with a region and zone.
Procedure 6.7. To Tag a Node with a Region and Zone:
- Create a district if one does not already exist:
  # oo-admin-ctl-district -c create -n district_name -p gear_profile
- While adding a node to a district, tag the node with a region and zone. Note that you can add multiple nodes with the -i option:
  # oo-admin-ctl-district -c add-node -n district_name -i Node_Hostname1,Node_Hostname2 -r region_name -z zone_name
  Alternatively, tag a node previously added to a district with a region and zone:
  # oo-admin-ctl-district -c set-region -n district_name -i Node_Hostname -r region_name -z zone_name
6.4.3. Setting the Default Region For New Applications
If a developer does not specify a region with the --region option, new applications are created in the default region set in the /etc/openshift/broker.conf file.
Procedure 6.8. To Set the Default Region for New Applications:
- Change the following parameter in the /etc/openshift/broker.conf file to the desired default region for new applications:
  DEFAULT_REGION_NAME="Region_Name"
- Restart the broker service:
  # service openshift-broker restart
6.4.4. Disabling Region Selection
By default, developers can select a region when creating an application with the --region option. Use the following procedure to disable a developer's ability to create an application in a specific region. Paired with Section 6.4.3, “Setting the Default Region For New Applications”, this gives you control over the region in which newly-created applications are located.
Procedure 6.9. To Disable Region Selection When Creating Applications:
- Change the following setting in the /etc/openshift/broker.conf file to false:
  ALLOW_REGION_SELECTION="false"
- Restart the broker service for the changes to take effect:
  # service openshift-broker restart
6.4.5. Additional Region and Zone Tasks
To list all regions and their zones:
# oo-admin-ctl-region -c list
To remove the region and zone tags from a node:
# oo-admin-ctl-district -c unset-region -n district_name -i server_identity
To remove a zone from a region:
# oo-admin-ctl-region -c remove-zone -r region_name -z zone_name
To destroy a region:
# oo-admin-ctl-region -c destroy -r region_name
Procedure 6.10. To Require New Applications Use Zones:
- In the /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective.conf file on the broker host, set the ZONES_REQUIRE_FOR_APP_CREATE parameter to true to require that new applications only use nodes tagged with a zone. When true, gear placement will fail if there are no zones available with the correct gear profile:
  ZONES_REQUIRE_FOR_APP_CREATE=true
- Restart the broker service:
  # service openshift-broker restart
Procedure 6.11. To Enforce the Minimum Number of Zones per Gear Group:
- In the /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective.conf file on the broker host, set the ZONES_MIN_PER_GEAR_GROUP parameter to the desired minimum number of zones between which gears in application gear groups are distributed:
  ZONES_MIN_PER_GEAR_GROUP=number_of_zones
- Restart the broker service:
  # service openshift-broker restart
6.5. Gear Placement Algorithm
District capacity is stored in MongoDB. Gear status does not affect district capacity, because districts reserve resources; they do not account for actual usage. In the JSON record for a district, max_capacity indicates the maximum number of gears that can be placed in the district, while available_capacity indicates the number of gears still available in that district. See Section 6.3.3, “Viewing District Information” for details on viewing the JSON record of a district.
When choosing nodes for new gears, the default gear placement algorithm also considers any least preferred servers and restricted servers to help maintain high availability for applications. Least preferred servers are nodes that already have gears on them for the given application gear group; it is preferable to find other nodes instead so that high availability is ensured. Restricted servers are nodes that should not be chosen at all. For example, restricted servers would be identified for high-availability applications when two HAProxy gears are created to ensure they are placed on different nodes.
The following steps describe the default algorithm for selecting a node on which to place a new gear for an application:
- Find all the districts.
- Find the nodes that have the least active_capacity.
- Filter nodes based on given criteria to ensure gears within scalable applications are spread across multiple nodes.
- Filter out non-districted nodes when districts are required.
- When regions and zones are present:
  - Filter out nodes without zones when zones are required.
  - If the application already has gears on a node tagged with a region, exclude nodes that do not belong to that region.
  - Verify whether the minimum required number of zones for application gear groups is met.
  - Filter zones to ensure that gears within the application gear group do not exist solely in a single zone.
  - Choose the least-consumed zones to distribute gears evenly among zones.
  - When nodes with zones are available, exclude nodes without zones.
- When districted nodes are available, exclude nodes without districts.
- Among the remaining nodes, choose the ones with plenty of available capacity that are in districts with available UIDs.
- Randomly choose one of the nodes with the lowest levels of active_capacity.
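The selection logic is implemented inside the broker, but the shape of the final sort-and-sample steps can be illustrated with a small shell sketch. This is only an illustration, not the broker's actual code; the node names and capacities are made up:
# Candidate nodes as "hostname active_capacity" pairs (illustrative data)
candidates="node1.example.com 42
node2.example.com 17
node3.example.com 17"
# Keep the nodes tied for the lowest active_capacity, then pick one at random
lowest=$(echo "$candidates" | sort -k2 -n | head -1 | awk '{print $2}')
echo "$candidates" | awk -v low="$lowest" '$2 == low {print $1}' | shuf -n1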
6.6. Setting Default Gear Quotas and Sizes
In the /etc/openshift/broker.conf file, modify the VALID_GEAR_SIZES value to include a list of gear sizes available to create districts and applications. The DEFAULT_GEAR_SIZE value and DEFAULT_GEAR_CAPABILITIES value must be set to sizes included in that list. To modify gear quotas and sizes for specific users, see Section 6.7, “Setting Gear Quotas and Sizes for Specific Users”.
Table 6.2. Default Gear Quotas and Sizes
Configuration Setting | Description |
---|---|
VALID_GEAR_SIZES | Specifies a list of gear sizes that are available in the system. |
DEFAULT_MAX_GEARS | Specifies the default maximum number of gears a new user is entitled to. |
DEFAULT_GEAR_SIZE | Specifies the default gear size when a new gear is created. |
DEFAULT_GEAR_CAPABILITIES | Specifies a list of gear sizes available to a new user. |
Example 6.8. Setting Default Gear Quotas and Sizes
Edit the /etc/openshift/broker.conf file on the broker host and modify the following defaults as desired to set the default gear quotas and sizes:
# Comma-separated list of valid gear sizes available anywhere in the installation
VALID_GEAR_SIZES="small,medium"

# Default number of gears to assign to a new user
DEFAULT_MAX_GEARS="100"

# Default gear size for a new gear if not otherwise specified
DEFAULT_GEAR_SIZE="small"

# Default gear sizes (comma-separated) allowed to a new user
DEFAULT_GEAR_CAPABILITIES="small"
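After editing, a quick grep can confirm the active values. This is only a convenience check, not a required step:
# Show the current gear size and quota defaults on the broker host
grep -E '^(VALID_GEAR_SIZES|DEFAULT_MAX_GEARS|DEFAULT_GEAR_SIZE|DEFAULT_GEAR_CAPABILITIES)=' /etc/openshift/broker.conf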
6.7. Setting Gear Quotas and Sizes for Specific Users
Use the oo-admin-ctl-user command to set individual user gear parameters, limit the number of gears a user is allowed, and change user access to gear sizes.
To view a user's current settings, run the following command, replacing username with the relevant user name:
# oo-admin-ctl-user -l username
Example 6.9. Viewing a User's Current Settings
# oo-admin-ctl-user -l user
User user:
consumed gears: 3
max gears: 100
gear sizes: small
To set the maximum number of gears a user is allowed, run the following command, replacing username with the relevant user name and 101 with the desired number of gears:
# oo-admin-ctl-user -l username --setmaxgears 101
Example 6.10. Limiting a User's Number of Gears
# oo-admin-ctl-user -l user --setmaxgears 101
Setting max_gears to 101... Done.
User user:
consumed gears: 3
max gears: 101
gear sizes: small
To enable a gear size for a user, run the following command, replacing username with the relevant user name and medium with the desired gear size:
# oo-admin-ctl-user -l username --addgearsize medium
Example 6.11. Enabling a Gear Size For a User
# oo-admin-ctl-user -l user --addgearsize medium
Adding gear size medium for user user... Done.
User user:
consumed gears: 3
max gears: 101
gear sizes: small, medium
To remove a gear size from a user, run the following command, replacing username with the relevant user name and medium with the desired gear size:
# oo-admin-ctl-user -l username --removegearsize medium
Example 6.12. Removing a Gear Size For a User
# oo-admin-ctl-user -l user --removegearsize medium
Removing gear size medium for user user... Done.
User user:
consumed gears: 3
max gears: 101
gear sizes: small
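Because oo-admin-ctl-user is invoked per login in the examples above, granting a new size to several users can be wrapped in a small loop. A minimal sketch; the user names are hypothetical:
# Grant the medium gear size to a list of existing users (names are examples)
for user in alice bob carol; do
    oo-admin-ctl-user -l "$user" --addgearsize medium
done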
6.8. Restricting Gear Sizes for Cartridges
To restrict the gear sizes available to a specific cartridge, set the VALID_GEAR_SIZES_FOR_CARTRIDGE parameter in the /etc/openshift/broker.conf file:
VALID_GEAR_SIZES_FOR_CARTRIDGE = "Cart_Name|Gear_Size"
Example 6.13. Restricting Gear Sizes for Specific Cartridges
VALID_GEAR_SIZES_FOR_CARTRIDGE = "php-5.3|medium,large jbossews-2.0|large"
Restart the broker service for the changes to take effect:
# service openshift-broker restart
6.9. Viewing Resource Usage on a Node
Use standard tools such as free and vmstat to report memory usage on a node. Note that tools such as vmstat, top, and uptime report the number of processes waiting to execute. These figures might be artificially inflated, because cgroups restrict each gear to a specific time slice rather than time-sharing between processes. This restriction enables OpenShift Enterprise to provide consistent CPU availability for each gear, but also means that active processes on gears may wait while the CPU is allocated to a less active gear. As a result, the reported load average may routinely be close to the number of active gears.
The oo-idler-stats command returns a status summary of all the gears on a node host.
Example 6.14. Returning a Status Summary of Gears on a Node Host
# oo-idler-stats
1 running, 1 idled, 0 half-idled for a total 2 of 3 (66.67 %)
The half-idled state is deprecated and always returns 0.
6.10. Enforcing Low Tenancy on Nodes
Set the max_active_gears value in a node's gear profile to a low number to achieve low or single tenancy on nodes. In environments with low numbers of gears per node and large gear sizes, it is important to avoid overcommitting resources and exceeding the max_active_gears value.
Set the no_overcommit_active value in the node's gear profile to true to avoid overcommitting resources and to enforce the desired tenancy. This setting verifies node capacity when a gear is being created on a node. If sufficient capacity is not available on the selected node, the gear is not created and an error message is displayed.
Example 6.15. Node Capacity Exceeded Error Message
Gear creation failed (chosen node capacity exceeded).
Procedure 6.12. To Enforce Low Tenancy on Nodes:
- Open the /etc/openshift/resource_limits.conf file on the node host and set the no_overcommit_active value to true:
  no_overcommit_active=true
- Restart the ruby193-mcollective service:
  # service ruby193-mcollective restart
6.11. Managing Capacity on Broker Hosts
Chapter 7. Administration Console
The Administration Console is available on broker hosts at /admin-console; however, external access is disabled by default. See the OpenShift Enterprise Deployment Guide at https://access.redhat.com/site/documentation for information on installing and configuring the Administration Console, including options for configuring external access.
7.1. Understanding the System Overview
Figure 7.1. System Overview
7.2. Viewing Gear Profiles
Figure 7.2. Gear Profile Districts View
Figure 7.3. Gear Profile Nodes View
7.3. Viewing Suggestions
Figure 7.4. Suggestions for Adding Capacity
Figure 7.5. Suggestions for Missing Nodes
7.4. Searching for Entities
7.5. Viewing Statistics
7.6. Configuring Suggestions
Configure parameters in the Administration Console's configuration file, /etc/openshift/plugins.d/openshift-origin-admin-console.conf, to enable suggestions that warn of current or impending capacity problems. For example, the Administration Console can suggest where to add nodes to ensure a particular profile can continue to create gears, or where capacity is poorly utilized.
Note
If the Administration Console finds nodes in a profile with different values for the max_active_gears limit, it uses the largest.
The STATS_CACHE_TIMEOUT parameter in the configuration file, set by default to one hour, determines how long to keep capacity and suggestion statistics cached. If you do not immediately see changes that you expect in the Administration Console, refresh the data by clicking the refresh icon near the upper right of any page.
7.7. Loading Capacity Data from a File
The Administration Console uses the Admin Stats library, also used by the oo-stats command, to collect capacity data. You can record the YAML or JSON output from the oo-stats command and have the Administration Console use that output directly instead of the actual system data using the following procedure:
Procedure 7.1. To Load Capacity Data from a File:
- Run the following command on a broker host to gather capacity data and record it to a file. Replace yaml with json, if needed:
  # oo-stats -f yaml > /tmp/stats.yaml
- Copy /tmp/stats.yaml to the host running the Administration Console, if needed.
- Set the STATS_FROM_FILE parameter in the /etc/openshift/plugins.d/openshift-origin-admin-console.conf file to /tmp/stats.yaml.
- SELinux limits what the broker application can read (for example, it cannot ordinarily read /tmp entries). To ensure that the broker can read the data file, adjust its context as follows:
  # chcon system_u:object_r:httpd_sys_content_t:s0 /tmp/stats.yaml
- Restart the broker:
  # service openshift-broker restart
7.8. Exposed Data
Exposed Data Points
- /admin-console/capacity/profiles.json returns all profile summaries from the Admin Stats library (the same library used by the oo-stats command). Add the ?reload=1 parameter to ensure the data is current rather than cached.
- /admin-console/stats/gears_per_user.json returns frequency data for gears owned by a user.
- /admin-console/stats/apps_per_domain.json returns frequency data for applications belonging to a domain.
- /admin-console/stats/domains_per_user.json returns frequency data for domains owned by a user.
For example, run the following command to retrieve /admin-console/capacity/profiles.json on the broker host:
# curl http://localhost:8080/admin-console/capacity/profiles.json
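The other data points can be fetched the same way. A short loop over the endpoints listed above, assuming the Administration Console listens on localhost port 8080 as in the previous example:
# Fetch each exposed data point from the Administration Console
for endpoint in capacity/profiles.json stats/gears_per_user.json stats/apps_per_domain.json stats/domains_per_user.json; do
    echo "== $endpoint =="
    curl -s "http://localhost:8080/admin-console/$endpoint"
    echo
done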
Chapter 8. Monitoring
8.1. General System Checks
- Use standard system administration checks to monitor the basic health of your system. For example:
- ensure adequate memory
- minimize disk swapping
- ensure adequate disk space
- monitor file system health
- Monitor the services used by OpenShift Enterprise. Ensure the following are running and configured correctly:
- MCollective
- Mongo
- Apache
- ActiveMQ
- SELinux and cgroups
- Use custom scripts to run checks specific to your system, as shown in the sample script below. Confirm that the entire system is working by checking:
  - nodes and gears are valid and consistent system-wide by running oo-admin-chk on a broker host
  - gears are created and deleted correctly
  - available statistics and capacities
  - hosts respond to MCollective using oo-mco ping
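For example, a cron-able script might combine these commands into a single pass/fail check. A minimal sketch, assuming it runs on the broker host:
#!/bin/bash
# Basic OpenShift Enterprise health check for the broker host.
# Exits non-zero if any check fails so it can drive cron or monitoring alerts.
status=0
oo-accept-broker || status=1   # broker setup is valid
oo-admin-chk     || status=1   # gears are consistent system-wide
oo-mco ping      || status=1   # node hosts respond to MCollective
exit $status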
8.2. Response Times for Administrative Actions
The time it takes to create an application depends on the application type and how long DNS propagation takes. Apart from the time spent propagating DNS, applications are generally created in approximately 35 seconds.
The length of time required to restart a node host depends on the number of gears on the node host, and how many of those gears are active. Node host restarts can take approximately five to thirty minutes.
8.3. Testing a Path Through the Whole System
8.4. Monitoring Broker Activity
Note that even when the broker is unavailable, developers can still access existing gears using SSH and Git, and applications are still available online.
8.4.1. Default Broker Log File Locations
Table 8.1. Default Broker Log Files
File | Description |
---|---|
/var/log/openshift/broker/production.log | This file contains any log requests processed by the broker application. |
/var/log/openshift/broker/user_action.log | This file logs any user actions, including the creation and deletion of gears. Similar to production.log , but less verbose. |
/var/log/openshift/broker/httpd/access_log | This file logs any calls made to the REST API. |
/var/log/openshift/broker/httpd/error_log | This file logs any Rails errors that occur on start-up. |
/var/log/openshift/broker/usage.log | This file logs information on gear or filesystem resource usage, but only if tracking is enabled in the /etc/openshift/broker.conf file. |
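For live monitoring, these logs can be followed with standard tools; for example, tail can watch user actions as they happen:
# Follow user actions, such as gear creation and deletion, as they are logged
tail -f /var/log/openshift/broker/user_action.log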
8.4.2. Verifying Functionality with Administration Commands
The following table outlines administration commands for testing functionality on a broker host:
Table 8.2. Verification Commands for a Broker Host
Command Name | Description |
---|---|
oo-accept-broker | Use this command to test basic functionality before performing more intensive tests. |
oo-accept-systems | Use this command to verify that the settings on node hosts are valid and can be used by a broker host. |
oo-admin-chk | Use this command to verify consistency throughout all node hosts and gears in an OpenShift Enterprise deployment. |
The following table contains administration commands for testing functionality on a node host:
Table 8.3. Verification Commands for a Node Host
Command Name | Description |
---|---|
oo-accept-node | Use this command to test basic functionality before performing more intensive tests. |
8.5. Monitoring Node and Gear Activity
8.5.1. Default Node Log File Locations
Table 8.4. Node Log Files
File | Description | Configuration Setting |
---|---|---|
/var/log/openshift/node/platform.log | Primary log for node platform actions including MCollective actions performed on the node host. | PLATFORM_LOG_FILE setting in /etc/openshift/node.conf . |
/var/log/openshift/node/platform-trace.log | Logs node platform trace actions. | PLATFORM_TRACE_LOG_FILE setting in /etc/openshift/node.conf . |
/var/log/openshift/node/ruby193-mcollective.log | Logs MCollective messages communicated between broker and node hosts. Read to confirm proper gear creation. | logfile setting in /opt/rh/ruby193/root/etc/mcollective/server.cfg . |
/var/log/httpd/openshift_log | Logs gear access from the front-end Apache. | APACHE_ACCESS_LOG setting in /etc/openshift/node.conf . |
8.5.2. Enabling Application and Gear Context in Node Component Logs
Procedure 8.1. To Enable Application and Gear Context in Apache Logs:
- Configure Apache to include application names and gear UUIDs in its log messages by editing the /etc/sysconfig/httpd file and adding the following line:
  OPTIONS="-DOpenShiftAnnotateFrontendAccessLog"
  Important
  All options must be on the same line. For example, in Section 8.8.2, “Enabling Syslog for Node Components”, another option for Apache log files is explained. If both options are desired, the line must use the following syntax:
  OPTIONS="-Option1 -Option2"
- Restart the httpd service for the Apache changes to take effect for new applications:
  # service httpd restart
Procedure 8.2. To Enable Application and Gear Context in Node Platform Logs:
- Configure the node platform to include application and gear context in its log messages by editing the /etc/openshift/node.conf file and adding the following line:
  PLATFORM_LOG_CONTEXT_ENABLED=1
- Add the following line to specify which attributes are included. Set any or all of the following options in a comma-delimited list:
  PLATFORM_LOG_CONTEXT_ATTRS=request_id,container_uuid,app_uuid
  This produces key-value pairs for the specified attributes. If no context attribute configuration is present, all context attributes are printed.
- Restart the ruby193-mcollective service for the node platform changes to take effect:
  # service ruby193-mcollective restart
8.5.3. Viewing Application Details
Use the oo-app-info command to view information about applications and gears:
# oo-app-info options
The following table describes the available options:
Option | Description |
---|---|
-a, --app [NAME] | Specify a comma-delimited list of application names, without domains. Alternatively, specify a regular expression instead of application names. |
-d, --domain [NAME] | Specify a comma-delimited list of domain namespaces, without application names. Alternatively, specify a regular expression instead of a domain namespace. |
-f, --fqdn [NAME] | Specify a comma-delimited list of application FQDNs. Alternatively, specify a regular expression instead of an application FQDN. |
-l, --login [NAME] | Specify a comma-delimited list of OpenShift user logins. Alternatively, specify a regular expression instead of a login. |
-u, --gear_uuid [uuid] | Specify a comma-delimited list of application or gear UUIDs. Alternatively, specify a regular expression instead of a UUID. |
--deleted | Search for deleted applications. |
--raw | Display raw data structure without formatting. |
Example 8.1. Viewing Application Details for a Specific Login:
# oo-app-info --login login --app py33s
Loading broker environment... Done.
================================================================================
Login: demo
Plan: ()
App Name: py33s
App UUID: 54471801f09833e74300001e
Creation Time: 2014-10-22 02:35:45 AM
URL: http://py33s-demo.example.com
Group Instance[0]:
Components:
Cartridge Name: python-3.3
Component Name: python-3.3
Gear[0]
Server Identity: node.hosts.example.com
Gear UUID: 54471801f09833e74300001e
Gear UID: 1224
Current DNS
-----------
py33s-demo.example.com is an alias for node.hosts.example.com.
node.hosts.example.com has address 172.16.4.136
8.5.4. The Watchman Tool
- Watchman searches cgroup event flow through syslog to determine when a gear is destroyed. If the pattern does not match a clean gear removal, the gear will be restarted.
- Watchman monitors the application server logs for messages hinting at out of memory, then restarts the gear if needed.
- Watchman compares the user-defined status of a gear with the actual status of the gear, and fixes any discrepancies.
- Watchman searches processes to ensure they belong to the correct cgroup. It kills abandoned processes associated with a stopped gear, or restarts a gear that has zero running processes.
- Watchman monitors the usage rate of CPU cycles and restricts a gear's CPU consumption if the rate of change is too aggressive.
8.5.4.1. Enabling Watchman
Enable the Watchman tool persistently across reboots with the following command:
# chkconfig openshift-watchman on
8.5.4.2. Supported Watchman Plug-ins
Watchman plug-ins are located in the /etc/openshift/watchman/plugins.d directory and are outlined in the following table. To enable a plug-in, ensure its file is in the plugins.d directory. To disable a plug-in, move the plug-in file out of the plugins.d directory and place it into an unused directory for backup. Restart the Watchman tool any time a change to the plugins.d directory is made:
# service openshift-watchman restart
Table 8.5. Supported Watchman Plug-ins
Watchman Plug-in Name | Plug-in Filename | Function |
---|---|---|
Syslog | syslog_plugin.rb | This searches the /var/log/messages file for any messages logged by cgroups when a gear is destroyed and restarts the gear if required. |
JBoss | jboss_plugin.rb | This searches JBoss cartridge server.log files for out-of-memory exceptions and restarts gears if required. |
Gear State | gear_state_plugin.rb | This compares the last state change commanded by the user against the current status of the gear in order to find the best use for resources. For example, this plug-in kills any processes running on a stopped gear, and restarts a started gear if it has no running processes. |
Throttler | throttler_plugin.rb | This uses cgroups to monitor CPU usage and restricts usage if required. |
Metrics | metrics_plugin.rb | This gathers and publishes gear-level metrics such as cgroups data for all gears on a node host at a configurable interval. |
OOM | oom_plugin.rb | Available starting with OpenShift Enterprise 2.1.4, this monitors for gears under out-of-memory (OOM) state, attempts to resolve problems, and restarts gears if required. |
As well as adding it to the plugins.d directory, as outlined above, enabling the Metrics plug-in requires an extra step. Edit the /etc/openshift/node.conf file and ensure the following line is uncommented to enable the Metrics plug-in:
WATCHMAN_METRICS_ENABLED=true
Restart the Watchman service for the changes to take effect.
Metrics data is logged to /var/log/openshift/node/platform.log, the node platform log file, by default. However, if you have Syslog enabled, log data is sent to the syslog file with type=metric in each Metrics log line.
Example 8.2. Logged Metrics Data
Jun 10 16:25:39 vm openshift-platform[29398]: type=metric appName=php6 gear=53961099e659c55b08000102 app=53961099e659c55b08000102 ns=demo quota.blocks.used=988 quota.blocks.limit=1048576 quota.files.used=229 quota.files.limit=80000
To configure the throttler plug-in, edit the following parameters in the /etc/openshift/resource_limits.conf file on a node host:
Example 8.3. Throttler Plug-in Configuration Parameters
[cg_template_throttled]
cpu_shares=128
cpu_cfs_quota_us=100000
apply_period=120
apply_percent=30
restore_percent=70
When a gear exceeds the CPU usage allowed by the apply_percent parameter, the gear is placed into a 'throttled' cgroup quota. The throttler plug-in continues to watch the gear until it is using the amount of CPU time defined by the restore_percent parameter or less. When the usage stabilizes, the gear is placed back into the default cgroup limit.
The cpu_shares and cpu_cfs_quota_us parameters define the throttle cgroup template applied to any throttled gears. The apply_period parameter defines how long a gear is throttled before it is restored to the default cgroup limit. For example, using the default parameters, a gear is throttled once it uses over 30 percent of a gear's CPU allocation for 120 seconds, and a throttled gear is unthrottled once it uses less than 70 percent of the throttled CPU allocation for 120 seconds.
8.5.4.3. Configuring Watchman
Edit the /etc/sysconfig/watchman file to configure Watchman. This file is available by default starting with OpenShift Enterprise 2.1.4.
Table 8.6. Watchman Configuration Parameters
Parameter | Function |
---|---|
GEAR_RETRIES | This sets the number of gear restarts attempted within a RETRY_PERIOD. |
RETRY_DELAY | This sets the number of seconds to wait before attempting to restart the gear. |
RETRY_PERIOD | This sets the number of seconds to wait before resetting the GEAR_RETRIES entry. |
STATE_CHANGE_DELAY | This sets the number of seconds the gear remains broken before Watchman attempts to fix the issue. |
STATE_CHECK_PERIOD | Available starting with OpenShift Enterprise 2.1.4, this sets the number of seconds to wait since the last check before checking the gear state. Increase this to reduce the impact of the Watchman Gear State plug-in. |
THROTTLER_CHECK_PERIOD | Available starting with OpenShift Enterprise 2.1.4, this sets the number of seconds to wait before checking the cgroup state of the gear. Increase this to reduce the impact of the Watchman Throttler plug-in. |
OOM_CHECK_PERIOD | Available starting with OpenShift Enterprise 2.1.4, this sets the number of seconds to wait since the last check before looking for gears under out-of-memory (OOM) state. |
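For example, the file might look like the following; these values are illustrative only, not recommended defaults:
# /etc/sysconfig/watchman (example values only)
GEAR_RETRIES=3          # restarts attempted within a RETRY_PERIOD
RETRY_DELAY=300         # seconds to wait before attempting a restart
RETRY_PERIOD=3600       # seconds before the GEAR_RETRIES count resets
STATE_CHANGE_DELAY=900  # seconds a gear stays broken before Watchman acts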
Restart the Watchman service for any configuration changes to take effect:
# service openshift-watchman restart
8.5.5. Testing Node Host Functionality
Run the oo-accept-node command to test the basic functionality of the node host before performing more intensive tests.
8.5.6. Validating Gears
Validate gears by running oo-admin-chk on a broker host. This command lists gears that partially failed creation or deletion, as well as nodes with incorrect gear counts.
8.5.7. Node Capacity
View a node's current capacity in the active_capacity entry in the /opt/rh/ruby193/root/etc/mcollective/facts.yaml file.
Configure a node's maximum capacity with the max_active_gears entry in the /etc/openshift/resource_limits.conf file.
8.6. Monitoring Management Console Activity
8.6.1. Default Management Console Log File Locations
Management Console log files are located in the /var/log/openshift/console directory.
Table 8.7. Default Management Console Log Files
File | Description |
---|---|
/var/log/openshift/console/production.log | This file contains any log requests processed by the Management Console. |
/var/log/openshift/console/httpd/access_log | This file logs any calls made to the REST API. |
/var/log/openshift/console/httpd/error_log | This file logs any Rails errors that occur on start-up. |
8.7. Usage Tracking
Procedure 8.3. To Disable Usage Tracking:
- Open the /etc/openshift/broker.conf file on the broker host.
- Set the value of ENABLE_USAGE_TRACKING_DATASTORE to "false".
  Alternatively, set ENABLE_USAGE_TRACKING_AUDIT_LOG to "false" to disable audit logging for usage tracking.
- Restart the broker service:
  # service openshift-broker restart
8.7.1. Setting Tracked and Untracked Storage
The oo-admin-ctl-user command allows you to manage a user's available tracked and untracked gear storage. Both types of storage provide additional storage to a user's gears, but untracked storage is not included in usage reports. The total storage available to a user's gear is the sum of the tracked and untracked storage.
To set the maximum amount of tracked storage per gear, run the following command, replacing username with the relevant user name and 10 with the desired number of gigabytes:
# oo-admin-ctl-user -l username --setmaxtrackedstorage 10
Example 8.4. Setting the Maximum Amount of Tracked Storage
# oo-admin-ctl-user -l user --setmaxtrackedstorage 10
Setting max_tracked_addtl_storage_per_gear to 10... Done.
User user:
consumed gears: 2
max gears: 100
max tracked storage per gear: 10
max untracked storage per gear: 0
plan upgrade enabled:
gear sizes: small
sub accounts allowed: false
To set the maximum amount of untracked storage per gear, run the following command, replacing username with the relevant user name and 10 with the desired number of gigabytes:
# oo-admin-ctl-user -l username --setmaxuntrackedstorage 10
Example 8.5. Setting the Maximum Amount of Untracked Storage
# oo-admin-ctl-user -l user --setmaxuntrackedstorage 10
Setting max_untracked_addtl_storage_per_gear to 10... Done.
User user:
consumed gears: 2
max gears: 100
max tracked storage per gear: 10
max untracked storage per gear: 10
plan upgrade enabled:
gear sizes: small
sub accounts allowed: false
8.7.2. Viewing Accumulated Usage Data
Use the oo-admin-usage or oo-admin-ctl-usage command to view resource usage reports per user. Usage reports include how long a user has been using a gear and any additional storage. Red Hat recommends using the oo-admin-usage command for listing a single user's usage data, because it contains more detailed information. Use the oo-admin-ctl-usage command to list all users' basic usage data at one time.
To view a single user's usage data, run the following command, replacing username with the relevant user name:
# oo-admin-usage -l username
Example 8.6. Viewing a User's Resource Usage
# oo-admin-usage -l user
Usage for user
------------------------------------------
#1
Usage Type: GEAR_USAGE (small)
Gear ID: 519262ef6892df43f7000001 (racecar)
Duration: 3 hours and 19 minutes (2013-05-14 12:14:45 - PRESENT)
#2
Usage Type: ADDTL_FS_GB (3)
Gear ID: 5192624e6892dfcb3f00000e (foo)
Duration: 15 seconds (2013-05-14 12:16:33 - 2013-05-14 12:16:48)
#3
Usage Type: ADDTL_FS_GB (2)
Gear ID: 5192624e6892dfcb3f00000e (foo)
Duration: 3 hours and 17 minutes (2013-05-14 12:16:48 - PRESENT)
Field | Description |
---|---|
Usage Type |
GEAR_USAGE is related to how long a gear has been in use with the gear size in parentheses.
ADDTL_FS_GB is related to how long additional storage has been in use on a gear with the number of GBs in parentheses.
|
Gear ID | Gear ID indicates the UUID of the relevant gear with the associated application name in parentheses. |
Duration | Duration indicates the start and end time of the gear (or start time and PRESENT if still in use). |
To list basic usage data for all users, run the following command:
# oo-admin-ctl-usage --list
Example 8.7. Viewing Resource Usage for All Users
# oo-admin-ctl-usage --list
Errors/Warnings will be logged to terminal
2013-05-14 15:48:54 -0400 INFO::
---------- STARTED ----------
User: username1
Gear: 518bcaa26892dfcb74000001, UsageType: GEAR_USAGE, Usage: 23.32543548687111
Gear: 518bcb876892dfcb74000017, UsageType: GEAR_USAGE, Usage: 23.32543548687111
Gear: 519254d36892df8f9000000b, UsageType: ADDTL_FS_GB, Usage: 0.05429166666666666
Gear: 519254d36892df8f9000000b, UsageType: GEAR_USAGE, Usage: 0.08019000000000001
Gear: 519258d46892df156600001f, UsageType: GEAR_USAGE, Usage: 4.287655764648889
User: username2
Gear: 5192624e6892dfcb3f00000e, UsageType: ADDTL_FS_GB, Usage: 0.0042325
Gear: 5192624e6892dfcb3f00000e, UsageType: ADDTL_FS_GB, Usage: 3.5350574313155554
Gear: 519262ef6892df43f7000001, UsageType: GEAR_USAGE, Usage: 3.5691388202044445
2013-05-14 15:48:54 -0400 INFO::
---------- ENDED, #Errors: 0, #Warnings: 0 ----------
Field | Description |
---|---|
User | User indicates the user name of the account accumulating the resource usage. |
Gear | Gear indicates the UUID of the relevant gear. |
UsageType |
GEAR_USAGE is related to how long a gear has been in use.
ADDTL_FS_GB is related to how long additional storage has been in use on a gear.
|
Usage | Usage lists the duration of the gear (in hours). |
8.8. Enabling Syslog
Note
If you are using Rsyslog, log messages are sent to the /var/log/messages file by default. See the Red Hat Enterprise Linux 6 Deployment Guide for information on viewing and managing log files.
8.8.1. Enabling Syslog for Broker Components
Set the SYSLOG_ENABLED variable in the /etc/openshift/broker.conf file to true in order to group production.log, user_action.log, and usage.log into the syslog file:
SYSLOG_ENABLED=true
The default location for the syslog file is /var/log/messages, but this is configurable. However, in the syslog file, these logs share the same program name. To distinguish between the log files, the following applies:
- Messages usually sent to production.log have src=app in each log line.
- Messages usually sent to user_action.log have src=useraction in each log line.
- Messages usually sent to usage.log have src=usage in each log line.
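For example, grep can pull a single stream back out of the combined syslog file; the path assumes the default /var/log/messages destination:
# Show only user action events from the combined syslog output
grep 'src=useraction' /var/log/messages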
8.8.2. Enabling Syslog for Node Components
Note
If you are using Rsyslog, log messages are sent to the /var/log/messages file by default. See the Red Hat Enterprise Linux 6 Deployment Guide for information on viewing and managing log files.
Procedure 8.4. To Enable Syslog for Apache:
- Configure Apache to send log messages to Syslog by adding the following option in the /etc/sysconfig/httpd file:
  OPTIONS="-DOpenShiftFrontendSyslogEnabled"
  Important
  All options must be on the same line. For example, in Section 8.5.2, “Enabling Application and Gear Context in Node Component Logs”, another option for Apache log files is explained. If both options are desired, the line must use the following syntax:
  OPTIONS="-Option1 -Option2"
- Restart the httpd service for the Apache changes to take effect:
  # service httpd restart
Procedure 8.5. To Enable Syslog for the Node Platform:
- Configure the node platform to send log messages to Syslog by editing the /etc/openshift/node.conf file. Add the following line and any or all of the optional settings described below:
  PLATFORM_LOG_CLASS=SyslogLogger
  Optional Threshold Setting: Add the following line to include messages with priorities up to and including the set threshold. Replace priority in the following line with one of the levels listed at http://ruby-doc.org/stdlib-1.9.3/libdoc/syslog/rdoc/Syslog.html#method-c-log:
  PLATFORM_SYSLOG_THRESHOLD=priority
  Optional Trace Log Setting: Add the following line to include trace logs that were previously directed to the default /var/log/openshift/node/platform-trace.log file:
  PLATFORM_SYSLOG_TRACE_ENABLED=1
- Restart the ruby193-mcollective service for the node platform changes to take effect:
  # service ruby193-mcollective restart
- When Syslog support is enabled for the node platform, the local0 Syslog facility is used to log messages. By default, the /etc/rsyslog.conf file does not log platform debug messages. If you are using Rsyslog as your Syslog implementation, add the following line to the /etc/rsyslog.conf file to enable platform debug message logging. If necessary, replace /var/log/messages with your chosen logging destination:
  local0.*;*.info;mail.none;authpriv.none;cron.none /var/log/messages
  Then restart the rsyslog service:
  # service rsyslog restart
  With this change, all log messages using the local0 facility are sent to the configured logging destination.
8.8.3. Enabling Syslog for Cartridge Logs from Gears
By default, gears write their cartridge logs to the $OPENSHIFT_LOG_DIR directory of an application. You can configure logshifter on node hosts to instead have gears send their cartridge logs to Syslog. Starting with OpenShift Enterprise 2.1.7, you can also have them sent to both Syslog and the $OPENSHIFT_LOG_DIR directory at the same time.
Procedure 8.6. To Enable Syslog for Cartridge Logs from Gears:
- Edit the /etc/openshift/logshifter.conf file on the node host. The default value for the outputType setting is file, which results in gears sending cartridge logs to the $OPENSHIFT_LOG_DIR directory. Change this setting to syslog to have them sent to Syslog:
  outputType=syslog
  Alternatively, starting with OpenShift Enterprise 2.1.7, you can change the outputType setting to multi, which results in logs being written using both file and syslog at the same time.
- Ask affected owners of existing applications to restart their applications for the changes to take effect. They can restart their applications with the following commands:
  $ rhc app stop -a appname
  $ rhc app start -a appname
  Alternatively, you can restart all gears on affected node hosts. The downtime caused by restarting all gears is minimal and normally lasts a few seconds:
  # oo-admin-ctl-gears restartall
Important
If the outputTypeFromEnviron setting in the /etc/openshift/logshifter.conf file is set to true, application owners can override the global outputType setting using a LOGSHIFTER_OUTPUT_TYPE environment variable in their application. See the OpenShift Enterprise User Guide for more information.
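For instance, an application owner could opt back into file-based logging with the rhc client. This assumes the rhc env command is available in your client version; the application name is hypothetical:
$ rhc env set LOGSHIFTER_OUTPUT_TYPE=file -a myapp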
With the mmopenshift plug-in described below, all Cron cartridges output their log information to the configured gear log file (/var/log/openshift_gears in Example 8.8). It may be necessary for system-level Cron logs to be separated from the gear logs for troubleshooting purposes. System-level Cron messages are tagged with cron_sys_log and can be separated into another file by adding the following to the Rsyslog configuration file (/etc/rsyslog7.conf in this example):
:syslogtag, contains, "cron_sys_log:" /var/log/openshift_cron_cartridges.log
&~
action(type="mmopenshift")
if $!OpenShift!OPENSHIFT_APP_UUID != '' then
    # annotate and log syslog output from gears specially
    *.* action(type="omfile" file="/var/log/openshift_gears" template="OpenShift")
else
    # otherwise send syslog where it usually goes
    *.info;mail.none;authpriv.none;cron.none action(type="omfile" file="/var/log/messages")
The :syslogtag entry must be placed before the *.* mmopenshift entry to prevent Cron system logs from going to both the openshift_cron_cartridges log and the openshift_gears log. The &~ tells Rsyslog to stop processing log entries if the filter condition on the previous line is met.
To provide context to cartridge logs aggregated to Syslog, a message modification plug-in for Rsyslog called mmopenshift
can be used to add gear metadata to the cartridge logs. The plug-in can be configured to add metadata items to the JSON properties of each message that Rsyslog receives from a gear.
The mmopenshift plug-in only works for messages that have the $!uid JSON property, which can be added automatically when the imuxsock plug-in is enabled with the following options:
- SysSock.Annotate
- SysSock.ParseTrusted
- SysSock.UsePIDFromSystem
Procedure 8.7. To Enable Application and Gear Context for Cartridge Logs:
- Install the mmopenshift plug-in, which requires the rsyslog7 package, on the node host. Because installing the rsyslog7 package where the rsyslog package is already installed can cause conflicts, consult the following instructions relevant to your node host.
  If the rsyslog package is already installed, use a yum shell to remove the rsyslog package and install the rsyslog7 and rsyslog7-mmopenshift packages safely:
  - Stop the Rsyslog service:
    # service rsyslog stop
  - Open a yum shell:
    # yum shell
  - Run the following commands inside of the yum shell:
    > erase rsyslog
    > install rsyslog7 rsyslog7-mmopenshift
    > transaction run
    > quit
    The rsyslog package is uninstalled and a newer version of Rsyslog takes its place. The rsyslog7-mmopenshift package is also installed, which provides the mmopenshift module.
  Alternatively, if the rsyslog package is not already installed, or if rsyslog7 is already the only version of Rsyslog installed, install the mmopenshift module using the following command:
  # yum install rsyslog7 rsyslog7-mmopenshift
- Review the existing /etc/rsyslog.conf file, if relevant, and note any important default or custom settings. This includes changes that were made with the instructions described in Section 8.8.2, “Enabling Syslog for Node Components”. Next, make any required changes to ensure that the new /etc/rsyslog7.conf file contains those changes. Note that some settings may be different between /etc/rsyslog.conf and /etc/rsyslog7.conf.rpmnew; see http://www.rsyslog.com/doc/v7-stable/ for more information.
  Once complete, take a backup of /etc/rsyslog.conf and move /etc/rsyslog.conf.rpmnew to /etc/rsyslog.conf.
  Important
  A sample section of an /etc/rsyslog7.conf.rpmnew file is provided at Example 8.8, “Sample Configuration Settings in /etc/rsyslog7.conf”, which depicts how the mmopenshift plug-in can be enabled for Rsyslog. However, it is not meant to represent a comprehensive /etc/rsyslog7.conf file or be fully comparable to the standard /etc/rsyslog.conf configuration.
/etc/rsyslog7.conf
file and add the following lines under theMODULES
section to enable theimuxsock
plug-in and themmopenshift
plug-in:module(load="imuxsock" SysSock.Annotate="on" SysSock.ParseTrusted="on" SysSock.UsePIDFromSystem="on") module(load="mmopenshift")
- Edit the
/etc/rsyslog7.conf
file and comment out the following line under theMODULES
section to configure theimuxsock
plug-in:#$ModLoad imuxsock
- Edit the
/etc/rsyslog7.conf
file and comment out the following lines to disable theimjournal
plug-in:$ModLoad imjournal $OmitLocalLogging on $IMJournalStateFile imjournal.state
- Edit the
/etc/rsyslog7.conf
file to have Syslog search the/etc/rsyslog7.d
directory for configuration files:#$IncludeConfig /etc/rsyslog.d/*.conf $IncludeConfig /etc/rsyslog7.d/*.conf
- Examine the
/etc/rsyslog.d
directory and copy any configuration files that are needed in/etc/rsyslog7.d
directory for the Rsyslog7 logging configuration. - Create a gear log template file in the Rsyslog7 directory. This defines the format of the gear logs, including sufficient parameters to distinguish gears from each other. This example template can be modified to suit the requirements of your log analysis tools. For more information on template configuration instructions, see http://www.rsyslog.com/doc/v7-stable/configuration/templates.html.:
  # vi /etc/rsyslog7.d/openshift-gear-template.conf
  template(name="OpenShift" type="list") {
      property(name="timestamp" dateFormat="rfc3339")
      constant(value=" ")
      property(name="hostname")
      constant(value=" ")
      property(name="syslogtag")
      constant(value=" app=")
      property(name="$!OpenShift!OPENSHIFT_APP_NAME")
      constant(value=" ns=")
      property(name="$!OpenShift!OPENSHIFT_NAMESPACE")
      constant(value=" appUuid=")
      property(name="$!OpenShift!OPENSHIFT_APP_UUID")
      constant(value=" gearUuid=")
      property(name="$!OpenShift!OPENSHIFT_GEAR_UUID")
      property(name="msg" spifno1stsp="on")
      property(name="msg" droplastlf="on")
      constant(value="\n")
  }
- Add the following lines to the /etc/rsyslog7.conf file under the RULES section to configure the mmopenshift plug-in to use the template from the previous step. The following example logs all gear messages to the /var/log/openshift_gears file and all other messages to the /var/log/messages file, but these destinations are configurable:
  module(load="mmopenshift")
  action(type="mmopenshift")
  if $!OpenShift!OPENSHIFT_APP_UUID != '' then
      *.* action(type="omfile" file="/var/log/openshift_gears" template="OpenShift")
  else {
      *.info;mail.none;authpriv.none;cron.none action(type="omfile" file="/var/log/messages")
  }
  Also, comment out the following line:
  # *.info;mail.none;authpriv.none;cron.none /var/log/messages
- Start or restart the rsyslog service and ensure it starts persistently across reboots:
  # service rsyslog restart
  # chkconfig rsyslog on
Example 8.8. Sample Configuration Settings in /etc/rsyslog7.conf
#### MODULES ####

# The imjournal module bellow is now used as a message source instead of imuxsock.
#$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
#$ModLoad imjournal # provides access to the systemd journal
$ModLoad imklog # provides kernel logging support (previously done by rklogd)
#$ModLoad immark # provides --MARK-- message capability

module(load="imuxsock" SysSock.Annotate="on" SysSock.ParseTrusted="on" SysSock.UsePIDFromSystem="on")
module(load="mmopenshift")

# Provides UDP syslog reception
#$ModLoad imudp
#$UDPServerRun 514

# Provides TCP syslog reception
#$ModLoad imtcp
#$InputTCPServerRun 514

#### GLOBAL DIRECTIVES ####

# Where to place auxiliary files
$WorkDirectory /var/lib/rsyslog

# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on

# Include all config files in /etc/rsyslog7.d/
#$IncludeConfig /etc/rsyslog.d/*.conf
$IncludeConfig /etc/rsyslog7.d/*.conf

# Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
#$OmitLocalLogging on

# File to store the position in the journal
#$IMJournalStateFile imjournal.state

#### RULES ####

# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.* /dev/console

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
#*.info;mail.none;authpriv.none;cron.none /var/log/messages

action(type="mmopenshift")
if $!OpenShift!OPENSHIFT_APP_UUID != '' then
  # annotate and log syslog output from gears specially
  *.* action(type="omfile" file="/var/log/openshift_gears" template="OpenShift")
else
  # otherwise send syslog where it usually goes
  *.info;mail.none;authpriv.none;cron.none action(type="omfile" file="/var/log/messages")

# The authpriv file has restricted access.
authpriv.* /var/log/secure

# Log all the mail messages in one place.
mail.* -/var/log/maillog

# Log cron stuff
cron.* /var/log/cron

# Everybody gets emergency messages
*.emerg :omusrmsg:*

# Save news errors of level crit and higher in a special file.
uucp,news.crit /var/log/spooler

# Save boot messages also to boot.log
local7.* /var/log/boot.log
8.8.4. Enabling Syslog for Management Console Components
Set the SYSLOG_ENABLED variable in the /etc/openshift/console.conf file to true in order to send production.log log messages to the syslog file:
SYSLOG_ENABLED=true
The default location for the syslog file is /var/log/messages, but this is configurable. Note that in the syslog file, different log files share the same program name. To differentiate them, Management Console log messages have src=app inserted into each log line.
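For example, assuming the default /var/log/messages destination, you can confirm that Management Console messages are arriving by filtering on the src=app marker:
# grep 'src=app' /var/log/messages | tail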
Chapter 9. Command Reference
9.1. Broker Administration Commands
The commands in this section are installed on the broker host by the openshift-origin-broker and openshift-origin-broker-util RPMs.
9.1.1. oo-accept-broker
The oo-accept-broker command checks that the broker setup is valid and functional. It is run without options on the broker host. If there are no errors, it displays PASS and exits with return code 0. With the -v option added, it displays the current checks that are being performed.
Example 9.1. Checking For Errors With oo-accept-broker
# oo-accept-broker -v
INFO: SERVICES: DATA: mongo, Auth: mongo, Name bind
INFO: AUTH_MODULE: rubygem-openshift-origin-auth-mongo
INFO: NAME_MODULE: rubygem-openshift-origin-dns-bind
INFO: Broker package is: openshift-origin-broker
INFO: checking packages
INFO: checking package ruby
INFO: checking package rubygems
INFO: checking package rubygem-rails
INFO: checking package rubygem-passenger
INFO: checking package rubygem-openshift-origin-common
INFO: checking package rubygem-openshift-origin-controller
INFO: checking package openshift-origin-broker
INFO: checking ruby requirements
INFO: checking ruby requirements for openshift-origin-controller
INFO: checking ruby requirements for config/application
INFO: checking firewall settings
INFO: checking services
INFO: checking datastore
INFO: checking cloud user authentication
INFO: auth plugin = /var/www/openshift/broker/config/initializers/broker.rb:2: uninitialized constant ApplicationObserver (NameError) from -:6
INFO: checking dynamic dns plugin
INFO: checking messaging configuration
PASS
9.1.2. oo-accept-systems
The oo-accept-systems command checks that each node host's PUBLIC_HOSTNAME and PUBLIC_IP configuration settings are globally valid and unique. It also checks the cartridges installed on the nodes and the status of the broker's cache. It is run without options on the broker host. If there are no errors, it displays PASS and exits with return code 0. With the -v option added, it displays the current checks that are being performed.
Example 9.2. Checking For Errors With oo-accept-systems
# oo-accept-systems -v
INFO: checking that each public_hostname resolves to external IP
INFO: PUBLIC_HOSTNAME node1.example.com for node2.example.com resolves to 10.4.59.136
INFO: PUBLIC_HOSTNAME node2.example.com for node1.example.com resolves to 10.4.59.133
INFO: checking that each public_hostname is unique
INFO: checking that public_ip has been set for all nodes
INFO: PUBLIC_IP 10.4.59.136 for node1.example.com
INFO: PUBLIC_IP 10.4.59.133 for node2.example.com
INFO: checking that public_ip is unique for all nodes
INFO: checking that all node hosts have cartridges installed
INFO: cartridges for node1.example.com: cron-1.4|ruby-1.9|perl-5.10|jenkins-client-1.4|diy-0.1|jenkins-1.4|php-5.3|haproxy-1.4|abstract|abstract-jboss|jbosseap-6.0|mysql-5.1|postgresql-8.4|ruby-1.8|jbossews-1.0|python-2.6|abstract-httpd
INFO: cartridges for node2.example.com: diy-0.1|jenkins-client-1.4|cron-1.4|jbossews-1.0|php-5.3|abstract-httpd|ruby-1.9|python-2.6|jbosseap-6.0|perl-5.10|abstract|postgresql-8.4|abstract-jboss|ruby-1.8|jenkins-1.4|haproxy-1.4|mysql-5.1
INFO: checking that same cartridges are installed on all node hosts
INFO: checking that broker's cache is not stale
INFO: API reports carts: diy-0.1|jbossews-1.0|php-5.3|ruby-1.9|python-2.6|jbosseap-6.0|perl-5.10|ruby-1.8|jenkins-1.4|jenkins-client-1.4|cron-1.4|postgresql-8.4|haproxy-1.4|mysql-5.1
PASS
9.1.3. oo-admin-chk
The oo-admin-chk command checks the consistency of application data in the MongoDB datastore against the data on the node hosts. With the -v option added, it displays the current checks that are being performed.
Example 9.3. Checking For MongoDB Consistency with oo-admin-chk
# oo-admin-chk -v
Started at: 2013-05-03 03:36:28 +1000
Time to fetch mongo data: 0.005s
Total gears found in mongo: 3
Time to get all gears from nodes: 20.298s
Total gears found on the nodes: 3
Total nodes that responded : 1
Checking application gears and ssh keys on corresponding nodes:
51816f026892dfec74000004 : String... OK
518174556892dfec74000040 : String... OK
518176826892dfec74000059 : String... OK
Checking node gears in application database:
51816f026892dfec74000004... OK
518174556892dfec74000040... OK
518176826892dfec74000059... OK
Success
Total time: 20.303s
Finished at: 2013-05-03 03:36:49 +1000
With the -l option added, additional levels of checks can be included:
# oo-admin-chk -l 1 -v
9.1.4. oo-admin-clear-pending-ops
The oo-admin-clear-pending-ops command removes stuck user operations from the application queue so that they no longer hold up the queue and prevent other operations from proceeding on that application.
oo-admin-clear-pending-ops [options]
Option | Description
---|---
-t, --time n | Deletes pending operations older than n hours. (Default: 1)
-u, --uuid uuid | Prunes only applications with the given UUID.
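For example, to delete pending operations older than 24 hours for a single application, combine the two options above; the UUID shown here is illustrative:
# oo-admin-clear-pending-ops --time 24 --uuid 51816f026892dfec74000004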
Note
It is not usually necessary to run the oo-admin-clear-pending-ops command directly. It is most commonly run automatically by the ose-upgrade tool as part of the upgrade process described in the OpenShift Enterprise Deployment Guide. This ensures the database is in a consistent state before data migrations happen.
9.1.5. oo-admin-console-cache
The oo-admin-console-cache command manages the Management Console Rails application's cache.
oo-admin-console-cache [-c | --clear] [-q | --quiet]
Option | Description
---|---
-c, --clear | Removes all entries from the Management Console Rails application's cache.
-q, --quiet | Shows as little output as possible.
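For example, to clear the cache after updating the Management Console:
# oo-admin-console-cache --clear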
9.1.6. oo-admin-broker-auth
If the AUTH_SALT setting is changed in /etc/openshift/broker.conf, restart the broker service and run the oo-admin-broker-auth command to recreate authentication tokens for all applicable gears:
# oo-admin-broker-auth
9.1.7. oo-admin-broker-cache
The oo-admin-broker-cache command manages the broker Rails application's cache. Run it with the --clear option to remove all entries:
# oo-admin-broker-cache --clear
9.1.8. oo-admin-ctl-app
The oo-admin-ctl-app command controls a user's application, and is run on the broker host:
# oo-admin-ctl-app
The HAProxy multiplier sets the ratio of HAProxy cartridges enabled for application scaling. Setting the multiplier to 2 means that for every two gears, one will have HAProxy enabled. Alternatively, you can set the minimum and maximum number of HAProxy cartridges allowed in scaling.
Set the multiplier by running the oo-admin-ctl-app command with the --multiplier option:
# oo-admin-ctl-app -l username -a appname --cartridge haproxy-1.4 -c set-multiplier --multiplier 2
9.1.9. oo-admin-ctl-authorization
The oo-admin-ctl-authorization command manages authorization tokens. Use the expire command to remove expired tokens, or revoke_all to revoke all tokens:
# oo-admin-ctl-authorization -c expire
# oo-admin-ctl-authorization -c revoke_all
9.1.10. oo-admin-ctl-district
The oo-admin-ctl-district command is used to create and manage districts.
9.1.11. oo-admin-ctl-domain
The oo-admin-ctl-domain command manages a user's domains:
# oo-admin-ctl-domain
9.1.12. oo-admin-ctl-region
The oo-admin-ctl-region command is used to create, list, or destroy regions and add or remove zones within a given region.
9.1.13. oo-admin-ctl-team
The oo-admin-ctl-team tool manages global teams and is invoked with a set of commands using the -c or --command option:
oo-admin-ctl-team -c command [options]
Command | Description
---|---
list | Lists all teams.
create | Creates a new team. Requires either both the --name and --maps-to options, or both the --groups and --config-file options.
update | Updates an existing team's LDAP correspondence. Requires both the --name and --maps-to options.
delete | Deletes a team. Requires the --name option.
show | Displays a team and its members. Requires either the --name or --maps-to option.
sync | Syncs global teams with the LDAP groups. Requires the --config-file option.
sync-to-file | Generates a sync file for review. No changes are made to the teams and their members. Requires the --out-file and --config-file options.
sync-from-file | Syncs from a file. Requires the --in-file option.
Option | Description
---|---
--broker path | Specifies the path to the broker.
--create-new-users | Creates new users in OpenShift if they do not exist.
--remove-old-users | Removes members from a team that are no longer in the group.
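For example, to list all teams and then inspect one of them using the commands above; the team name is illustrative:
# oo-admin-ctl-team -c list
# oo-admin-ctl-team -c show --name myteam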
9.1.14. oo-admin-ctl-usage
The oo-admin-ctl-usage command displays usage data for all users. The output includes user names, gears, usage type, and duration.
oo-admin-ctl-usage --list [--enable-logger]
Option | Description
---|---
--list | Lists usage data.
--enable-logger | Prints error and warning messages to the log file instead of to the terminal.
The following table describes the fields displayed with the --list option.
Field | Description
---|---
User | Indicates the user accumulating the resource usage.
Gear | Indicates the UUID of the relevant gear.
UsageType | GEAR_USAGE relates to how long a gear has been in use. ADDTL_FS_GB relates to how long additional storage has been in use on a gear.
Usage | Indicates the duration of the usage (in hours).
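For example, to list usage data for all users while directing errors and warnings to the log file:
# oo-admin-ctl-usage --list --enable-logger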
9.1.15. oo-admin-ctl-user
The oo-admin-ctl-user command manages user accounts and their capabilities.
Option | Description
---|---
-l, --login Username | Login name for an OpenShift Enterprise user account. Required unless -f is used.
-f, --logins-file File_Name | File containing one login name per line. Required unless -l is used.
-c, --create | Create user account(s) for the specified login name(s) if they do not already exist.
--setmaxdomains Number | Set the maximum number of domains a user is allowed to use.
--setmaxgears Number | Set the maximum number of gears a user is allowed to use.
--setmaxteams Number | Set the maximum number of teams a user is allowed to create.
--allowviewglobalteams true|false | Add or remove the capability for a user to search and view any global team.
--allowprivatesslcertificates true|false | Add or remove the capability for a user to add private SSL certificates.
--addgearsize Gear_Size | Add a gear size to the capabilities for a user.
--removegearsize Gear_Size | Remove a gear size from the capabilities for a user.
--allowha true|false | Allow or disallow the high-availability applications capability for a user.
-q, --quiet | Suppress non-error output.
-h, --help | Show usage information.
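For example, a minimal sketch combining several of the options above; the login name and gear size are illustrative:
# oo-admin-ctl-user -l user1 --setmaxgears 20 --addgearsize medium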
See the following sections for more information on capabilities managed by the oo-admin-ctl-user command:
- See Section 2.4, “Enabling Support for High-Availability Applications” for more information on the high-availability applications capability.
- See Section 3.1, “Creating a User” for more information on creating users.
- See Section 4.1, “Setting the Maximum Number of Teams for Specific Users” and Section 4.2.2, “Enabling Global Team Visibility” for more information on team options.
- See Section 6.1, “Adding or Modifying Gear Profiles” for more information on modifying gear size capabilities.
- See Section 6.7, “Setting Gear Quotas and Sizes for Specific Users” for more information on setting gear quotas.
- See Section 8.7.1, “Setting Tracked and Untracked Storage” for more information on setting maximum tracked and untracked storage per gear.
9.1.16. oo-admin-move
The oo-admin-move command moves a gear from one node host to another. It requires the rsync_id_rsa private key in the broker host's /etc/openshift/ directory and the public key in each node host's /root/.ssh/authorized_keys file, as explained in the OpenShift Enterprise Deployment Guide at https://access.redhat.com/site/documentation. A gear retains its UNIX UID when moved; therefore, cross-district moves are only allowed when the destination district has the same gear UID available.
Run the oo-admin-move command on the broker host, specifying the desired gear's UUID and the node host you wish to move the gear to:
Example 9.4. Moving a Gear From One Node to Another
# oo-admin-move --gear_uuid 3baf79139b0b449d90303464dfa8dd6f -i node2.example.com
9.1.17. oo-admin-repair
The oo-admin-repair command checks for and, where possible, fixes inconsistencies in the broker's data. See the command's --help output for additional uses:
# oo-admin-repair
9.1.18. oo-admin-upgrade
The oo-admin-upgrade command upgrades cartridges on gears to the latest available versions. See the latest OpenShift Enterprise Deployment Guide for more information on oo-admin-upgrade command usage.
Important
The oo-admin-upgrade tool is also often required when applying asynchronous errata updates provided by Red Hat for OpenShift Enterprise. See the latest OpenShift Enterprise Deployment Guide at https://access.redhat.com/site/documentation for usage instructions as they apply to these types of updates.
9.1.19. oo-admin-usage
The oo-admin-usage command displays a resource usage report for a particular user, or aggregated usage data for all users. The output includes usage type, gear ID, and duration.
oo-admin-usage [-l username] [options]
If the -l username option is omitted, the command displays aggregated data for all users.
Option | Description
---|---
-a, --app application_name | Filters usage data by the given application name.
-g, --gear gear_id | Filters usage data by the given gear ID.
-s, --start start_date | Filters usage data by the given start date, expressed as an ISO date (YYYY-MM-DD).
-e, --end end_date | Filters usage data by the given end date, expressed as an ISO date (YYYY-MM-DD).
Field | Description
---|---
Usage Type | GEAR_USAGE relates to how long a gear has been in use, with the gear size in parentheses. ADDTL_FS_GB relates to how long additional storage has been in use on a gear, with the number of GBs in parentheses.
Gear ID | Indicates the UUID of the relevant gear, with the associated application name in parentheses.
Duration | Indicates the start and end time of the gear (or the start time and PRESENT if still in use).
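For example, to report usage for one user within a date range using the options above; the login and dates are illustrative:
# oo-admin-usage -l user1 -s 2014-01-01 -e 2014-02-01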
9.1.20. oo-admin-ctl-cartridge
The oo-admin-ctl-cartridge command facilitates cartridge management on the broker, including importing cartridge manifests from nodes and activating or deactivating cartridges. This command must be used to ensure that newly installed or updated cartridges can be used in applications.
9.1.21. oo-register-dns
The oo-register-dns command updates a host's DNS A records in BIND using the nsupdate command. Normally this command is used for broker or node hosts, although it can be used for other infrastructure hosts. Do not use this command to change DNS records for applications and gears, because those are CNAME records.
# oo-register-dns
9.2. Node Administration Commands
The commands in this section are installed on the node host by the openshift-origin-node-util RPM.
9.2.1. oo-accept-node
The oo-accept-node command checks that the node host's setup is valid and functional. It is run without options on a node host. If there are no errors, it displays PASS and exits with return code 0. With the -v option added, it displays the current checks that are being performed.
# oo-accept-node -v
9.2.2. oo-admin-ctl-gears
The oo-admin-ctl-gears command controls the gears hosted on a node:
# oo-admin-ctl-gears
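A minimal usage sketch; the startall and stopall subcommands shown here are an assumption and may vary by release:
# oo-admin-ctl-gears startall    (assumed subcommand: start all gears on this node)
# oo-admin-ctl-gears stopall     (assumed subcommand: stop all gears on this node)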
9.2.3. oo-idler-stats
The oo-idler-stats command displays a summary of the states of the gears on a node host, including how many are idled:
# oo-idler-stats
9.2.4. Idler Commands
9.2.4.1. oo-last-access
The oo-last-access command compiles gear last-access times from the web front-end logs so that the auto-idler can determine which gears are stale. Both tools are typically run from cron, as in the following example.
Example 9.5. A Cron auto-idler Script
# run the last-access compiler hourly
0 * * * * /usr/sbin/oo-last-access > /var/lib/openshift/last_access.log 2>&1
# run the auto-idler twice daily and idle anything that has not been accessed within the interval
30 7,19 * * * /usr/sbin/oo-auto-idler idle --interval 12
9.2.4.2. oo-auto-idler
The oo-auto-idler command idles any gears that have not been accessed within the number of hours given by its --interval option, as shown in the example above.
Appendix A. Revision History
Revision | Date | Author
---|---|---
Revision 2.2-6 | Wed Nov 23 2016 | Ashley Hardin
Revision 2.2-5 | Thu Sep 08 2016 | Ashley Hardin
Revision 2.2-4 | Wed Sep 09 2015 | Brice Fallon-Freeman
Revision 2.2-3 | Fri Jun 05 2015 | Vikram Goyal
Revision 2.2-2 | Fri Apr 10 2015 | Brice Fallon-Freeman
Revision 2.2-1 | Wed Dec 10 2014 | Timothy Poitras
Revision 2.2-0 | Tue Nov 4 2014 | Brice Fallon-Freeman
Revision 2.1-8 | Thu Oct 2 2014 | Brice Fallon-Freeman
Revision 2.1-7 | Thu Sep 11 2014 | Alex Dellapenta
Revision 2.1-6 | Tue Aug 26 2014 | Alex Dellapenta
Revision 2.1-5 | Fri Aug 8 2014 | Alex Dellapenta
Revision 2.1-4 | Tue Aug 5 2014 | Brice Fallon-Freeman
Revision 2.1-3 | Wed Jul 9 2014 | Julie Wu
Revision 2.1-2 | Thu Jun 26 2014 | Julie Wu
Revision 2.1-1 | Thu Jun 5 2014 | Julie Wu
Revision 2.1-0 | Fri May 16 2014 | Brice Fallon-Freeman
Revision 2.0-2 | Tue Jan 28 2014 | Julie Wu
Revision 2.0-1 | Tue Jan 14 2014 | Brice Fallon-Freeman
Revision 2.0-0 | Mon Dec 9 2013 | Alex Dellapenta