Chapter 4. Managing high availability services with Pacemaker

The Pacemaker service manages core container and active-passive services, such as Galera, RabbitMQ, Redis, and HAProxy. You use Pacemaker to view and manage general information about the managed services, virtual IP addresses, power management, and fencing.

For more information about Pacemaker in Red Hat Enterprise Linux, see Configuring and Managing High Availability Clusters in the Red Hat Enterprise Linux documentation.

4.1. Resource bundles and containers

Pacemaker manages Red Hat OpenStack Platform (RHOSP) services as Bundle Set resources, or bundles. Most of these services are active-active services that start in the same way and always run on each Controller node.

Pacemaker manages the following resource types:

Bundle
A bundle resource configures and replicates the same container on all Controller nodes, maps the necessary storage paths to the container directories, and sets specific attributes related to the resource itself.
Container
A container can run different kinds of resources, from simple systemd services like HAProxy to complex services like Galera, which requires specific resource agents that control and set the state of the service on the different nodes.
Important
  • You cannot use docker or systemctl to manage bundles or containers. You can use these commands to check the status of the services, but you must use Pacemaker to perform actions on these services.
  • Docker containers that Pacemaker controls have their Docker RestartPolicy set to no. This ensures that Pacemaker, and not Docker, controls the container start and stop actions.
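For example, to restart the HAProxy bundle, use pcs rather than docker or systemctl, and use docker only to inspect the container. The following commands are an illustrative sketch; the inspect output reflects the restart policy described above:

$ sudo pcs resource restart haproxy-bundle
$ sudo docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' haproxy-bundle-docker-0
no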

Simple Bundle Set resources (simple bundles)

A simple Bundle Set resource, or simple bundle, is a set of containers that each include the same Pacemaker services that you want to deploy across the Controller nodes.

The following example shows a list of simple bundles from the output of the pcs status command:

Docker container set: haproxy-bundle [192.168.24.1:8787/rhosp-rhel7/openstack-haproxy:pcmklatest]
  haproxy-bundle-docker-0      (ocf::heartbeat:docker):        Started overcloud-controller-0
  haproxy-bundle-docker-1      (ocf::heartbeat:docker):        Started overcloud-controller-1
  haproxy-bundle-docker-2      (ocf::heartbeat:docker):        Started overcloud-controller-2

For each bundle, you can see the following details:

  • The name that Pacemaker assigns to the service.
  • The reference to the container that is associated with the bundle.
  • The list and status of replicas that are running on the different Controller nodes.

The following example shows the settings for the haproxy-bundle simple bundle:

$ sudo pcs resource show haproxy-bundle
Bundle: haproxy-bundle
 Docker: image=192.168.24.1:8787/rhosp-rhel7/openstack-haproxy:pcmklatest network=host options="--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS" replicas=3 run-command="/bin/bash /usr/local/bin/kolla_start"
 Storage Mapping:
  options=ro source-dir=/var/lib/kolla/config_files/haproxy.json target-dir=/var/lib/kolla/config_files/config.json (haproxy-cfg-files)
  options=ro source-dir=/var/lib/config-data/puppet-generated/haproxy/ target-dir=/var/lib/kolla/config_files/src (haproxy-cfg-data)
  options=ro source-dir=/etc/hosts target-dir=/etc/hosts (haproxy-hosts)
  options=ro source-dir=/etc/localtime target-dir=/etc/localtime (haproxy-localtime)
  options=ro source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted (haproxy-pki-extracted)
  options=ro source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt (haproxy-pki-ca-bundle-crt)
  options=ro source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt (haproxy-pki-ca-bundle-trust-crt)
  options=ro source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem (haproxy-pki-cert)
  options=rw source-dir=/dev/log target-dir=/dev/log (haproxy-dev-log)

The example shows the following information about the containers in the bundle:

  • image: Image used by the container, which refers to the local registry of the undercloud.
  • network: Container network type, which is "host" in the example.
  • options: Specific options for the container.
  • replicas: Indicates how many copies of the container must run in the cluster. Each bundle includes three containers, one for each Controller node.
  • run-command: System command used to spawn the container.
  • Storage Mapping: Mapping of the local path on each host to the container. To check the haproxy configuration from the host, open the /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg file instead of the /etc/haproxy/haproxy.cfg file.
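For example, to view the HAProxy configuration that the container uses, read the file through the mapped path on the host. The less command is only one of several ways to view the file:

$ sudo less /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg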
Note

Although HAProxy provides high availability by load balancing traffic to selected services, HAProxy itself is made highly available by being managed as a Pacemaker bundle service.

Complex Bundle Set resources (complex bundles)

Complex Bundle Set resources, or complex bundles, are Pacemaker services that specify a resource configuration in addition to the basic container configuration that is included in simple bundles.

This configuration is needed to manage Multi-State resources, which are services that can have different states depending on the Controller node they run on.

This example shows a list of complex bundles from the output of the pcs status command:

Docker container set: rabbitmq-bundle [192.168.24.1:8787/rhosp-rhel7/openstack-rabbitmq:pcmklatest]
  rabbitmq-bundle-0    (ocf::heartbeat:rabbitmq-cluster):      Started overcloud-controller-0
  rabbitmq-bundle-1    (ocf::heartbeat:rabbitmq-cluster):      Started overcloud-controller-1
  rabbitmq-bundle-2    (ocf::heartbeat:rabbitmq-cluster):      Started overcloud-controller-2
Docker container set: galera-bundle [192.168.24.1:8787/rhosp-rhel7/openstack-mariadb:pcmklatest]
  galera-bundle-0      (ocf::heartbeat:galera):        Master overcloud-controller-0
  galera-bundle-1      (ocf::heartbeat:galera):        Master overcloud-controller-1
  galera-bundle-2      (ocf::heartbeat:galera):        Master overcloud-controller-2
Docker container set: redis-bundle [192.168.24.1:8787/rhosp-rhel7/openstack-redis:pcmklatest]
  redis-bundle-0       (ocf::heartbeat:redis): Master overcloud-controller-0
  redis-bundle-1       (ocf::heartbeat:redis): Slave overcloud-controller-1
  redis-bundle-2       (ocf::heartbeat:redis): Slave overcloud-controller-2

This output shows the following information about each complex bundle:

  • RabbitMQ: All three Controller nodes run a standalone instance of the service, similar to a simple bundle.
  • Galera: All three Controller nodes are running as Galera masters under the same constraints.
  • Redis: The overcloud-controller-0 container is running as the master, while the other two Controller nodes are running as slaves. Each container type might run under different constraints.
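To quickly check which Controller node currently runs the master instance of a multi-state service such as Redis, you can filter the pcs status output. The grep options in this example are only one way to narrow the output:

$ sudo pcs status | grep -A 3 redis-bundle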

The following example shows the settings for the galera-bundle complex bundle:

[...]
Bundle: galera-bundle
 Docker: image=192.168.24.1:8787/rhosp-rhel7/openstack-mariadb:pcmklatest masters=3 network=host options="--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS" replicas=3 run-command="/bin/bash /usr/local/bin/kolla_start"
 Network: control-port=3123
 Storage Mapping:
  options=ro source-dir=/var/lib/kolla/config_files/mysql.json target-dir=/var/lib/kolla/config_files/config.json (mysql-cfg-files)
  options=ro source-dir=/var/lib/config-data/puppet-generated/mysql/ target-dir=/var/lib/kolla/config_files/src (mysql-cfg-data)
  options=ro source-dir=/etc/hosts target-dir=/etc/hosts (mysql-hosts)
  options=ro source-dir=/etc/localtime target-dir=/etc/localtime (mysql-localtime)
  options=rw source-dir=/var/lib/mysql target-dir=/var/lib/mysql (mysql-lib)
  options=rw source-dir=/var/log/mariadb target-dir=/var/log/mariadb (mysql-log-mariadb)
  options=rw source-dir=/dev/log target-dir=/dev/log (mysql-dev-log)
 Resource: galera (class=ocf provider=heartbeat type=galera)
  Attributes: additional_parameters=--open-files-limit=16384 cluster_host_map=overcloud-controller-0:overcloud-controller-0.internalapi.localdomain;overcloud-controller-1:overcloud-controller-1.internalapi.localdomain;overcloud-controller-2:overcloud-controller-2.internalapi.localdomain enable_creation=true wsrep_cluster_address=gcomm://overcloud-controller-0.internalapi.localdomain,overcloud-controller-1.internalapi.localdomain,overcloud-controller-2.internalapi.localdomain
  Meta Attrs: container-attribute-target=host master-max=3 ordered=true
  Operations: demote interval=0s timeout=120 (galera-demote-interval-0s)
              monitor interval=20 timeout=30 (galera-monitor-interval-20)
              monitor interval=10 role=Master timeout=30 (galera-monitor-interval-10)
              monitor interval=30 role=Slave timeout=30 (galera-monitor-interval-30)
              promote interval=0s on-fail=block timeout=300s (galera-promote-interval-0s)
              start interval=0s timeout=120 (galera-start-interval-0s)
              stop interval=0s timeout=120 (galera-stop-interval-0s)
[...]

This output shows that, unlike in a simple bundle, the galera-bundle resource includes explicit resource configuration that determines all aspects of the multi-state resource.

Note

Although a service can run on multiple Controller nodes at the same time, the Controller node itself might not be listening at the IP address that is required to reach those services. For information about how to check the IP address of a service, see Section 4.4, “Viewing virtual IP addresses”.

4.2. Viewing general Pacemaker information

To view general Pacemaker information, use the pcs status command.

Procedure

  1. Log in to any Controller node as the heat-admin user.

    $ ssh heat-admin@overcloud-controller-0
  2. Run the pcs status command:

    [heat-admin@overcloud-controller-0 ~]  $ sudo pcs status

    Example output:

    Cluster name: tripleo_cluster
    Stack: corosync
    Current DC: overcloud-controller-1 (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum
    
    Last updated: Thu Feb  8 14:29:21 2018
    Last change: Sat Feb  3 11:37:17 2018 by root via cibadmin on overcloud-controller-2
    
    12 nodes configured
    37 resources configured
    
    Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
    GuestOnline: [ galera-bundle-0@overcloud-controller-0 galera-bundle-1@overcloud-controller-1 galera-bundle-2@overcloud-controller-2 rabbitmq-bundle-0@overcloud-controller-0 rabbitmq-bundle-1@overcloud-controller-1 rabbitmq-bundle-2@overcloud-controller-2 redis-bundle-0@overcloud-controller-0 redis-bundle-1@overcloud-controller-1 redis-bundle-2@overcloud-controller-2 ]
    
    Full list of resources:
    [...]

    The main sections of the output show the following information about the cluster:

    • Cluster name: Name of the cluster.
    • [NUM] nodes configured: Number of nodes that are configured for the cluster.
    • [NUM] resources configured: Number of resources that are configured for the cluster.
    • Online: Names of the Controller nodes that are currently online.
    • GuestOnline: Names of the guest nodes that are currently online. Each guest node consists of a complex Bundle Set resource. For more information about bundle sets, see Section 4.1, “Resource bundles and containers”.
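If you want to view only the status of the cluster resources, without the node and daemon information, you can use a pcs status subcommand. Depending on your pcs version, the following form is available:

$ sudo pcs status resources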

4.3. Viewing bundle status

You can check the status of a bundle from an undercloud node or log in to one of the Controller nodes to check the bundle status directly.

Check bundle status from an undercloud node

Run the following command:

$ sudo docker exec -it haproxy-bundle-docker-0 ps -efww | grep haproxy*

Example output:

root           7       1  0 06:08 ?        00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws
haproxy       11       7  0 06:08 ?        00:00:17 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws

The output shows that the haproxy process is running inside the container.

Check bundle status from a Controller node

Log in to a Controller node and run the following command:

$ ps -ef | grep haproxy*

Example output:

root       17774   17729  0 06:08 ?        00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws
42454      17819   17774  0 06:08 ?        00:00:21 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws
root      288508  237714  0 07:04 pts/0    00:00:00 grep --color=auto haproxy*
[root@controller-0 ~]# ps -ef | grep -e 17774 -e 17819
root       17774   17729  0 06:08 ?        00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws
42454      17819   17774  0 06:08 ?        00:00:22 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws
root      301950  237714  0 07:07 pts/0    00:00:00 grep --color=auto -e 17774 -e 17819
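You can also confirm from a Controller node that the bundle container itself is running. The --filter and --format values in this example are only illustrative:

$ sudo docker ps --filter name=haproxy-bundle --format "{{.Names}}: {{.Status}}"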

4.4. Viewing virtual IP addresses

Each IPaddr2 resource sets a virtual IP address that clients use to request access to a service. If the Controller node with that IP address fails, the IPaddr2 resource reassigns the IP address to a different Controller node.

Show all virtual IP addresses

Run the pcs resource show command with the --full option to display all resources, and look for the resources of type IPaddr2, which manage the virtual IP addresses:

$ sudo pcs resource show --full

The following example output shows the Controller node that each virtual IP address is currently assigned to:

 ip-10.200.0.6	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-1
 ip-192.168.1.150	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-0
 ip-172.16.0.10	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-1
 ip-172.16.0.11	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-0
 ip-172.18.0.10	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-2
 ip-172.19.0.10	(ocf::heartbeat:IPaddr2):	Started overcloud-controller-2

Each IP address is initially attached to a specific Controller node. For example, 192.168.1.150 is started on overcloud-controller-0. However, if that Controller node fails, the IP address is reassigned to other Controller nodes in the cluster.

The following table describes the IP addresses in the example output and shows the original allocation of each IP address.

Table 4.1. IP address description and allocation source

IP Address    | Description                                                                                    | Allocated From
192.168.1.150 | Public IP address                                                                              | ExternalAllocationPools attribute in the network-environment.yaml file
10.200.0.6    | Controller virtual IP address                                                                  | Part of the dhcp_start and dhcp_end range set to 10.200.0.5-10.200.0.24 in the undercloud.conf file
172.16.0.10   | Provides access to OpenStack API services on a Controller node                                 | InternalApiAllocationPools attribute in the network-environment.yaml file
172.18.0.10   | Storage virtual IP address that provides access to the Glance API and to Swift Proxy services  | StorageAllocationPools attribute in the network-environment.yaml file
172.16.0.11   | Provides access to the Redis service on a Controller node                                      | InternalApiAllocationPools attribute in the network-environment.yaml file
172.19.0.10   | Provides access to storage management                                                          | StorageMgmtAllocationPools attribute in the network-environment.yaml file

View a specific IP address

Run the pcs resource show command with the name of the virtual IP resource:

$ sudo pcs resource show ip-192.168.1.150

Example output:

 Resource: ip-192.168.1.150 (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: ip=192.168.1.150 cidr_netmask=32
  Operations: start interval=0s timeout=20s (ip-192.168.1.150-start-timeout-20s)
              stop interval=0s timeout=20s (ip-192.168.1.150-stop-timeout-20s)
              monitor interval=10s timeout=20s (ip-192.168.1.150-monitor-interval-10s)

View network information for a specific IP address

  1. Log in to the Controller node that is assigned to the IP address you want to view.
  2. Run the ip addr show command to view network interface information.

    $ ip addr show vlan100

    Example output:

      9: vlan100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
        link/ether be:ab:aa:37:34:e7 brd ff:ff:ff:ff:ff:ff
        inet *192.168.1.151/24* brd 192.168.1.255 scope global vlan100
           valid_lft forever preferred_lft forever
        inet *192.168.1.150/32* brd 192.168.1.255 scope global vlan100
           valid_lft forever preferred_lft forever
  3. Run the netstat command to show all processes that listen on the IP address.

    $ sudo netstat -tupln | grep "192.168.1.150.*haproxy"

    Example output:

    tcp        0      0 192.168.1.150:8778          0.0.0.0:*               LISTEN      61029/haproxy
    tcp        0      0 192.168.1.150:8042          0.0.0.0:*               LISTEN      61029/haproxy
    tcp        0      0 192.168.1.150:9292          0.0.0.0:*               LISTEN      61029/haproxy
    tcp        0      0 192.168.1.150:8080          0.0.0.0:*               LISTEN      61029/haproxy
    tcp        0      0 192.168.1.150:80            0.0.0.0:*               LISTEN      61029/haproxy
    tcp        0      0 192.168.1.150:8977          0.0.0.0:*               LISTEN      61029/haproxy
    tcp        0      0 192.168.1.150:6080          0.0.0.0:*               LISTEN      61029/haproxy
    tcp        0      0 192.168.1.150:9696          0.0.0.0:*               LISTEN      61029/haproxy
    tcp        0      0 192.168.1.150:8000          0.0.0.0:*               LISTEN      61029/haproxy
    tcp        0      0 192.168.1.150:8004          0.0.0.0:*               LISTEN      61029/haproxy
    tcp        0      0 192.168.1.150:8774          0.0.0.0:*               LISTEN      61029/haproxy
    tcp        0      0 192.168.1.150:5000          0.0.0.0:*               LISTEN      61029/haproxy
    tcp        0      0 192.168.1.150:8776          0.0.0.0:*               LISTEN      61029/haproxy
    tcp        0      0 192.168.1.150:8041          0.0.0.0:*               LISTEN      61029/haproxy
    Note

    Processes that listen on all local addresses, such as 0.0.0.0, are also reachable through 192.168.1.150. These processes include sshd, mysqld, dhclient, and ntpd.
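To list the processes that bind to all local addresses, and that are therefore also reachable through the virtual IP address, you can adapt the same netstat command. This filter is only an example:

$ sudo netstat -tupln | grep "0.0.0.0"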

View port number assignments

Open the /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg file to see default port number assignments.

The following example shows the port numbers and the services that listen on them:

  • TCP port 6080: nova_novncproxy
  • TCP port 9696: neutron
  • TCP port 8000: heat_cfn
  • TCP port 80: horizon
  • TCP port 8776: cinder

In this example, most services that are defined in the haproxy.cfg file listen on the 192.168.1.150 IP address on all three Controller nodes. However, only the overcloud-controller-0 node currently holds the 192.168.1.150 IP address, so only that node receives external traffic.

Therefore, if the overcloud-controller-0 node fails, Pacemaker only needs to reassign 192.168.1.150 to another Controller node, where all of the other services are already running.
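To see which addresses and ports HAProxy binds to for each service, you can search the generated configuration file directly from the host. The grep pattern is only an example:

$ sudo grep -E 'listen|bind' /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg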

4.5. Viewing Pacemaker status and power management information

The last sections of the pcs status output show information about your power management fencing, such as IPMI, and the status of the Pacemaker service itself:

 openstack-cinder-volume        (systemd:openstack-cinder-volume):      Started overcloud-controller-0

 my-ipmilan-for-controller-0	(stonith:fence_ipmilan): Started my-ipmilan-for-controller-0
 my-ipmilan-for-controller-1	(stonith:fence_ipmilan): Started my-ipmilan-for-controller-1
 my-ipmilan-for-controller-2	(stonith:fence_ipmilan): Started my-ipmilan-for-controller-2

PCSD Status:
  overcloud-controller-0: Online
  overcloud-controller-1: Online
  overcloud-controller-2: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

The my-ipmilan-for-controller settings show the type of fencing for each Controller node (stonith:fence_ipmilan) and whether the IPMI service is stopped or running. The PCSD Status shows that all three Controller nodes are currently online. The Pacemaker service consists of three daemons: corosync, pacemaker, and pcsd. In this example, all three daemons are active and enabled.
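To view only the fencing devices, or to check whether fencing is enabled for the cluster, you can use the following pcs commands. The exact syntax can vary between pcs versions:

$ sudo pcs stonith show
$ sudo pcs property show stonith-enabled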

4.6. Troubleshooting failed Pacemaker resources

If one of the Pacemaker resources fails, you can view the Failed Actions section of the pcs status output. In the following example, the openstack-cinder-volume service stopped working on overcloud-controller-0:

Failed Actions:
* openstack-cinder-volume_monitor_60000 on overcloud-controller-0 'not running' (7): call=74, status=complete, exitreason='none',
	last-rc-change='Wed Dec 14 08:33:14 2016', queued=0ms, exec=0ms

In this case, you must enable the systemd service openstack-cinder-volume. In other cases, you might need to locate and fix the problem and then clean up the resources. For more information about troubleshooting resource problems, see Chapter 8, Troubleshooting resource problems.
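For example, after you locate and fix the underlying problem, you can clear the failed state so that Pacemaker retries the resource. The following command is a general sketch, not a complete recovery procedure:

$ sudo pcs resource cleanup openstack-cinder-volume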