Chapter 7. Troubleshooting high availability resources
If a resource fails, you must investigate the cause and location of the problem, fix the failed resource, and optionally clean up the resource. There are many possible causes of resource failures depending on your deployment, and you must investigate the resource to determine how to fix the problem.
For example, you can check the resource constraints to ensure that the resources are not interrupting each other, and that the resources can connect to each other. You can also examine a Controller node that is fenced more often than other Controller nodes to identify possible communication problems.
Depending on the location of the resource problem, you choose one of the following options:
- Controller node problems
- If health checks to a Controller node are failing, this can indicate a communication problem between Controller nodes. To investigate, log in to the Controller node and check if the services can start correctly.
- Individual resource problems
- If most services on a Controller node are running correctly, you can run the pcs status command and check the output for information about a specific Pacemaker resource failure, or run the systemctl command to investigate a non-Pacemaker resource failure.
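As a quick triage sketch, the following filters a `pcs status` resource excerpt for anything not in the Started state. The excerpt and resource states are hypothetical sample data for illustration; on a real Controller node you would pipe the actual `pcs status` output through the same filter:

```shell
# Hypothetical excerpt of `pcs status` resource output, used as sample data.
status_excerpt='galera-bundle-0 (ocf::heartbeat:galera): Started controller-0
rabbitmq-bundle-1 (ocf::heartbeat:rabbitmq-cluster): Stopped
openstack-cinder-volume-podman-0 (ocf::heartbeat:podman): Started controller-1'

# Print only resources that are not reported as Started; these need investigation.
printf '%s\n' "$status_excerpt" | grep -v 'Started'
```

In this sample, only the Stopped rabbitmq-bundle-1 line is printed, which tells you where to start investigating.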
7.1. Viewing resource constraints in a high availability cluster
Before you investigate resource problems, you can view constraints on how services are launched, including constraints related to where each resource is located, the order in which the resource starts, and whether the resource must be colocated with another resource.
Procedure
Use one of the following options:
To view all resource constraints, log in to any Controller node and run the pcs constraint show command:

$ sudo pcs constraint show

The following example shows a truncated output from the pcs constraint show command on a Controller node:

Location Constraints:
  Resource: galera-bundle
    Constraint: location-galera-bundle (resource-discovery=exclusive)
      Rule: score=0
        Expression: galera-role eq true
  [...]
  Resource: ip-192.168.24.15
    Constraint: location-ip-192.168.24.15 (resource-discovery=exclusive)
      Rule: score=0
        Expression: haproxy-role eq true
  [...]
  Resource: my-ipmilan-for-controller-0
    Disabled on: overcloud-controller-0 (score:-INFINITY)
  Resource: my-ipmilan-for-controller-1
    Disabled on: overcloud-controller-1 (score:-INFINITY)
  Resource: my-ipmilan-for-controller-2
    Disabled on: overcloud-controller-2 (score:-INFINITY)
Ordering Constraints:
  start ip-172.16.0.10 then start haproxy-bundle (kind:Optional)
  start ip-10.200.0.6 then start haproxy-bundle (kind:Optional)
  start ip-172.19.0.10 then start haproxy-bundle (kind:Optional)
  start ip-192.168.1.150 then start haproxy-bundle (kind:Optional)
  start ip-172.16.0.11 then start haproxy-bundle (kind:Optional)
  start ip-172.18.0.10 then start haproxy-bundle (kind:Optional)
Colocation Constraints:
  ip-172.16.0.10 with haproxy-bundle (score:INFINITY)
  ip-172.18.0.10 with haproxy-bundle (score:INFINITY)
  ip-10.200.0.6 with haproxy-bundle (score:INFINITY)
  ip-172.19.0.10 with haproxy-bundle (score:INFINITY)
  ip-172.16.0.11 with haproxy-bundle (score:INFINITY)
  ip-192.168.1.150 with haproxy-bundle (score:INFINITY)
This output displays the following main constraint types:
- Location Constraints
- Lists the locations to which resources can be assigned:
  - The first constraint defines a rule that sets the galera-bundle resource to run on nodes with the galera-role attribute set to true.
  - The second location constraint specifies that the IP resource ip-192.168.24.15 runs only on nodes with the haproxy-role attribute set to true. This means that the cluster associates the IP address with the haproxy service, which is necessary to make the services reachable.
  - The third location constraint shows that the ipmilan resource is disabled on each of the Controller nodes.
- Ordering Constraints
Lists the order in which resources can launch. This example shows a constraint that sets the virtual IP address resources IPaddr2 to start before the HAProxy service.
Note: Ordering constraints apply only to IP address resources and to HAProxy. Systemd manages all other resources, because services such as Compute are expected to withstand an interruption of a dependent service, such as Galera.
- Colocation Constraints
- Lists which resources must be located together. All virtual IP addresses are linked to the haproxy-bundle resource.
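To make the colocation relationships easier to scan, you can reduce each Colocation Constraints line to a resource-to-bundle pair. A minimal sketch over sample lines that mirror the example output above:

```shell
# Sample Colocation Constraints lines, as shown in the example output.
colocation='ip-172.16.0.10 with haproxy-bundle (score:INFINITY)
ip-172.18.0.10 with haproxy-bundle (score:INFINITY)'

# Print each virtual IP resource and the bundle it must be colocated with.
printf '%s\n' "$colocation" | awk '{print $1 " -> " $3}'
```

For the sample above, this prints one "ip-address -> haproxy-bundle" pair per line.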
To view the cluster properties and the node attributes that the location constraints reference, log in to any Controller node and run the pcs property show command:

$ sudo pcs property show

Example output:

Cluster Properties:
  cluster-infrastructure: corosync
  cluster-name: tripleo_cluster
  dc-version: 2.0.1-4.el8-0eb7991564
  have-watchdog: false
  redis_REPL_INFO: overcloud-controller-0
  stonith-enabled: false
Node Attributes:
  overcloud-controller-0: cinder-volume-role=true galera-role=true haproxy-role=true rabbitmq-role=true redis-role=true rmq-node-attr-last-known-rabbitmq=rabbit@overcloud-controller-0
  overcloud-controller-1: cinder-volume-role=true galera-role=true haproxy-role=true rabbitmq-role=true redis-role=true rmq-node-attr-last-known-rabbitmq=rabbit@overcloud-controller-1
  overcloud-controller-2: cinder-volume-role=true galera-role=true haproxy-role=true rabbitmq-role=true redis-role=true rmq-node-attr-last-known-rabbitmq=rabbit@overcloud-controller-2
In this output, you can verify that the resource constraints are set correctly. For example, the galera-role attribute is true for all Controller nodes, which means that the galera-bundle resource runs only on these nodes.
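That verification can be sketched as a filter over the Node Attributes lines. The attribute values below are hypothetical sample data (one node deliberately has galera-role=false to show the filter working); on a real Controller node you would filter the actual pcs property show output:

```shell
# Hypothetical Node Attributes lines, standing in for `pcs property show` output.
attrs='overcloud-controller-0: cinder-volume-role=true galera-role=true haproxy-role=true
overcloud-controller-1: cinder-volume-role=true galera-role=false haproxy-role=true'

# Count nodes that carry galera-role=true; galera-bundle can run only on those.
printf '%s\n' "$attrs" | grep -c 'galera-role=true'
```

If the count is lower than the number of Controller nodes, the galera-bundle resource cannot run on the missing nodes, which may explain a placement failure.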
7.2. Investigating Pacemaker resource problems
To investigate failed resources that Pacemaker manages, log in to the Controller node on which the resource is failing and check the status and log events for the resource. For example, investigate the status and log events for the openstack-cinder-volume resource.
Prerequisites
- A Controller node with Pacemaker services
- Root user permissions to view log events
Procedure
- Log in to the Controller node on which the resource is failing.
Run the pcs status command with the grep option to get the status of the service:

$ sudo pcs status | grep cinder
  Podman container: openstack-cinder-volume [192.168.24.1:8787/rh-osbs/rhosp161-openstack-cinder-volume:pcmklatest]
    openstack-cinder-volume-podman-0 (ocf::heartbeat:podman): Started controller-1
View the log events for the resource:
$ sudo less /var/log/containers/stdouts/openstack-cinder-volume.log
[...]
2021-04-12T12:32:17.607179705+00:00 stderr F ++ cat /run_command
2021-04-12T12:32:17.609648533+00:00 stderr F + CMD='/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf'
2021-04-12T12:32:17.609648533+00:00 stderr F + ARGS=
2021-04-12T12:32:17.609648533+00:00 stderr F + [[ ! -n '' ]]
2021-04-12T12:32:17.609648533+00:00 stderr F + . kolla_extend_start
2021-04-12T12:32:17.611214130+00:00 stderr F +++ stat -c %U:%G /var/lib/cinder
2021-04-12T12:32:17.616637578+00:00 stderr F ++ [[ cinder:kolla != \c\i\n\d\e\r\:\k\o\l\l\a ]]
2021-04-12T12:32:17.616722778+00:00 stderr F + echo 'Running command: '\''/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf'\'''
2021-04-12T12:32:17.616751172+00:00 stdout F Running command: '/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf'
2021-04-12T12:32:17.616775368+00:00 stderr F + exec /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf
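When the container log is large, filtering for error-level lines narrows the search. A minimal sketch over hypothetical sample log lines; in practice you would pipe the real log file through the same filter:

```shell
# Hypothetical log lines, standing in for the cinder-volume container log.
log_lines='2021-04-12 12:32:18.100 12 INFO cinder.service [-] Starting cinder-volume
2021-04-12 12:32:19.200 12 ERROR oslo.messaging [-] AMQP server unreachable'

# Keep only ERROR-level lines; these usually point at the failing dependency.
printf '%s\n' "$log_lines" | grep ' ERROR '
```

In this sample, the surviving line points at a messaging problem rather than at cinder-volume itself.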
- Correct the failed resource based on the information from the output and from the logs.
Run the pcs resource cleanup command to reset the status and the fail count of the resource:

$ sudo pcs resource cleanup openstack-cinder-volume
  Resource: openstack-cinder-volume successfully cleaned up
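After the cleanup, you can confirm that Pacemaker no longer reports failed actions for the resource. A sketch over a hypothetical "Failed Resource Actions" excerpt (the call number and error text are invented sample data; on a real node you would filter the actual `pcs status` output):

```shell
# Hypothetical "Failed Resource Actions" excerpt from `pcs status`, as sample data.
failed_actions="* openstack-cinder-volume_monitor_60000 on controller-1 'error' (1): call=123"

# Report whether the resource still has a recorded failure.
if printf '%s\n' "$failed_actions" | grep -q 'openstack-cinder-volume'; then
  echo "cleanup needed"
else
  echo "no failures recorded"
fi
```

An empty excerpt after the cleanup means the fail count was reset successfully.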
7.3. Investigating systemd resource problems
To investigate failed resources that systemd manages, log in to the Controller node on which the resource is failing and check the status and log events for the resource. For example, investigate the status and log events for the tripleo_nova_conductor resource.
Prerequisites
- A Controller node with systemd services
- Root user permissions to view log events
Procedure
Run the systemctl status command to show the resource status and recent log events:

[heat-admin@controller-0 ~]$ sudo systemctl status tripleo_nova_conductor
● tripleo_nova_conductor.service - nova_conductor container
   Loaded: loaded (/etc/systemd/system/tripleo_nova_conductor.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2021-04-12 10:54:46 UTC; 1h 38min ago
 Main PID: 5125 (conmon)
    Tasks: 2 (limit: 126564)
   Memory: 1.2M
   CGroup: /system.slice/tripleo_nova_conductor.service
           └─5125 /usr/bin/conmon --api-version 1 -c cc3c63b54e0864c94ac54a5789be96aea1dd60b2f3216b37c3e020c76e7887d4 -u cc3c63b54e0864c94ac54a5789be96aea1dd60b2f3216b37c3e020c76e7887d4 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/cc3c63b54e0864c94ac54a5789be96aea1dd60b2f3216b37c3e02>

Apr 12 10:54:42 controller-0.redhat.local systemd[1]: Starting nova_conductor container...
Apr 12 10:54:46 controller-0.redhat.local podman[2855]: nova_conductor
Apr 12 10:54:46 controller-0.redhat.local systemd[1]: Started nova_conductor container.
View the log events for the resource:
$ sudo less /var/log/containers/tripleo_nova_conductor.log
- Correct the failed resource based on the information from the output and from the logs.
Restart the resource and check the status of the service:

$ sudo systemctl restart tripleo_nova_conductor
$ sudo systemctl status tripleo_nova_conductor
● tripleo_nova_conductor.service - nova_conductor container
   Loaded: loaded (/etc/systemd/system/tripleo_nova_conductor.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-04-22 14:28:35 UTC; 7s ago
  Process: 518937 ExecStopPost=/usr/bin/podman stop -t 10 nova_conductor (code=exited, status=0/SUCCESS)
  Process: 518653 ExecStop=/usr/bin/podman stop -t 10 nova_conductor (code=exited, status=0/SUCCESS)
  Process: 519063 ExecStart=/usr/bin/podman start nova_conductor (code=exited, status=0/SUCCESS)
 Main PID: 519198 (conmon)
    Tasks: 2 (limit: 126564)
   Memory: 1.1M
   CGroup: /system.slice/tripleo_nova_conductor.service
           └─519198 /usr/bin/conmon --api-version 1 -c 0d6583beb20508e6bacccd5fea169a2fe949471207cb7d4650fec5f3638c2ce6 -u 0d6583beb20508e6bacccd5fea169a2fe949471207cb7d4650fec5f3638c2ce6 -r /usr/bin/runc -b /var/lib/containe>

Apr 22 14:28:34 controller-0.redhat.local systemd[1]: Starting nova_conductor container...
Apr 22 14:28:35 controller-0.redhat.local podman[519063]: nova_conductor
Apr 22 14:28:35 controller-0.redhat.local systemd[1]: Started nova_conductor container.
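After the restart, a short wait-and-verify loop avoids declaring success while the unit is still starting. This is a sketch: the systemctl stub function below stands in for the real command so the example is self-contained, and you would drop the stub on a real Controller node:

```shell
# Stub standing in for the real systemctl command, so this sketch runs anywhere.
systemctl() { echo "active"; }

# Poll the unit state until it reports active, up to a fixed number of tries.
wait_active() {
  unit=$1
  for _ in 1 2 3 4 5; do
    [ "$(systemctl is-active "$unit")" = "active" ] && return 0
    sleep 1
  done
  return 1
}

wait_active tripleo_nova_conductor && echo "tripleo_nova_conductor is active"
```

If the loop times out, the service is failing to start and you should return to the status and log checks above.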