Chapter 1. High availability services
Red Hat OpenStack Platform (RHOSP) employs several technologies to provide the services required to implement high availability (HA).
- Core container
Core container services are Galera, RabbitMQ, Redis, and HAProxy. These services run on all Controller nodes and require specific management and constraints for the start, stop, and restart actions. You use Pacemaker to launch, manage, and troubleshoot core container services.

Note: RHOSP uses the MariaDB Galera Cluster to manage database replication.
- Active-passive
Active-passive services run on one Controller node at a time, and include services such as openstack-cinder-volume. To move an active-passive service, you must use Pacemaker to ensure that the correct stop-start sequence is followed.
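As a sketch of this workflow, assuming a deployed Pacemaker cluster, you can inspect core container resources and relocate an active-passive service with the pcs command. The resource names shown here are examples; check the output of pcs status for the names used in your deployment:

```shell
# Show the state of all Pacemaker-managed resources on the cluster
pcs status

# Restart a core container service bundle (example name: galera-bundle)
pcs resource restart galera-bundle

# Move an active-passive resource such as openstack-cinder-volume off its
# current node; Pacemaker enforces the correct stop-start sequence
pcs resource move openstack-cinder-volume
```

Because Pacemaker owns these resources, restarting them directly with podman or systemctl would bypass the cluster's ordering constraints.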
- Systemd and plain container
Systemd and plain container services are independent services that can withstand a service interruption. Therefore, if you restart a high availability service such as Galera, you do not need to manually restart any other service, such as nova-api. You can use systemd or Podman to directly manage systemd and plain container services.
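As a minimal sketch, assuming you are logged in to a Controller node, these services can be managed directly. The container and unit names below are examples; list the running containers to find the names in your environment:

```shell
# List the running service containers and their status
podman ps --format "{{.Names}} {{.Status}}"

# Restart a plain container service directly (example container name)
podman restart nova_api

# Or restart the service through its systemd unit; on director-deployed
# nodes, unit names typically follow the tripleo_<service> pattern
systemctl restart tripleo_nova_api
```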
When orchestrating your HA deployment with the director, the director uses templates and Puppet modules to ensure that all services are configured and launched correctly. In addition, when troubleshooting HA issues, you must interact with services in the HA framework using the podman command or the Pacemaker pcs command.
HA services can run in one of the following modes:
- Active-active: Pacemaker runs the same service on multiple Controller nodes, and uses HAProxy to distribute traffic across the nodes or to a specific Controller with a single IP address. In some cases, HAProxy distributes traffic to active-active services with round-robin scheduling. You can add more Controller nodes to improve performance.
- Active-passive: Services that are unable to run in active-active mode must run in active-passive mode. In this mode, only one instance of the service is active at a time. For example, HAProxy uses stick-table options to direct incoming Galera database connection requests to a single back-end service. This helps prevent too many simultaneous connections to the same data from multiple Galera nodes.
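The two balancing behaviors above can be illustrated with a simplified HAProxy configuration fragment. This is a hedged sketch, not the configuration that the director generates: the listener names, addresses, and ports are hypothetical, and a real deployment includes additional health-check and timeout options:

```
# Active-active: round-robin across all Controller back ends,
# exposed at a single virtual IP address
listen keystone_public
  bind 192.0.2.10:5000
  balance roundrobin
  server controller-0 192.0.2.11:5000 check
  server controller-1 192.0.2.12:5000 check
  server controller-2 192.0.2.13:5000 check

# Active-passive: stick-table options pin all incoming Galera
# connections to a single back end; the others serve as backups
listen mysql
  bind 192.0.2.10:3306
  option tcpka
  stick on dst
  stick-table type ip size 1000
  server controller-0 192.0.2.11:3306 check
  server controller-1 192.0.2.12:3306 check backup
  server controller-2 192.0.2.13:3306 check backup
```

The stick-table entry maps the destination IP to one back end, so every connection to the database virtual IP lands on the same Galera node until it fails.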