The pacemaker_remote service allows nodes not running corosync to integrate into the cluster and have the cluster manage their resources just as if they were real cluster nodes.
Among the capabilities that the pacemaker_remote service provides are the following:
The pacemaker_remote service allows you to scale beyond the corosync 16-node limit.
The pacemaker_remote service allows you to manage a virtual environment as a cluster resource and also to manage individual services within the virtual environment as cluster resources.
The following terms are used to describe the pacemaker_remote service.
cluster node — A node running the High Availability services (pacemaker and corosync).
remote node — A node running pacemaker_remote to remotely integrate into the cluster without requiring corosync cluster membership. A remote node is configured as a cluster resource that uses the ocf:pacemaker:remote resource agent.
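As a sketch, a remote node could be added to a running cluster with a single pcs command; the node name remote1 and its address are placeholders, not names from this document.

```shell
# Create an ocf:pacemaker:remote resource; when it starts, the cluster
# connects to pacemaker_remote on that host and adds it as a remote node.
# "remote1" and "remote1.example.com" are hypothetical names.
pcs resource create remote1 ocf:pacemaker:remote server=remote1.example.com
```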
guest node — A virtual guest node running the pacemaker_remote service. A guest node is configured using the remote-node metadata option of a resource agent such as ocf:pacemaker:VirtualDomain. The virtual guest resource is managed by the cluster; it is both started by the cluster and integrated into the cluster as a remote node.
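A minimal sketch of a guest node definition follows; the resource name, domain configuration path, and guest hostname are hypothetical.

```shell
# Create a VirtualDomain resource and set the remote-node metadata
# option, which makes the guest both a managed VM and a guest node.
# All names and paths below are placeholders.
pcs resource create guest1-vm VirtualDomain \
    hypervisor="qemu:///system" \
    config="/etc/libvirt/qemu/guest1.xml" \
    meta remote-node=guest1
```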
pacemaker_remote — A service daemon capable of performing remote application management within remote nodes and guest nodes (KVM and LXC) in a Pacemaker cluster environment. This service is an enhanced version of Pacemaker’s local resource management daemon (LRMD) that is capable of managing resources remotely on a node not running corosync.
LXC — A Linux Container defined by the libvirt-lxc Linux container driver.
A Pacemaker cluster running the pacemaker_remote service has the following characteristics.
The remote nodes and/or the guest nodes run the pacemaker_remote service (with very little configuration required on the virtual machine side).
The cluster stack (corosync), running on the cluster nodes, connects to the pacemaker_remote service on the remote nodes, allowing them to integrate into the cluster.
The cluster stack (corosync), running on the cluster nodes, launches the guest nodes and immediately connects to the pacemaker_remote service on the guest nodes, allowing them to integrate into the cluster.
The key difference between the cluster nodes and the remote and guest nodes that the cluster nodes manage is that the remote and guest nodes are not running the cluster stack. This means the remote and guest nodes have the following limitations:
they do not take part in quorum
they do not execute fencing device actions
they are not eligible to be the cluster's Designated Controller (DC)
they do not themselves run the full range of pcs commands
On the other hand, remote nodes and guest nodes are not bound to the scalability limits associated with the cluster stack.
Other than these noted limitations, the remote nodes behave just like cluster nodes with respect to resource management, and the remote and guest nodes can themselves be fenced. The cluster is fully capable of managing and monitoring resources on each remote and guest node: you can build constraints against them, put them in standby, or perform any other action you perform on cluster nodes with the pcs commands. Remote and guest nodes appear in cluster status output just as cluster nodes do.
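To illustrate, the same pcs operations used against cluster nodes can target a remote node; the resource and node names below are placeholders, and the exact standby subcommand may vary by pcs version.

```shell
# Constrain a resource to prefer a remote node ("webserver" and
# "remote1" are hypothetical names).
pcs constraint location webserver prefers remote1

# Put the remote node in standby and bring it back.
pcs node standby remote1
pcs node unstandby remote1

# Remote and guest nodes appear in the status output like any node.
pcs status
```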
8.4.1. Host and Guest Authentication
The connection between cluster nodes and pacemaker_remote is secured using Transport Layer Security (TLS) with pre-shared key (PSK) encryption and authentication over TCP (using port 3121 by default). This means both the cluster node and the node running pacemaker_remote must share the same private key. By default this key must be placed at /etc/pacemaker/authkey on both cluster nodes and remote nodes.
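One way to create and distribute such a key is sketched below; the haclient group and the target hostname are assumptions, and the commands require root privileges.

```shell
# Generate a random shared key on one cluster node with restrictive
# permissions, then copy the identical file to every other node.
mkdir -p --mode=0750 /etc/pacemaker
dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1
chown root:haclient /etc/pacemaker/authkey   # haclient group assumed to exist
chmod 640 /etc/pacemaker/authkey
scp /etc/pacemaker/authkey remote1:/etc/pacemaker/authkey  # "remote1" is a placeholder
```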