One or more resources restart throughout the cluster when a node joins in a RHEL 6 or 7 High Availability cluster with pacemaker
Issue
- When a node leaves the cluster, all services are correctly live-migrated or relocated to other cluster nodes and everything appears to work fine. But when the node rejoins the cluster, all services on all nodes are restarted.
- When we run pcs cluster start on a node, all of the other nodes restart their copy of a clone resource. Why did node42 restart the fs resource?
Oct 19 13:07:52 [1280] node42 pengine: info: LogActions: Leave dlm:0 (Started node42)
Oct 19 13:07:52 [1280] node42 pengine: notice: LogAction: * Start dlm:1 ( node41 )
Oct 19 13:07:52 [1280] node42 pengine: info: LogActions: Leave clvmd:0 (Started node42)
Oct 19 13:07:52 [1280] node42 pengine: notice: LogAction: * Start clvmd:1 ( node41 )
Oct 19 13:07:52 [1280] node42 pengine: notice: LogAction: * Restart fs:0 ( node42 ) due to required clvmd-clone running
Oct 19 13:07:52 [1280] node42 pengine: notice: LogAction: * Start fs:1 ( node41 )
- I see all of our resources restart throughout the cluster whenever a node starts up
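For illustration, the pattern in the log above matches an ordering dependency on a non-interleaved clone: when clvmd:1 starts on node41, the ordering is evaluated against the clone as a whole rather than per node, so fs:0 on node42 is restarted as well. A minimal sketch of that kind of configuration follows; clvmd-clone appears in the log, while dlm-clone and fs-clone are assumed clone IDs that may differ in your cluster.

# Hypothetical ordering constraints; clone IDs assumed from the log output above
pcs constraint order start dlm-clone then clvmd-clone
pcs constraint order start clvmd-clone then fs-clone
# Neither clone sets interleave=true (the default is false), so resources ordered
# after the clone react to clone instances starting or stopping on any node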
Environment
- Red Hat Enterprise Linux (RHEL) 6 or 7 with the High Availability Add On (pacemaker)
- One or more clone resources in the CIB
- At least one clone resource does not have interleave=true in its meta options
  - NOTE: See the Diagnostic Steps to determine if this is the case
- At least one of those clone resources without interleave=true has other resources ordered after it via a constraint or group
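A quick way to check and, if appropriate, set the meta option with pcs is sketched below; clvmd-clone is used as an example clone ID (taken from the log above) and should be replaced with the clone IDs from your own CIB. Whether interleave=true is suitable depends on the resources involved, so treat this only as an illustration of the commands.

# Inspect the clone's meta attributes (look for interleave in the output)
pcs resource show clvmd-clone
# Set interleave=true on the clone's meta attributes
pcs resource update clvmd-clone meta interleave=true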
