Failure to sync configuration to all nodes in a RHEL 6 cluster; corosync reports "Unable to load new config in corosync"
Issue
corosync reports in /var/log/messages and /var/log/cluster/corosync.log that the new configuration version has to be newer:
Sep 10 15:38:44 corosync [CMAN ] Can't get updated config version 90: New configuration version has to be newer than current running configuration
Sep 10 15:38:45 corosync [CMAN ] Unable to load new config in corosync: New configuration version has to be newer than current running configuration
Sep 10 15:38:44 corosync [CMAN ] Activity suspended on this node
Sep 10 15:38:44 corosync [CMAN ] Error reloading the configuration, will retry every second
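These messages indicate that the config_version attribute in the candidate /etc/cluster/cluster.conf is not greater than the version corosync is already running. As a quick check (the cluster name and version numbers below are illustrative, and # is the root shell prompt), compare the on-disk version with the running one:

# grep config_version /etc/cluster/cluster.conf
<cluster name="mycluster" config_version="89">
# cman_tool status | grep "Config Version"
Config Version: 90

If the file's version is lower than or equal to the running version, every reload attempt is rejected with the errors above.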
- Updating the running cluster configuration version with cman_tool version -r fails, and corosync on some nodes begins repeatedly reporting "Unable to load new config in corosync" (see the recovery sketch after this list)
- While cluster nodes were in this 'suspended activity' state due to a configuration mismatch, they all lost quorum when a node joined or left the cluster.
- When I update the cluster configuration in luci, it only updates the configuration on the local node that is running luci. However, if I manually run ccs_sync on the same node that is running luci, ccs_sync successfully pushes the configuration out to the other nodes in the cluster.
- Updating the configuration through Conga fails to sync it to all nodes
- When resources were added to the service group via Conga, something went awry: one node showed the service as disabled, while the other node showed the service as not existing. One node was logging the "Unable to load new config in corosync" message over and over.
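A minimal recovery sketch for the version mismatch, assuming the state shown above (the version number is illustrative, and this is a sketch rather than the full supported resolution): raise config_version in /etc/cluster/cluster.conf above the running version, validate the file, sync it to the other nodes, and activate it:

# vi /etc/cluster/cluster.conf
(set config_version to a value higher than the running version, e.g. config_version="91")
# ccs_config_validate
Configuration validates
# ccs_sync
# cman_tool version -r

ccs_sync must be run from the node holding the updated file and requires the ricci service to be running on all cluster nodes. After the reload, cman_tool status should report the new Config Version on every node.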
Environment
- Red Hat Enterprise Linux (RHEL) 6 with the High Availability Add On
