The number of resource actions that the cluster is allowed to execute in parallel. The "correct" value will depend on the speed and load of your network and cluster nodes.
Default value: -1 (unlimited)
The number of migration jobs that the cluster is allowed to execute in parallel on a node.
What to do when the cluster does not have quorum. Allowed values:
* ignore - continue all resource management
* freeze - continue resource management, but do not recover resources from nodes not in the affected partition
* stop - stop all resources in the affected cluster partition
* suicide - fence all nodes in the affected cluster partition
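The values listed above correspond to Pacemaker's no-quorum-policy cluster property. As a sketch, assuming the pcs command-line interface is used to manage the cluster, the policy could be changed as follows:

```shell
# Freeze (rather than stop) resources in a partition that loses quorum;
# resources keep running but are not recovered from unreachable nodes.
pcs property set no-quorum-policy=freeze

# Verify the current setting.
pcs property list --all | grep no-quorum-policy
```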
Indicates whether resources can run on any node by default.
Indicates that failed nodes and nodes with resources that cannot be stopped should be fenced. Protecting your data requires that you set this to true. If true, or unset, the cluster will refuse to start resources unless one or more STONITH resources have also been configured.
Action to send to the STONITH device. Allowed values: reboot, off. The value poweroff is also allowed, but is only used for legacy devices.
Round trip delay over the network (excluding action execution). The "correct" value will depend on the speed and load of your network and cluster nodes.
Indicates whether deleted resources should be stopped.
Indicates whether deleted actions should be canceled.
Indicates whether a failure to start a resource on a particular node prevents further start attempts on that node. When set to false, the cluster will decide whether to try starting on the same node again based on the resource's current failure count and migration threshold. For information on setting the migration-threshold option for a resource, see Section 8.2, “Moving Resources Due to Failure”.

Setting start-failure-is-fatal to false incurs the risk that a faulty node that is unable to start a resource can hold up all dependent actions. This is why start-failure-is-fatal defaults to true. The risk of setting start-failure-is-fatal=false can be mitigated by setting a low migration threshold, so that other actions can proceed after that many failures.
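The mitigation described above can be sketched with pcs; my_resource is a hypothetical resource ID:

```shell
# Allow repeated start attempts on the same node, but move the
# resource elsewhere after three failures so it cannot hold up
# dependent actions indefinitely.
pcs property set start-failure-is-fatal=false
pcs resource meta my_resource migration-threshold=3
```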
Default value: -1 (all)
The number of PE inputs resulting in ERRORs to save. Used when reporting problems.
Default value: -1 (all)
The number of PE inputs resulting in WARNINGs to save. Used when reporting problems.
Default value: -1 (all)
The number of "normal" PE inputs to save. Used when reporting problems.
The messaging stack on which Pacemaker is currently running. Used for informational and diagnostic purposes; not user-configurable.
Version of Pacemaker on the cluster's Designated Controller (DC). Used for diagnostic purposes; not user-configurable.
Last refresh of the Local Resource Manager, given in units of seconds since epoch. Used for diagnostic purposes; not user-configurable.
Default value: 15 minutes
Polling interval for time-based changes to options, resource parameters, and constraints. Allowed values: zero disables polling; positive values are an interval in seconds (unless other units are specified, such as 5min). Note that this value is the maximum time between checks; if a cluster event occurs sooner than the time specified by this value, the check will be done sooner.
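Assuming this polling interval is the cluster-recheck-interval property, it could be shortened from the 15-minute default as follows:

```shell
# Check for time-based configuration changes (such as date-based
# rules) every five minutes instead of every fifteen.
pcs property set cluster-recheck-interval=5min
```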
Maintenance Mode tells the cluster to go to a "hands off" mode, and not start or stop any services until told otherwise. When maintenance mode is completed, the cluster does a sanity check of the current state of any services, and then stops or starts any that need it.
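A typical maintenance workflow based on the description above, assuming the maintenance-mode cluster property and the pcs interface:

```shell
# Tell the cluster to stop managing services.
pcs property set maintenance-mode=true

# ... perform software updates or other maintenance on the nodes ...

# Resume management; the cluster sanity-checks service state and
# starts or stops whatever is needed.
pcs property set maintenance-mode=false
```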
The time after which to give up trying to shut down gracefully and just exit. Advanced use only.
How long to wait for a STONITH action to complete.
Indicates whether the cluster should stop all resources.
(Red Hat Enterprise Linux 7.1 and later) Indicates whether the cluster can use access control lists, as set with the pcs acl command.
Indicates whether and how the cluster will take utilization attributes into account when determining resource placement on cluster nodes. For information on utilization attributes and placement strategies, see Section 9.6, “Utilization and Placement Strategy”
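Assuming this is the placement-strategy cluster property, one of its allowed values could be selected like this:

```shell
# Consider utilization attributes and favor the node with the
# most free capacity when placing resources.
pcs property set placement-strategy=balanced
```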
(Red Hat Enterprise Linux 7.8 and later) Determines how a cluster node should react if notified of its own fencing. A cluster node may receive notification of its own fencing if fencing is misconfigured, or if fabric fencing is in use that does not cut cluster communication. Allowed values are stop, to attempt to immediately stop Pacemaker and stay stopped, or panic, to attempt to immediately reboot the local node, falling back to stop on failure.
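Assuming this option is the fence-reaction cluster property, the reboot behavior could be selected as follows:

```shell
# On notification of its own fencing, the local node attempts an
# immediate reboot, falling back to stopping Pacemaker on failure.
pcs property set fence-reaction=panic
```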