pacemaker/corosync 2-node cluster questions (resource-stickiness & quorum)


Hi Folks,
I am testing a 2-node Linux HA cluster with corosync and pacemaker on RHEL 7.3. Quorum and stonith are disabled for my tests.
At the moment I have two concerns:

1) resource-stickiness: as I understand it, the meta attribute "resource-stickiness" controls whether a resource moves back to the node where it was previously running once that node recovers, or stays where it is.
I created a VIP resource (ocf:heartbeat:IPaddr2) with resource-stickiness set to 100, and I move it to a specific node:

pcs resource create vip_test ocf:heartbeat:IPaddr2 ip=10.102.2.47 cidr_netmask=32 op monitor interval=30s meta resource-stickiness=100

pcs resource move vip_test node2

It seems I am doing something wrong, because when I reboot node2 or stop corosync/pacemaker on it, the VIP moves back to node2 as soon as node2 rejoins the cluster.
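In case it helps, this is roughly what I look at to verify the setup (syntax is from pcs 0.9 on RHEL 7.3, other versions may differ slightly):

pcs resource show vip_test   # check that the resource-stickiness meta attribute was really applied
pcs constraint --full        # list all constraints with their IDs, to spot any location constraint that might override stickiness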

Where am I wrong?^^

2) quorum: if I understood the documentation correctly, it is not possible to have a quorum disk? I have a SAN connected via SAS to both nodes, and I would like to avoid having to set up another server just to run a "software" quorum.
I used to work with Solaris Cluster, where it is possible to use a "quorum disk", so I wonder whether I can do the same with RHEL 7 / corosync / pacemaker.
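For context: as far as I understand, on RHEL 7 the 2-node case is normally handled by votequorum's two_node option rather than a quorum disk. A cluster created with pcs cluster setup for two nodes ends up with a quorum section in /etc/corosync/corosync.conf roughly like this (just a sketch, not my exact file):

quorum {
    provider: corosync_votequorum
    two_node: 1
}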

Thx a lot in advance!

Responses

About question 1, I found the reason: https://access.redhat.com/solutions/1325593 (there is also https://access.redhat.com/solutions/739813). It seems a constraint is created when we do a move. I don't really understand why such a thing exists; why can't we simply move a resource when we want, without creating a constraint? It means that after a move we have to remove the automatically created constraint, which is not really practical.
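For anyone hitting the same thing: the constraint created by the move shows up as a location constraint (with an id like cli-prefer-vip_test on my version), and as far as I can tell it can be removed afterwards with pcs, something like:

pcs constraint --full          # find the cli-prefer-* (or cli-ban-*) constraint id
pcs resource clear vip_test    # removes the constraints created by 'move' and 'ban'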

