Why doesn't `SAPHana` start the primary SAP HANA instance when the `lpa_xxx_lpt` node attributes are the same on both nodes?
Issue
SAP HANA runs correctly outside the cluster, with HANA System Replication in a normal state. Once the cluster is started, the SAPHana
resource on the node where the primary SAP HANA instance should be running (node2 in the example below) fails with exit code 7,
the cluster node attributes look similar to the ones below, and SAP HANA is not running on either node.
# crm_mon -Af1
...
Master/Slave Set: SAPHana_RH2_02-master [SAPHana_RH2_02]
     Slaves: [ node1 ]
...
Node Attributes:
* Node node1:
    + hana_rh2_clone_state            : WAITING4PRIM
    + hana_rh2_roles                  : 1:S:master1::worker:
    + hana_rh2_sync_state             : SFAIL
    + lpa_rh2_lpt                     : 10
    + master-SAPHana_RH2_02           : -INFINITY
* Node node2:
    + hana_rh2_clone_state            : UNDEFINED
    + hana_rh2_roles                  : 1:P:master1::worker:
    + lpa_rh2_lpt                     : 10
    + master-SAPHana_RH2_02           : -9000
...
Failed Actions:
* SAPHana_RH2_02_start_0 on node2 'not running' (7): call=20, status=complete, exitreason='',
last-rc-change='Thu Sep 20 14:49:17 2018', queued=0ms, exec=2146ms
...
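The attribute to focus on in the output above is `lpa_rh2_lpt`, which the SAPHana resource agent uses for "last primary arbitration" (LPA): the node holding the newer timestamp was the last known primary, while a low marker value such as `10` denotes a former secondary. When both nodes report the same value, the agent cannot tell which side was primary last and therefore waits rather than risk starting the wrong side. The sketch below illustrates that comparison in plain shell; the function name `lpa_decide` and the return strings are illustrative, not the agent's actual code.

```shell
#!/bin/sh
# Illustrative sketch (assumption, not the SAPHana agent's real code) of the
# last-primary arbitration done with the lpa_<sid>_lpt node attributes.
lpa_decide() {
    local_lpt=$1    # lpa_<sid>_lpt value on the local node
    remote_lpt=$2   # lpa_<sid>_lpt value on the remote node
    if [ "$local_lpt" -eq "$remote_lpt" ]; then
        # Equal values (e.g. 10 vs 10 as in the output above): no way to
        # tell which side was primary last, so do not start HANA as primary.
        echo "WAIT"
    elif [ "$local_lpt" -gt "$remote_lpt" ]; then
        echo "PRIMARY"    # local node holds the newer last-primary stamp
    else
        echo "REGISTER"   # local node should become the secondary
    fi
}

lpa_decide 10 10                # both sides equal -> WAIT
lpa_decide "$(date +%s)" 10     # fresh timestamp beats the marker -> PRIMARY
```

This is why the cluster above sits with node1 in `WAITING4PRIM` and node2 reporting a failed start: with `lpa_rh2_lpt : 10` on both nodes, the arbitration never resolves in either node's favor.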
Environment
- Red Hat Enterprise Linux 7 with the High Availability Add-On
- pacemaker cluster running SAPHana resources