SAP NetWeaver ASCS/ERS ENSA1 in a Pacemaker Cluster Using Master/Slave Resources
Contents
- 1. Overview
- 2. SAP NetWeaver High-Availability architecture
- 3. SAP NetWeaver Installation
- 4. Pacemaker cluster configuration
- 4.1. Configure shared filesystems
- 4.2. Configure (A)SCS/ERS SAPInstance cluster resource
- 4.3. Check that (A)SCS/ERS SAPInstance cluster resource is running correctly
- 4.4. Configure cluster resource group containing SAPDatabase cluster resource
- 4.5. Check status of SAPDatabase resource group
- 4.6. Configure Primary Application Server group (PAS)
- 4.7. Check status and configuration of PAS resource group
- 4.8. (optional) Configuring SAP HAlib for SAPInstance resources
1. Overview
This article describes how to configure SAP NetWeaver to run in a Pacemaker-based cluster on supported RHEL releases.
This article does NOT cover the preparation of a RHEL system for SAP NetWeaver installation, nor the exact SAP NetWeaver installation procedure. For more details on these topics, refer to the following SAP Notes:
- Red Hat Enterprise Linux 6.x: Installation and Upgrade - SAP Note 1496410
- Red Hat Enterprise Linux 7.x: Installation and Upgrade - SAP Note 2002167
1.1. Supported scenarios
See: Support Policies for RHEL High Availability Clusters - Management of SAP Netweaver in a Cluster
1.2. Resources: Standalone vs. Master/Slave
There are two approaches to configuring (A)SCS and ERS resources in Pacemaker: Master/Slave and Standalone. The Master/Slave approach is supported in all RHEL 7 minor releases, while the Standalone approach is supported in RHEL 7.5 and newer.
For any new deployment, the Standalone approach is recommended for the following reasons:
- it meets the requirements of the current SAP HA Interface Certification
- it is compatible with the new Standalone Enqueue Server 2 (ENSA2) configuration
- (A)SCS/ERS instances can be started and stopped independently
- (A)SCS/ERS instance directories can be managed as part of the cluster
This article outlines the configuration procedure for the SAPInstance Master/Slave approach. For instructions on the SAPInstance Standalone configuration, please refer to the kbase article Configure SAP Netweaver ASCS/ERS ENSA1 with Standalone Resources in RHEL 7.5 and newer.
2. SAP NetWeaver High-Availability architecture
A typical setup for SAP NetWeaver High-Availability consists of three distinct components:
- ASCS/ERS master/slave cluster resource
- SAPDatabase instance for managing the database used by SAP NetWeaver application servers
- SAP NetWeaver application servers - Primary Application Server (PAS)
While it is possible to configure only some of the components, there are limitations on what can be omitted. For example, PAS typically won't work without the ASCS/ERS instance, while it can work without the SAPDatabase resource (assuming it doesn't need a database, or it uses an external database managed by other means). This document assumes that all of the above-mentioned components are being used.
2.1. SAPInstance resource agent architecture
SAPInstance is the resource agent used for both the ASCS/ERS Master/Slave resource and for running the PAS instances. All operations of the SAPInstance resource agent are performed through the SAP startup framework, also known as the SAP Management Console or sapstartsrv. sapstartsrv uses SOAP messages to request the status of running SAP processes. sapstartsrv knows 4 status colours:
Color | Meaning |
---|---|
GREEN | everything is fine |
YELLOW | something is wrong, but the service is still working |
RED | the service does not work |
GRAY | the service has not been started |
The SAPInstance resource agent interprets GREEN and YELLOW as OK, while RED and GRAY are reported as NOT_RUNNING to the cluster. Below is a table of the available attributes that can be used for SAPInstance and their descriptions:
Attribute Name | Required? | Default value | Description |
---|---|---|---|
InstanceName | yes | null | The full qualified SAP instance name. e.g. P01_DVEBMGS00_sapp01ci. Usually this is the name of the SAP instance profile. |
ERS_InstanceName | no | null | (only used in a Master/Slave resource configuration) The full qualified SAP enqueue replication instance name. e.g. P01_ERS02_sapp01ers. Usually this is the name of the SAP instance profile. |
START_PROFILE | no | null | The name of the SAP 'START profile'[1]. Specify this if you have changed the name of the SAP 'START profile'[1] after the default SAP installation. |
ERS_START_PROFILE | no | null | (only used in a Master/Slave resource configuration) The parameter ERS_InstanceName must also be set in this configuration. The name of the ERS SAP 'START profile'[1]. Specify this if you have changed the name of the ERS SAP 'START profile'[1] after the default SAP installation. |
DIR_EXECUTABLE | no | null | (for non-default SAP kernel location[2]) The full qualified path where to find sapstartsrv and sapcontrol binaries. |
DIR_PROFILE | no | null | (for non-default SAP profile location[3]) The full qualified path where to find the SAP START profile. |
AUTOMATIC_RECOVER | no | false | The SAPInstance resource agent tries to recover a failed start attempt automatically one time. This is done by killing running instance processes, removing the kill.sap file and executing cleanipc. Sometimes a crashed SAP instance leaves some processes and/or shared memory segments behind. Setting this option to true will try to remove those leftovers during a start operation, in order to reduce manual work for the administrator. |
MONITOR_SERVICES | no | disp+work|msg_server|enserver|enrepserver|jcontrol|jstart | Within a SAP instance there can be several services. Not all of those services are worth monitoring by the cluster. You may change this with this parameter if you want to monitor more, fewer, or other services that sapstartsrv supports. Names must match the strings used in the output of the command sapcontrol -nr [Instance-Nr] -function GetProcessList, and you may specify multiple services separated by a | (pipe) sign in this parameter. |
START_WAITTIME | no | 3600 | (only for double-stack (ABAP+Java Addin) systems) The time in seconds after which a monitor operation is executed by the resource agent during a start operation. Usually the resource agent waits until all services are started and the SAP Management Console reports a GREEN status. Normally the start of the JAVA instance takes much longer than the start of the ABAP instance. Setting START_WAITTIME to a lower value causes the resource agent to check the status of the instance during a 'start operation' after that time; instead of waiting for a GREEN status, it then reports SUCCESS to the cluster already after the specified time in case of a YELLOW status. |
[1] - As SAP release 7.10 does not have a 'START profile' anymore, you need to specify the 'Instance Profile' instead.
[2] - Specify this if you have changed the SAP kernel directory location after the default SAP installation.
[3] - Specify this if you have changed the SAP profile directory location after the default SAP installation.
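The service names that MONITOR_SERVICES must match can be listed with sapcontrol. Below is a minimal sketch for the ASCS instance used later in this document (instance number 00, run as the rh1adm instance user); the actual process list depends on your installation.
[root]# su - rh1adm -c "sapcontrol -nr 00 -function GetProcessList"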
2.2. SAPDatabase resource agent architecture
The purpose of the SAPDatabase resource agent is to start, stop, and monitor the database instance of an SAP system. Together with the RDBMS it also controls the related network service for the database, such as the Oracle Listener or the MaxDB xserver. SAPDatabase does not run any database commands directly; it uses the SAP standard process SAPHostAgent to control the database. The SAPHostAgent must be installed locally on each cluster node.
Below is a table of the available attributes that can be used for SAPDatabase and their descriptions:
Attribute Name | Required? | Default value | Description |
---|---|---|---|
SID | yes | null | The unique database system identifier |
DBTYPE | yes | null | The name of the database vendor you use. Valid values are: ADA (SAP MaxDB), DB6 (IBM DB2), ORA (Oracle DB), SYB (Sybase), HDB (SAP HANA). |
DBINSTANCE | no | null | Must be used for special database implementations, when database instance name is not equal to the SID (e.g. Oracle DataGuard). |
DBOSUSER | no | ADA=taken from /etc/opt/sdb, DB6=db2SID, ORA=oraSID and oracle, SYB=sybSID, HDB=SIDadm | The parameter can be set if the database processes on operating system level are not executed with the default user of the used database type. |
STRICT_MONITORING | no | false | This controls how the resource agent monitors the database. If set to true , it will use saphostctrl -function GetDatabaseStatus to test the database state. If set to false , only operating system processes are monitored. |
MONITOR_SERVICES | no | Instance|Database|Listener | Defines which services are monitored by the SAPDatabase resource agent if STRICT_MONITORING is set to true. Service names must correspond with the output of the saphostctrl -function GetDatabaseStatus command. |
AUTOMATIC_RECOVER | no | false | If you set this to true , saphostctrl -function StartDatabase will always be called with the -force option. |
DIR_EXECUTABLE | no | /usr/sap/hostctrl/exe | The full qualified path where to find saphostexec and saphostctrl. |
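The saphostctrl call used for STRICT_MONITORING and MONITOR_SERVICES can also be run manually to see which service names your database reports. Below is a sketch for the MaxDB setup used later in this document (SID RH1, DBTYPE ADA):
[root]# /usr/sap/hostctrl/exe/saphostctrl -function GetDatabaseStatus -dbname RH1 -dbtype ADA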
2.3. Storage requirements
This section describes the storage requirements for a Red Hat Pacemaker cluster running SAP NetWeaver.
2.3.1. Local storage
Both the 'ASCS' and 'ERS' instance directories must be local on each node, and they must be available before the cluster is started. If the binaries are updated on one of the nodes, these changes must be copied to the other nodes in the cluster (see the example below the directory list).
/usr/sap/<SID>/ASCS00
/usr/sap/<SID>/ERS10
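A minimal sketch of copying updated instance directories from node1 to node2 after such an update, assuming the instances are stopped on both nodes and using the example SID and instance numbers from this document:
[root@node1]# rsync -av /usr/sap/RH1/ASCS00/ node2:/usr/sap/RH1/ASCS00/
[root@node1]# rsync -av /usr/sap/RH1/ERS10/ node2:/usr/sap/RH1/ERS10/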
2.3.2. Shared storage available on multiple nodes at same time
The following mountpoints must be available on all nodes.
/sapmnt
/usr/sap/trans
This can be achieved by:
- using an external NFS server (the NFS server cannot run on any of the nodes in the cluster in which the shares would be mounted; more details about this limitation can be found in the article Hangs occur if a Red Hat Enterprise Linux system is used as both NFS server and NFS client for the same mount)
- using the GFS2 filesystem (this requires all nodes to have the Resilient Storage Add-On)
- using the glusterfs filesystem (check the additional notes in the article Can glusterfs be used for the SAP NetWeaver shared filesystems?)
These mountpoints must be either managed by the cluster or mounted before the cluster is started (for a static mount example, see below).
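If the shares are mounted outside of the cluster, a minimal /etc/fstab sketch could look like the following; the NFS server address and export paths are the example values from section 3.1 and are assumptions for your environment:
192.168.0.10:/export/sapmnt  /sapmnt         nfs  defaults  0 0
192.168.0.10:/export/trans   /usr/sap/trans  nfs  defaults  0 0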
2.3.3. Shared storage available at one node at a time
These mountpoints hold data that should be available only on the nodes that are running the resources requiring this data.
/usr/sap/<SID>/DVEBMGS01
/usr/sap/<SID>/D01
/usr/sap/<SID>/<additional_PAS_instances>
3. SAP NetWeaver Installation
Installation of all components will be done on one of the nodes, which will be referred to in the rest of this document as the 'installation node'. Once the installation on the installation node is finished, the changes made to the system are replicated to the other nodes in the cluster by synchronizing the needed files and directories.
3.1. Configuration options used in this document
Below are the configuration options that will be used for the instances in this document:
1st node hostname: node1
1st node IP: 192.168.0.11
2nd node hostname: node2
2nd node IP: 192.168.0.12
SID: RH1
ASCS Instance number: 00
ASCS virtual hostname: rh1-ascs
ASCS IP address: 192.168.0.13
ERS Instance number: 10
ERS virtual hostname: rh1-ers
ERS IP address: 192.168.0.14
DB virtual hostname: rh1-db
DB IP address: 192.168.0.15
PAS Instance number: 01
PAS virtual hostname: rh1-pas
PAS IP address: 192.168.0.16
Shared storage will be provided by an external NFS server for the following mountpoints:
/sapmnt
/usr/sap/trans
NFS server IP: 192.168.0.10
/etc/exports
/export/sapmnt node1,node2(rw,no_root_squash)
/export/trans node1,node2(rw,no_root_squash)
Shared block storage will be provided for the following mountpoints:
/usr/sap/RH1/D01
VG/LV name for /usr/sap/RH1/D01: vg_d01/lv_d01
3.1.1. SAP MaxDB additional information
SAP MaxDB will additionally use the following block storage and mountpoint.
/sapdb/RH1
VG/LV name for /sapdb/RH1: vg_db/lv_db
3.1.2. SAP HANA with System Replication - additional information
This document assumes that SAP HANA is already operational in the cluster, as described in the article SAP HANA system replication in pacemaker cluster.
SAP HANA will be using following configuration:
SAP HANA SID: RH2
SAP HANA Instance number: 02
3.2. Prepare installation node
Before starting the installation, ensure that:
- Shared storage and filesystems are present at the correct mountpoints
- The IP addresses used by the instances (which will later become virtual IP addresses in the cluster) are present and reachable (see the example checks after this list)
- The hostnames that will be used by the instances can be resolved to IP addresses and back
- Installation files are available on the installation node
- The system is configured according to the recommendations for running SAP NetWeaver:
- Red Hat Enterprise Linux 6.x: Installation and Upgrade - SAP Note 1496410
- Red Hat Enterprise Linux 7.x: Installation and Upgrade - SAP Note 2002167
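Below are example checks using the virtual IP addresses and hostnames from section 3.1; the network device name eth0 and the /24 prefix are assumptions for your environment:
[root]# ip addr add 192.168.0.13/24 dev eth0   # temporarily add the ASCS virtual IP before installing the ASCS instance
[root]# ping -c 1 192.168.0.13                 # check that the address is reachable
[root]# getent hosts rh1-ascs                  # check forward resolution of the virtual hostname
[root]# getent hosts 192.168.0.13              # check reverse resolution of the IP address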
3.3. Installing the instances
Using the Software Provisioning Manager (SWPM), install the instances in the following order:
- ASCS instance
- ERS instance
- DB instance
- PAS instance
Important: use the SAPINST_USE_HOSTNAME argument to specify the virtual hostname of the instance when starting SWPM. Example of starting the installation of an instance:
[root]# /swpm/sapinst SAPINST_USE_HOSTNAME=xxx-instance
All instances must be installed using the High Availability system or Distributed system options in SWPM. Additionally, the (A)SCS and ERS instances must be installed as separate instances.
3.3.1. (A)SCS profile modification
The (A)SCS instance requires the following modification in its profile to prevent automatic restart of the enqueue server, as it will be managed by the cluster. To apply the change, run the following command on your ASCS profile. The profile is typically stored in /sapmnt/RH1/profile/RH1_ASCS00_rh1-ascs (/sapmnt/<SID>/profile/<SID>_ASCS<Instance_number>_<virtual_hostname>). Below is an example command for the installation parameters used in this document.
[root]# sed -i -e 's/Restart_Program_01/Start_Program_01/' /sapmnt/RH1/profile/RH1_ASCS00_rh1-ascs
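To verify the change, you can check the profile for the enqueue server entry, which should now start with Start_Program_01 instead of Restart_Program_01:
[root]# grep -E '(Restart|Start)_Program_01' /sapmnt/RH1/profile/RH1_ASCS00_rh1-ascs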
3.3.2. Update the /usr/sap/sapservices file
To prevent the instances from being started by the sapinit startup script, all instances managed by the cluster must be commented out in the /usr/sap/sapservices file. Do not comment out the SAP HANA instance if it will be used with HANA SR. An example is shown below.
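The exact contents of /usr/sap/sapservices depend on your installation; for the example instances in this document the commented-out entries could look roughly like this (a sketch, not verbatim file content):
#LD_LIBRARY_PATH=/usr/sap/RH1/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/RH1/ASCS00/exe/sapstartsrv pf=/usr/sap/RH1/SYS/profile/RH1_ASCS00_rh1-ascs -D -u rh1adm
#LD_LIBRARY_PATH=/usr/sap/RH1/ERS10/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/RH1/ERS10/exe/sapstartsrv pf=/usr/sap/RH1/SYS/profile/RH1_ERS10_rh1-ers -D -u rh1adm
#LD_LIBRARY_PATH=/usr/sap/RH1/D01/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/RH1/D01/exe/sapstartsrv pf=/usr/sap/RH1/SYS/profile/RH1_D01_rh1-pas -D -u rh1adm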
3.4. Preparation of other nodes
Ensure the following on the rest of the nodes in the cluster:
- The hostnames that will be used by the instances can be resolved to IP addresses and back
- The system is configured according to the recommendations for running SAP NetWeaver:
- Red Hat Enterprise Linux 6.x: Installation and Upgrade - SAP Note 1496410
- Red Hat Enterprise Linux 7.x: Installation and Upgrade - SAP Note 2002167
3.4.1. Synchronization of files to all nodes
After the installation of all instances on the installation node, the following need to be synchronized to all nodes:
- New users and groups (typically appearing at the end of the following files: /etc/passwd, /etc/shadow, /etc/group, /etc/gshadow). Synchronized users and groups must use the same UID/GID as on the installation node (see the verification example after this list).
- Home directories of the new users (typically /home/sapadm, /home/rh1adm and the 'database user' home directory)
- File /etc/services that now contains the services added by the installation
- File /etc/init.d/sapinit
- Data from the /usr/sap directory that is not on shared filesystems
- Data that needs to be synchronized by databases (this depends on the database used; the database vendor should be consulted in case of questions or concerns)
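A quick way to confirm that a synchronized user has the same UID/GID on both nodes is to compare the id output (a sketch using the example users from this document):
[root@node1]# id rh1adm; id sapadm
[root@node2]# id rh1adm; id sapadm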
3.4.2. Example of synchronizing files for installation with MaxDB
Important: The example below is from a typical installation and may differ from your deployment!
- Ensure that all instances and the DB are stopped and that there are no other processes accessing/altering the data that will be synchronized.
- Copy the users created by the installation to all nodes. Below is the list of created users as seen in /etc/passwd. To edit /etc/passwd entries the vipw command can be used; to edit /etc/shadow entries the vipw -s command can be used.
rh1adm:x:1000:1001:SAP System Administrator:/home/rh1adm:/bin/csh
sapadm:x:1001:1001:SAP System Administrator:/home/sapadm:/bin/false
sdb:x:1002:1002:Database Software Owner:/home/sdb:/bin/csh
sqdrh1:x:1003:1001:Owner of Database Instance RH1:/home/sqdrh1:/bin/csh
- Copy the groups created by the installation to all nodes. Below is the list of created groups as seen in /etc/group. To edit /etc/group entries the vigr command can be used; to edit /etc/gshadow entries the vigr -s command can be used.
sapinst:x:1000:root,rh1adm,sqdrh1
sapsys:x:1001:
sdba:x:1002:sqdrh1
- Ensure that no shared filesystems or storage are mounted on node1 and then synchronize the files and directories to node2.
[root@node1]# rsync -av /etc/services node2:/etc/services
[root@node1]# rsync -av /home/* node2:/home
[root@node1]# rsync -av --exclude=sapservices /usr/sap/* node2:/usr/sap
[root@node1]# rsync -av --ignore-existing /usr/sap/sapservices node2:/usr/sap/sapservices
[root@node1]# rsync -av /etc/init.d/sapinit node2:/etc/init.d/
# MaxDB specific files
[root@node1]# rsync -av /etc/opt node2:/etc
[root@node1]# rsync -av /var/lib/sdb node2:/var/lib
[root@node1]# rsync -av /sapdb/{clients,data,programs} node2:/sapdb/
- Add the sapinit service on node2 using chkconfig so that it is present among the other system services.
[root@node2]# chkconfig --add sapinit
3.4.3. Example of synchronizing files for installation with SAP HANA with SR
Important: The example below is from a typical installation and may differ from your deployment!
- Ensure that all instances are stopped and that there are no other processes accessing/altering the data that will be synchronized. SAP HANA doesn't need to be stopped.
- Copy the users created by the installation to all nodes. Below is the list of created users as seen in /etc/passwd. To edit /etc/passwd entries the vipw command can be used; to edit /etc/shadow entries the vipw -s command can be used.
rh1adm:x:1001:79:SAP System Administrator:/home/rh1adm:/bin/csh
- Copy the groups created by the installation to all nodes. Below is the list of created groups as seen in /etc/group. To edit /etc/group entries the vigr command can be used; to edit /etc/gshadow entries the vigr -s command can be used.
sapinst:x:1001:root,rh1adm
- Ensure that no shared filesystems or storage are mounted on node1 and then synchronize the files and directories to node2. Note that when synchronizing the /usr/sap directory, the rsync command excludes files that were installed by SAP HANA.
[root@node1]# rsync -av /etc/services node2:/etc/services
[root@node1]# rsync -av /home/* node2:/home
[root@node1]# rsync -av --exclude=RH2/ --exclude=hostctrl/ --exclude=sapservices /usr/sap/* node2:/usr/sap
3.5. Check SAP HostAgent on all nodes
On all nodes, use the command below to check that SAP HostAgent has the same version everywhere and that the version is sufficient for the components you require (some databases require certain minimum versions, as described in the 'Supported scenarios' section).
[root]# /usr/sap/hostctrl/exe/saphostexec -version
To upgrade or install SAP HostAgent on the nodes, follow SAP Note 1031096.
3.6. Manually testing the instances on other nodes
To check that the data was synchronized correctly, it is recommended to start the instances manually on the other nodes to see if they work properly. To start the instances manually, use the commands below. During these tests, the same resources that were present on the installation node must be available. In case of issues, check with SAP and/or the database vendor, depending on which instance is problematic, to troubleshoot further.
3.6.1. Manually starting instances
[root]# LD_LIBRARY_PATH=/usr/sap/RH1/ASCS00/exe/ /usr/sap/RH1/ASCS00/exe/sapstartsrv pf=/usr/sap/RH1/SYS/profile/RH1_ASCS00_rh1-ascs -D -u rh1adm
[root]# LD_LIBRARY_PATH=/usr/sap/RH1/ASCS00/exe/ /usr/sap/RH1/ASCS00/exe/sapcontrol -nr 00 -function Start
[root]# LD_LIBRARY_PATH=/usr/sap/RH1/ERS10/exe/ /usr/sap/RH1/ERS10/exe/sapstartsrv pf=/usr/sap/RH1/SYS/profile/RH1_ERS10_rh1-ers -D -u rh1adm
[root]# LD_LIBRARY_PATH=/usr/sap/RH1/ERS10/exe/ /usr/sap/RH1/ERS10/exe/sapcontrol -nr 10 -function Start
[root]# /usr/sap/hostctrl/exe/saphostctrl -function StartDatabase -dbname RH1 -dbtype ADA -service
[root]# LD_LIBRARY_PATH=/usr/sap/RH1/D01/exe/ /usr/sap/RH1/D01/exe/sapstartsrv pf=/usr/sap/RH1/SYS/profile/RH1_D01_rh1-pas -D -u rh1adm
[root]# LD_LIBRARY_PATH=/usr/sap/RH1/D01/exe/ /usr/sap/RH1/D01/exe/sapcontrol -nr 01 -function Start
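To confirm that an instance has started, its process list can be queried; for example, for the ASCS instance all listed processes should eventually report GREEN:
[root]# LD_LIBRARY_PATH=/usr/sap/RH1/ASCS00/exe/ /usr/sap/RH1/ASCS00/exe/sapcontrol -nr 00 -function GetProcessList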
3.6.2. Manually stopping instances
[root]# LD_LIBRARY_PATH=/usr/sap/RH1/D01/exe/ /usr/sap/RH1/D01/exe/sapcontrol -nr 01 -function Stop
[root]# /usr/sap/hostctrl/exe/saphostctrl -function StopDatabase -dbname RH1 -dbtype ADA -service
[root]# LD_LIBRARY_PATH=/usr/sap/RH1/ERS10/exe/ /usr/sap/RH1/ERS10/exe/sapcontrol -nr 10 -function Stop
[root]# LD_LIBRARY_PATH=/usr/sap/RH1/ASCS00/exe/ /usr/sap/RH1/ASCS00/exe/sapcontrol -nr 00 -function Stop
4. Pacemaker cluster configuration
Before starting the cluster configuration, make sure that the following is true:
- The Pacemaker cluster is configured according to documentation (RHEL 6, RHEL 7) and has working fencing as required by Support Policies for RHEL High Availability Clusters - General Requirements for Fencing/STONITH.
- All instances can be manually started and stopped on all cluster nodes where they will be running, as described in the section Manually testing the instances on other nodes of this document.
- Package resource-agents-sap is installed on all cluster nodes.
[root]# yum install resource-agents-sap
4.1. Configure shared filesystems
Configure the shared filesystems to provide the following mountpoints on all cluster nodes.
/sapmnt
/usr/sap/trans
4.1.1. Configure shared filesystems managed by cluster
The cloned Filesystem cluster resource can be used to mount the shares from the external NFS server on all cluster nodes as shown below.
[root]# pcs resource create fs_sapmnt Filesystem device='192.168.0.10:/export/sapmnt' directory='/sapmnt' fstype='nfs' --clone interleave=true
[root]# pcs resource create fs_sap_trans Filesystem device='192.168.0.10:/export/trans' directory='/usr/sap/trans' fstype='nfs' --clone interleave=true
After creating the Filesystem resources, verify that they have started properly on all nodes.
[root]# pcs status
...
Clone Set: fs_sapmnt-clone [fs_sapmnt]
Started: [ node1 node2 ]
Clone Set: fs_sap_trans-clone [fs_sap_trans]
Started: [ node1 node2 ]
...
4.1.2. Configure shared filesystems managed outside of cluster
If the shared filesystems will NOT be managed by the cluster, it must be ensured that they are available before the pacemaker service is started.
In RHEL 6 this can be achieved via the startup script order. Note: GFS2 filesystems and filesystems in /etc/fstab in RHEL 6 are mounted before the pacemaker service is started by default.
In RHEL 7, due to systemd parallelization, you must ensure that the shared filesystems are started via the resource-agents-deps target. More details on this can be found in the documentation section 9.6. Configuring Startup Order for Resource Dependencies not Managed by Pacemaker (Red Hat Enterprise Linux 7.4 and later).
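Below is a minimal sketch of such a drop-in for the two NFS mountpoints used in this document, assuming they are defined in /etc/fstab so that systemd generates the sapmnt.mount and usr-sap-trans.mount units; the drop-in file name sap-filesystems.conf is arbitrary:
[root]# mkdir -p /etc/systemd/system/resource-agents-deps.target.d
[root]# cat > /etc/systemd/system/resource-agents-deps.target.d/sap-filesystems.conf <<EOF
[Unit]
Requires=sapmnt.mount usr-sap-trans.mount
After=sapmnt.mount usr-sap-trans.mount
EOF
[root]# systemctl daemon-reload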
4.2. Configure (A)SCS/ERS SAPInstance cluster resource
The (A)SCS/ERS SAPInstance cluster resource is intended to run as a Master/Slave resource, with virtual IP addresses set up for (A)SCS/ERS that follow the placement of the Master/Slave instances. Before adding the ASCS/ERS Master/Slave resource to the cluster, make sure that the ASCS profile was modified as described in section 3.3.1. (A)SCS profile modification in this document.
Below is an example command for creating the (A)SCS/ERS Master/Slave resource.
[root]# pcs resource create rh1_ascs_ers SAPInstance InstanceName="RH1_ASCS00_rh1-ascs" DIR_PROFILE=/sapmnt/RH1/profile START_PROFILE=/sapmnt/RH1/profile/RH1_ASCS00_rh1-ascs ERS_InstanceName="RH1_ERS10_rh1-ers" ERS_START_PROFILE=/sapmnt/RH1/profile/RH1_ERS10_rh1-ers --master meta master-max="1" clone-max="2" notify="true" interleave="true"
When running pcs-0.9.158-6.el7 or newer, use the command below to avoid a deprecation warning. More information about the change is explained in What are differences between master and --master option in pcs resource create command?.
[root]# pcs resource create rh1_ascs_ers SAPInstance InstanceName="RH1_ASCS00_rh1-ascs" DIR_PROFILE=/sapmnt/RH1/profile START_PROFILE=/sapmnt/RH1/profile/RH1_ASCS00_rh1-ascs ERS_InstanceName="RH1_ERS10_rh1-ers" ERS_START_PROFILE=/sapmnt/RH1/profile/RH1_ERS10_rh1-ers master master-max="1" clone-max="2" notify="true" interleave="true"
Once the resource is created, its configuration can be verified using the command below:
[root]# pcs resource show rh1_ascs_ers-master
Master: rh1_ascs_ers-master
Meta Attrs: clone-max=2 interleave=true master-max=1 notify=true
Resource: rh1_ascs_ers (class=ocf provider=heartbeat type=SAPInstance)
Attributes: DIR_PROFILE=/sapmnt/RH1/profile ERS_InstanceName=RH1_ERS10_rh1-ers ERS_START_PROFILE=/sapmnt/RH1/profile/RH1_ERS10_rh1-ers InstanceName=RH1_ASCS00_rh1-ascs START_PROFILE=/sapmnt/RH1/profile/RH1_ASCS00_rh1-ascs
Operations: demote interval=0s timeout=320 (rh1_ascs_ers-demote-interval-0s)
monitor interval=120 timeout=60 (rh1_ascs_ers-monitor-interval-120)
monitor interval=121 role=Slave timeout=60 (rh1_ascs_ers-monitor-interval-121)
monitor interval=119 role=Master timeout=60 (rh1_ascs_ers-monitor-interval-119)
promote interval=0s timeout=320 (rh1_ascs_ers-promote-interval-0s)
start interval=0s timeout=180 (rh1_ascs_ers-start-interval-0s)
stop interval=0s timeout=240 (rh1_ascs_ers-stop-interval-0s)
4.2.1. ASCS/ERS Virtual IP addresses
To set up the virtual IP addresses that will be used by ASCS and ERS, create two IPaddr2 resources as shown below.
[root]# pcs resource create rh1_vip_ascs IPaddr2 ip=192.168.0.13
[root]# pcs resource create rh1_vip_ers IPaddr2 ip=192.168.0.14
To make the virtual IP addresses follow the nodes where ASCS and ERS are running, the following constraints are needed. Note that a score of 2000 is used to keep the IP resources running even when the ASCS or ERS resources are not started; this may be needed by some tools that still try to access the nodes where ASCS/ERS were running before. The SAPInstance resource agent considers (A)SCS to be the 'Master' instance, while ERS is considered to be the 'Slave' instance.
[root]# pcs constraint colocation add rh1_vip_ascs with Master rh1_ascs_ers-master 2000
[root]# pcs constraint colocation add rh1_vip_ers with Slave rh1_ascs_ers-master 2000
4.2.2. ASCS/ERS with shared filesystems managed by cluster
If the shared filesystems /sapmnt and /usr/sap/trans are managed by the cluster, then the following constraints ensure that the SAPInstance (A)SCS/ERS Master/Slave resource is started only once the filesystems are available.
[root]# pcs constraint order fs_sapmnt-clone then rh1_ascs_ers-master
[root]# pcs constraint order fs_sap_trans-clone then rh1_ascs_ers-master
4.3. Check that (A)SCS/ERS SAPInstance cluster resource is running correctly
The outputs below show how a properly configured (A)SCS/ERS Master/Slave resource should look in the cluster once started. Please note that your instances may run on different nodes than the ones shown below.
[root]# pcs status
...
rh1_vip_ascs (ocf::heartbeat:IPaddr2): Started node1
rh1_vip_ers (ocf::heartbeat:IPaddr2): Started node2
Master/Slave Set: rh1_ascs_ers-master [rh1_ascs_ers]
Masters: [ node1 ]
Slaves: [ node2 ]
...
[root]# pcs constraint
...
Colocation Constraints:
rh1_vip_ascs with rh1_ascs_ers-master (score:2000) (rsc-role:Started) (with-rsc-role:Master)
rh1_vip_ers with rh1_ascs_ers-master (score:2000) (rsc-role:Started) (with-rsc-role:Slave)
...
If the shared filesystems providing /sapmnt and /usr/sap/trans are managed by the cluster, then the following constraints should also be present.
[root]# pcs constraint
...
Ordering Constraints:
start fs_sapmnt-clone then start rh1_ascs_ers-master (kind:Mandatory)
start fs_sap_trans-clone then start rh1_ascs_ers-master (kind:Mandatory)
...
4.4. Configure cluster resource group containing SAPDatabase cluster resource
When using SAP HANA SR configured according to the article SAP HANA system replication in pacemaker cluster, skip the SAPDatabase cluster resource group configuration here and continue with 4.6. Configure Primary Application Server group (PAS).
Below is an example of configuring the SAPDatabase cluster resource and the resources associated with it. The example expects a virtual IP address (rh1_vip_db) for the database and a filesystem (rh1_fs_db) that is placed on HA-LVM. As these resources should run in a defined order and on the same node, they will be placed in the resource group rh1_SAPDatabase_group.
4.4.1. Create Virtual IP address for SAPDatabase
To create the virtual IP address that will be part of the rh1_SAPDatabase_group, use the command below.
[root]# pcs resource create rh1_vip_db IPaddr2 ip=192.168.0.15 --group rh1_SAPDatabase_group
To verify that the resource was created in the new group rh1_SAPDatabase_group, check that the output of the pcs status command looks like the example below.
[root]# pcs status
...
Resource Group: rh1_SAPDatabase_group
rh1_vip_db (ocf::heartbeat:IPaddr2): Started node1
...
4.4.2. Configure LVM and Filesystem cluster resources in SAPDatabase resource group
First the LVM cluster resource is added, followed by the Filesystem cluster resource. The LVM shared by the cluster is expected to be configured as described in the article What is a Highly Available LVM (HA-LVM) configuration and how do I implement it?.
[root]# pcs resource create rh1_lvm_db LVM volgrpname=vg_db exclusive=true --group rh1_SAPDatabase_group
[root]# pcs resource create rh1_fs_db Filesystem device=/dev/vg_db/lv_db directory=/sapdb/RH1 fstype=xfs --group rh1_SAPDatabase_group
Verify that the resources were added to the rh1_SAPDatabase_group resource group and started, as shown below.
[root]# pcs status
...
Resource Group: rh1_SAPDatabase_group
rh1_vip_db (ocf::heartbeat:IPaddr2): Started node1
rh1_lvm_db (ocf::heartbeat:LVM): Started node1
rh1_fs_db (ocf::heartbeat:Filesystem): Started node1
...
4.4.3. Configure SAPDatabase cluster resource
Finally, the SAPDatabase cluster resource is added to the resource group using the command shown below.
[root]# pcs resource create rh1_SAPDatabase SAPDatabase DBTYPE="ADA" SID="RH1" STRICT_MONITORING="TRUE" AUTOMATIC_RECOVER="TRUE" --group rh1_SAPDatabase_group
After adding the resource, verify the SAPDatabase resource configuration using the command below.
[root]# pcs resource show rh1_SAPDatabase
Resource: rh1_SAPDatabase (class=ocf provider=heartbeat type=SAPDatabase)
Attributes: AUTOMATIC_RECOVER=TRUE DBTYPE=ADA SID=RH1 STRICT_MONITORING=TRUE
Operations: monitor interval=120 timeout=60 (rh1_SAPDatabase-monitor-interval-120)
start interval=0s timeout=1800 (rh1_SAPDatabase-start-interval-0s)
stop interval=0s timeout=1800 (rh1_SAPDatabase-stop-interval-0s)
4.5. Check status of SAPDatabase resource group
Use the command below to check that the group containing the SAPDatabase cluster resource has fully started and looks similar to the example output.
[root]# pcs status
...
Resource Group: rh1_SAPDatabase_group
rh1_vip_db (ocf::heartbeat:IPaddr2): Started node1
rh1_lvm_db (ocf::heartbeat:LVM): Started node1
rh1_fs_db (ocf::heartbeat:Filesystem): Started node1
rh1_SAPDatabase (ocf::heartbeat:SAPDatabase): Started node1
...
Note: The SAPDatabase resource is independent of the (A)SCS/ERS instance and can be started independently of it.
4.6. Configure Primary Application Server group (PAS)
Below is an example of configuring the resource group rh1_PAS_D01_group containing the Primary Application Server (PAS) instance rh1_pas_d01 managed by the SAPInstance resource agent.
4.6.1. Create Virtual IP address for PAS instance
To create the virtual IP address that will be part of the rh1_PAS_D01_group, use the command below.
[root]# pcs resource create rh1_vip_pas_d01 IPaddr2 ip=192.168.0.16 --group rh1_PAS_D01_group
To verify that the resource was created in the new group rh1_PAS_D01_group, check that the output of the pcs status command looks like the example below.
[root]# pcs status
...
Resource Group: rh1_PAS_D01_group
rh1_vip_pas_d01 (ocf::heartbeat:IPaddr2): Started node1
...
4.6.2. Configure LVM and Filesystem cluster resources in PAS resource group
First the LVM cluster resource is added, followed by the Filesystem cluster resource. The LVM shared by the cluster is expected to be configured as described in the article What is a Highly Available LVM (HA-LVM) configuration and how do I implement it?.
[root]# pcs resource create rh1_lvm_pas_d01 LVM volgrpname=vg_d01 exclusive=true --group rh1_PAS_D01_group
[root]# pcs resource create rh1_fs_pas_d01 Filesystem device=/dev/vg_d01/lv_d01 directory=/usr/sap/RH1/D01 fstype=xfs --group rh1_PAS_D01_group
Verify that the resources were added to the rh1_PAS_D01_group resource group and started, as shown below.
[root]# pcs status
...
Resource Group: rh1_PAS_D01_group
rh1_vip_pas_d01 (ocf::heartbeat:IPaddr2): Started node1
rh1_lvm_pas_d01 (ocf::heartbeat:LVM): Started node1
rh1_fs_pas_d01 (ocf::heartbeat:Filesystem): Started node1
...
4.6.3. Configure constraints for PAS resource group
PAS requires the ASCS and database instances to be running before it can start properly. Below are example commands showing how to set up constraints to achieve this for the various databases that can be used by SAP NetWeaver.
4.6.3.1. Deployments with rh1_SAPDatabase_group group
This applies to configurations that have one cluster resource group that starts all resources needed by the database. In the example here, the SAPDatabase resource agent is used to manage the database and is part of the database group rh1_SAPDatabase_group. The commands below will create constraints that start the whole rh1_PAS_D01_group only once the ASCS instance has been promoted and the database group rh1_SAPDatabase_group is running.
[root]# pcs constraint order rh1_SAPDatabase_group then rh1_PAS_D01_group symmetrical=false
[root]# pcs constraint order promote rh1_ascs_ers-master then rh1_PAS_D01_group
After executing the commands verify that the constraints were added properly.
[root]# pcs constraint
Ordering Constraints:
...
start rh1_SAPDatabase_group then start rh1_PAS_D01_group (kind:Mandatory) (non-symmetrical)
promote rh1_ascs_ers-master then start rh1_PAS_D01_group (kind:Mandatory)
...
4.6.3.2. Deployments with SAP HANA with SR as database
When using a SAP HANA database that is configured for system replication (SR) managed by the cluster, the following constraints will ensure that the whole rh1_PAS_D01_group starts only once the ASCS instance has been promoted and the SAP HANA SAPHana_RH2_02-master resource has been promoted.
[root]# pcs constraint order promote SAPHana_RH2_02-master then rh1_PAS_D01_group symmetrical=false
[root]# pcs constraint order promote rh1_ascs_ers-master then rh1_PAS_D01_group
After executing the commands verify that the constraints were added properly.
[root]# pcs constraint
Ordering Constraints:
...
promote SAPHana_RH2_02-master then start rh1_PAS_D01_group (kind:Mandatory) (non-symmetrical)
promote rh1_ascs_ers-master then start rh1_PAS_D01_group (kind:Mandatory)
...
4.6.4. Configure PAS SAPInstance cluster resource
To run the PAS instance, the same SAPInstance resource agent as for the (A)SCS/ERS instance is used. Compared to (A)SCS/ERS, the PAS instance is a simple instance and requires fewer attributes to be configured. Check the command below for an example of how to create a PAS instance for the 'D01' instance and place it at the end of the rh1_PAS_D01_group resource group.
[root]# pcs resource create rh1_pas_d01 SAPInstance InstanceName="RH1_D01_rh1-pas" DIR_PROFILE=/sapmnt/RH1/profile START_PROFILE=/sapmnt/RH1/profile/RH1_D01_rh1-pas --group rh1_PAS_D01_group
Verify the configuration of the PAS SAPInstance resource using the command below.
[root]# pcs resource show rh1_pas_d01
Resource: rh1_pas_d01 (class=ocf provider=heartbeat type=SAPInstance)
Attributes: DIR_PROFILE=/sapmnt/RH1/profile InstanceName=RH1_D01_rh1-pas START_PROFILE=/sapmnt/RH1/profile/RH1_D01_rh1-pas
Operations: demote interval=0s timeout=320 (rh1_pas_d01-demote-interval-0s)
monitor interval=120 timeout=60 (rh1_pas_d01-monitor-interval-120)
monitor interval=121 role=Slave timeout=60 (rh1_pas_d01-monitor-interval-121)
monitor interval=119 role=Master timeout=60 (rh1_pas_d01-monitor-interval-119)
promote interval=0s timeout=320 (rh1_pas_d01-promote-interval-0s)
start interval=0s timeout=180 (rh1_pas_d01-start-interval-0s)
stop interval=0s timeout=240 (rh1_pas_d01-stop-interval-0s)
4.7. Check status and configuration of PAS resource group
Use the commands below to check that the rh1_PAS_D01_group resource group has fully started and looks similar to the example outputs.
[root]# pcs status
...
Resource Group: rh1_PAS_D01_group
rh1_vip_pas_d01 (ocf::heartbeat:IPaddr2): Started node1
rh1_lvm_pas_d01 (ocf::heartbeat:LVM): Started node1
rh1_fs_pas_d01 (ocf::heartbeat:Filesystem): Started node1
rh1_pas_d01 (ocf::heartbeat:SAPInstance): Started node1
...
### for deployments with rh1_SAPDatabase_group
[root]# pcs constraint
Ordering Constraints:
...
start rh1_SAPDatabase_group then start rh1_PAS_D01_group (kind:Mandatory) (non-symmetrical)
promote rh1_ascs_ers-master then start rh1_PAS_D01_group (kind:Mandatory)
...
### for deployments with SAP HANA SR
[root]# pcs constraint
Ordering Constraints:
...
promote SAPHana_RH2_02-master then start rh1_PAS_D01_group (kind:Mandatory) (non-symmetrical)
promote rh1_ascs_ers-master then start rh1_PAS_D01_group (kind:Mandatory)
...
4.8. (optional) Configuring SAP HAlib for SAPInstance resources
SAP instances should be started, stopped, or relocated only using cluster tools such as pcs or the PCSD web GUI. If other tools such as SAP MC, SAP LVM, or similar are used to start, stop, or relocate SAP instances managed by the SAPInstance resource agent, then it is required to implement the sap_redhat_cluster_connector script, which instructs the cluster to perform the desired operation instead of reacting to a sudden change. To configure sap_redhat_cluster_connector, follow the article How to configure SAP HAlib for SAPInstance resources?.