Configuring SAP HANA Scale-Up System Replication with the RHEL HA Add-On on Amazon Web Services (AWS)
Contents
- 1. Overview
- 2. Required AWS Configurations
- 3. Install SAP HANA and setup HANA SR
- 4. Configure SAP HANA in a pacemaker cluster on AWS
- 4.1. Install resource agents and other components required for managing SAP HANA Scale-Up System Replication using the RHEL HA Add-On
- 4.2. Enable the SAP HANA srConnectionChanged() hook
- 4.3. Create SAPHanaTopology and SAPHana resources
- 4.4. Configure the OverlayIP Resource Agent
- 4.5. Configure constraints
- 4.6. Adding a secondary virtual IP address for an Active/Active (Read-Enabled) HANA System Replication setup
- 4.7. Test failover of the SAPHana resource
1. Overview
This article describes how to configure SAP HANA System Replication in a scale-up scenario, in a pacemaker-based cluster on supported Red Hat Enterprise Linux (RHEL) virtual machines on Amazon Web Services (AWS). For more general documentation, see SAP HANA system replication in pacemaker cluster.
1.1 Supported Scenarios
See: Support Policies for RHEL High Availability Clusters - Management of SAP HANA in a Cluster.
Note the parts specific to Amazon Web Services.
1.2. System registration and required repositories
Make sure that the system has an attached subscription providing EUS or E4S repositories, and enable them as described in the articles below:
- EUS: RHEL 7 SAP HANA EUS repositories and RHEL 7 SAP HANA High Availability EUS repositories
- E4S: Update services for SAP (E4S) repositories and Update services for SAP (E4S) High Availability repositories
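For systems registered with subscription-manager, enabling the E4S repositories typically looks like the following (a sketch; the repository IDs shown are for RHEL 7 and the release lock 7.6 is only an example, so verify the exact IDs and minor release for your environment in the referenced articles):
[root]# subscription-manager release --set=7.6
[root]# subscription-manager repos --enable=rhel-7-server-e4s-rpms --enable=rhel-sap-hana-for-rhel-7-server-e4s-rpms --enable=rhel-ha-for-rhel-7-server-e4s-rpms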
2. Required AWS Configurations
2.1. Initial AWS Setup
For instructions on the initial setup of the AWS environment, please refer to Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Amazon Web Services.
As a summary, you should have configured the following components:
- An AWS account
- A Key Pair
- An IAM user with permissions to modify routing tables, create security groups, and create IAM policies and roles
- A VPC
- 3 subnets: one public subnet and two private subnets spanning two different availability zones (recommended to minimize service disruption caused by zone-wide failures; a single availability zone is also acceptable)
- NAT Gateway
- Security Group for the jump server
- Security Group for the HANA cluster nodes
2.2. Choose the Supported SAP HANA Instance Type and Storage
Instance Type of the two HANA nodes:
Please follow the guidelines in the SAP HANA Quick Start Guide, Planning the Deployment, on instance types, storage, memory sizing, and Multi-AZ deployments.
- The two HANA nodes must be created in different availability zones.
- To enable access, copy your private key to the jump server and to each of the HANA nodes.
- The hostname should meet the SAP HANA requirements. Optionally, change the hostname following the instructions (see the example below).
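For example, to set a hostname that meets the SAP HANA requirements (the name hana01 is only a placeholder), run the following on the node:
[root]# hostnamectl set-hostname hana01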
To launch the HANA instances, use either the Red Hat Cloud Access program or AWS Marketplace Amazon Machine Images (AMIs).
- AWS Marketplace:
  - Navigate to https://aws.amazon.com/marketplace
  - Search for “Red Hat Enterprise Linux for SAP with HA and US” and select the desired version
- Red Hat Cloud Access Program:
  - Refer to How to Locate Red Hat Cloud Access Gold Images on AWS EC2
2.3. Create Policies
For the IAM user, you need to create three policies:
Services -> IAM -> Policies -> Create the DataProvider policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "EC2:DescribeInstances",
                "EC2:DescribeVolumes"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "cloudwatch:GetMetricStatistics",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::aws-data-provider/config.properties"
        }
    ]
}
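If you prefer the AWS CLI over the console, a policy can also be created from a JSON file in roughly this way (the file name DataProvider.json is only a placeholder for a file containing the policy document above):
[root]# aws iam create-policy --policy-name DataProvider --policy-document file://DataProvider.json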
Services -> IAM -> Policies -> Create the OverlayIPAgent policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1424870324000",
            "Effect": "Allow",
            "Action": "ec2:DescribeRouteTables",
            "Resource": "*"
        },
        {
            "Sid": "Stmt1424860166260",
            "Action": [
                "ec2:CreateRoute",
                "ec2:ReplaceRoute"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:ec2:<region>:<account-id>:route-table/<ClusterRouteTableID>"
        }
    ]
}
In the last Resource clause, replace the following parameters with the real values:
- region: e.g. us-east-1
- account-id: the account ID of the user account
- ClusterRouteTableID: the route table ID for the existing cluster VPC route table, in the format rtb-XXXXX
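For example, with fictitious values (region us-east-1, account ID 123456789012, route table rtb-0a1b2c3d), the Resource entry would read:
"Resource": "arn:aws:ec2:us-east-1:123456789012:route-table/rtb-0a1b2c3d"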
Services -> IAM -> Policies -> Create the STONITH policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1424870324000",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:DescribeInstanceAttribute",
                "ec2:DescribeTags"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Stmt1424870324001",
            "Effect": "Allow",
            "Action": [
                "ec2:ModifyInstanceAttribute",
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": [
                "arn:aws:ec2:<region>:<account-id>:instance/<instance-id-node1>",
                "arn:aws:ec2:<region>:<account-id>:instance/<instance-id-node2>"
            ]
        }
    ]
}
In the last Resource clause, replace the following parameters with the real values:
- region: e.g. us-east-1
- account-id: the account ID of the user account
- instance-id-node1, instance-id-node2: the instance IDs of the two SAP HANA instances
2.4. Create an IAM Role
Create an IAM Role, attach the 3 policies that were created in the previous step, and assign the role to the two HANA instances.
- In IAM -> Roles -> Create a new role, e.g. PacemakerRole, and attach the 3 policies to it: DataProvider, OverlayIPAgent, and STONITH.
- Assign the role to the HANA instances. Perform this for BOTH HANA nodes: in the AWS EC2 console, right-click the HANA node -> Instance Settings -> Attach/Replace IAM Role -> select PacemakerRole and click Apply.
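If you prefer the command line for attaching the policies and assigning the role, a sketch could look like the following (it assumes the role PacemakerRole and its instance profile already exist, the three policies were created as customer-managed policies, and <account-id>, <instance-id-node1>, and <instance-id-node2> are replaced with your real values):
[root]# aws iam attach-role-policy --role-name PacemakerRole --policy-arn arn:aws:iam::<account-id>:policy/DataProvider
[root]# aws iam attach-role-policy --role-name PacemakerRole --policy-arn arn:aws:iam::<account-id>:policy/OverlayIPAgent
[root]# aws iam attach-role-policy --role-name PacemakerRole --policy-arn arn:aws:iam::<account-id>:policy/STONITH
[root]# aws ec2 associate-iam-instance-profile --instance-id <instance-id-node1> --iam-instance-profile Name=PacemakerRole
[root]# aws ec2 associate-iam-instance-profile --instance-id <instance-id-node2> --iam-instance-profile Name=PacemakerRole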
2.5. Install AWS CLI
Follow the Install the AWS CLI section to install and verify the AWS CLI configuration on both nodes.
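A quick way to verify on both nodes that the AWS CLI is installed and that the credentials and default region are configured correctly is to run a harmless read-only query, for example:
[root]# aws --version
[root]# aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId' --output text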
3. Install SAP HANA and setup HANA SR
Installation of SAP HANA and the setup of HANA SR are not specific to the AWS environment, and you can follow the usual procedures:
- For installing SAP HANA on RHEL - SAP Note 2009879 - SAP HANA Guidelines for RedHat Enterprise Linux (RHEL)
- For setting up SAP HANA SR - the whole section 2. SAP HANA System Replication of SAP HANA system replication in pacemaker cluster
4. Configure SAP HANA in a pacemaker cluster on AWS
Install and set up the pacemaker cluster on AWS, and then configure and test fencing, as described in the following sections of the Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Amazon Web Services article.
At this point this guide assumes that the following is true:
- SAP HANA is installed on both nodes and HANA System Replication (HANA SR) is working and in sync - 2.4. Checking SAP HANA System Replication state
- A pacemaker cluster is running on both nodes and fencing of the nodes has been tested to work properly - Configure fencing
- The AWS CLI is working on both nodes - Install the AWS CLI
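As a quick sanity check of these prerequisites (a sketch assuming SID RH1, instance number 00, and the administration user rh1adm, matching the resource names used later in this article), the replication and cluster state can be verified with:
[rh1adm]# hdbnsutil -sr_state
[rh1adm]# python /usr/sap/RH1/HDB00/exe/python_support/systemReplicationStatus.py
[root]# pcs status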
4.1. Install resource agents and other components required for managing SAP HANA Scale-Up System Replication using the RHEL HA Add-On
Please follow the instructions detailed at 4.1. Install resource agents and other components required for managing SAP HANA Scale-Up System Replication using the RHEL HA Add-On.
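On RHEL 7 this typically comes down to installing the SAP HANA resource agents from the RHEL HA Add-On repositories on BOTH nodes, for example:
[root]# yum install resource-agents-sap-hana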
4.2. Enable the SAP HANA srConnectionChanged() hook
Please follow the instructions detailed at 4.2. Enable the SAP HANA srConnectionChanged() hook.
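For orientation, on RHEL the hook shipped with the resource-agents-sap-hana package is usually enabled by adding a section like the following to the global.ini of the HANA instance on both nodes (a sketch; the referenced article also describes the required sudo configuration and how to restart or reload HANA afterwards):
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /usr/share/SAPHanaSR/srHook
execution_order = 1

[trace]
ha_dr_saphanasr = info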
4.3. Create SAPHanaTopology and SAPHana resources
Follow the referenced documentation to create the SAPHanaTopology and SAPHana resources.
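As an illustration only (a sketch assuming SID RH1 and instance number 00, matching the resource names used later in this article; take the exact resource options and operation timeouts from the referenced documentation), the resources are created on ONE node roughly as follows:
[root]# pcs resource create SAPHanaTopology_RH1_00 SAPHanaTopology SID=RH1 InstanceNumber=00 \
    op start timeout=600 op stop timeout=300 op monitor interval=10 timeout=600 \
    --clone clone-max=2 clone-node-max=1 interleave=true
[root]# pcs resource create SAPHana_RH1_00 SAPHana SID=RH1 InstanceNumber=00 \
    PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \
    op start timeout=3600 op stop timeout=3600 op promote timeout=3600 op demote timeout=3600 \
    op monitor interval=59 role="Master" timeout=700 op monitor interval=61 role="Slave" timeout=700 \
    master notify=true clone-max=2 clone-node-max=1 interleave=true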
4.4. Configure the OverlayIP Resource Agent
1. Disable "Source/Destination Check" on BOTH HANA nodes.
   Perform this for BOTH HANA nodes: in the AWS EC2 console, right-click the HANA node -> Networking -> Change Source/Dest. Check -> in the pop-up window, click Yes, Disable.
2. On ONE node in the cluster, enter the following command:
   [root]# aws ec2 create-route --route-table-id <ClusterRouteTableID> --destination-cidr-block <NewCIDRblockIP/NetMask> --instance-id <ClusterNodeID>
   Note:
   - ClusterRouteTableID is the route table ID for the existing cluster VPC route table.
   - NewCIDRblockIP/NetMask is a new IP address and netmask outside of the VPC CIDR block. For example, if the VPC CIDR block is 172.31.0.0/16, the new IP address/netmask could be 192.168.0.15/32.
   - ClusterNodeID is the instance ID of one of the cluster nodes.
3. Create the aws-vpc-move-ip resource.
   Choose a free IP address as the virtual IP address through which clients access HANA, e.g. 192.168.0.15. On BOTH nodes, add it to the /etc/hosts file:
   [root]# vi /etc/hosts
   192.168.0.15 hanadb
   On ONE node in the cluster, create the aws-vpc-move-ip resource, e.g. vpcip:
   [root]# pcs resource create vpcip aws-vpc-move-ip ip=192.168.0.15 interface=eth0 routing_table=<ClusterRouteTableID>
   Note: Earlier versions of this document included the monapi=true option in the command above. This was a workaround for a bug in the probe operation that has since been fixed. However, setting monapi=true can result in unnecessary failovers due to external factors such as API throttling. For this reason, Red Hat and Amazon do not recommend setting monapi=true. Please ensure that the latest available version of the resource-agents package for your OS minor release is installed, so that the bug fix is included.
4. Test failover of the resource.
   Don't forget to clear the automatically created constraint after a successful move of the vpcip resource:
   [root]# pcs resource move vpcip
   [root]# pcs resource clear vpcip
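To make the route creation in step 2 concrete (all IDs below are fictitious placeholders), and to confirm that the installed resource-agents package already contains the probe fix mentioned in the note above:
[root]# aws ec2 create-route --route-table-id rtb-0a1b2c3d --destination-cidr-block 192.168.0.15/32 --instance-id i-0123456789abcdef0
[root]# rpm -q resource-agents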
4.5. Configure constraints
For correct operation, we need to ensure that the SAPHanaTopology resources are started before the SAPHana resources, and also that the virtual IP address is present on the node where the Master resource of SAPHana is running. To achieve this, the following 2 constraints need to be created.
4.5.1. constraint - start SAPHanaTopology before SAPHana
Follow the same steps as described in 4.5.1. constraint - start SAPHanaTopology before SAPHana.
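As a sketch, assuming the SAPHanaTopology_RH1_00 clone and SAPHana_RH1_00 master resources shown earlier (the referenced section contains the authoritative command), the ordering constraint is created on ONE node like this:
[root]# pcs constraint order SAPHanaTopology_RH1_00-clone then SAPHana_RH1_00-master symmetrical=false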
4.5.2. constraint - colocate aws-vpc-move-ip resource with Master of SAPHana resource
Below is an example command that colocates the aws-vpc-move-ip resource with the SAPHana resource that was promoted as Master.
[root]# pcs constraint colocation add vpcip with master SAPHana_RH1_00-master 2000
Note that the constraint uses a score of 2000 instead of the default INFINITY. This allows the aws-vpc-move-ip resource to stay active even when there is no Master promoted in the SAPHana resource, so it is still possible to use this address with tools like SAP Management Console or SAP LVM to query status information about the SAP instance.
The resulting constraint should look like the one in the example below.
[root]# pcs constraint
...
Colocation Constraints:
vpcip with SAPHana_RH1_00-master (score:2000) (rsc-role:Started) (with-rsc-role:Master)
...
4.6. Adding a secondary virtual IP address for an Active/Active (Read-Enabled) HANA System Replication setup
Follow the instructions detailed in 4.8. Adding a secondary virtual IP address for an Active/Active (Read-Enabled) HANA System Replication setup.
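Purely for orientation (the referenced section contains the complete and recommended set of resources and constraints; the IP address 192.168.0.16 and the resource name vpcip_ro are placeholders), the secondary virtual IP is again an aws-vpc-move-ip resource, this time colocated with the Slave role of the SAPHana resource:
[root]# pcs resource create vpcip_ro aws-vpc-move-ip ip=192.168.0.16 interface=eth0 routing_table=<ClusterRouteTableID>
[root]# pcs constraint colocation add vpcip_ro with slave SAPHana_RH1_00-master 2000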
4.7. Test failover of the SAPHana resource
There are several ways to simulate a failure in order to test that the cluster reacts properly. For example, you can stop the master node in the AWS EC2 console, or simulate a crash by panicking the master node via sysrq-trigger:
[root]# echo c > /proc/sysrq-trigger
Alternatively, kill the HANA instance on the master node by running the command:
[rh2adm]# HDB kill-9
- Please note that when stopping the HANA DB instance with the HDB kill-9 command, it is expected that the cluster's HANA resource monitor operation(s) will fail. This is because the resource agents use SAP HANA commands to monitor the status of the HANA instances; if the processes of the HANA instance are killed, those SAP HANA commands no longer work, and therefore the monitor operation fails.
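After any of these failure tests, watch the takeover and, once the cluster has settled, clean up the expected monitor failures (the master resource name again assumes SAPHana_RH1_00 from the earlier examples):
[root]# pcs status
[root]# pcs resource cleanup SAPHana_RH1_00-master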