Chapter 2. Configuring a Red Hat High Availability cluster on Microsoft Azure
This chapter includes information and procedures for configuring a Red Hat High Availability (HA) cluster on Azure using Azure virtual machine (VM) instances as cluster nodes. The procedures in this chapter assume you are creating a custom image for Azure. You have a number of options for obtaining the RHEL 8 images you use for your cluster. See Red Hat Enterprise Linux Image Options on Azure for information on image options for Azure.
This chapter includes prerequisite procedures for setting up your environment for Azure. Once you have set up your environment, you can create and configure Azure VM instances.
The chapter also includes procedures specific to the creation of HA clusters, which transform individual nodes into a cluster of HA nodes on Azure. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing Azure network resource agents.
The chapter refers to the Azure documentation in a number of places. For many procedures, see the referenced Azure documentation for more information.
Prerequisites
- Sign up for a Red Hat Customer Portal account.
- Sign up for a Microsoft Azure account with administrator privileges.
- Install the Azure command line interface (CLI). For more information, see Section 1.5, “Installing the Azure CLI”.
- Enable your subscriptions in the Red Hat Cloud Access program. The Red Hat Cloud Access program allows you to move your Red Hat subscriptions from physical or on-premises systems onto Azure with full support from Red Hat.
2.1. Creating resources in Azure
Complete the following procedure to create a region, resource group, storage account, virtual network, and availability set. You need these resources to complete subsequent tasks in this chapter.
Procedure
Authenticate your system with Azure and log in.
$ az login
Note: If a browser is available in your environment, the CLI opens your browser to the Azure sign-in page.
Example:
[clouduser@localhost]$ az login
To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code FDMSCMETZ to authenticate.
[
  {
    "cloudName": "AzureCloud",
    "id": "Subscription ID",
    "isDefault": true,
    "name": "MySubscriptionName",
    "state": "Enabled",
    "tenantId": "Tenant ID",
    "user": {
      "name": "clouduser@company.com",
      "type": "user"
    }
  }
]
Create a resource group in an Azure region.
$ az group create --name resource-group --location azure-region
Example:
[clouduser@localhost]$ az group create --name azrhelclirsgrp --location southcentralus
{
  "id": "/subscriptions//resourceGroups/azrhelclirsgrp",
  "location": "southcentralus",
  "managedBy": null,
  "name": "azrhelclirsgrp",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null
}
Create a storage account.
$ az storage account create -l azure-region -n storage-account-name -g resource-group --sku sku_type --kind StorageV2
Example:
[clouduser@localhost]$ az storage account create -l southcentralus -n azrhelclistact -g azrhelclirsgrp --sku Standard_LRS --kind StorageV2
{
  "accessTier": null,
  "creationTime": "2017-04-05T19:10:29.855470+00:00",
  "customDomain": null,
  "encryption": null,
  "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Storage/storageAccounts/azrhelclistact",
  "kind": "StorageV2",
  "lastGeoFailoverTime": null,
  "location": "southcentralus",
  "name": "azrhelclistact",
  "primaryEndpoints": {
    "blob": "https://azrhelclistact.blob.core.windows.net/",
    "file": "https://azrhelclistact.file.core.windows.net/",
    "queue": "https://azrhelclistact.queue.core.windows.net/",
    "table": "https://azrhelclistact.table.core.windows.net/"
  },
  "primaryLocation": "southcentralus",
  "provisioningState": "Succeeded",
  "resourceGroup": "azrhelclirsgrp",
  "secondaryEndpoints": null,
  "secondaryLocation": null,
  "sku": {
    "name": "Standard_LRS",
    "tier": "Standard"
  },
  "statusOfPrimary": "available",
  "statusOfSecondary": null,
  "tags": {},
  "type": "Microsoft.Storage/storageAccounts"
}
Get the storage account connection string.
$ az storage account show-connection-string -n storage-account-name -g resource-group
Example:
[clouduser@localhost]$ az storage account show-connection-string -n azrhelclistact -g azrhelclirsgrp
{
  "connectionString": "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...=="
}
Export the connection string by copying the connection string and pasting it into the following command. This string connects your system to the storage account.
$ export AZURE_STORAGE_CONNECTION_STRING="storage-connection-string"
Example:
[clouduser@localhost]$ export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...=="
Create the storage container.
$ az storage container create -n container-name
Example:
[clouduser@localhost]$ az storage container create -n azrhelclistcont
{
  "created": true
}
Create a virtual network. All cluster nodes must be in the same virtual network.
$ az network vnet create -g resource-group --name vnet-name --subnet-name subnet-name
Example:
[clouduser@localhost]$ az network vnet create --resource-group azrhelclirsgrp --name azrhelclivnet1 --subnet-name azrhelclisubnet1
{
  "newVNet": {
    "addressSpace": {
      "addressPrefixes": [
        "10.0.0.0/16"
      ]
    },
    "dhcpOptions": {
      "dnsServers": []
    },
    "etag": "W/\"\"",
    "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1",
    "location": "southcentralus",
    "name": "azrhelclivnet1",
    "provisioningState": "Succeeded",
    "resourceGroup": "azrhelclirsgrp",
    "resourceGuid": "0f25efee-e2a6-4abe-a4e9-817061ee1e79",
    "subnets": [
      {
        "addressPrefix": "10.0.0.0/24",
        "etag": "W/\"\"",
        "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1/subnets/azrhelclisubnet1",
        "ipConfigurations": null,
        "name": "azrhelclisubnet1",
        "networkSecurityGroup": null,
        "provisioningState": "Succeeded",
        "resourceGroup": "azrhelclirsgrp",
        "resourceNavigationLinks": null,
        "routeTable": null
      }
    ],
    "tags": {},
    "type": "Microsoft.Network/virtualNetworks",
    "virtualNetworkPeerings": null
  }
}
Create an availability set. All cluster nodes must be in the same availability set.
$ az vm availability-set create --name MyAvailabilitySet --resource-group MyResourceGroup
Example:
[clouduser@localhost]$ az vm availability-set create --name rhelha-avset1 --resource-group azrhelclirsgrp
{
  "additionalProperties": {},
  "id": "/subscriptions/.../resourceGroups/azrhelclirsgrp/providers/Microsoft.Compute/availabilitySets/rhelha-avset1",
  "location": "southcentralus",
  "name": "rhelha-avset1",
  "platformFaultDomainCount": 2,
  "platformUpdateDomainCount": 5,
  ...omitted
2.2. Required system packages for High Availability
This procedure assumes you are creating a VM image for Azure HA using Red Hat Enterprise Linux. To complete the procedure successfully, the following packages must be installed.
Table 2.1. System packages
Package | Repository | Description |
---|---|---|
libvirt | rhel-8-for-x86_64-appstream-rpms | Open source API, daemon, and management tool for managing platform virtualization |
virt-install | rhel-8-for-x86_64-appstream-rpms | A command line utility for building VMs |
libguestfs | rhel-8-for-x86_64-appstream-rpms | A library for accessing and modifying VM file systems |
libguestfs-tools | rhel-8-for-x86_64-appstream-rpms | System administration tools for VMs; includes the guestfish utility |
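As a minimal sketch, you might install these packages on the build host with the following command; it assumes the AppStream repository listed above is already enabled on your system.

# yum install libvirt virt-install libguestfs libguestfs-tools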
2.3. Azure VM configuration settings
Azure VMs must have the following configuration settings. Some of these settings are enabled during the initial VM creation. Other settings are set when provisioning the VM image for Azure. Keep these settings in mind as you move through the procedures. Refer to them as necessary.
Table 2.2. VM configuration settings
Setting | Recommendation |
---|---|
ssh | ssh must be enabled to provide remote access to your Azure VMs. |
dhcp | The primary virtual adapter should be configured for dhcp (IPv4 only). |
Swap Space | Do not create a dedicated swap file or swap partition. You can configure swap space with the Windows Azure Linux Agent (WALinuxAgent). |
NIC | Choose virtio for the primary virtual network adapter. |
encryption | For custom images, use Network Bound Disk Encryption (NBDE) for full disk encryption on Azure. |
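Before provisioning, you can spot-check several of these settings from inside the VM. The following commands are an illustrative sketch rather than an official checklist; they assume the eth0 configuration file created later in this chapter. The first command should report enabled, the second should report dhcp, and the third should produce no output when no dedicated swap is configured.

# systemctl is-enabled sshd
# grep BOOTPROTO /etc/sysconfig/network-scripts/ifcfg-eth0
# swapon --show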
2.4. Installing Hyper-V device drivers
Microsoft provides network and storage device drivers as part of their Linux Integration Services (LIS) for Hyper-V package. You may need to install Hyper-V device drivers on the VM image prior to provisioning it as an Azure VM. Use the lsinitrd | grep hv
command to verify that the drivers are installed.
Procedure
Enter the following command to determine if the required Hyper-V device drivers are installed.

# lsinitrd | grep hv
In the example below, all required drivers are installed.
# lsinitrd | grep hv
drwxr-xr-x   2 root root        0 Aug 12 14:21 usr/lib/modules/4.18.0-80.el8.x86_64/kernel/drivers/hv
-rw-r--r--   1 root root    31272 Aug 11 08:45 usr/lib/modules/4.18.0-80.el8.x86_64/kernel/drivers/hv/hv_vmbus.ko.xz
-rw-r--r--   1 root root    25132 Aug 11 08:46 usr/lib/modules/4.18.0-80.el8.x86_64/kernel/drivers/net/hyperv/hv_netvsc.ko.xz
-rw-r--r--   1 root root     9796 Aug 11 08:45 usr/lib/modules/4.18.0-80.el8.x86_64/kernel/drivers/scsi/hv_storvsc.ko.xz
If any of the drivers are missing, complete the remaining steps.
Note: An hv_vmbus driver may already exist in the environment. Even if this driver is present, complete the following steps.

Create a file named hv.conf in /etc/dracut.conf.d and add the following driver parameters to it.

add_drivers+=" hv_vmbus "
add_drivers+=" hv_netvsc "
add_drivers+=" hv_storvsc "

Note: Note the spaces before and after the quotes, for example, add_drivers+=" hv_vmbus ". This ensures that unique drivers are loaded in the event that other Hyper-V drivers already exist in the environment.

Regenerate the initramfs image.

# dracut -f -v --regenerate-all
Verification steps
- Reboot the machine.
- Run the lsinitrd | grep hv command to verify that the drivers are installed.
2.5. Making additional configuration changes
The VM requires further configuration changes to operate in Azure. Perform the following procedure to make the additional changes.
Procedure
- If necessary, power on the VM.
Register the VM and enable the Red Hat Enterprise Linux 8 repository.
# subscription-manager register --auto-attach
Stopping and removing cloud-init
Stop the cloud-init service (if present).

# systemctl stop cloud-init

Remove the cloud-init software.

# yum remove cloud-init
Completing other VM changes
Edit the /etc/ssh/sshd_config file and enable password authentication.

PasswordAuthentication yes
Set a generic host name.
# hostnamectl set-hostname localhost.localdomain
Edit (or create) the /etc/sysconfig/network-scripts/ifcfg-eth0 file. Use only the parameters listed below.

Note: The ifcfg-eth0 file does not exist on the RHEL 8 DVD ISO image and must be created.

DEVICE="eth0"
ONBOOT="yes"
BOOTPROTO="dhcp"
TYPE="Ethernet"
USERCTL="yes"
PEERDNS="yes"
IPV6INIT="no"
Remove all persistent network device rules, if present.
# rm -f /etc/udev/rules.d/70-persistent-net.rules
# rm -f /etc/udev/rules.d/75-persistent-net-generator.rules
# rm -f /etc/udev/rules.d/80-net-name-slot-rules
Set ssh to start automatically.

# systemctl enable sshd
# systemctl is-enabled sshd
Modify the kernel boot parameters.

- Add crashkernel=256M to the start of the GRUB_CMDLINE_LINUX line in the /etc/default/grub file. If crashkernel=auto is present, change it to crashkernel=256M.
- Add the following lines to the end of the GRUB_CMDLINE_LINUX line, if not present.

earlyprintk=ttyS0 console=ttyS0 rootdelay=300

- Remove the following options, if present.

rhgb quiet

- Regenerate the grub.cfg file.

# grub2-mkconfig -o /boot/grub2/grub.cfg
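For illustration only, the GRUB_CMDLINE_LINUX line might look like the following after these edits; any other parameters already present on the line in your image will differ.

GRUB_CMDLINE_LINUX="crashkernel=256M earlyprintk=ttyS0 console=ttyS0 rootdelay=300"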
Install and enable the Windows Azure Linux Agent (WALinuxAgent). Red Hat Enterprise Linux 8 Application Stream (AppStream) includes the WALinuxAgent. See Using AppStream for more information.
# yum install WALinuxAgent -y
# systemctl enable waagent
Edit the following lines in the /etc/waagent.conf file to configure swap space for provisioned VMs. Set the swap size to an appropriate value for your provisioned VMs.

Provisioning.DeleteRootPassword=n
ResourceDisk.Filesystem=ext4
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048
Preparing to provision
Unregister the VM from Red Hat Subscription Manager.
# subscription-manager unregister
Prepare the VM for Azure provisioning by cleaning up the existing provisioning details. Azure reprovisions the VM when it is deployed. This command generates warnings, which is expected.
# waagent -force -deprovision
Clean the shell history and shut down the VM.
# export HISTSIZE=0
# poweroff
2.6. Creating an Azure Active Directory Application
Complete the following procedure to create an Azure AD Application. The Azure AD Application authorizes and automates access for HA operations for all nodes in the cluster.
Prerequisites
Install the Azure Command Line Interface (CLI).
Procedure
- Ensure you are an Administrator or Owner for the Microsoft Azure subscription. You need this authorization to create an Azure AD application.
Log in to your Azure account.
$ az login
Enter the following command to create the Azure AD application. To use your own password, add the --password option to the command. Ensure that you create a strong password.

$ az ad sp create-for-rbac --name FencingApplicationName --role owner --scopes "/subscriptions/SubscriptionID/resourceGroups/MyResourceGroup"
Example:
[clouduser@localhost ~] $ az ad sp create-for-rbac --name FencingApp --role owner --scopes "/subscriptions/2586c64b-xxxxxx-xxxxxxx-xxxxxxx/resourceGroups/azrhelclirsgrp"
Retrying role assignment creation: 1/36
Retrying role assignment creation: 2/36
Retrying role assignment creation: 3/36
{
  "appId": "1a3dfe06-df55-42ad-937b-326d1c211739",
  "displayName": "FencingApp",
  "name": "http://FencingApp",
  "password": "43a603f0-64bb-482e-800d-402efe5f3d47",
  "tenant": "77ecefb6-xxxxxxxxxx-xxxxxxx-757a69cb9485"
}
Save the following information before proceeding. You need this information to set up the fencing agent.
- Azure AD Application ID
- Azure AD Application Password
- Tenant ID
- Microsoft Azure Subscription ID
2.7. Converting the image to a fixed VHD format
All Microsoft Azure VM images must be in a fixed VHD
format. The image must be aligned on a 1 MB boundary before it is converted to VHD. This section describes how to convert the image from qcow2
to a fixed VHD
format and align the image, if necessary. Once you have converted the image, you can upload it to Azure.
Procedure
Convert the image from qcow2 to raw format.

$ qemu-img convert -f qcow2 -O raw <image-name>.qcow2 <image-name>.raw
Create a shell script using the contents below.
#!/bin/bash
MB=$((1024 * 1024))
size=$(qemu-img info -f raw --output json "$1" | gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
rounded_size=$((($size/$MB + 1) * $MB))
if [ $(($size % $MB)) -eq 0 ]
then
    echo "Your image is already aligned. You do not need to resize."
    exit 1
fi
echo "rounded size = $rounded_size"
export rounded_size
Run the script. This example uses the name align.sh.

$ sh align.sh <image-xxx>.raw

- If the message "Your image is already aligned. You do not need to resize." displays, proceed to the following step.
- If a value displays, your image is not aligned; the value is the rounded size to use when you resize the image (see the worked example below).
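As a worked example of the script's rounding arithmetic, assume a hypothetical raw image of 10485761 bytes (1 MiB plus one byte). Because the size is not an exact multiple of 1 MiB, the script reports the next 1 MiB boundary:

$ MB=$((1024 * 1024))
$ size=10485761
$ echo $(( (size / MB + 1) * MB ))
11534336

You would then use 11534336 as the rounded value in the alignment procedure below.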
Use the following command to convert the file to a fixed VHD format. The sample uses qemu-img version 2.12.0.

$ qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx>.raw <image-xxx>.vhd
Once converted, the VHD file is ready to upload to Azure.
Aligning the image
Complete the following steps only if the raw file is not aligned.
Resize the raw file using the rounded value displayed when you ran the verification script.

$ qemu-img resize -f raw <image-xxx>.raw <rounded-value>
Convert the raw image file to a VHD format. The sample uses qemu-img version 2.12.0.

$ qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx>.raw <image-xxx>.vhd
Once converted, the VHD file is ready to upload to Azure.
2.8. Uploading and creating an Azure image
Complete the following steps to upload the VHD file to your container and create an Azure custom image.
Note: The exported storage connection string does not persist after a system reboot. If any of the commands in the following steps fail, export the connection string again.
Procedure
Upload the VHD file to the storage container. It may take several minutes. To get a list of storage containers, enter the az storage container list command.

$ az storage blob upload --account-name <storage-account-name> --container-name <container-name> --type page --file <path-to-vhd> --name <image-name>.vhd
Example:
[clouduser@localhost]$ az storage blob upload --account-name azrhelclistact --container-name azrhelclistcont --type page --file rhel-image-8.vhd --name rhel-image-8.vhd
Percent complete: %100.0
Get the URL for the uploaded VHD file to use in the following step.

$ az storage blob url -c <container-name> -n <image-name>.vhd
Example:
[clouduser@localhost]$ az storage blob url -c azrhelclistcont -n rhel-image-8.vhd
"https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd"
Create the Azure custom image.
$ az image create -n <image-name> -g <resource-group> -l <azure-region> --source <URL> --os-type linux
Note: The default hypervisor generation of the VM is V1. You can optionally specify a V2 hypervisor generation by including the option --hyper-v-generation V2. Generation 2 VMs use a UEFI-based boot architecture. See Support for generation 2 VMs on Azure for information on generation 2 VMs.

The command may return the error "Only blobs formatted as VHDs can be imported." This error may mean that the image was not aligned to the nearest 1 MB boundary before it was converted to VHD.

Example:
[clouduser@localhost]$ az image create -n rhel8 -g azrhelclirsgrp2 -l southcentralus --source https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd --os-type linux
2.9. Installing Red Hat HA packages and agents
Complete the following steps on all nodes.
Procedure
Launch an SSH terminal session and connect to the VM using the administrator name and public IP address.
$ ssh administrator@PublicIP
To get the public IP address for an Azure VM, open the VM properties in the Azure Portal or enter the following Azure CLI command.
$ az vm list -g <resource-group> -d --output table
Example:
[clouduser@localhost ~] $ az vm list -g azrhelclirsgrp -d --output table
Name    ResourceGroup    PowerState    PublicIps       Location
------  ---------------  ------------  --------------  --------------
node01  azrhelclirsgrp   VM running    192.98.152.251  southcentralus
Register the VM with Red Hat.
$ sudo -i
# subscription-manager register --auto-attach
Note: If the --auto-attach command fails, manually register the VM to your subscription.

Disable all repositories.

# subscription-manager repos --disable=*
Enable the RHEL 8 BaseOS and RHEL 8 High Availability repositories.

# subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms
# subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
Update all packages.
# yum update -y
Install the Red Hat High Availability Add-On software packages, along with all available fencing agents from the High Availability channel.
# yum install pcs pacemaker fence-agents-azure-arm
The user hacluster was created during the pcs and pacemaker installation in the previous step. Create a password for hacluster on all cluster nodes. Use the same password for all nodes.

# passwd hacluster
Add the high-availability service to the RHEL firewall if firewalld.service is installed.

# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --reload
Start the pcsd service and enable it to start on boot.

# systemctl start pcsd.service
# systemctl enable pcsd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
Verification step
Ensure the pcsd service is running.

# systemctl status pcsd.service
pcsd.service - PCS GUI and remote configuration interface
   Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-02-23 11:00:58 EST; 1min 23s ago
     Docs: man:pcsd(8)
           man:pcs(8)
 Main PID: 46235 (pcsd)
   CGroup: /system.slice/pcsd.service
           └─46235 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null &
2.10. Creating a cluster
Complete the following steps to create the cluster of nodes.
Procedure
On one of the nodes, enter the following command to authenticate the pcs user hacluster. In the command, specify the name of each node in the cluster.

# pcs host auth hostname1 hostname2 hostname3
Username: hacluster
Password:
hostname1: Authorized
hostname2: Authorized
hostname3: Authorized
Example:
[root@node01 clouduser]# pcs host auth node01 node02 node03
Username: hacluster
Password:
node01: Authorized
node02: Authorized
node03: Authorized
Create the cluster.
# pcs cluster setup cluster-name hostname1 hostname2 hostname3
Example:
[root@node01 clouduser]# pcs cluster setup newcluster node01 node02 node03
...omitted
Synchronizing pcsd certificates on nodes node01, node02, node03...
node02: Success
node03: Success
node01: Success
Restarting pcsd on the nodes in order to reload the certificates...
node02: Success
node03: Success
node01: Success
Verification steps
Enable the cluster.
[root@node01 clouduser]# pcs cluster enable --all
Start the cluster.
[root@node01 clouduser]# pcs cluster start --all
Example:
[root@node01 clouduser]# pcs cluster enable --all
node02: Cluster Enabled
node03: Cluster Enabled
node01: Cluster Enabled

[root@node01 clouduser]# pcs cluster start --all
node02: Starting Cluster...
node03: Starting Cluster...
node01: Starting Cluster...
2.11. Fencing overview
If communication with a single node in the cluster fails, then other nodes in the cluster must be able to restrict or release access to resources that the failed cluster node may have access to. This cannot be accomplished by contacting the cluster node itself as the cluster node may not be responsive. Instead, you must provide an external method, which is called fencing with a fence agent.
A node that is unresponsive may still be accessing data. The only way to be certain that your data is safe is to fence the node using STONITH. STONITH is an acronym for "Shoot The Other Node In The Head," and it protects your data from being corrupted by rogue nodes or concurrent access. Using STONITH, you can be certain that a node is truly offline before allowing the data to be accessed from another node.
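After you install the High Availability packages (see Installing Red Hat HA packages and agents), you can confirm that the Azure fence agent is available on a node. This is an illustrative check; the exact description string may vary by release.

# pcs stonith list | grep azure
fence_azure_arm - Fence agent for Azure Resource Manager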
2.12. Creating a fencing device
Complete the following steps to configure fencing. Run these commands from any node in the cluster.
Prerequisites
You need to set the cluster property stonith-enabled to true.
Procedure
Identify the Azure node name for each RHEL VM. You use the Azure node names to configure the fence device.
# fence_azure_arm -l AD-Application-ID -p AD-Password --resourceGroup MyResourceGroup --tenantId Tenant-ID --subscriptionId Subscription-ID -o list
Example:
[root@node01 clouduser]# fence_azure_arm -l e04a6a49-9f00-xxxx-xxxx-a8bdda4af447 -p z/a05AwCN0IzAjVwXXXXXXXEWIoeVp0xg7QT//JE= --resourceGroup azrhelclirsgrp --tenantId 77ecefb6-cff0-XXXX-XXXX-757XXXX9485 --subscriptionId XXXXXXXX-38b4-4527-XXXX-012d49dfc02c -o list
node01,
node02,
node03,
View the options for the Azure ARM STONITH agent.
# pcs stonith describe fence_azure_arm
Example:

# pcs stonith describe fence_azure_arm
Stonith options:
  password: Authentication key
  password_script: Script to run to retrieve password

Warning: For fence agents that provide a method option, do not specify a value of cycle as it is not supported and can cause data corruption.
Some fence devices can fence only a single node, while other devices can fence multiple nodes. The parameters you specify when you create a fencing device depend on what your fencing device supports and requires.

You can use the pcmk_host_list parameter when creating a fencing device to specify all of the machines that are controlled by that fencing device.

You can use the pcmk_host_map parameter when creating a fencing device to map host names to the specifications that the fence device understands.

Create a fence device.
# pcs stonith create clusterfence fence_azure_arm
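The command above omits the agent parameters for brevity. The following is a hedged sketch of a fuller invocation using the placeholder values saved when you created the Azure AD application; confirm the exact parameter names against the pcs stonith describe fence_azure_arm output for your installed agent version.

# pcs stonith create clusterfence fence_azure_arm login=AD-Application-ID passwd=AD-Password resourceGroup=MyResourceGroup tenantId=Tenant-ID subscriptionId=Subscription-ID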
Test the fencing agent for one of the other nodes.
# pcs stonith fence azurenodename
Example:
[root@node01 clouduser]# pcs status
Cluster name: newcluster
Stack: corosync
Current DC: node01 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
Last updated: Fri Feb 23 11:44:35 2018
Last change: Fri Feb 23 11:21:01 2018 by root via cibadmin on node01

3 nodes configured
1 resource configured

Online: [ node01 node03 ]
OFFLINE: [ node02 ]

Full list of resources:

  clusterfence  (stonith:fence_azure_arm):  Started node01

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
Start the node that was fenced in the previous step.
# pcs cluster start hostname
Check the status to verify the node started.
# pcs status
Example:
[root@node01 clouduser]# pcs status
Cluster name: newcluster
Stack: corosync
Current DC: node01 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
Last updated: Fri Feb 23 11:34:59 2018
Last change: Fri Feb 23 11:21:01 2018 by root via cibadmin on node01

3 nodes configured
1 resource configured

Online: [ node01 node02 node03 ]

Full list of resources:

  clusterfence  (stonith:fence_azure_arm):  Started node01

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
2.13. Creating an Azure internal load balancer
The Azure internal load balancer removes cluster nodes that do not answer health probe requests.
Perform the following procedure to create an Azure internal load balancer. Each step references a specific Microsoft procedure and includes the settings for customizing the load balancer for HA.
Procedure
- Create a Basic load balancer. Select Internal load balancer, the Basic SKU, and Dynamic for the type of IP address assignment.
- Create a back-end address pool. Associate the back-end pool with the availability set that you created earlier in this chapter. Do not set any target network IP configurations.
- Create a health probe. For the health probe, select TCP and enter port 61000. You can use any TCP port number that does not interfere with another service. For certain HA product applications (for example, SAP HANA and SQL Server), you may need to work with Microsoft to identify the correct port to use.
- Create a load balancer rule. The default values are prepopulated when you create the load balancing rule. Ensure that Floating IP (direct server return) is set to Enabled. See the CLI sketch after this list.
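If you prefer the Azure CLI to the portal, the following is a rough, hypothetical equivalent of the four steps above. The load balancer, front-end, back-end, probe, and rule names, and the front-end/back-end port 61000, are placeholders; the resource group, vnet, and subnet names reuse the examples from Creating resources in Azure.

$ az network lb create -g azrhelclirsgrp -n ha-lb --sku Basic --vnet-name azrhelclivnet1 --subnet azrhelclisubnet1 --frontend-ip-name ha-frontend --backend-pool-name ha-backend
$ az network lb probe create -g azrhelclirsgrp --lb-name ha-lb -n ha-probe --protocol tcp --port 61000
$ az network lb rule create -g azrhelclirsgrp --lb-name ha-lb -n ha-rule --protocol tcp --frontend-port 61000 --backend-port 61000 --frontend-ip-name ha-frontend --backend-pool-name ha-backend --probe-name ha-probe --floating-ip true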
2.14. Configuring the load balancer resource agent
After you have created the health probe, you must configure the load balancer resource agent. This resource agent runs a service that answers health probe requests from the Azure load balancer and removes cluster nodes that do not answer requests.
Procedure
Install the nmap-ncat and resource-agents packages on all nodes.

# yum install nmap-ncat resource-agents
Perform the following steps on a single node.
Create the pcs resources and group. Use your load balancer FrontendIP for the IPaddr2 address.

# pcs resource create resource-name IPaddr2 ip="10.0.0.7" --group cluster-resources-group
Configure the load balancer resource agent. The port must match the port that you configured for the load balancer health probe (for example, 61000).

# pcs resource create resource-loadbalancer-name azure-lb port=port-number --group cluster-resources-group
Verification step
Run the pcs status command to see the results.

[root@node01 clouduser]# pcs status
Example:
Cluster name: clusterfence01
Stack: corosync
Current DC: node02 (version 1.1.16-12.el7_4.7-94ff4df) - partition with quorum
Last updated: Tue Jan 30 12:42:35 2018
Last change: Tue Jan 30 12:26:42 2018 by root via cibadmin on node01

3 nodes configured
3 resources configured

Online: [ node01 node02 node03 ]

Full list of resources:

  clusterfence  (stonith:fence_azure_arm):  Started node01
  Resource Group: g_azure
      vip_azure  (ocf::heartbeat:IPaddr2):   Started node02
      lb_azure   (ocf::heartbeat:azure-lb):  Started node02

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled