Deploying Red Hat Enterprise Linux 8 on public cloud platforms
Obtaining RHEL system images and creating RHEL instances in the public cloud
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Providing feedback on Red Hat documentation
We appreciate your feedback on our documentation. Let us know how we can improve it.
Submitting feedback through Jira (account required)
- Log in to the Jira website.
- Click Create in the top navigation bar.
- Enter a descriptive title in the Summary field.
- Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation.
- Click Create at the bottom of the dialogue.
Chapter 1. Introducing RHEL on public cloud platforms
Public cloud platforms provide computing resources as a service. Instead of using on-premises hardware, you can run your IT workloads, including Red Hat Enterprise Linux (RHEL) systems, as public cloud instances.
To learn more about RHEL on public cloud platforms, see the following sections.
1.1. Benefits of using RHEL in a public cloud
RHEL as a cloud instance located on a public cloud platform has the following benefits over RHEL on-premises physical systems or virtual machines (VMs):
Flexible and fine-grained allocation of resources
A cloud instance of RHEL runs as a VM on a cloud platform, which typically means a cluster of remote servers maintained by the provider of the cloud service. Therefore, allocating hardware resources to the instance, such as a specific type of CPU or storage, happens on the software level and is easily customizable.
In comparison to a local RHEL system, you are also not limited by the capabilities of your physical host. Instead, you can choose from a variety of features, based on the selection offered by your cloud provider.
Space and cost efficiency
You do not need to own any on-premises servers to host your cloud workloads. This avoids the space, power, and maintenance requirements associated with physical hardware.
Instead, on public cloud platforms, you pay the cloud provider directly for using a cloud instance. The cost is typically based on the hardware allocated to the instance and the time you spend using it. Therefore, you can optimize your costs based on your requirements.
Software-controlled configurations
The entire configuration of a cloud instance is saved as data on the cloud platform, and is controlled by software. Therefore, you can easily create, remove, clone, or migrate the instance. A cloud instance is also operated remotely in a cloud provider console and is connected to remote storage by default.
In addition, you can back up the current state of a cloud instance as a snapshot at any time. Afterwards, you can load the snapshot to restore the instance to the saved state.
Separation from the host and software compatibility
Similarly to a local VM, the RHEL guest operating system on a cloud instance runs on a virtualized kernel. This kernel is separate from the host operating system and from the client system that you use to connect to the instance.
Therefore, any operating system can be installed on the cloud instance. This means that on a RHEL public cloud instance, you can run RHEL-specific applications that cannot be used on your local operating system.
In addition, even if the operating system of the instance becomes unstable or is compromised, your client system is not affected in any way.
1.2. Public cloud use cases for RHEL
Deploying on a public cloud provides many benefits, but might not be the most efficient solution in every scenario. If you are evaluating whether to migrate your RHEL deployments to the public cloud, consider whether your use case will benefit from the advantages of the public cloud.
Beneficial use cases
Deploying public cloud instances is very effective for flexibly increasing and decreasing the active computing power of your deployments, also known as scaling up and scaling down. Therefore, using RHEL on public cloud is recommended in the following scenarios:
- Clusters with high peak workloads and low general performance requirements. Scaling up and down based on your demands can be highly efficient in terms of resource costs.
- Quickly setting up or expanding your clusters. This avoids high upfront costs of setting up local servers.
- Cloud instances are not affected by what happens in your local environment. Therefore, you can use them for backup and disaster recovery.
Potentially problematic use cases
- You are running an existing environment that cannot be adjusted. Customizing a cloud instance to fit the specific needs of an existing deployment may not be cost-effective in comparison with your current host platform.
- You are operating with a hard limit on your budget. Maintaining your deployment in a local data center typically provides less flexibility but more control over the maximum resource costs than the public cloud does.
Next steps
Additional resources
1.3. Frequent concerns when migrating to a public cloud
Moving your RHEL workloads from a local environment to a public cloud platform might raise concerns about the changes involved. The following are the most commonly asked questions.
Will my RHEL work differently as a cloud instance than as a local virtual machine?
In most respects, RHEL instances on a public cloud platform work the same as RHEL virtual machines on a local host, such as an on-premises server. Notable exceptions include:
- Instead of private orchestration interfaces, public cloud instances use provider-specific console interfaces for managing your cloud resources.
- Certain features, such as nested virtualization, may not work correctly. If a specific feature is critical for your deployment, check the feature’s compatibility in advance with your chosen public cloud provider.
Will my data stay safe in a public cloud as opposed to a local server?
The data in your RHEL cloud instances is owned by you, and your public cloud provider does not have any access to it. In addition, major cloud providers support data encryption in transit, which improves the security of data when migrating your virtual machines to the public cloud.
The general security of your RHEL public cloud instances is managed as follows:
- Your public cloud provider is responsible for the security of the cloud hypervisor
- Red Hat provides the security features of the RHEL guest operating systems in your instances
- You manage the specific security settings and practices in your cloud infrastructure
What effect does my geographic region have on the functionality of RHEL public cloud instances?
You can use RHEL instances on a public cloud platform regardless of your geographical location. Therefore, you can run your instances in the same region as your on-premises server.
However, hosting your instances in a physically distant region might cause high latency when operating them. In addition, depending on the public cloud provider, certain regions may provide additional features or be more cost-efficient. Before creating your RHEL instances, review the properties of the hosting regions available for your chosen cloud provider.
1.4. Obtaining RHEL for public cloud deployments
To deploy a RHEL system in a public cloud environment:
- Select the optimal cloud provider for your use case, based on your requirements and the current offer on the market. The cloud providers currently certified for running RHEL instances are Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.
- Create a RHEL cloud instance on your chosen cloud platform. For more information, see Methods for creating RHEL cloud instances.
- To keep your RHEL deployment up-to-date, use Red Hat Update Infrastructure (RHUI).
Additional resources
1.5. Methods for creating RHEL cloud instances
To deploy a RHEL instance on a public cloud platform, you can use one of the following methods:
- Create a system image of RHEL and import it to the cloud platform.
- Purchase a RHEL instance directly from the cloud provider marketplace.
For detailed instructions on using various methods to deploy RHEL instances on the certified cloud platforms, see the following chapters in this document.
Additional resources
Chapter 2. Deploying a Red Hat Enterprise Linux image as a virtual machine on Microsoft Azure
To deploy a Red Hat Enterprise Linux 8 (RHEL 8) image on Microsoft Azure, follow the information below. This chapter:
- Discusses your options for choosing an image
- Lists or refers to system requirements for your host system and virtual machine (VM)
- Provides procedures for creating a custom VM from an ISO image, uploading it to Azure, and launching an Azure VM instance
You can create a custom VM from an ISO image, but Red Hat recommends that you use the Red Hat Image Builder product to create customized images for use on specific cloud providers. With Image Builder, you can create and upload an Azure Disk Image (VHD format). See Composing a Customized RHEL System Image for more information.
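For reference, the following is a minimal command-line sketch of that Image Builder workflow, assuming a hypothetical blueprint file named azure-rhel8.toml that defines a blueprint called azure-rhel8; the exact blueprint contents and the way you upload the result depend on your environment.
# composer-cli blueprints push azure-rhel8.toml
# composer-cli compose start azure-rhel8 vhd
# composer-cli compose status
When the compose finishes, composer-cli compose image <compose-UUID> downloads the resulting .vhd file, which you can then upload to Azure as described later in this chapter.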
For a list of Red Hat products that you can use securely on Azure, refer to Red Hat on Microsoft Azure.
Prerequisites
- Sign up for a Red Hat Customer Portal account.
- Sign up for a Microsoft Azure account.
2.1. Red Hat Enterprise Linux image options on Azure
The following table lists image choices for RHEL 8 on Microsoft Azure, and notes the differences in the image options.
Table 2.1. Image options
Image option | Subscriptions | Sample scenario | Considerations |
---|---|---|---|
Deploy a Red Hat Gold Image. | Use your existing Red Hat subscriptions. | Select a Red Hat Gold Image on Azure. For details on Gold Images and how to access them on Azure, see the Red Hat Cloud Access Reference Guide. | The subscription includes the Red Hat product cost; you pay Microsoft for all other instance costs. |
Deploy a custom image that you move to Azure. | Use your existing Red Hat subscriptions. | Upload your custom image and attach your subscriptions. | The subscription includes the Red Hat product cost; you pay Microsoft for all other instance costs. |
Deploy an existing Azure image that includes RHEL. | The Azure images include a Red Hat product. | Choose a RHEL image when you create a VM using the Azure console, or choose a VM from the Azure Marketplace. | You pay Microsoft hourly on a pay-as-you-go model. Such images are called "on-demand." Azure provides support for on-demand images through a support agreement. Red Hat provides updates to the images. Azure makes the updates available through the Red Hat Update Infrastructure (RHUI). |
You can create a custom image for Azure using Red Hat Image Builder. See Composing a Customized RHEL System Image for more information.
2.2. Understanding base images
This section includes information about using preconfigured base images and their configuration settings.
2.2.1. Using a custom base image
To manually configure a virtual machine (VM), first create a base (starter) VM image. Then, you can modify configuration settings and add the packages the VM requires to operate on the cloud. You can make additional configuration changes for your specific application after you upload the image.
To prepare a cloud image of RHEL, follow the instructions in the sections below. To prepare a Hyper-V cloud image of RHEL, see Prepare a Red Hat-based virtual machine from Hyper-V Manager.
2.2.2. Required system packages
To create and configure a base image of RHEL, your host system must have the following packages installed.
Table 2.2. System packages
Package | Repository | Description |
---|---|---|
libvirt | rhel-8-for-x86_64-appstream-rpms | Open source API, daemon, and management tool for managing platform virtualization |
virt-install | rhel-8-for-x86_64-appstream-rpms | A command-line utility for building VMs |
libguestfs | rhel-8-for-x86_64-appstream-rpms | A library for accessing and modifying VM file systems |
libguestfs-tools | rhel-8-for-x86_64-appstream-rpms | System administration tools for VMs; includes the guestfish utility |
2.2.3. Azure VM configuration settings
Azure VMs must have the following configuration settings. Some of these settings are enabled during the initial VM creation. Other settings are set when provisioning the VM image for Azure. Keep these settings in mind as you move through the procedures. Refer to them as necessary.
Table 2.3. VM configuration settings
Setting | Recommendation |
---|---|
ssh | ssh must be enabled to provide remote access to your Azure VMs. |
dhcp | The primary virtual adapter should be configured for dhcp (IPv4 only). |
Swap Space | Do not create a dedicated swap file or swap partition. You can configure swap space with the Windows Azure Linux Agent (WALinuxAgent). |
NIC | Choose virtio for the primary virtual network adapter. |
encryption | For custom images, use Network Bound Disk Encryption (NBDE) for full disk encryption on Azure. |
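If you do want swap space on the instance, the table above notes that WALinuxAgent can provide it on the temporary resource disk. The following /etc/waagent.conf excerpt is a minimal sketch of that approach; the 2048 MB size is an arbitrary example value, not a recommendation from this document.
ResourceDisk.Format=y
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048
Note that the provisioning procedure later in this chapter sets ResourceDisk.EnableSwap=n, which disables this behavior for the prepared image.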
2.2.4. Creating a base image from an ISO image
The following procedure lists the steps and initial configuration requirements for creating a custom image from an ISO image. Once you have configured the image, you can use the image as a template for creating additional VM instances.
Prerequisites
- Ensure that you have enabled your host machine for virtualization. See Enabling virtualization in RHEL 8 for information and procedures.
Procedure
- Download the latest Red Hat Enterprise Linux 8 DVD ISO image from the Red Hat Customer Portal.
Create and start a basic Red Hat Enterprise Linux VM. For instructions, see Creating virtual machines.
If you use the command line to create your VM, ensure that you set the default memory and CPUs to the capacity you want for the VM. Set your virtual network interface to virtio.
For example, the following command creates a kvmtest VM using the rhel-8.0-x86_64-kvm.qcow2 image:
# virt-install \
    --name kvmtest --memory 2048 --vcpus 2 \
    --disk rhel-8.0-x86_64-kvm.qcow2,bus=virtio \
    --import --os-variant=rhel8.0
If you use the web console to create your VM, follow the procedure in Creating virtual machines using the web console, with these caveats:
- Do not check Immediately Start VM.
- Change your Memory size to your preferred settings.
- Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and changed your vCPUs to the capacity settings you want for the VM.
Review the following additional installation selections and modifications.
- Select Minimal Install with the standard RHEL option.
For Installation Destination, select Custom Storage Configuration. Use the following configuration information to make your selections.
- Verify at least 500 MB for /boot.
- For file system, use xfs, ext4, or ext3 for both boot and root partitions.
- Remove swap space. Swap space is configured on the physical blade server in Azure by the WALinuxAgent.
- On the Installation Summary screen, select Network and Host Name. Switch Ethernet to On.
When the install starts:
- Create a root password.
- Create an administrative user account.
- When installation is complete, reboot the VM and log in to the root account.
- Once you are logged in as root, you can configure the image.
2.3. Configuring a custom base image for Microsoft Azure
To deploy a RHEL 8 virtual machine (VM) with specific settings in Azure, you can create a custom base image for the VM. The following sections describe additional configuration changes that Azure requires.
2.3.1. Installing Hyper-V device drivers
Microsoft provides network and storage device drivers as part of their Linux Integration Services (LIS) for Hyper-V package. You may need to install Hyper-V device drivers on the VM image prior to provisioning it as an Azure virtual machine (VM). Use the lsinitrd | grep hv command to verify that the drivers are installed.
Procedure
Enter the following grep command to determine if the required Hyper-V device drivers are installed.
# lsinitrd | grep hv
In the example below, all required drivers are installed.
# lsinitrd | grep hv
drwxr-xr-x   2 root root        0 Aug 12 14:21 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/hv
-rw-r--r--   1 root root    31272 Aug 11 08:45 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/hv/hv_vmbus.ko.xz
-rw-r--r--   1 root root    25132 Aug 11 08:46 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/net/hyperv/hv_netvsc.ko.xz
-rw-r--r--   1 root root     9796 Aug 11 08:45 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/scsi/hv_storvsc.ko.xz
If not all of the required drivers are installed, complete the remaining steps.
Note: An hv_vmbus driver may already exist in the environment. Even if this driver is present, complete the following steps.
- Create a file named hv.conf in /etc/dracut.conf.d.
- Add the following driver parameters to the hv.conf file:
add_drivers+=" hv_vmbus "
add_drivers+=" hv_netvsc "
add_drivers+=" hv_storvsc "
add_drivers+=" nvme "
Note: Note the spaces before and after the quotes, for example, add_drivers+=" hv_vmbus ". This ensures that unique drivers are loaded in the event that other Hyper-V drivers already exist in the environment.
- Regenerate the initramfs image:
# dracut -f -v --regenerate-all
Verification
- Reboot the machine.
- Run the lsinitrd | grep hv command to verify that the drivers are installed.
2.3.2. Making configuration changes required for a Microsoft Azure deployment
Before you deploy your custom base image to Azure, you must perform additional configuration changes to ensure that the virtual machine (VM) can properly operate in Azure.
Procedure
- Log in to the VM.
Register the VM, and enable the Red Hat Enterprise Linux 8 repository:
# subscription-manager register --auto-attach
Installed Product Current Status:
Product Name: Red Hat Enterprise Linux for x86_64
Status: Subscribed
Ensure that the cloud-init and hyperv-daemons packages are installed:
# yum install cloud-init hyperv-daemons -y
Create cloud-init configuration files that are needed for integration with Azure services:
- To enable logging to the Hyper-V Data Exchange Service (KVP), create the /etc/cloud/cloud.cfg.d/10-azure-kvp.cfg configuration file and add the following lines to that file:
reporting:
    logging:
        type: log
    telemetry:
        type: hyperv
- To add Azure as a datasource, create the /etc/cloud/cloud.cfg.d/91-azure_datasource.cfg configuration file, and add the following lines to that file:
datasource_list: [ Azure ]
datasource:
    Azure:
        apply_network_config: False
To ensure that specific kernel modules are blocked from loading automatically, edit or create the /etc/modprobe.d/blocklist.conf file and add the following lines to that file:
blacklist nouveau
blacklist lbm-nouveau
blacklist floppy
blacklist amdgpu
blacklist skx_edac
blacklist intel_cstate
Modify udev network device rules:
- Remove the following persistent network device rules if present:
# rm -f /etc/udev/rules.d/70-persistent-net.rules
# rm -f /etc/udev/rules.d/75-persistent-net-generator.rules
# rm -f /etc/udev/rules.d/80-net-name-slot-rules
- To ensure that Accelerated Networking on Azure works as intended, create a new network device rule /etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules and add the following line to it:
SUBSYSTEM=="net", DRIVERS=="hv_pci", ACTION=="add", ENV{NM_UNMANAGED}="1"
Set the sshd service to start automatically:
# systemctl enable sshd
# systemctl is-enabled sshd
Modify kernel boot parameters:
- Open the /etc/default/grub file, and ensure the GRUB_TIMEOUT line has the following value:
GRUB_TIMEOUT=10
- Remove the following options from the end of the GRUB_CMDLINE_LINUX line if present:
rhgb quiet
- Ensure the /etc/default/grub file contains the following lines with all the specified options:
GRUB_CMDLINE_LINUX="loglevel=3 crashkernel=auto console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300"
GRUB_TIMEOUT_STYLE=countdown
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
- Regenerate the grub.cfg file.
On a BIOS-based machine:
# grub2-mkconfig -o /boot/grub2/grub.cfg
On a UEFI-based machine:
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
If your system uses a non-default location for grub.cfg, adjust the command accordingly.
Configure the Windows Azure Linux Agent (WALinuxAgent):
- Install and enable the WALinuxAgent package:
# yum install WALinuxAgent -y
# systemctl enable waagent
- To ensure that a swap partition is not used in provisioned VMs, edit the following lines in the /etc/waagent.conf file:
Provisioning.DeleteRootPassword=y
ResourceDisk.Format=n
ResourceDisk.EnableSwap=n
Prepare the VM for Azure provisioning:
Unregister the VM from Red Hat Subscription Manager.
# subscription-manager unregister
Clean up the existing provisioning details.
# waagent -force -deprovision
Note: This command generates warnings, which are expected because Azure handles the provisioning of VMs automatically.
Clean the shell history and shut down the VM:
# export HISTSIZE=0
# poweroff
2.4. Converting the image to a fixed VHD format
All Microsoft Azure VM images must be in a fixed VHD format. The image must be aligned on a 1 MB boundary before it is converted to VHD. To convert the image from qcow2 to a fixed VHD format and align the image, see the following procedure. Once you have converted the image, you can upload it to Azure.
Procedure
Convert the image from qcow2 to raw format:
$ qemu-img convert -f qcow2 -O raw <image-name>.qcow2 <image-name>.raw
Create a shell script using the contents below.
#!/bin/bash
MB=$((1024 * 1024))
size=$(qemu-img info -f raw --output json "$1" | gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
rounded_size=$((($size/$MB + 1) * $MB))
if [ $(($size % $MB)) -eq 0 ]
then
    echo "Your image is already aligned. You do not need to resize."
    exit 1
fi
echo "rounded size = $rounded_size"
export rounded_size
Run the script. This example uses the name align.sh:
$ sh align.sh <image-xxx>.raw
- If the message "Your image is already aligned. You do not need to resize." displays, proceed to the following step.
- If a value displays, your image is not aligned.
Use the following command to convert the file to a fixed VHD format. The sample uses qemu-img version 2.12.0.
$ qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx>.raw <image.xxx>.vhd
Once converted, the VHD file is ready to upload to Azure.
If the raw image is not aligned, complete the following steps to align it:
- Resize the raw file using the rounded value displayed when you ran the verification script:
$ qemu-img resize -f raw <image-xxx>.raw <rounded-value>
- Convert the raw image file to a VHD format. The sample uses qemu-img version 2.12.0.
$ qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx>.raw <image.xxx>.vhd
Once converted, the VHD file is ready to upload to Azure.
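As an optional check before uploading, you can inspect the converted file with qemu-img; this is a quick sketch that reuses the placeholder file name from the examples above. For a correctly converted image, the reported file format is vpc and the virtual size is a whole number of MiB.
$ qemu-img info <image.xxx>.vhd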
2.5. Installing the Azure CLI
Complete the following steps to install the Azure command line interface (Azure CLI 2.1). Azure CLI 2.1 is a Python-based utility that creates and manages VMs in Azure.
Prerequisites
- You need to have an account with Microsoft Azure before you can use the Azure CLI.
- The Azure CLI installation requires Python 3.x.
Procedure
Import the Microsoft repository key.
$ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
Create a local Azure CLI repository entry.
$ sudo sh -c 'echo -e "[azure-cli]\nname=Azure CLI\nbaseurl=https://packages.microsoft.com/yumrepos/azure-cli\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/azure-cli.repo'
Update the yum package index:
$ yum check-update
Check your Python version (python --version) and install Python 3.x, if necessary:
$ sudo yum install python3
Install the Azure CLI.
$ sudo yum install -y azure-cli
Run the Azure CLI.
$ az
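To confirm that the installation succeeded, you can print the installed version; this is a simple check, and the reported version depends on the packaged release available in the Microsoft repository.
$ az --version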
Additional resources
2.6. Creating resources in Azure
Complete the following procedure to create the Azure resources that you need before you can upload the VHD
file and create the Azure image.
Procedure
Authenticate your system with Azure and log in.
$ az login
Note: If a browser is available in your environment, the CLI opens your browser to the Azure sign-in page. See Sign in with Azure CLI for more information and options.
Create a resource group in an Azure region.
$ az group create --name <resource-group> --location <azure-region>
Example:
[clouduser@localhost]$ az group create --name azrhelclirsgrp --location southcentralus { "id": "/subscriptions//resourceGroups/azrhelclirsgrp", "location": "southcentralus", "managedBy": null, "name": "azrhelclirsgrp", "properties": { "provisioningState": "Succeeded" }, "tags": null }
Create a storage account. See SKU Types for more information about valid SKU values.
$ az storage account create -l <azure-region> -n <storage-account-name> -g <resource-group> --sku <sku_type>
Example:
[clouduser@localhost]$ az storage account create -l southcentralus -n azrhelclistact -g azrhelclirsgrp --sku Standard_LRS { "accessTier": null, "creationTime": "2017-04-05T19:10:29.855470+00:00", "customDomain": null, "encryption": null, "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Storage/storageAccounts/azrhelclistact", "kind": "StorageV2", "lastGeoFailoverTime": null, "location": "southcentralus", "name": "azrhelclistact", "primaryEndpoints": { "blob": "https://azrhelclistact.blob.core.windows.net/", "file": "https://azrhelclistact.file.core.windows.net/", "queue": "https://azrhelclistact.queue.core.windows.net/", "table": "https://azrhelclistact.table.core.windows.net/" }, "primaryLocation": "southcentralus", "provisioningState": "Succeeded", "resourceGroup": "azrhelclirsgrp", "secondaryEndpoints": null, "secondaryLocation": null, "sku": { "name": "Standard_LRS", "tier": "Standard" }, "statusOfPrimary": "available", "statusOfSecondary": null, "tags": {}, "type": "Microsoft.Storage/storageAccounts" }
Get the storage account connection string.
$ az storage account show-connection-string -n <storage-account-name> -g <resource-group>
Example:
[clouduser@localhost]$ az storage account show-connection-string -n azrhelclistact -g azrhelclirsgrp { "connectionString": "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...==" }
Export the connection string by copying the connection string and pasting it into the following command. This string connects your system to the storage account.
$ export AZURE_STORAGE_CONNECTION_STRING="<storage-connection-string>"
Example:
[clouduser@localhost]$ export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...=="
Create the storage container.
$ az storage container create -n <container-name>
Example:
[clouduser@localhost]$ az storage container create -n azrhelclistcont { "created": true }
Create a virtual network.
$ az network vnet create -g <resource group> --name <vnet-name> --subnet-name <subnet-name>
Example:
[clouduser@localhost]$ az network vnet create --resource-group azrhelclirsgrp --name azrhelclivnet1 --subnet-name azrhelclisubnet1 { "newVNet": { "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] }, "dhcpOptions": { "dnsServers": [] }, "etag": "W/\"\"", "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1", "location": "southcentralus", "name": "azrhelclivnet1", "provisioningState": "Succeeded", "resourceGroup": "azrhelclirsgrp", "resourceGuid": "0f25efee-e2a6-4abe-a4e9-817061ee1e79", "subnets": [ { "addressPrefix": "10.0.0.0/24", "etag": "W/\"\"", "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1/subnets/azrhelclisubnet1", "ipConfigurations": null, "name": "azrhelclisubnet1", "networkSecurityGroup": null, "provisioningState": "Succeeded", "resourceGroup": "azrhelclirsgrp", "resourceNavigationLinks": null, "routeTable": null } ], "tags": {}, "type": "Microsoft.Network/virtualNetworks", "virtualNetworkPeerings": null } }
Additional resources
2.7. Uploading and creating an Azure image
Complete the following steps to upload the VHD
file to your container and create an Azure custom image.
The exported storage connection string does not persist after a system reboot. If any of the commands in the following steps fail, export the connection string again.
Procedure
Upload the VHD file to the storage container. It may take several minutes. To get a list of storage containers, enter the az storage container list command.
$ az storage blob upload \
    --account-name <storage-account-name> --container-name <container-name> \
    --type page --file <path-to-vhd> --name <image-name>.vhd
Example:
[clouduser@localhost]$ az storage blob upload \
    --account-name azrhelclistact --container-name azrhelclistcont \
    --type page --file rhel-image-{ProductNumber}.vhd --name rhel-image-{ProductNumber}.vhd
Percent complete: %100.0
Get the URL for the uploaded VHD file to use in the following step:
$ az storage blob url -c <container-name> -n <image-name>.vhd
Example:
$ az storage blob url -c azrhelclistcont -n rhel-image-8.vhd "https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd"
Create the Azure custom image.
$ az image create -n <image-name> -g <resource-group> -l <azure-region> --source <URL> --os-type linux
Note: The default hypervisor generation of the VM is V1. You can optionally specify a V2 hypervisor generation by including the option --hyper-v-generation V2. Generation 2 VMs use a UEFI-based boot architecture. See Support for generation 2 VMs on Azure for information about generation 2 VMs.
The command may return the error "Only blobs formatted as VHDs can be imported." This error may mean that the image was not aligned to the nearest 1 MB boundary before it was converted to VHD.
Example:
$ az image create -n rhel8 -g azrhelclirsgrp2 -l southcentralus --source https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd --os-type linux
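If you need a generation 2 (UEFI-based) image, the note above describes the --hyper-v-generation option; the following variant is a sketch that reuses the example values from this section with a hypothetical image name of rhel8-gen2.
$ az image create -n rhel8-gen2 -g azrhelclirsgrp2 -l southcentralus --source https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd --os-type linux --hyper-v-generation V2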
2.8. Creating and starting the VM in Azure
The following steps provide the minimum command options to create a managed-disk Azure VM from the image. See az vm create for additional options.
Procedure
Enter the following command to create the VM.
$ az vm create \
    -g <resource-group> -l <azure-region> -n <vm-name> \
    --vnet-name <vnet-name> --subnet <subnet-name> --size Standard_A2 \
    --os-disk-name <simple-name> --admin-username <administrator-name> \
    --generate-ssh-keys --image <path-to-image>
Note: The option --generate-ssh-keys creates a private/public key pair. Private and public key files are created in ~/.ssh on your system. The public key is added to the authorized_keys file on the VM for the user specified by the --admin-username option. See Other authentication methods for additional information.
Example:
[clouduser@localhost]$ az vm create \
    -g azrhelclirsgrp2 -l southcentralus -n rhel-azure-vm-1 \
    --vnet-name azrhelclivnet1 --subnet azrhelclisubnet1 --size Standard_A2 \
    --os-disk-name vm-1-osdisk --admin-username clouduser \
    --generate-ssh-keys --image rhel8
{
  "fqdns": "",
  "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Compute/virtualMachines/rhel-azure-vm-1",
  "location": "southcentralus",
  "macAddress": "",
  "powerState": "VM running",
  "privateIpAddress": "10.0.0.4",
  "publicIpAddress": "<public-IP-address>",
  "resourceGroup": "azrhelclirsgrp2"
Note the publicIpAddress. You need this address to log in to the VM in the following step.
Start an SSH session and log in to the VM:
[clouduser@localhost]$ ssh -i /home/clouduser/.ssh/id_rsa clouduser@<public-IP-address>
The authenticity of host '<public-IP-address>' can't be established.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '<public-IP-address>' (ECDSA) to the list of known hosts.
[clouduser@rhel-azure-vm-1 ~]$
If you see a user prompt, you have successfully deployed your Azure VM.
You can now go to the Microsoft Azure portal and check the audit logs and properties of your resources. You can manage your VMs directly in this portal. If you are managing multiple VMs, you should use the Azure CLI. The Azure CLI provides a powerful interface to your resources in Azure. Enter az --help in the CLI or see the Azure CLI command reference to learn more about the commands you use to manage your VMs in Microsoft Azure.
2.9. Other authentication methods
While recommended for increased security, using the Azure-generated key pair is not required. The following examples show two methods for SSH authentication.
Example 1: These command options provision a new VM without generating a public key file. They allow SSH authentication using a password.
$ az vm create \
    -g <resource-group> -l <azure-region> -n <vm-name> \
    --vnet-name <vnet-name> --subnet <subnet-name> --size Standard_A2 \
    --os-disk-name <simple-name> --authentication-type password \
    --admin-username <administrator-name> --admin-password <ssh-password> --image <path-to-image>
$ ssh <admin-username>@<public-ip-address>
Example 2: These command options provision a new Azure VM and allow SSH authentication using an existing public key file.
$ az vm create \
    -g <resource-group> -l <azure-region> -n <vm-name> \
    --vnet-name <vnet-name> --subnet <subnet-name> --size Standard_A2 \
    --os-disk-name <simple-name> --admin-username <administrator-name> \
    --ssh-key-value <path-to-existing-ssh-key> --image <path-to-image>
$ ssh -i <path-to-existing-ssh-key> <admin-username>@<public-ip-address>
2.10. Attaching Red Hat subscriptions
To attach your Red Hat subscription to a RHEL instance, use the following steps.
Prerequisites
- You must have enabled your subscriptions.
Procedure
Register your system.
# subscription-manager register --auto-attach
Attach your subscriptions.
- You can use an activation key to attach subscriptions. See Creating Red Hat Customer Portal Activation Keys for more information.
- Alternatively, you can manually attach a subscription using the ID of the subscription pool (Pool ID). See Attaching and Removing Subscriptions Through the Command Line.
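For example, registration with an activation key follows the pattern below; the key name and organization ID are placeholders that you replace with values from your own Red Hat Customer Portal account.
# subscription-manager register --activationkey=<activation-key-name> --org=<organization-ID>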
2.11. Setting up automatic registration on Azure Gold Images
To make deploying RHEL 8 virtual machines (VMs) on Microsoft Azure faster and more convenient, you can set up Gold Images of RHEL 8 to be automatically registered to the Red Hat Subscription Manager (RHSM).
Prerequisites
RHEL 8 Gold Images are available to you in Microsoft Azure. For instructions, see Using Gold Images on Azure.
Note: A Microsoft Azure account can only be attached to a single Red Hat account at a time. Therefore, ensure no other users require access to the Azure account before attaching it to your Red Hat one.
Procedure
- Use the Gold Image to create a RHEL 8 VM in your Azure instance. For instructions, see Creating and starting the VM in Azure.
- Start the created VM.
In the RHEL 8 VM, enable automatic registration.
# subscription-manager config --rhsmcertd.auto_registration=1
Enable the rhsmcertd service:
# systemctl enable rhsmcertd.service
Disable the redhat.repo repository:
# subscription-manager config --rhsm.manage_repos=0
- Power off the VM, and save it as a managed image on Azure. For instructions, see How to create a managed image of a virtual machine or VHD.
- Create VMs using the managed image. They will be automatically subscribed to RHSM.
Verification
In a RHEL 8 VM created using the above instructions, verify the system is registered to RHSM by executing the subscription-manager identity command. On a successfully registered system, this displays the UUID of the system. For example:
# subscription-manager identity
system identity: fdc46662-c536-43fb-a18a-bbcb283102b7
name: 192.168.122.222
org name: 6340056
org ID: 6340056
2.12. Additional resources
Chapter 3. Configuring a Red Hat High Availability cluster on Microsoft Azure
To configure a Red Hat High Availability (HA) cluster on Azure using Azure virtual machine (VM) instances as cluster nodes, see the following sections. The procedures in these sections assume that you are creating a custom image for Azure. You have a number of options for obtaining the RHEL 8 images you use for your cluster. See Red Hat Enterprise Linux Image Options on Azure for information about image options for Azure.
The following sections provide:
- Prerequisite procedures for setting up your environment for Azure. After you set up your environment, you can create and configure Azure VM instances.
- Procedures specific to the creation of HA clusters, which transform individual nodes into a cluster of HA nodes on Azure. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing Azure network resource agents.
Prerequisites
- Sign up for a Red Hat Customer Portal account.
- Sign up for a Microsoft Azure account with administrator privileges.
- You need to install the Azure command line interface (CLI). For more information, see Installing the Azure CLI.
3.1. The benefits of using high-availability clusters on public cloud platforms
A high-availability (HA) cluster is a set of computers (called nodes) that are linked together to run a specific workload. The purpose of HA clusters is to provide redundancy in case of a hardware or software failure. If a node in the HA cluster fails, the Pacemaker cluster resource manager distributes the workload to other nodes and no noticeable downtime occurs in the services that are running on the cluster.
You can also run HA clusters on public cloud platforms. In this case, you would use virtual machine (VM) instances in the cloud as the individual cluster nodes. Using HA clusters on a public cloud platform has the following benefits:
- Improved availability: In case of a VM failure, the workload is quickly redistributed to other nodes, so running services are not disrupted.
- Scalability: Additional nodes can be started when demand is high and stopped when demand is low.
- Cost-effectiveness: With the pay-as-you-go pricing, you pay only for nodes that are running.
- Simplified management: Some public cloud platforms offer management interfaces to make configuring HA clusters easier.
To enable HA on your Red Hat Enterprise Linux (RHEL) systems, Red Hat offers a High Availability Add-On. The High Availability Add-On provides all necessary components for creating HA clusters on RHEL systems. The components include high availability service management and cluster administration tools.
Additional resources
3.2. Creating resources in Azure
Complete the following procedure to create a region, resource group, storage account, virtual network, and availability set. You need these resources to set up a cluster on Microsoft Azure.
Procedure
Authenticate your system with Azure and log in.
$ az login
Note: If a browser is available in your environment, the CLI opens your browser to the Azure sign-in page.
Example:
[clouduser@localhost]$ az login To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code FDMSCMETZ to authenticate. [ { "cloudName": "AzureCloud", "id": "Subscription ID", "isDefault": true, "name": "MySubscriptionName", "state": "Enabled", "tenantId": "Tenant ID", "user": { "name": "clouduser@company.com", "type": "user" } } ]
Create a resource group in an Azure region.
$ az group create --name resource-group --location azure-region
Example:
[clouduser@localhost]$ az group create --name azrhelclirsgrp --location southcentralus { "id": "/subscriptions//resourceGroups/azrhelclirsgrp", "location": "southcentralus", "managedBy": null, "name": "azrhelclirsgrp", "properties": { "provisioningState": "Succeeded" }, "tags": null }
Create a storage account.
$ az storage account create -l azure-region -n storage-account-name -g resource-group --sku sku_type --kind StorageV2
Example:
[clouduser@localhost]$ az storage account create -l southcentralus -n azrhelclistact -g azrhelclirsgrp --sku Standard_LRS --kind StorageV2 { "accessTier": null, "creationTime": "2017-04-05T19:10:29.855470+00:00", "customDomain": null, "encryption": null, "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Storage/storageAccounts/azrhelclistact", "kind": "StorageV2", "lastGeoFailoverTime": null, "location": "southcentralus", "name": "azrhelclistact", "primaryEndpoints": { "blob": "https://azrhelclistact.blob.core.windows.net/", "file": "https://azrhelclistact.file.core.windows.net/", "queue": "https://azrhelclistact.queue.core.windows.net/", "table": "https://azrhelclistact.table.core.windows.net/" }, "primaryLocation": "southcentralus", "provisioningState": "Succeeded", "resourceGroup": "azrhelclirsgrp", "secondaryEndpoints": null, "secondaryLocation": null, "sku": { "name": "Standard_LRS", "tier": "Standard" }, "statusOfPrimary": "available", "statusOfSecondary": null, "tags": {}, "type": "Microsoft.Storage/storageAccounts" }
Get the storage account connection string.
$ az storage account show-connection-string -n storage-account-name -g resource-group
Example:
[clouduser@localhost]$ az storage account show-connection-string -n azrhelclistact -g azrhelclirsgrp { "connectionString": "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...==" }
Export the connection string by copying the connection string and pasting it into the following command. This string connects your system to the storage account.
$ export AZURE_STORAGE_CONNECTION_STRING="storage-connection-string"
Example:
[clouduser@localhost]$ export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...=="
Create the storage container.
$ az storage container create -n container-name
Example:
[clouduser@localhost]$ az storage container create -n azrhelclistcont { "created": true }
Create a virtual network. All cluster nodes must be in the same virtual network.
$ az network vnet create -g resource group --name vnet-name --subnet-name subnet-name
Example:
[clouduser@localhost]$ az network vnet create --resource-group azrhelclirsgrp --name azrhelclivnet1 --subnet-name azrhelclisubnet1 { "newVNet": { "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] }, "dhcpOptions": { "dnsServers": [] }, "etag": "W/\"\"", "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1", "location": "southcentralus", "name": "azrhelclivnet1", "provisioningState": "Succeeded", "resourceGroup": "azrhelclirsgrp", "resourceGuid": "0f25efee-e2a6-4abe-a4e9-817061ee1e79", "subnets": [ { "addressPrefix": "10.0.0.0/24", "etag": "W/\"\"", "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1/subnets/azrhelclisubnet1", "ipConfigurations": null, "name": "azrhelclisubnet1", "networkSecurityGroup": null, "provisioningState": "Succeeded", "resourceGroup": "azrhelclirsgrp", "resourceNavigationLinks": null, "routeTable": null } ], "tags": {}, "type": "Microsoft.Network/virtualNetworks", "virtualNetworkPeerings": null } }
Create an availability set. All cluster nodes must be in the same availability set.
$ az vm availability-set create --name MyAvailabilitySet --resource-group MyResourceGroup
Example:
[clouduser@localhost]$ az vm availability-set create --name rhelha-avset1 --resource-group azrhelclirsgrp { "additionalProperties": {}, "id": "/subscriptions/.../resourceGroups/azrhelclirsgrp/providers/Microsoft.Compute/availabilitySets/rhelha-avset1", "location": "southcentralus", "name": “rhelha-avset1", "platformFaultDomainCount": 2, "platformUpdateDomainCount": 5, [omitted]
Additional resources
3.3. Required system packages for High Availability
The procedure assumes you are creating a VM image for Azure HA using Red Hat Enterprise Linux. To successfully complete the procedure, the following packages must be installed.
Table 3.1. System packages
Package | Repository | Description |
---|---|---|
libvirt | rhel-8-for-x86_64-appstream-rpms | Open source API, daemon, and management tool for managing platform virtualization |
virt-install | rhel-8-for-x86_64-appstream-rpms | A command-line utility for building VMs |
libguestfs | rhel-8-for-x86_64-appstream-rpms | A library for accessing and modifying VM file systems |
libguestfs-tools | rhel-8-for-x86_64-appstream-rpms | System administration tools for VMs; includes the guestfish utility |
3.4. Azure VM configuration settings
Azure VMs must have the following configuration settings. Some of these settings are enabled during the initial VM creation. Other settings are set when provisioning the VM image for Azure. Keep these settings in mind as you move through the procedures. Refer to them as necessary.
Table 3.2. VM configuration settings
Setting | Recommendation |
---|---|
ssh | ssh must be enabled to provide remote access to your Azure VMs. |
dhcp | The primary virtual adapter should be configured for dhcp (IPv4 only). |
Swap Space | Do not create a dedicated swap file or swap partition. You can configure swap space with the Windows Azure Linux Agent (WALinuxAgent). |
NIC | Choose virtio for the primary virtual network adapter. |
encryption | For custom images, use Network Bound Disk Encryption (NBDE) for full disk encryption on Azure. |
3.5. Installing Hyper-V device drivers
Microsoft provides network and storage device drivers as part of their Linux Integration Services (LIS) for Hyper-V package. You may need to install Hyper-V device drivers on the VM image prior to provisioning it as an Azure virtual machine (VM). Use the lsinitrd | grep hv command to verify that the drivers are installed.
Procedure
Enter the following grep command to determine if the required Hyper-V device drivers are installed.
# lsinitrd | grep hv
In the example below, all required drivers are installed.
# lsinitrd | grep hv
drwxr-xr-x   2 root root        0 Aug 12 14:21 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/hv
-rw-r--r--   1 root root    31272 Aug 11 08:45 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/hv/hv_vmbus.ko.xz
-rw-r--r--   1 root root    25132 Aug 11 08:46 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/net/hyperv/hv_netvsc.ko.xz
-rw-r--r--   1 root root     9796 Aug 11 08:45 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/scsi/hv_storvsc.ko.xz
If not all of the required drivers are installed, complete the remaining steps.
Note: An hv_vmbus driver may already exist in the environment. Even if this driver is present, complete the following steps.
- Create a file named hv.conf in /etc/dracut.conf.d.
- Add the following driver parameters to the hv.conf file:
add_drivers+=" hv_vmbus "
add_drivers+=" hv_netvsc "
add_drivers+=" hv_storvsc "
add_drivers+=" nvme "
Note: Note the spaces before and after the quotes, for example, add_drivers+=" hv_vmbus ". This ensures that unique drivers are loaded in the event that other Hyper-V drivers already exist in the environment.
- Regenerate the initramfs image:
# dracut -f -v --regenerate-all
Verification
- Reboot the machine.
- Run the lsinitrd | grep hv command to verify that the drivers are installed.
3.6. Making configuration changes required for a Microsoft Azure deployment
Before you deploy your custom base image to Azure, you must perform additional configuration changes to ensure that the virtual machine (VM) can properly operate in Azure.
Procedure
- Log in to the VM.
Register the VM, and enable the Red Hat Enterprise Linux 8 repository:
# subscription-manager register --auto-attach
Installed Product Current Status:
Product Name: Red Hat Enterprise Linux for x86_64
Status: Subscribed
Ensure that the cloud-init and hyperv-daemons packages are installed:
# yum install cloud-init hyperv-daemons -y
Create cloud-init configuration files that are needed for integration with Azure services:
- To enable logging to the Hyper-V Data Exchange Service (KVP), create the /etc/cloud/cloud.cfg.d/10-azure-kvp.cfg configuration file and add the following lines to that file:
reporting:
    logging:
        type: log
    telemetry:
        type: hyperv
- To add Azure as a datasource, create the /etc/cloud/cloud.cfg.d/91-azure_datasource.cfg configuration file, and add the following lines to that file:
datasource_list: [ Azure ]
datasource:
    Azure:
        apply_network_config: False
To ensure that specific kernel modules are blocked from loading automatically, edit or create the /etc/modprobe.d/blocklist.conf file and add the following lines to that file:
blacklist nouveau
blacklist lbm-nouveau
blacklist floppy
blacklist amdgpu
blacklist skx_edac
blacklist intel_cstate
Modify udev network device rules:
- Remove the following persistent network device rules if present:
# rm -f /etc/udev/rules.d/70-persistent-net.rules
# rm -f /etc/udev/rules.d/75-persistent-net-generator.rules
# rm -f /etc/udev/rules.d/80-net-name-slot-rules
- To ensure that Accelerated Networking on Azure works as intended, create a new network device rule /etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules and add the following line to it:
SUBSYSTEM=="net", DRIVERS=="hv_pci", ACTION=="add", ENV{NM_UNMANAGED}="1"
Set the sshd service to start automatically:
# systemctl enable sshd
# systemctl is-enabled sshd
Modify kernel boot parameters:
- Open the /etc/default/grub file, and ensure the GRUB_TIMEOUT line has the following value:
GRUB_TIMEOUT=10
- Remove the following options from the end of the GRUB_CMDLINE_LINUX line if present:
rhgb quiet
- Ensure the /etc/default/grub file contains the following lines with all the specified options:
GRUB_CMDLINE_LINUX="loglevel=3 crashkernel=auto console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300"
GRUB_TIMEOUT_STYLE=countdown
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
- Regenerate the grub.cfg file.
On a BIOS-based machine:
# grub2-mkconfig -o /boot/grub2/grub.cfg
On a UEFI-based machine:
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
If your system uses a non-default location for grub.cfg, adjust the command accordingly.
Configure the Windows Azure Linux Agent (WALinuxAgent):
- Install and enable the WALinuxAgent package:
# yum install WALinuxAgent -y
# systemctl enable waagent
- To ensure that a swap partition is not used in provisioned VMs, edit the following lines in the /etc/waagent.conf file:
Provisioning.DeleteRootPassword=y
ResourceDisk.Format=n
ResourceDisk.EnableSwap=n
Prepare the VM for Azure provisioning:
Unregister the VM from Red Hat Subscription Manager.
# subscription-manager unregister
Clean up the existing provisioning details.
# waagent -force -deprovision
Note: This command generates warnings, which are expected because Azure handles the provisioning of VMs automatically.
Clean the shell history and shut down the VM:
# export HISTSIZE=0
# poweroff
3.7. Creating an Azure Active Directory application
Complete the following procedure to create an Azure Active Directory (AD) application. The Azure AD application authorizes and automates access for HA operations for all nodes in the cluster.
Prerequisites
- The Azure Command Line Interface (CLI) is installed on your system.
- You are an Administrator or Owner for the Microsoft Azure subscription. You need this authorization to create an Azure AD application.
Procedure
On any node in the HA cluster, log in to your Azure account.
$ az login
Create a json configuration file (in this example, azure-fence-role.json) for a custom role for the Azure fence agent. Use the following configuration, but replace <subscription-id> with your subscription ID.
{
  "Name": "Linux Fence Agent Role",
  "description": "Allows to power-off and start virtual machines",
  "assignableScopes": [
    "/subscriptions/<subscription-id>"
  ],
  "actions": [
    "Microsoft.Compute/*/read",
    "Microsoft.Compute/virtualMachines/powerOff/action",
    "Microsoft.Compute/virtualMachines/start/action"
  ],
  "notActions": [],
  "dataActions": [],
  "notDataActions": []
}
Define the custom role for the Azure fence agent. Use the json file created in the previous step to do this.
$ az role definition create --role-definition azure-fence-role.json
{
  "assignableScopes": [
    "/subscriptions/<my-subscription-id>"
  ],
  "description": "Allows to power-off and start virtual machines",
  "id": "/subscriptions/<my-subscription-id>/providers/Microsoft.Authorization/roleDefinitions/<role-id>",
  "name": "<role-id>",
  "permissions": [
    {
      "actions": [
        "Microsoft.Compute/*/read",
        "Microsoft.Compute/virtualMachines/powerOff/action",
        "Microsoft.Compute/virtualMachines/start/action"
      ],
      "dataActions": [],
      "notActions": [],
      "notDataActions": []
    }
  ],
  "roleName": "Linux Fence Agent Role",
  "roleType": "CustomRole",
  "type": "Microsoft.Authorization/roleDefinitions"
}
- In the Azure web console interface, select Virtual Machine → Click Identity in the left-side menu.
- Select On → Click Save → click Yes to confirm.
- Click Azure role assignments → Add role assignment.
- Select the Scope required for the role, for example Resource Group.
- Select the required Resource Group.
- Optional: Change the Subscription if necessary.
- Select the Linux Fence Agent Role role.
- Click Save.
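If you prefer the command line, the identity and role assignment configured in the console steps above can typically also be set with the Azure CLI. The following sketch assumes a system-assigned managed identity and uses placeholder resource group, VM, and principal ID values; verify the exact options against your Azure CLI version.
$ az vm identity assign -g <resource-group> -n <vm-name>
$ az role assignment create --assignee <identity-principal-id> --role "Linux Fence Agent Role" --resource-group <resource-group>
Repeat the assignment for each cluster node; the first command prints the principal ID to use with --assignee.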
Verification
Display nodes visible to Azure AD.
# fence_azure_arm --msi -o list
node1,
node2,
[...]
If this command outputs all nodes on your cluster, the AD application has been configured successfully.
3.8. Converting the image to a fixed VHD format
All Microsoft Azure VM images must be in a fixed VHD format. The image must be aligned on a 1 MB boundary before it is converted to VHD. To convert the image from qcow2 to a fixed VHD format and align the image, see the following procedure. Once you have converted the image, you can upload it to Azure.
Procedure
Convert the image from qcow2 to raw format:
$ qemu-img convert -f qcow2 -O raw <image-name>.qcow2 <image-name>.raw
Create a shell script using the contents below.
#!/bin/bash
MB=$((1024 * 1024))
size=$(qemu-img info -f raw --output json "$1" | gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
rounded_size=$((($size/$MB + 1) * $MB))
if [ $(($size % $MB)) -eq 0 ]
then
    echo "Your image is already aligned. You do not need to resize."
    exit 1
fi
echo "rounded size = $rounded_size"
export rounded_size
Run the script. This example uses the name align.sh:
$ sh align.sh <image-xxx>.raw
- If the message "Your image is already aligned. You do not need to resize." displays, proceed to the following step.
- If a value displays, your image is not aligned.
Use the following command to convert the file to a fixed VHD format. The sample uses qemu-img version 2.12.0.
$ qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx>.raw <image.xxx>.vhd
Once converted, the VHD file is ready to upload to Azure.
If the raw image is not aligned, complete the following steps to align it:
- Resize the raw file using the rounded value displayed when you ran the verification script:
$ qemu-img resize -f raw <image-xxx>.raw <rounded-value>
- Convert the raw image file to a VHD format. The sample uses qemu-img version 2.12.0.
$ qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx>.raw <image.xxx>.vhd
Once converted, the VHD file is ready to upload to Azure.
3.9. Uploading and creating an Azure image
Complete the following steps to upload the VHD
file to your container and create an Azure custom image.
The exported storage connection string does not persist after a system reboot. If any of the commands in the following steps fail, export the connection string again.
Procedure
Upload the VHD file to the storage container. It may take several minutes. To get a list of storage containers, enter the az storage container list command.
$ az storage blob upload \
    --account-name <storage-account-name> --container-name <container-name> \
    --type page --file <path-to-vhd> --name <image-name>.vhd
Example:
[clouduser@localhost]$ az storage blob upload \
    --account-name azrhelclistact --container-name azrhelclistcont \
    --type page --file rhel-image-{ProductNumber}.vhd --name rhel-image-{ProductNumber}.vhd
Percent complete: %100.0
Get the URL for the uploaded VHD file to use in the following step:
$ az storage blob url -c <container-name> -n <image-name>.vhd
Example:
$ az storage blob url -c azrhelclistcont -n rhel-image-8.vhd "https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd"
Create the Azure custom image.
$ az image create -n <image-name> -g <resource-group> -l <azure-region> --source <URL> --os-type linux
Note: The default hypervisor generation of the VM is V1. You can optionally specify a V2 hypervisor generation by including the option --hyper-v-generation V2. Generation 2 VMs use a UEFI-based boot architecture. See Support for generation 2 VMs on Azure for information about generation 2 VMs.
The command may return the error "Only blobs formatted as VHDs can be imported." This error may mean that the image was not aligned to the nearest 1 MB boundary before it was converted to VHD.
Example:
$ az image create -n rhel8 -g azrhelclirsgrp2 -l southcentralus --source https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd --os-type linux
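If you want a generation 2 image instead, a hedged variant of the example above simply appends the flag mentioned in the note; the image name rhel8-gen2 is an arbitrary placeholder.
$ az image create -n rhel8-gen2 -g azrhelclirsgrp2 -l southcentralus --source https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd --os-type linux --hyper-v-generation V2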
3.10. Installing Red Hat HA packages and agents
Complete the following steps on all nodes.
Procedure
Launch an SSH terminal session and connect to the VM using the administrator name and public IP address.
$ ssh administrator@PublicIP
To get the public IP address for an Azure VM, open the VM properties in the Azure Portal or enter the following Azure CLI command.
$ az vm list -g <resource-group> -d --output table
Example:
[clouduser@localhost ~] $ az vm list -g azrhelclirsgrp -d --output table Name ResourceGroup PowerState PublicIps Location ------ ---------------------- -------------- ------------- -------------- node01 azrhelclirsgrp VM running 192.98.152.251 southcentralus
Register the VM with Red Hat.
$ sudo -i
# subscription-manager register --auto-attach
Note: If registration with the --auto-attach option fails, manually register the VM to your subscription.
Disable all repositories.
# subscription-manager repos --disable=*
Enable the RHEL 8 Server HA repositories.
# subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
Update all packages.
# yum update -y
Install the Red Hat High Availability Add-On software packages, along with all available fencing agents from the High Availability channel.
# yum install pcs pacemaker fence-agents-azure-arm
The user hacluster was created during the pcs and pacemaker installation in the previous step. Create a password for hacluster on all cluster nodes. Use the same password for all nodes.
# passwd hacluster
Add the high availability service to the RHEL Firewall if firewalld.service is installed.
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --reload
Start the pcsd service and enable it to start on boot.
# systemctl start pcsd.service
# systemctl enable pcsd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
Verification
Ensure the pcsd service is running.
# systemctl status pcsd.service
pcsd.service - PCS GUI and remote configuration interface Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2018-02-23 11:00:58 EST; 1min 23s ago Docs: man:pcsd(8) man:pcs(8) Main PID: 46235 (pcsd) CGroup: /system.slice/pcsd.service └─46235 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null &
3.11. Creating a cluster
Complete the following steps to create the cluster of nodes.
Procedure
On one of the nodes, enter the following command to authenticate the pcs user hacluster. In the command, specify the name of each node in the cluster.
# pcs host auth <hostname1> <hostname2> <hostname3>
Example:
[root@node01 clouduser]# pcs host auth node01 node02 node03 Username: hacluster Password: node01: Authorized node02: Authorized node03: Authorized
Create the cluster.
# pcs cluster setup <cluster_name> <hostname1> <hostname2> <hostname3>
Example:
[root@node01 clouduser]# pcs cluster setup new_cluster node01 node02 node03 [...] Synchronizing pcsd certificates on nodes node01, node02, node03... node02: Success node03: Success node01: Success Restarting pcsd on the nodes in order to reload the certificates... node02: Success node03: Success node01: Success
Verification
Enable the cluster.
[root@node01 clouduser]# pcs cluster enable --all node02: Cluster Enabled node03: Cluster Enabled node01: Cluster Enabled
Start the cluster.
[root@node01 clouduser]# pcs cluster start --all node02: Starting Cluster... node03: Starting Cluster... node01: Starting Cluster...
3.12. Fencing overview
If communication with a single node in the cluster fails, then other nodes in the cluster must be able to restrict or release access to resources that the failed cluster node may have access to. This cannot be accomplished by contacting the cluster node itself as the cluster node may not be responsive. Instead, you must provide an external method, which is called fencing with a fence agent.
A node that is unresponsive may still be accessing data. The only way to be certain that your data is safe is to fence the node using STONITH. STONITH is an acronym for "Shoot The Other Node In The Head," and it protects your data from being corrupted by rogue nodes or concurrent access. Using STONITH, you can be certain that a node is truly offline before allowing the data to be accessed from another node.
Additional resources
3.13. Creating a fencing device
Complete the following steps to configure fencing. Run these commands from any node in the cluster.
Prerequisites
You need to set the cluster property stonith-enabled to true.
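For example, you can set this property from any cluster node; this is a minimal sketch.
# pcs property set stonith-enabled=true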
Procedure
Identify the Azure node name for each RHEL VM. You use the Azure node names to configure the fence device.
# fence_azure_arm \
    -l <AD-Application-ID> -p <AD-Password> \
    --resourceGroup <MyResourceGroup> --tenantId <Tenant-ID> \
    --subscriptionId <Subscription-ID> -o list
Example:
[root@node01 clouduser]# fence_azure_arm \ -l e04a6a49-9f00-xxxx-xxxx-a8bdda4af447 -p z/a05AwCN0IzAjVwXXXXXXXEWIoeVp0xg7QT//JE= --resourceGroup azrhelclirsgrp --tenantId 77ecefb6-cff0-XXXX-XXXX-757XXXX9485 --subscriptionId XXXXXXXX-38b4-4527-XXXX-012d49dfc02c -o list node01, node02, node03,
View the options for the Azure ARM STONITH agent.
# pcs stonith describe fence_azure_arm
Example:
# pcs stonith describe fence_azure_arm
Stonith options: password: Authentication key password_script: Script to run to retrieve password
Warning: For fence agents that provide a method option, do not specify a value of cycle as it is not supported and can cause data corruption.
Some fence devices can fence only a single node, while other devices can fence multiple nodes. The parameters you specify when you create a fencing device depend on what your fencing device supports and requires.
You can use the pcmk_host_list parameter when creating a fencing device to specify all of the machines that are controlled by that fencing device.
You can use the pcmk_host_map parameter when creating a fencing device to map host names to the node specifications that the fence device understands.
Create a fence device.
# pcs stonith create clusterfence fence_azure_arm
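The command above still needs the Azure credentials and, if required, a host mapping. The following is a hedged sketch only; the parameter names (username, password, resourceGroup, tenantId, subscriptionId, pcmk_host_map) are assumptions that you should confirm against the pcs stonith describe fence_azure_arm output, and the placeholder values match the ones used earlier in this chapter.
# pcs stonith create clusterfence fence_azure_arm \
    username=<AD-Application-ID> password=<AD-Password> \
    resourceGroup=<MyResourceGroup> tenantId=<Tenant-ID> subscriptionId=<Subscription-ID> \
    pcmk_host_map="node01:node01;node02:node02;node03:node03"   # verify parameter names with 'pcs stonith describe fence_azure_arm'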
Verification
Test the fencing agent for one of the other nodes, and then check the cluster status. The fenced node should be reported as OFFLINE.
# pcs stonith fence azurenodename
# pcs status
Example:
[root@node01 clouduser]# pcs status Cluster name: newcluster Stack: corosync Current DC: node01 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Feb 23 11:44:35 2018 Last change: Fri Feb 23 11:21:01 2018 by root via cibadmin on node01 3 nodes configured 1 resource configured Online: [ node01 node03 ] OFFLINE: [ node02 ] Full list of resources: clusterfence (stonith:fence_azure_arm): Started node01 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
Start the node that was fenced in the previous step.
# pcs cluster start <hostname>
Check the status to verify the node started.
# pcs status
Example:
[root@node01 clouduser]# pcs status Cluster name: newcluster Stack: corosync Current DC: node01 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Feb 23 11:34:59 2018 Last change: Fri Feb 23 11:21:01 2018 by root via cibadmin on node01 3 nodes configured 1 resource configured Online: [ node01 node02 node03 ] Full list of resources: clusterfence (stonith:fence_azure_arm): Started node01 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
Additional resources
3.14. Creating an Azure internal load balancer
The Azure internal load balancer removes cluster nodes that do not answer health probe requests.
Perform the following procedure to create an Azure internal load balancer. Each step references a specific Microsoft procedure and includes the settings for customizing the load balancer for HA.
Prerequisites
Procedure
- Create a Basic load balancer. Select Internal load balancer, the Basic SKU, and Dynamic for the type of IP address assignment.
- Create a back-end address pool. Associate the backend pool to the availability set created while creating Azure resources in HA. Do not set any target network IP configurations.
- Create a health probe. For the health probe, select TCP and enter port 61000. You can use any TCP port number that does not interfere with another service. For certain HA product applications (for example, SAP HANA and SQL Server), you might need to work with Microsoft to identify the correct port to use.
- Create a load balancer rule. The default values for the load balancing rule are prepopulated. Ensure that Floating IP (direct server return) is set to Enabled.
3.15. Configuring the load balancer resource agent
After you have created the health probe, you must configure the load balancer resource agent. This resource agent runs a service that answers health probe requests from the Azure load balancer and removes cluster nodes that do not answer requests.
Procedure
Install the nmap-ncat and resource-agents packages on all nodes.
# yum install nmap-ncat resource-agents
Perform the following steps on a single node.
Create the pcs resources and group. Use your load balancer FrontendIP for the IPaddr2 address.
# pcs resource create resource-name IPaddr2 ip="10.0.0.7" --group cluster-resources-group
Configure the load balancer resource agent.
# pcs resource create resource-loadbalancer-name azure-lb port=port-number --group cluster-resources-group
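For example, a hedged sketch that uses the probe port 61000 from the load balancer procedure, the placeholder frontend IP 10.0.0.7 shown above, and the group and resource names that appear in the verification output below:
# pcs resource create vip_azure IPaddr2 ip="10.0.0.7" --group g_azure
# pcs resource create lb_azure azure-lb port=61000 --group g_azure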
Verification
Run pcs status to see the results.
[root@node01 clouduser]# pcs status
Example output:
Cluster name: clusterfence01 Stack: corosync Current DC: node02 (version 1.1.16-12.el7_4.7-94ff4df) - partition with quorum Last updated: Tue Jan 30 12:42:35 2018 Last change: Tue Jan 30 12:26:42 2018 by root via cibadmin on node01 3 nodes configured 3 resources configured Online: [ node01 node02 node03 ] Full list of resources: clusterfence (stonith:fence_azure_arm): Started node01 Resource Group: g_azure vip_azure (ocf::heartbeat:IPaddr2): Started node02 lb_azure (ocf::heartbeat:azure-lb): Started node02 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
3.16. Configuring shared block storage
To configure shared block storage for a Red Hat High Availability cluster with Microsoft Azure Shared Disks, use the following procedure. Note that this procedure is optional, and the steps below assume three Azure VMs (a three-node cluster) with a 1 TB shared disk.
This is a stand-alone sample procedure for configuring block storage. The procedure assumes that you have not yet created your cluster.
Prerequisites
- You must have installed the Azure CLI on your host system and created your SSH key(s).
You must have created your cluster environment in Azure, which includes creating the following resources. Links are to the Microsoft Azure documentation.
Procedure
Create a shared block volume using the Azure command az disk create.
$ az disk create -g <resource_group> -n <shared_block_volume_name> --size-gb <disk_size> --max-shares <number_vms> -l <location>
For example, the following command creates a shared block volume named shared-block-volume.vhd in the sharedblock-rg resource group in the westcentralus region.
$ az disk create -g sharedblock-rg -n shared-block-volume.vhd --size-gb 1024 --max-shares 3 -l westcentralus
{ "creationData": { "createOption": "Empty", "galleryImageReference": null, "imageReference": null, "sourceResourceId": null, "sourceUniqueId": null, "sourceUri": null, "storageAccountId": null, "uploadSizeBytes": null }, "diskAccessId": null, "diskIopsReadOnly": null, "diskIopsReadWrite": 5000, "diskMbpsReadOnly": null, "diskMbpsReadWrite": 200, "diskSizeBytes": 1099511627776, "diskSizeGb": 1024, "diskState": "Unattached", "encryption": { "diskEncryptionSetId": null, "type": "EncryptionAtRestWithPlatformKey" }, "encryptionSettingsCollection": null, "hyperVgeneration": "V1", "id": "/subscriptions/12345678910-12345678910/resourceGroups/sharedblock-rg/providers/Microsoft.Compute/disks/shared-block-volume.vhd", "location": "westcentralus", "managedBy": null, "managedByExtended": null, "maxShares": 3, "name": "shared-block-volume.vhd", "networkAccessPolicy": "AllowAll", "osType": null, "provisioningState": "Succeeded", "resourceGroup": "sharedblock-rg", "shareInfo": null, "sku": { "name": "Premium_LRS", "tier": "Premium" }, "tags": {}, "timeCreated": "2020-08-27T15:36:56.263382+00:00", "type": "Microsoft.Compute/disks", "uniqueId": "cd8b0a25-6fbe-4779-9312-8d9cbb89b6f2", "zones": null }
Verify that you have created the shared block volume using the Azure command az disk show.
$ az disk show -g <resource_group> -n <shared_block_volume_name>
For example, the following command shows details for the shared block volume shared-block-volume.vhd within the resource group sharedblock-rg.
$ az disk show -g sharedblock-rg -n shared-block-volume.vhd
{ "creationData": { "createOption": "Empty", "galleryImageReference": null, "imageReference": null, "sourceResourceId": null, "sourceUniqueId": null, "sourceUri": null, "storageAccountId": null, "uploadSizeBytes": null }, "diskAccessId": null, "diskIopsReadOnly": null, "diskIopsReadWrite": 5000, "diskMbpsReadOnly": null, "diskMbpsReadWrite": 200, "diskSizeBytes": 1099511627776, "diskSizeGb": 1024, "diskState": "Unattached", "encryption": { "diskEncryptionSetId": null, "type": "EncryptionAtRestWithPlatformKey" }, "encryptionSettingsCollection": null, "hyperVgeneration": "V1", "id": "/subscriptions/12345678910-12345678910/resourceGroups/sharedblock-rg/providers/Microsoft.Compute/disks/shared-block-volume.vhd", "location": "westcentralus", "managedBy": null, "managedByExtended": null, "maxShares": 3, "name": "shared-block-volume.vhd", "networkAccessPolicy": "AllowAll", "osType": null, "provisioningState": "Succeeded", "resourceGroup": "sharedblock-rg", "shareInfo": null, "sku": { "name": "Premium_LRS", "tier": "Premium" }, "tags": {}, "timeCreated": "2020-08-27T15:36:56.263382+00:00", "type": "Microsoft.Compute/disks", "uniqueId": "cd8b0a25-6fbe-4779-9312-8d9cbb89b6f2", "zones": null }
Create three network interfaces using the Azure command az network nic create. Run the following command three times, using a different <nic_name> for each.
$ az network nic create \
    -g <resource_group> -n <nic_name> --subnet <subnet_name> \
    --vnet-name <virtual_network> --location <location> \
    --network-security-group <network_security_group> --private-ip-address-version IPv4
For example, the following command creates a network interface with the name sharedblock-nodea-vm-nic-protected.
$ az network nic create \
    -g sharedblock-rg -n sharedblock-nodea-vm-nic-protected --subnet sharedblock-subnet-protected \
    --vnet-name sharedblock-vn --location westcentralus \
    --network-security-group sharedblock-nsg --private-ip-address-version IPv4
Create three VMs and attach the shared block volume using the Azure command az vm create. Option values are the same for each VM, except that each VM has its own <vm_name>, <new_vm_disk_name>, and <nic_name>.
$ az vm create \
    -n <vm_name> -g <resource_group> --attach-data-disks <shared_block_volume_name> \
    --data-disk-caching None --os-disk-caching ReadWrite --os-disk-name <new_vm_disk_name> \
    --os-disk-size-gb <disk_size> --location <location> --size <virtual_machine_size> \
    --image <image_name> --admin-username <vm_username> --authentication-type ssh \
    --ssh-key-values <ssh_key> --nics <nic_name> --availability-set <availability_set> --ppg <proximity_placement_group>
For example, the following command creates a VM named sharedblock-nodea-vm.
$ az vm create \
    -n sharedblock-nodea-vm -g sharedblock-rg --attach-data-disks shared-block-volume.vhd \
    --data-disk-caching None --os-disk-caching ReadWrite --os-disk-name sharedblock-nodea-vm.vhd \
    --os-disk-size-gb 64 --location westcentralus --size Standard_D2s_v3 \
    --image /subscriptions/12345678910-12345678910/resourceGroups/sample-azureimagesgroupwestcentralus/providers/Microsoft.Compute/images/sample-azure-rhel-8.3.0-20200713.n.0.x86_64 --admin-username sharedblock-user --authentication-type ssh \
    --ssh-key-values @sharedblock-key.pub --nics sharedblock-nodea-vm-nic-protected --availability-set sharedblock-as --ppg sharedblock-ppg
{ "fqdns": "", "id": "/subscriptions/12345678910-12345678910/resourceGroups/sharedblock-rg/providers/Microsoft.Compute/virtualMachines/sharedblock-nodea-vm", "location": "westcentralus", "macAddress": "00-22-48-5D-EE-FB", "powerState": "VM running", "privateIpAddress": "198.51.100.3", "publicIpAddress": "", "resourceGroup": "sharedblock-rg", "zones": "" }
Verification
For each VM in your cluster, verify that the block device is available by using the ssh command with your VM’s IP address.
# ssh <ip_address> "hostname ; lsblk -d | grep ' 1T '"
For example, the following command lists details including the host name and block device for the VM IP 198.51.100.3.
# ssh 198.51.100.3 "hostname ; lsblk -d | grep ' 1T '"
nodea
sdb    8:16   0   1T  0 disk
Use the ssh command to verify that each VM in your cluster uses the same shared disk.
# ssh <ip_address> "hostname ; lsblk -d | grep ' 1T ' | awk '{print \$1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='"
For example, the following command lists details including the host name and shared disk volume ID for the instance IP address 198.51.100.3.
# ssh 198.51.100.3 "hostname ; lsblk -d | grep ' 1T ' | awk '{print \$1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='"
nodea
E: ID_SERIAL=3600224808dd8eb102f6ffc5822c41d89
After you have verified that the shared disk is attached to each VM, you can configure resilient storage for the cluster.
Additional resources
3.17. Additional resources
Chapter 4. Deploying a Red Hat Enterprise Linux image as an EC2 instance on Amazon Web Services
You have a number of options for deploying a Red Hat Enterprise Linux (RHEL) 8 image as an EC2 instance on Amazon Web Services (AWS). To deploy RHEL 8 as an EC2 instance, follow the information below. This chapter:
- Discusses your options for choosing an image
- Lists or refers to system requirements for your host system and virtual machine (VM)
- Provides procedures for creating a custom VM from an ISO image, uploading it to EC2, and launching an EC2 instance
While you can create a custom VM from an ISO image, Red Hat recommends that you use the Red Hat Image Builder product to create customized images for use on specific cloud providers. With Image Builder, you can create and upload an Amazon Machine Image (AMI) in the ami format. See Composing a Customized RHEL System Image for more information.
For a list of Red Hat products that you can use securely on AWS, see Red Hat on Amazon Web Services.
Prerequisites
- Sign up for a Red Hat Customer Portal account.
- Sign up for AWS and set up your AWS resources. See Setting Up with Amazon EC2 for more information.
4.1. Red Hat Enterprise Linux Image options on AWS
The following table lists image choices and notes the differences in the image options.
Table 4.1. Image options
Image option | Subscriptions | Sample scenario | Considerations |
---|---|---|---|
Deploy a Red Hat Gold Image. | Use your existing Red Hat subscriptions. | Select a Red Hat Gold Image on AWS. For details on Gold Images and how to access them on AWS, see the Red Hat Cloud Access Reference Guide. | The subscription includes the Red Hat product cost; you pay Amazon for all other instance costs. Red Hat provides support directly for Cloud Access images. |
Deploy a custom image that you move to AWS. | Use your existing Red Hat subscriptions. | Upload your custom image, and attach your subscriptions. | The subscription includes the Red Hat product cost; you pay Amazon for all other instance costs. Red Hat provides support directly for custom RHEL images. |
Deploy an existing Amazon image that includes RHEL. | The AWS EC2 images include a Red Hat product. | Select a RHEL image when you launch an instance on the AWS Management Console, or choose an image from the AWS Marketplace. | You pay Amazon hourly on a pay-as-you-go model. Such images are called "on-demand" images. Amazon provides support for on-demand images. Red Hat provides updates to the images. AWS makes the updates available through the Red Hat Update Infrastructure (RHUI). |
You can create a custom image for AWS using Red Hat Image Builder. See Composing a Customized RHEL System Image for more information.
You cannot convert an on-demand instance to a custom RHEL instance. To change from an on-demand image to a custom RHEL bring-your-own-subscription (BYOS) image:
- Create a new custom RHEL instance and migrate data from your on-demand instance.
- Cancel your on-demand instance after you migrate your data to avoid double billing.
Additional resources
4.2. Understanding base images
This section includes information about using preconfigured base images and their configuration settings.
4.2.1. Using a custom base image
To manually configure a virtual machine (VM), first create a base (starter) VM image. Then, you can modify configuration settings and add the packages the VM requires to operate on the cloud. You can make additional configuration changes for your specific application after you upload the image.
Additional resources
4.2.2. Virtual machine configuration settings
Cloud VMs must have the following configuration settings.
Table 4.2. VM configuration settings
Setting | Recommendation |
---|---|
ssh | ssh must be enabled to provide remote access to your VMs. |
dhcp | The primary virtual adapter should be configured for dhcp. |
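As a hedged illustration only, the following commands confirm these settings on a RHEL 8 guest; the connection name eth0 is an assumption and may differ on your VM.
# systemctl enable --now sshd                       # ensure remote SSH access is available
# nmcli connection modify eth0 ipv4.method auto     # configure the primary adapter for dhcp (connection name is an assumption)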
4.3. Creating a base VM from an ISO image
Follow the procedures in this section to create a RHEL 8 base image from an ISO image.
Prerequisites
- Virtualization is enabled on your host machine.
- You have downloaded the latest Red Hat Enterprise Linux ISO image from the Red Hat Customer Portal and moved the image to /var/lib/libvirt/images.
4.3.1. Creating a VM from the RHEL ISO image
Procedure
- Ensure that you have enabled your host machine for virtualization. See Enabling virtualization in RHEL 8 for information and procedures.
Create and start a basic Red Hat Enterprise Linux VM. For instructions, see Creating virtual machines.
If you use the command line to create your VM, ensure that you set the default memory and CPUs to the capacity you want for the VM. Set your virtual network interface to virtio.
For example, the following command creates a kvmtest VM using the /home/username/Downloads/rhel8.iso image:
# virt-install \
    --name kvmtest --memory 2048 --vcpus 2 \
    --cdrom /home/username/Downloads/rhel8.iso,bus=virtio \
    --os-variant=rhel8.0
If you use the web console to create your VM, follow the procedure in Creating virtual machines using the web console, with these caveats:
- Do not check Immediately Start VM.
- Change your Memory size to your preferred settings.
- Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and changed your vCPUs to the capacity settings you want for the VM.
4.3.2. Completing the RHEL installation
Perform the following steps to complete the installation and to enable root access once the VM launches.
Procedure
- Choose the language you want to use during the installation process.
On the Installation Summary view:
- Click Software Selection and check Minimal Install.
- Click Done.
Click Installation Destination and check Custom under Storage Configuration.
- Verify at least 500 MB for /boot. You can use the remaining space for root /.
- Standard partitions are recommended, but you can use Logical Volume Management (LVM).
- You can use xfs, ext4, or ext3 for the file system.
- Click Done when you are finished with changes.
- Click Begin Installation.
- Set a Root Password. Create other users as applicable.
- Reboot the VM and log in as root once the installation completes.
Configure the image:
Register the VM and enable the Red Hat Enterprise Linux 8 repository.
# subscription-manager register --auto-attach
Ensure that the cloud-init package is installed and enabled.
# yum install cloud-init
# systemctl enable --now cloud-init.service
Important: This step is only for VMs you intend to upload to AWS.
For AMD64 or Intel 64 (x86_64) VMs, install the nvme, xen-netfront, and xen-blkfront drivers.
# dracut -f --add-drivers "nvme xen-netfront xen-blkfront"
For ARM 64 (aarch64) VMs, install the nvme driver.
# dracut -f --add-drivers "nvme"
Including these drivers removes the possibility of a dracut time-out.
Alternatively, you can add the drivers to /etc/dracut.conf.d/ and then enter dracut -f to overwrite the existing initramfs file.
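For example, a minimal sketch of such a drop-in file; the file name ec2-drivers.conf is an arbitrary choice.
# echo 'add_drivers+=" nvme xen-netfront xen-blkfront "' > /etc/dracut.conf.d/ec2-drivers.conf   # file name is an arbitrary choice
# dracut -f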
- Power down the VM.
Additional resources
4.4. Uploading the Red Hat Enterprise Linux image to AWS
Follow the procedures in this section to upload your image to AWS.
4.4.1. Installing the AWS CLI
Many of the procedures required to manage HA clusters in AWS include using the AWS CLI. Complete the following steps to install the AWS CLI.
Prerequisites
- You have created an AWS Access Key ID and an AWS Secret Access Key, and have access to them. For instructions and details, see Quickly Configuring the AWS CLI.
Procedure
Install the AWS command line tools using the yum command.
# yum install awscli
Use the aws --version command to verify that you installed the AWS CLI.
$ aws --version
aws-cli/1.19.77 Python/3.6.15 Linux/5.14.16-201.fc34.x86_64 botocore/1.20.77
Configure the AWS command line client according to your AWS access details.
$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:
Additional resources
4.4.2. Creating an S3 bucket
Importing to AWS requires an Amazon S3 bucket. An Amazon S3 bucket is an Amazon resource where you store objects. As part of the process for uploading your image, you create an S3 bucket and then move your image to the bucket. Complete the following steps to create a bucket.
Procedure
- Launch the Amazon S3 Console.
- Click Create Bucket. The Create Bucket dialog appears.
In the Name and region view:
- Enter a Bucket name.
- Enter a Region.
- Click Next.
- In the Configure options view, select the desired options and click Next.
- In the Set permissions view, change or accept the default options and click Next.
- Review your bucket configuration.
Click Create bucket.
Note: Alternatively, you can use the AWS CLI to create a bucket. For example, the aws s3 mb s3://my-new-bucket command creates an S3 bucket named my-new-bucket. See the AWS CLI Command Reference for more information about the mb command.
Additional resources
4.4.3. Creating the vmimport role
Perform the following procedure to create the vmimport role, which is required by VM import. See VM Import Service Role in the Amazon documentation for more information.
Procedure
Create a file named trust-policy.json and include the following policy. Save the file on your system and note its location.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": { "Service": "vmie.amazonaws.com" },
         "Action": "sts:AssumeRole",
         "Condition": {
            "StringEquals": {
               "sts:Externalid": "vmimport"
            }
         }
      }
   ]
}
Use the create-role command to create the vmimport role. Specify the full path to the location of the trust-policy.json file. Prefix file:// to the path. For example:
$ aws iam create-role --role-name vmimport --assume-role-policy-document file:///home/sample/ImportService/trust-policy.json
Create a file named role-policy.json and include the following policy. Replace s3-bucket-name with the name of your S3 bucket.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
            "s3:GetBucketLocation",
            "s3:GetObject",
            "s3:ListBucket"
         ],
         "Resource": [
            "arn:aws:s3:::s3-bucket-name",
            "arn:aws:s3:::s3-bucket-name/*"
         ]
      },
      {
         "Effect": "Allow",
         "Action": [
            "ec2:ModifySnapshotAttribute",
            "ec2:CopySnapshot",
            "ec2:RegisterImage",
            "ec2:Describe*"
         ],
         "Resource": "*"
      }
   ]
}
Use the put-role-policy command to attach the policy to the role you created. Specify the full path of the role-policy.json file. For example:
$ aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file:///home/sample/ImportService/role-policy.json
Additional resources
4.4.4. Converting and pushing your image to S3
Complete the following procedure to convert and push your image to S3. The samples are representative; they convert an image formatted in the qcow2 file format to raw format. Amazon accepts images in OVA, VHD, VHDX, VMDK, and raw formats. See How VM Import/Export Works for more information about image formats that Amazon accepts.
Procedure
Run the qemu-img command to convert your image. For example:
# qemu-img convert -f qcow2 -O raw rhel-8.0-sample.qcow2 rhel-8.0-sample.raw
Push the image to S3.
$ aws s3 cp rhel-8.0-sample.raw s3://s3-bucket-name
Note: This procedure could take a few minutes. After completion, you can check that your image uploaded successfully to your S3 bucket using the AWS S3 Console.
Additional resources
4.4.5. Importing your image as a snapshot
Perform the following procedure to import an image as a snapshot.
Procedure
Create a file to specify a bucket and path for your image. Name the file containers.json. In the sample that follows, replace s3-bucket-name with your bucket name and s3-key with your key. You can get the key for the image using the Amazon S3 Console.
{
    "Description": "rhel-8.0-sample.raw",
    "Format": "raw",
    "UserBucket": {
        "S3Bucket": "s3-bucket-name",
        "S3Key": "s3-key"
    }
}
Import the image as a snapshot. This example uses a public Amazon S3 file; you can use the Amazon S3 Console to change permissions settings on your bucket.
$ aws ec2 import-snapshot --disk-container file://containers.json
The terminal displays a message such as the following. Note the ImportTaskId within the message.
{ "SnapshotTaskDetail": { "Status": "active", "Format": "RAW", "DiskImageSize": 0.0, "UserBucket": { "S3Bucket": "s3-bucket-name", "S3Key": "rhel-8.0-sample.raw" }, "Progress": "3", "StatusMessage": "pending" }, "ImportTaskId": "import-snap-06cea01fa0f1166a8" }
Track the progress of the import using the describe-import-snapshot-tasks command. Include the ImportTaskId.
$ aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-06cea01fa0f1166a8
The returned message shows the current status of the task. When complete, Status shows completed. Within the status, note the snapshot ID.
Additional resources
4.4.6. Creating an AMI from the uploaded snapshot
Within EC2, you must choose an Amazon Machine Image (AMI) when launching an instance. Perform the following procedure to create an AMI from your uploaded snapshot.
Procedure
- Go to the AWS EC2 Dashboard.
- Under Elastic Block Store, select Snapshots.
-
Search for your snapshot ID (for example,
snap-0e718930bd72bcda0
). - Right-click on the snapshot and select Create image.
- Name your image.
- Under Virtualization type, choose Hardware-assisted virtualization.
- Click Create. In the note regarding image creation, there is a link to your image.
Click on the image link. Your image shows up under Images>AMIs.
Note: Alternatively, you can use the AWS CLI register-image command to create an AMI from a snapshot. See register-image for more information. An example follows.
$ aws ec2 register-image \
    --name "myimagename" --description "myimagedescription" --architecture x86_64 \
    --virtualization-type hvm --root-device-name "/dev/sda1" --ena-support \
    --block-device-mappings "{\"DeviceName\": \"/dev/sda1\",\"Ebs\": {\"SnapshotId\": \"snap-0ce7f009b69ab274d\"}}"
You must specify the root device volume /dev/sda1 as your root-device-name. For conceptual information about device mapping for AWS, see Example block device mapping.
4.4.7. Launching an instance from the AMI
Perform the following procedure to launch and configure an instance from the AMI.
Procedure
- From the AWS EC2 Dashboard, select Images and then AMIs.
- Right-click on your image and select Launch.
Choose an Instance Type that meets or exceeds the requirements of your workload.
See Amazon EC2 Instance Types for information about instance types.
Click Next: Configure Instance Details.
- Enter the Number of instances you want to create.
- For Network, select the VPC you created when setting up your AWS environment. Select a subnet for the instance or create a new subnet.
Select Enable for Auto-assign Public IP.
Note: These are the minimum configuration options necessary to create a basic instance. Review additional options based on your application requirements.
- Click Next: Add Storage. Verify that the default storage is sufficient.
Click Next: Add Tags.
Note: Tags can help you manage your AWS resources. See Tagging Your Amazon EC2 Resources for information about tagging.
- Click Next: Configure Security Group. Select the security group you created when setting up your AWS environment.
- Click Review and Launch. Verify your selections.
Click Launch. You are prompted to select an existing key pair or create a new key pair. Select the key pair you created when setting up your AWS environment.
Note: Verify that the permissions for your private key are correct. Use the command chmod 400 <keyname>.pem to change the permissions, if necessary.
- Click Launch Instances.
Click View Instances. You can name the instance(s).
You can now launch an SSH session to your instance(s) by selecting an instance and clicking Connect. Use the example provided for A standalone SSH client.
Note: Alternatively, you can launch an instance using the AWS CLI. See Launching, Listing, and Terminating Amazon EC2 Instances in the Amazon documentation for more information.
Additional resources
4.4.8. Attaching Red Hat subscriptions
To attach your Red Hat subscription to a RHEL instance, use the following steps.
Prerequisites
- You must have enabled your subscriptions.
Procedure
Register your system.
# subscription-manager register --auto-attach
Attach your subscriptions.
- You can use an activation key to attach subscriptions. See Creating Red Hat Customer Portal Activation Keys for more information.
- Alternatively, you can manually attach a subscription using the ID of the subscription pool (Pool ID). See Attaching and Removing Subscriptions Through the Command Line.
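As a hedged illustration of the manual approach, list the available pools first and then attach one; <Pool_ID> is a placeholder.
# subscription-manager list --available
# subscription-manager attach --pool=<Pool_ID>    # <Pool_ID> is a placeholder from the list output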
4.4.9. Setting up automatic registration on AWS Gold Images
To make deploying RHEL 8 virtual machines on Amazon Web Services (AWS) faster and more comfortable, you can set up Gold Images of RHEL 8 to be automatically registered to the Red Hat Subscription Manager (RHSM).
Prerequisites
You have downloaded the latest RHEL 8 Gold Image for AWS. For instructions, see Using Gold Images on AWS.
Note: An AWS account can only be attached to a single Red Hat account at a time. Therefore, ensure no other users require access to the AWS account before attaching it to your Red Hat one.
Procedure
- Upload the Gold Image to AWS. For instructions, see Uploading the Red Hat Enterprise Linux image to AWS.
- Create VMs using the uploaded image. They will be automatically subscribed to RHSM.
Verification
In a RHEL 8 VM created using the above instructions, verify the system is registered to RHSM by executing the subscription-manager identity command. On a successfully registered system, this displays the UUID of the system. For example:
# subscription-manager identity
system identity: fdc46662-c536-43fb-a18a-bbcb283102b7
name: 192.168.122.222
org name: 6340056
org ID: 6340056
Additional resources
4.5. Additional resources
Chapter 5. Configuring a Red Hat High Availability cluster on AWS
This chapter provides information and procedures for configuring a Red Hat High Availability (HA) cluster on Amazon Web Services (AWS) using EC2 instances as cluster nodes. Note that you have a number of options for obtaining the Red Hat Enterprise Linux (RHEL) images you use for your cluster. For information about image options for AWS, see Red Hat Enterprise Linux Image Options on AWS.
This chapter includes:
- Prerequisite procedures for setting up your environment for AWS. Once you have set up your environment, you can create and configure EC2 instances.
- Procedures specific to the creation of HA clusters, which transform individual nodes into a cluster of HA nodes on AWS. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing AWS network resource agents.
Prerequisites
- Sign up for a Red Hat Customer Portal account.
- Sign up for AWS and set up your AWS resources. See Setting Up with Amazon EC2 for more information.
5.1. The benefits of using high-availability clusters on public cloud platforms
A high-availability (HA) cluster is a set of computers (called nodes) that are linked together to run a specific workload. The purpose of HA clusters is to provide redundancy in case of a hardware or software failure. If a node in the HA cluster fails, the Pacemaker cluster resource manager distributes the workload to other nodes and no noticeable downtime occurs in the services that are running on the cluster.
You can also run HA clusters on public cloud platforms. In this case, you would use virtual machine (VM) instances in the cloud as the individual cluster nodes. Using HA clusters on a public cloud platform has the following benefits:
- Improved availability: In case of a VM failure, the workload is quickly redistributed to other nodes, so running services are not disrupted.
- Scalability: Additional nodes can be started when demand is high and stopped when demand is low.
- Cost-effectiveness: With the pay-as-you-go pricing, you pay only for nodes that are running.
- Simplified management: Some public cloud platforms offer management interfaces to make configuring HA clusters easier.
To enable HA on your Red Hat Enterprise Linux (RHEL) systems, Red Hat offers a High Availability Add-On. The High Availability Add-On provides all necessary components for creating HA clusters on RHEL systems. The components include high availability service management and cluster administration tools.
Additional resources
5.2. Creating the AWS Access Key and AWS Secret Access Key
You need to create an AWS Access Key and AWS Secret Access Key before you install the AWS CLI. The fencing and resource agent APIs use the AWS Access Key and Secret Access Key to connect to each node in the cluster.
Complete the following steps to create these keys.
Prerequisites
- Your IAM user account must have Programmatic access. See Setting up the AWS Environment for more information.
Procedure
- Launch the AWS Console.
- Click on your AWS Account ID to display the drop-down menu and select My Security Credentials.
- Click Users.
- Select the user and open the Summary screen.
- Click the Security credentials tab.
- Click Create access key.
- Download the .csv file (or save both keys). You need to enter these keys when creating the fencing device.
5.3. Installing the AWS CLI
Many of the procedures required to manage HA clusters in AWS include using the AWS CLI. Complete the following steps to install the AWS CLI.
Prerequisites
- You have created an AWS Access Key ID and an AWS Secret Access Key, and have access to them. For instructions and details, see Quickly Configuring the AWS CLI.
Procedure
Install the AWS command line tools using the yum command.
# yum install awscli
Use the aws --version command to verify that you installed the AWS CLI.
$ aws --version
aws-cli/1.19.77 Python/3.6.15 Linux/5.14.16-201.fc34.x86_64 botocore/1.20.77
Configure the AWS command line client according to your AWS access details.
$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:
Additional resources
5.4. Creating an HA EC2 instance
Complete the following steps to create the instances that you use as your HA cluster nodes. Note that you have a number of options for obtaining the RHEL images you use for your cluster. See Red Hat Enterprise Linux Image options on AWS for information about image options for AWS.
You can create and upload a custom image that you use for your cluster nodes, or you can use a Gold Image or an on-demand image.
Prerequisites
- You need to have set up an AWS environment. See Setting Up with Amazon EC2 for more information.
Procedure
- From the AWS EC2 Dashboard, select Images and then AMIs.
- Right-click on your image and select Launch.
Choose an Instance Type that meets or exceeds the requirements of your workload. Depending on your HA application, each instance may need to have higher capacity.
See Amazon EC2 Instance Types for information about instance types.
Click Next: Configure Instance Details.
Enter the Number of instances you want to create for the cluster. This example procedure uses three cluster nodes.
Note: Do not launch into an Auto Scaling Group.
- For Network, select the VPC you created when setting up the AWS environment. Select a subnet for the instance or create a new subnet.
Select Enable for Auto-assign Public IP. These are the minimum selections you need to make for Configure Instance Details. Depending on your specific HA application, you may need to make additional selections.
Note: These are the minimum configuration options necessary to create a basic instance. Review additional options based on your HA application requirements.
- Click Next: Add Storage and verify that the default storage is sufficient. You do not need to modify these settings unless your HA application requires other storage options.
Click Next: Add Tags.
Note: Tags can help you manage your AWS resources. See Tagging Your Amazon EC2 Resources for information about tagging.
- Click Next: Configure Security Group. Select the existing security group you created in Setting up the AWS environment.
- Click Review and Launch and verify your selections.
- Click Launch. You are prompted to select an existing key pair or create a new key pair. Select the key pair you created when Setting up the AWS environment.
- Click Launch Instances.
Click View Instances. You can name the instance(s).
Note: Alternatively, you can launch instances using the AWS CLI. See Launching, Listing, and Terminating Amazon EC2 Instances in the Amazon documentation for more information.
Additional resources
5.5. Configuring the private key
Complete the following configuration tasks so that the private SSH key file (.pem) can be used in an SSH session.
Procedure
- Move the key file from the Downloads directory to your Home directory or to your ~/.ssh directory.
- Change the permissions of the key file so that only the root user can read it.
# chmod 400 KeyName.pem
5.6. Connecting to an EC2 instance
Complete the following steps on all nodes to connect to an EC2 instance.
Procedure
- Launch the AWS Console and select the EC2 instance.
- Click Connect and select A standalone SSH client.
- From your SSH terminal session, connect to the instance using the AWS example provided in the pop-up window. Add the correct path to your KeyName.pem file if the path is not shown in the example.
5.7. Installing the High Availability packages and agents
Complete the following steps on all nodes to install the High Availability packages and agents.
Procedure
Remove the AWS Red Hat Update Infrastructure (RHUI) client.
$ sudo -i
# yum -y remove rh-amazon-rhui-client*
Register the VM with Red Hat.
# subscription-manager register --auto-attach
Disable all repositories.
# subscription-manager repos --disable=*
Enable the RHEL 8 Server HA repositories.
# subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
Update the RHEL AWS instance.
# yum update -y
Install the Red Hat High Availability Add-On software packages, along with all available fencing agents from the High Availability channel.
# yum install pcs pacemaker fence-agents-aws
The user hacluster was created during the pcs and pacemaker installation in the previous step. Create a password for hacluster on all cluster nodes. Use the same password for all nodes.
# passwd hacluster
Add the high availability service to the RHEL Firewall if firewalld.service is installed.
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --reload
Start the pcsd service and enable it to start on boot.
# systemctl start pcsd.service
# systemctl enable pcsd.service
- Edit /etc/hosts and add RHEL host names and internal IP addresses. See How should the /etc/hosts file be set up on RHEL cluster nodes? for details.
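For example, a hedged sketch of such /etc/hosts entries, using the private IP addresses and host names from the examples in this chapter; substitute your own values.
10.0.0.46 ip-10-0-0-46.ec2.internal ip-10-0-0-46
10.0.0.48 ip-10-0-0-48.ec2.internal ip-10-0-0-48
10.0.0.58 ip-10-0-0-58.ec2.internal ip-10-0-0-58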
Verification
Ensure the pcsd service is running.
# systemctl status pcsd.service
pcsd.service - PCS GUI and remote configuration interface Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2018-03-01 14:53:28 UTC; 28min ago Docs: man:pcsd(8) man:pcs(8) Main PID: 5437 (pcsd) CGroup: /system.slice/pcsd.service └─5437 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null & Mar 01 14:53:27 ip-10-0-0-48.ec2.internal systemd[1]: Starting PCS GUI and remote configuration interface… Mar 01 14:53:28 ip-10-0-0-48.ec2.internal systemd[1]: Started PCS GUI and remote configuration interface.
5.8. Creating a cluster
Complete the following steps to create the cluster of nodes.
Procedure
On one of the nodes, enter the following command to authenticate the pcs user hacluster. In the command, specify the name of each node in the cluster.
# pcs host auth <hostname1> <hostname2> <hostname3>
Example:
[root@node01 clouduser]# pcs host auth node01 node02 node03 Username: hacluster Password: node01: Authorized node02: Authorized node03: Authorized
Create the cluster.
# pcs cluster setup <cluster_name> <hostname1> <hostname2> <hostname3>
Example:
[root@node01 clouduser]# pcs cluster setup new_cluster node01 node02 node03 [...] Synchronizing pcsd certificates on nodes node01, node02, node03... node02: Success node03: Success node01: Success Restarting pcsd on the nodes in order to reload the certificates... node02: Success node03: Success node01: Success
Verification
Enable the cluster.
[root@node01 clouduser]# pcs cluster enable --all node02: Cluster Enabled node03: Cluster Enabled node01: Cluster Enabled
Start the cluster.
[root@node01 clouduser]# pcs cluster start --all node02: Starting Cluster... node03: Starting Cluster... node01: Starting Cluster...
5.9. Configuring fencing
Fencing configuration ensures that a malfunctioning node on your AWS cluster is automatically isolated, which prevents the node from consuming the cluster’s resources or compromising the cluster’s functionality.
You can configure fencing on an AWS cluster using multiple methods. This section provides the following:
- A standard procedure for default configuration.
- An alternate configuration procedure for more advanced configuration, focused on automation.
Standard procedure
Enter the following AWS metadata query to get the Instance ID for each node. You need these IDs to configure the fence device. See Instance Metadata and User Data for additional information.
# echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
Example:
[root@ip-10-0-0-48 ~]# echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
i-07f1ac63af0ec0ac6
Enter the following command to configure the fence device. Use the pcmk_host_map parameter to map the RHEL host name to the Instance ID. Use the AWS Access Key and AWS Secret Access Key that you previously set up.
# pcs stonith create <name> fence_aws access_key=<access-key> secret_key=<secret-access-key> \
    region=<region> pcmk_host_map="rhel-hostname-1:Instance-ID-1;rhel-hostname-2:Instance-ID-2;rhel-hostname-3:Instance-ID-3" \
    power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4
Example:
[root@ip-10-0-0-48 ~]# pcs stonith \ create clusterfence fence_aws access_key=AKIAI123456MRMJA secret_key=a75EYIG4RVL3hdsdAslK7koQ8dzaDyn5yoIZ/ \ region=us-east-1 pcmk_host_map="ip-10-0-0-48:i-07f1ac63af0ec0ac6;ip-10-0-0-46:i-063fc5fe93b4167b2;ip-10-0-0-58:i-08bd39eb03a6fd2c7" \ power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4
Alternate procedure
Obtain the VPC ID of the cluster.
# aws ec2 describe-vpcs --output text --filters "Name=tag:Name,Values=clustername-vpc" --query 'Vpcs[*].VpcId'
vpc-06bc10ac8f6006664
Using the VPC ID of the cluster, obtain the VPC instances.
$ aws ec2 describe-instances --output text --filters "Name=vpc-id,Values=vpc-06bc10ac8f6006664" --query 'Reservations[*].Instances[*].{Name:Tags[?Key==`Name`]|[0].Value,Instance:InstanceId}' | grep "\-node[a-c]"
i-0b02af8927a895137 clustername-nodea-vm
i-0cceb4ba8ab743b69 clustername-nodeb-vm
i-0502291ab38c762a5 clustername-nodec-vm
Use the obtained instance IDs to configure fencing on each node on the cluster. For example:
[root@nodea ~]# CLUSTER=clustername && pcs stonith create fence${CLUSTER} fence_aws access_key=XXXXXXXXXXXXXXXXXXXX pcmk_host_map=$(for NODE \ in node{a..c}; do ssh ${NODE} "echo -n \${HOSTNAME}:\$(curl -s http://169.254.169.254/latest/meta-data/instance-id)\;"; done) \ pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=xx-xxxx-x secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX [root@nodea ~]# pcs stonith config fence${CLUSTER} Resource: clustername (class=stonith type=fence_aws) Attributes: access_key=XXXXXXXXXXXXXXXXXXXX pcmk_host_map=nodea:i-0b02af8927a895137;nodeb:i-0cceb4ba8ab743b69;nodec:i-0502291ab38c762a5; pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=xx-xxxx-x secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Operations: monitor interval=60s (clustername-monitor-interval-60s)
Verification
Test the fencing agent for one of the cluster nodes.
# pcs stonith fence awsnodename
Note: The command response may take several minutes to display. If you watch the active terminal session for the node being fenced, you see that the terminal connection is immediately terminated after you enter the fence command.
Example:
[root@ip-10-0-0-48 ~]# pcs stonith fence ip-10-0-0-58 Node: ip-10-0-0-58 fenced
Check the status to verify that the node is fenced.
# pcs status
Example:
[root@ip-10-0-0-48 ~]# pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Mar 2 19:55:41 2018 Last change: Fri Mar 2 19:24:59 2018 by root via cibadmin on ip-10-0-0-46 3 nodes configured 1 resource configured Online: [ ip-10-0-0-46 ip-10-0-0-48 ] OFFLINE: [ ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
Start the node that was fenced in the previous step.
# pcs cluster start awshostname
Check the status to verify the node started.
# pcs status
Example:
[root@ip-10-0-0-48 ~]# pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Mar 2 20:01:31 2018 Last change: Fri Mar 2 19:24:59 2018 by root via cibadmin on ip-10-0-0-48 3 nodes configured 1 resource configured Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
5.10. Installing the AWS CLI on cluster nodes
Previously, you installed the AWS CLI on your host system. You need to install the AWS CLI on cluster nodes before you configure the network resource agents.
Complete the following procedure on each cluster node.
Prerequisites
- You must have created an AWS Access Key and AWS Secret Access Key. See Creating the AWS Access Key and AWS Secret Access Key for more information.
Procedure
- Install the AWS CLI. For instructions, see Installing the AWS CLI.
Verify that the AWS CLI is configured properly. The instance IDs and instance names should display.
Example:
[root@ip-10-0-0-48 ~]# aws ec2 describe-instances --output text --query 'Reservations[].Instances[].[InstanceId,Tags[?Key==`Name`].Value]'
i-07f1ac63af0ec0ac6 ip-10-0-0-48
i-063fc5fe93b4167b2 ip-10-0-0-46
i-08bd39eb03a6fd2c7 ip-10-0-0-58
5.11. Installing network resource agents
For HA operations to work, the cluster uses AWS networking resource agents to enable failover functionality. If a node does not respond to a heartbeat check in a set amount of time, the node is fenced and operations fail over to an additional node in the cluster. Network resource agents need to be configured for this to work.
Add the two resources to the same group to enforce order and colocation constraints.
Create a secondary private IP resource and virtual IP resource
Complete the following procedure to add a secondary private IP address and create a virtual IP. You can complete this procedure from any node in the cluster.
Procedure
View the AWS Secondary Private IP Address resource agent (awsvip) description. This shows the options and default operations for this agent.
# pcs resource describe awsvip
Create the Secondary Private IP address using an unused private IP address in the VPC CIDR block.
# pcs resource create privip awsvip secondary_private_ip=Unused-IP-Address --group group-name
Example:
[root@ip-10-0-0-48 ~]# pcs resource create privip awsvip secondary_private_ip=10.0.0.68 --group networking-group
Create a virtual IP resource. This is a VPC IP address that can be rapidly remapped from the fenced node to the failover node, masking the failure of the fenced node within the subnet.
# pcs resource create vip IPaddr2 ip=secondary-private-IP --group group-name
Example:
[root@ip-10-0-0-48 ~]# pcs resource create vip IPaddr2 ip=10.0.0.68 --group networking-group
Verification
Verify that the resources are running.
# pcs status
Example:
[root@ip-10-0-0-48 ~]# pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Mar 2 22:34:24 2018 Last change: Fri Mar 2 22:14:58 2018 by root via cibadmin on ip-10-0-0-46 3 nodes configured 3 resources configured Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Resource Group: networking-group privip (ocf::heartbeat:awsvip): Started ip-10-0-0-48 vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-58 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
Create an elastic IP address
An elastic IP address is a public IP address that can be rapidly remapped from the fenced node to the failover node, masking the failure of the fenced node.
Note that this is different from the virtual IP resource created earlier. The elastic IP address is used for public-facing Internet connections instead of subnet connections.
- Add the two resources to the same group that was previously created to enforce order and colocation constraints.
Enter the following AWS CLI command to create an elastic IP address.
[root@ip-10-0-0-48 ~]# aws ec2 allocate-address --domain vpc --output text
eipalloc-4c4a2c45 vpc 35.169.153.122
View the AWS Secondary Elastic IP Address resource agent (awseip) description. The following command shows the options and default operations for this agent.
# pcs resource describe awseip
Create the Secondary Elastic IP address resource using the allocated IP address created in Step 1.
# pcs resource create elastic awseip elastic_ip=Elastic-IP-Address allocation_id=Elastic-IP-Association-ID --group networking-group
Example:
# pcs resource create elastic awseip elastic_ip=35.169.153.122 allocation_id=eipalloc-4c4a2c45 --group networking-group
Verification
Enter the pcs status command to verify that the resource is running.
# pcs status
Example:
[root@ip-10-0-0-58 ~]# pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-58 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Mon Mar 5 16:27:55 2018 Last change: Mon Mar 5 15:57:51 2018 by root via cibadmin on ip-10-0-0-46 3 nodes configured 4 resources configured Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Resource Group: networking-group privip (ocf::heartbeat:awsvip): Started ip-10-0-0-48 vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-48 elastic (ocf::heartbeat:awseip): Started ip-10-0-0-48 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled
Test the elastic IP address
Enter the following commands to verify the virtual IP (awsvip) and elastic IP (awseip) resources are working.
Procedure
Launch an SSH session from your local workstation to the elastic IP address previously created.
$ ssh -l ec2-user -i ~/.ssh/<KeyName>.pem elastic-IP
Example:
$ ssh -l ec2-user -i ~/.ssh/cluster-admin.pem 35.169.153.122
- Verify that the host you connected to via SSH is the host associated with the elastic resource created.
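For example, you can compare the host name reported over SSH with the node shown next to the elastic resource in the pcs status output. The following illustrative command reuses the key and elastic IP address from the earlier example.
$ ssh -l ec2-user -i ~/.ssh/cluster-admin.pem 35.169.153.122 hostname
The reported host name should match the cluster node that pcs status lists as running the elastic resource.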
Additional resources
5.12. Configuring shared block storage
To configure shared block storage for a Red Hat High Availability cluster with Amazon Elastic Block Storage (EBS) Multi-Attach volumes, use the following procedure. Note that this procedure is optional, and the steps below assume three instances (a three-node cluster) with a 1 TB shared disk.
Prerequisites
- You must be using an AWS Nitro System-based Amazon EC2 instance.
Procedure
Create a shared block volume using the AWS command create-volume.
$ aws ec2 create-volume --availability-zone <availability_zone> --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled
For example, the following command creates a volume in the us-east-1a availability zone.
$ aws ec2 create-volume --availability-zone us-east-1a --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled

{
    "AvailabilityZone": "us-east-1a",
    "CreateTime": "2020-08-27T19:16:42.000Z",
    "Encrypted": false,
    "Size": 1024,
    "SnapshotId": "",
    "State": "creating",
    "VolumeId": "vol-042a5652867304f09",
    "Iops": 51200,
    "Tags": [ ],
    "VolumeType": "io1"
}
Note: You need the VolumeId in the next step.
For each instance in your cluster, attach a shared block volume using the AWS command attach-volume. Use your <instance_id> and <volume_id>.
$ aws ec2 attach-volume --device /dev/xvdd --instance-id <instance_id> --volume-id <volume_id>
For example, the following command attaches a shared block volume vol-042a5652867304f09 to instance i-0eb803361c2c887f2.
$ aws ec2 attach-volume --device /dev/xvdd --instance-id i-0eb803361c2c887f2 --volume-id vol-042a5652867304f09

{
    "AttachTime": "2020-08-27T19:26:16.086Z",
    "Device": "/dev/xvdd",
    "InstanceId": "i-0eb803361c2c887f2",
    "State": "attaching",
    "VolumeId": "vol-042a5652867304f09"
}
Verification
For each instance in your cluster, verify that the block device is available by using the ssh command with your instance <ip_address>.
# ssh <ip_address> "hostname ; lsblk -d | grep ' 1T '"
For example, the following command lists details including the host name and block device for the instance IP 198.51.100.3.
# ssh 198.51.100.3 "hostname ; lsblk -d | grep ' 1T '"
nodea
nvme2n1 259:1    0   1T  0 disk
Use the ssh command to verify that each instance in your cluster uses the same shared disk.
# ssh <ip_address> "hostname ; lsblk -d | grep ' 1T ' | awk '{print \$1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='"
For example, the following command lists details including the host name and shared disk volume ID for the instance IP address 198.51.100.3.
# ssh 198.51.100.3 "hostname ; lsblk -d | grep ' 1T ' | awk '{print \$1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='"
nodea
E: ID_SERIAL=Amazon Elastic Block Store_vol0fa5342e7aedf09f7
Additional resources
5.13. Additional resources
Chapter 6. Deploying a Red Hat Enterprise Linux image as a Google Compute Engine instance on Google Cloud Platform
To deploy Red Hat Enterprise Linux 8 (RHEL 8) as a Google Compute Engine (GCE) instance on Google Cloud Platform (GCP), follow the information below. This chapter:
- Discusses your options for choosing an image
- Lists or refers to system requirements for your host system and virtual machine (VM)
- Provides procedures for creating a custom VM from an ISO image, uploading it to GCE, and launching an instance
For a list of Red Hat product certifications for GCP, see Red Hat on Google Cloud Platform.
You can create a custom VM from an ISO image, but Red Hat recommends that you use the Red Hat Image Builder product to create customized images for use on specific cloud providers. See Composing a Customized RHEL System Image for more information.
Prerequisites
- You need a Red Hat Customer Portal account to complete the procedures in this chapter.
- Create an account with GCP to access the Google Cloud Platform Console. See Google Cloud for more information.
6.1. Red Hat Enterprise Linux image options on GCP
The following table lists image choices for RHEL 8 on Google Cloud Platform and the differences in the image options.
Table 6.1. Image options
Image option | Subscriptions | Sample scenario | Considerations |
---|---|---|---|
Deploy a Red Hat Gold Image. | Use your existing Red Hat subscriptions. | Select a Red Hat Gold Image on Google Cloud Platform. For details on Gold Images and how to access them on Google Cloud Platform, see the Red Hat Cloud Access Reference Guide. | The subscription includes the Red Hat product cost; you pay Google for all other instance costs. Red Hat provides support directly for custom RHEL images. |
Deploy a custom image that you move to GCP. | Use your existing Red Hat subscriptions. | Upload your custom image and attach your subscriptions. | The subscription includes the Red Hat product cost; you pay all other instance costs. Red Hat provides support directly for custom RHEL images. |
Deploy an existing GCP image that includes RHEL. | The GCP images include a Red Hat product. | Choose a RHEL image when you launch an instance on the GCP Compute Engine, or choose an image from the Google Cloud Platform Marketplace. | You pay GCP hourly on a pay-as-you-go model. Such images are called "on-demand" images. GCP offers support for on-demand images through a support agreement. |
You can create a custom image for GCP using Red Hat Image Builder. See Composing a Customized RHEL System Image for more information.
You cannot convert an on-demand instance to a custom RHEL instance. To change from an on-demand image to a custom RHEL bring-your-own-subscription (BYOS) image:
- Create a new custom RHEL instance and migrate data from your on-demand instance.
- Cancel your on-demand instance after you migrate your data to avoid double billing.
Additional resources
6.2. Understanding base images
This section includes information about using preconfigured base images and their configuration settings.
6.2.1. Using a custom base image
To manually configure a virtual machine (VM), first create a base (starter) VM image. Then, you can modify configuration settings and add the packages the VM requires to operate on the cloud. You can make additional configuration changes for your specific application after you upload the image.
Additional resources
6.2.2. Virtual machine configuration settings
Cloud VMs must have the following configuration settings.
Table 6.2. VM configuration settings
Setting | Recommendation |
---|---|
ssh | ssh must be enabled to provide remote access to your VMs. |
dhcp | The primary virtual adapter should be configured for dhcp. |
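For example, you can confirm these settings on the base VM with the following commands. This is a minimal check that assumes the primary adapter is eth0 and is configured through an ifcfg file; adjust the interface name for your system.
# systemctl enable --now sshd
# grep BOOTPROTO /etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO=dhcp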
6.3. Creating a base VM from an ISO image
Follow the procedures in this section to create a RHEL 8 base image from an ISO image.
Prerequisites
- Virtualization is enabled on your host machine.
- You have downloaded the latest Red Hat Enterprise Linux ISO image from the Red Hat Customer Portal and moved the image to /var/lib/libvirt/images.
6.3.1. Creating a VM from the RHEL ISO image
Procedure
- Ensure that you have enabled your host machine for virtualization. See Enabling virtualization in RHEL 8 for information and procedures.
Create and start a basic Red Hat Enterprise Linux VM. For instructions, see Creating virtual machines.
If you use the command line to create your VM, ensure that you set the default memory and CPUs to the capacity you want for the VM. Set your virtual network interface to virtio.
For example, the following command creates a kvmtest VM using the /home/username/Downloads/rhel8.iso image:
# virt-install \
    --name kvmtest --memory 2048 --vcpus 2 \
    --cdrom /home/username/Downloads/rhel8.iso,bus=virtio \
    --os-variant=rhel8.0
If you use the web console to create your VM, follow the procedure in Creating virtual machines using the web console, with these caveats:
- Do not check Immediately Start VM.
- Change your Memory size to your preferred settings.
- Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and change your vCPUs to the capacity settings you want for the VM.
6.3.2. Completing the RHEL installation
Perform the following steps to complete the installation and to enable root access once the VM launches.
Procedure
- Choose the language you want to use during the installation process.
On the Installation Summary view:
- Click Software Selection and check Minimal Install.
- Click Done.
Click Installation Destination and check Custom under Storage Configuration.
- Verify at least 500 MB for /boot. You can use the remaining space for root /.
- Standard partitions are recommended, but you can use Logical Volume Management (LVM).
- You can use xfs, ext4, or ext3 for the file system.
- Click Done when you are finished with changes.
- Click Begin Installation.
- Set a Root Password. Create other users as applicable.
- Reboot the VM and log in as root once the installation completes. Configure the image.
Register the VM and enable the Red Hat Enterprise Linux 8 repository.
# subscription-manager register --auto-attach
Ensure that the cloud-init package is installed and enabled.
# yum install cloud-init
# systemctl enable --now cloud-init.service
- Power down the VM.
Additional resources
6.4. Uploading the RHEL image to GCP
To upload your RHEL 8 image to Google Cloud Platform (GCP), follow the procedures in this section.
6.4.1. Creating a new project on GCP
Complete the following steps to create a new project on Google Cloud Platform (GCP).
Prerequisites
- You must have an account with GCP. If you do not, see Google Cloud for more information.
Procedure
- Launch the GCP Console.
- Click the drop-down menu to the right of Google Cloud Platform.
- From the pop-up menu, click NEW PROJECT.
- From the New Project window, enter a name for your new project.
- Check Organization. Click the drop-down menu to change the organization, if necessary.
- Confirm the Location of your parent organization or folder. Click Browse to search for and change this value, if necessary.
Click CREATE to create your new GCP project.
Note: Once you have installed the Google Cloud SDK, you can use the gcloud projects create CLI command to create a project. For example:
# gcloud projects create my-gcp-project3 --name project3
The example creates a project with the project ID my-gcp-project3 and the project name project3. See gcloud projects create for more information.
Additional resources
6.4.2. Installing the Google Cloud SDK
Complete the following steps to install the Google Cloud SDK.
Procedure
- Follow the GCP instructions for downloading and extracting the Google Cloud SDK archive. See the GCP document Quickstart for Linux for details.
Follow the same instructions for initializing the Google Cloud SDK.
Note: Once you have initialized the Google Cloud SDK, you can use the gcloud CLI commands to perform tasks and obtain information about your project and instances. For example, you can display project information with the gcloud compute project-info describe --project <project-name> command.
Additional resources
6.4.3. Creating SSH keys for Google Compute Engine
Perform the following procedure to generate and register SSH keys with GCE so that you can SSH directly into an instance using its public IP address.
Procedure
Use the ssh-keygen command to generate an SSH key pair for use with GCE.
# ssh-keygen -t rsa -f ~/.ssh/google_compute_engine
- From the GCP Console Dashboard page, click the Navigation menu to the left of the Google Cloud Console banner and select Compute Engine and then select Metadata.
- Click SSH Keys and then click Edit.
Enter the output generated from the ~/.ssh/google_compute_engine.pub file and click Save.
You can now connect to your instance using standard SSH.
# ssh -i ~/.ssh/google_compute_engine <username>@<instance_external_ip>
You can run the gcloud compute config-ssh command to populate your config file with aliases for your instances. The aliases allow simple SSH connections by instance name. For information about the gcloud compute config-ssh command, see gcloud compute config-ssh.
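For example, after running the command you can connect by the generated alias instead of the external IP address. The instance name, zone, and project ID below are illustrative; the alias format is instance-name.zone.project-id.
$ gcloud compute config-ssh
$ ssh myinstance3.us-central1-a.my-gcp-project3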
Additional resources
6.4.4. Creating a storage bucket in GCP Storage
Importing to GCP requires a GCP Storage Bucket. Complete the following steps to create a bucket.
Procedure
If you are not already logged in to GCP, log in with the following command.
# gcloud auth login
Create a storage bucket.
# gsutil mb gs://bucket_name
Note: Alternatively, you can use the Google Cloud Console to create a bucket. See Create a bucket for information.
Additional resources
6.4.5. Converting and uploading your image to your GCP Bucket
Complete the following procedure to convert and upload your image to your GCP Bucket. The samples are representative; they convert a qcow2 image to raw format and then tar that image for upload.
Procedure
Run the qemu-img command to convert your image. The converted image must have the name disk.raw.
# qemu-img convert -f qcow2 -O raw rhel-8.0-sample.qcow2 disk.raw
Tar the image.
# tar --format=oldgnu -Sczf disk.raw.tar.gz disk.raw
Upload the image to the bucket you created previously. The upload could take a few minutes.
# gsutil cp disk.raw.tar.gz gs://bucket_name
- From the Google Cloud Platform home screen, click the collapsed menu icon and select Storage and then select Browser.
Click the name of your bucket.
The tarred image is listed under your bucket name.
Note: You can also upload your image using the GCP Console. To do so, click the name of your bucket and then click Upload files.
Additional resources
6.4.6. Creating an image from the object in the GCP bucket
Perform the following procedure to create an image from the object in your GCP bucket.
Procedure
Run the following command to create an image for GCE. Specify the name of the image you are creating, the bucket name, and the name of the tarred image.
# gcloud compute images create my-image-name --source-uri gs://my-bucket-name/disk.raw.tar.gz
Note: Alternatively, you can use the Google Cloud Console to create an image. See Creating, deleting, and deprecating custom images for more information.
Optionally, find the image in the GCP Console.
- Click the Navigation menu to the left of the Google Cloud Console banner.
- Select Compute Engine and then Images.
Additional resources
6.4.7. Creating a Google Compute Engine instance from an image
Complete the following steps to configure a GCE VM instance using the GCP Console.
The following procedure provides instructions for creating a basic VM instance using the GCP Console. See Creating and starting a VM instance for more information about GCE VM instances and their configuration options.
Procedure
- From the GCP Console Dashboard page, click the Navigation menu to the left of the Google Cloud Console banner and select Compute Engine and then select Images.
- Select your image.
- Click Create Instance.
- On the Create an instance page, enter a Name for your instance.
- Choose a Region and Zone.
- Choose a Machine configuration that meets or exceeds the requirements of your workload.
- Ensure that Boot disk specifies the name of your image.
- Optionally, under Firewall, select Allow HTTP traffic or Allow HTTPS traffic.
Click Create.
Note: These are the minimum configuration options necessary to create a basic instance. Review additional options based on your application requirements.
- Find your image under VM instances.
From the GCP Console Dashboard, click the Navigation menu to the left of the Google Cloud Console banner and select Compute Engine and then select VM instances.
Note: Alternatively, you can use the gcloud compute instances create CLI command to create a GCE VM instance from an image. A simple example follows.
gcloud compute instances create myinstance3 --zone=us-central1-a --image test-iso2-image
The example creates a VM instance named myinstance3 in zone us-central1-a based upon the existing image test-iso2-image. See gcloud compute instances create for more information.
6.4.8. Connecting to your instance
Perform the following procedure to connect to your GCE instance using its public IP address.
Procedure
Run the following command to ensure that your instance is running. The command lists information about your GCE instance, including whether the instance is running, and, if so, the public IP address of the running instance.
# gcloud compute instances list
Connect to your instance using standard SSH. The example uses the google_compute_engine key created earlier.
# ssh -i ~/.ssh/google_compute_engine <user_name>@<instance_external_ip>
Note: GCP offers a number of ways to SSH into your instance. See Connecting to instances for more information. You can also connect to your instance using the root account and password you set previously.
Additional resources
6.4.9. Attaching Red Hat subscriptions
To attach your Red Hat subscription to a RHEL instance, use the following steps.
Prerequisites
- You must have enabled your subscriptions.
Procedure
Register your system.
# subscription-manager register --auto-attach
Attach your subscriptions.
- You can use an activation key to attach subscriptions. See Creating Red Hat Customer Portal Activation Keys for more information.
- Alternatively, you can manually attach a subscription using the ID of the subscription pool (Pool ID). See Attaching and Removing Subscriptions Through the Command Line.
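For example, to attach a subscription manually, you can list the available subscription pools and then attach one by its pool ID. The pool ID shown below is a placeholder.
# subscription-manager list --available
# subscription-manager attach --pool=<pool_id>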
6.5. Additional resources
Chapter 7. Configuring Red Hat High Availability Cluster on Google Cloud Platform
To configure a Red Hat High Availability (HA) cluster on Google Cloud Platform (GCP) using Google Compute Engine (GCE) virtual machine (VM) instances as cluster nodes, see the following sections.
These provide information on:
- Prerequisite procedures for setting up your environment for GCP. Once you have set up your environment, you can create and configure VM instances.
- Procedures specific to the creation of HA clusters, which transform individual nodes into a cluster of HA nodes on GCP. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing network resource agents.
Prerequisites
- Red Hat Enterprise Linux 8 Server: rhel-8-server-rpms/8Server/x86_64
- Red Hat Enterprise Linux 8 Server (High Availability): rhel-8-server-ha-rpms/8Server/x86_64
- You must belong to an active GCP project and have sufficient permissions to create resources in the project.
- Your project should have a service account that belongs to a VM instance and not an individual user. See Using the Compute Engine Default Service Account for information about using the default service account instead of creating a separate service account.
If you or your project administrator create a custom service account, the service account should be configured for the following roles.
- Cloud Trace Agent
- Compute Admin
- Compute Network Admin
- Cloud Datastore User
- Logging Admin
- Monitoring Editor
- Monitoring Metric Writer
- Service Account Administrator
- Storage Admin
7.1. The benefits of using high-availability clusters on public cloud platforms
A high-availability (HA) cluster is a set of computers (called nodes) that are linked together to run a specific workload. The purpose of HA clusters is to provide redundancy in case of a hardware or software failure. If a node in the HA cluster fails, the Pacemaker cluster resource manager distributes the workload to other nodes and no noticeable downtime occurs in the services that are running on the cluster.
You can also run HA clusters on public cloud platforms. In this case, you would use virtual machine (VM) instances in the cloud as the individual cluster nodes. Using HA clusters on a public cloud platform has the following benefits:
- Improved availability: In case of a VM failure, the workload is quickly redistributed to other nodes, so running services are not disrupted.
- Scalability: Additional nodes can be started when demand is high and stopped when demand is low.
- Cost-effectiveness: With the pay-as-you-go pricing, you pay only for nodes that are running.
- Simplified management: Some public cloud platforms offer management interfaces to make configuring HA clusters easier.
To enable HA on your Red Hat Enterprise Linux (RHEL) systems, Red Hat offers a High Availability Add-On. The High Availability Add-On provides all necessary components for creating HA clusters on RHEL systems. The components include high availability service management and cluster administration tools.
Additional resources
7.2. Required system packages
To create and configure a base image of RHEL, your host system must have the following packages installed.
Table 7.1. System packages
Package | Repository | Description |
---|---|---|
libvirt | rhel-8-for-x86_64-appstream-rpms | Open source API, daemon, and management tool for managing platform virtualization |
virt-install | rhel-8-for-x86_64-appstream-rpms | A command-line utility for building VMs |
libguestfs | rhel-8-for-x86_64-appstream-rpms | A library for accessing and modifying VM file systems |
libguestfs-tools | rhel-8-for-x86_64-appstream-rpms | System administration tools for VMs; includes the guestfish utility |
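For example, you can install all of the packages listed above with a single command:
# yum install -y libvirt virt-install libguestfs libguestfs-tools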
7.3. Red Hat Enterprise Linux image options on GCP
The following table lists image choices for RHEL 8 on Google Cloud Platform and the differences in the image options.
Table 7.2. Image options
Image option | Subscriptions | Sample scenario | Considerations |
---|---|---|---|
Deploy a Red Hat Gold Image. | Use your existing Red Hat subscriptions. | Select a Red Hat Gold Image on Google Cloud Platform. For details on Gold Images and how to access them on Google Cloud Platform, see the Red Hat Cloud Access Reference Guide. | The subscription includes the Red Hat product cost; you pay Google for all other instance costs. Red Hat provides support directly for custom RHEL images. |
Deploy a custom image that you move to GCP. | Use your existing Red Hat subscriptions. | Upload your custom image and attach your subscriptions. | The subscription includes the Red Hat product cost; you pay all other instance costs. Red Hat provides support directly for custom RHEL images. |
Deploy an existing GCP image that includes RHEL. | The GCP images include a Red Hat product. | Choose a RHEL image when you launch an instance on the GCP Compute Engine, or choose an image from the Google Cloud Platform Marketplace. | You pay GCP hourly on a pay-as-you-go model. Such images are called "on-demand" images. GCP offers support for on-demand images through a support agreement. |
You can create a custom image for GCP using Red Hat Image Builder. See Composing a Customized RHEL System Image for more information.
You cannot convert an on-demand instance to a custom RHEL instance. To change from an on-demand image to a custom RHEL bring-your-own-subscription (BYOS) image:
- Create a new custom RHEL instance and migrate data from your on-demand instance.
- Cancel your on-demand instance after you migrate your data to avoid double billing.
Additional resources
7.4. Installing the Google Cloud SDK
Complete the following steps to install the Google Cloud SDK.
Procedure
- Follow the GCP instructions for downloading and extracting the Google Cloud SDK archive. See the GCP document Quickstart for Linux for details.
Follow the same instructions for initializing the Google Cloud SDK.
Note: Once you have initialized the Google Cloud SDK, you can use the gcloud CLI commands to perform tasks and obtain information about your project and instances. For example, you can display project information with the gcloud compute project-info describe --project <project-name> command.
Additional resources
7.5. Creating a GCP image bucket
This section includes the minimum requirements for creating a multi-regional bucket in your default location.
Prerequisites
- GCP storage utility (gsutil)
Procedure
If you are not already logged in to Google Cloud Platform, log in with the following command.
# gcloud auth login
Create a storage bucket.
$ gsutil mb gs://BucketName
Example:
$ gsutil mb gs://rhel-ha-bucket
Additional resources
7.6. Creating a custom virtual private cloud network and subnet
Complete the following steps to create a custom virtual private cloud (VPC) network and subnet.
Procedure
- Launch the GCP Console.
- Select VPC networks under Networking in the left navigation pane.
- Click Create VPC Network.
- Enter a name for the VPC network.
- Under New subnet, create a Custom subnet in the region where you want to create the cluster.
- Click Create.
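Alternatively, you can create the same network and subnet from the CLI. The network name projectVPC and subnet name range0 below match the names used later in this chapter; the region and IP range are illustrative assumptions, so substitute your own values.
$ gcloud compute networks create projectVPC --subnet-mode=custom
$ gcloud compute networks subnets create range0 --network=projectVPC --region=us-west1 --range=10.10.10.0/24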
7.7. Preparing and importing a base GCP image
Complete the following steps to prepare a Red Hat Enterprise Linux 8 image for GCP.
Procedure
Convert the file. Images uploaded to GCP must be in raw format and named disk.raw.
$ qemu-img convert -f qcow2 ImageName.qcow2 -O raw disk.raw
Compress the raw file. Images uploaded to GCP must be compressed.
$ tar -Sczf ImageName.tar.gz disk.raw
Import the compressed image to the bucket created earlier.
$ gsutil cp ImageName.tar.gz gs://BucketName
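For example, the following illustrative sequence converts a local qcow2 image, compresses it, and uploads it to the user-rhelha bucket referenced in the next section; the local file names are assumptions, so substitute your own.
$ qemu-img convert -f qcow2 rhel-server.qcow2 -O raw disk.raw
$ tar -Sczf rhel-server-76.tar.gz disk.raw
$ gsutil cp rhel-server-76.tar.gz gs://user-rhelha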
7.8. Creating and configuring a base GCP instance
Complete the following steps to create and configure a GCP instance that complies with GCP operating and security requirements.
Procedure
Create an image from the compressed file in the bucket.
$ gcloud compute images create BaseImageName --source-uri gs://BucketName/BaseImageName.tar.gz
Example:
[admin@localhost ~] $ gcloud compute images create rhel-76-server --source-uri gs://user-rhelha/rhel-server-76.tar.gz
Created [https://www.googleapis.com/compute/v1/projects/MyProject/global/images/rhel-server-76].
NAME            PROJECT                 FAMILY  DEPRECATED  STATUS
rhel-76-server  rhel-ha-testing-on-gcp                      READY
Create a template instance from the image. The minimum size required for a base RHEL instance is n1-standard-2. See gcloud compute instances create for additional configuration options.
$ gcloud compute instances create BaseInstanceName --can-ip-forward --machine-type n1-standard-2 --image BaseImageName --service-account ServiceAccountEmail
Example:
[admin@localhost ~] $ gcloud compute instances create rhel-76-server-base-instance --can-ip-forward --machine-type n1-standard-2 --image rhel-76-server --service-account account@project-name-on-gcp.iam.gserviceaccount.com
Created [https://www.googleapis.com/compute/v1/projects/rhel-ha-testing-on-gcp/zones/us-east1-b/instances/rhel-76-server-base-instance].
NAME                          ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
rhel-76-server-base-instance  us-east1-b  n1-standard-2               10.10.10.3   192.227.54.211  RUNNING
Connect to the instance with an SSH terminal session.
$ ssh root@PublicIPaddress
Update the RHEL software.
- Register with Red Hat Subscription Manager (RHSM).
- Enable a Subscription Pool ID (or use the --auto-attach command).
Disable all repositories.
# subscription-manager repos --disable=*
Enable the following repository.
# subscription-manager repos --enable=rhel-8-server-rpms
Run the yum update command.
# yum update -y
Install the GCP Linux Guest Environment on the running instance (in-place installation).
See Install the guest environment in-place for instructions.
- Select the CentOS/RHEL option.
- Copy the command script and paste it at the command prompt to run the script immediately.
Make the following configuration changes to the instance. These changes are based on GCP recommendations for custom images. See gcloud compute images list for more information.
- Edit the /etc/chrony.conf file and remove all NTP servers. Add the following NTP server.
metadata.google.internal iburst Google NTP server
Remove any persistent network device rules.
# rm -f /etc/udev/rules.d/70-persistent-net.rules
# rm -f /etc/udev/rules.d/75-persistent-net-generator.rules
Set the network service to start automatically.
# chkconfig network on
Set the sshd service to start automatically.
# systemctl enable sshd
# systemctl is-enabled sshd
Set the time zone to UTC.
# ln -sf /usr/share/zoneinfo/UTC /etc/localtime
(Optional) Edit the /etc/ssh/ssh_config file and add the following lines to the end of the file. This keeps your SSH session active during longer periods of inactivity.
# Server times out connections after several minutes of inactivity.
# Keep alive ssh connections by sending a packet every 7 minutes.
ServerAliveInterval 420
Edit the /etc/ssh/sshd_config file and make the following changes, if necessary. The ClientAliveInterval 420 setting is optional; this keeps your SSH session active during longer periods of inactivity.
PermitRootLogin no
PasswordAuthentication no
AllowTcpForwarding yes
X11Forwarding no
PermitTunnel no

# Compute times out connections after 10 minutes of inactivity.
# Keep ssh connections alive by sending a packet every 7 minutes.
ClientAliveInterval 420
- Disable password access. Edit the /etc/cloud/cloud.cfg file and change ssh_pwauth from 1 to 0.
ssh_pwauth: 0
Important: Previously, you enabled password access to allow SSH session access to configure the instance. You must disable password access. All SSH session access must be passwordless.
Unregister the instance from the subscription manager.
# subscription-manager unregister
Clean the shell history. Keep the instance running for the next procedure.
# export HISTSIZE=0
7.9. Creating a snapshot image
Complete the following steps to preserve the instance configuration settings and create a snapshot.
Procedure
On the running instance, synchronize data to disk.
# sync
On your host system, create the snapshot.
$ gcloud compute disks snapshot InstanceName --snapshot-names SnapshotName
On your host system, create the configured image from the snapshot.
$ gcloud compute images create ConfiguredImageFromSnapshot --source-snapshot SnapshotName
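For example, the following illustrative commands snapshot the boot disk of the base instance created earlier and build a configured image from it. This assumes the boot disk shares the instance name (the GCP default) and uses a hypothetical snapshot name.
$ gcloud compute disks snapshot rhel-76-server-base-instance --snapshot-names rhel-base-snapshot
$ gcloud compute images create rhel-81-gcp-image --source-snapshot rhel-base-snapshot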
Additional resources
7.10. Creating an HA node template instance and HA nodes
Once you have configured an image from the snapshot, you can create a node template. Use this template to create all HA nodes. Complete the following steps to create the template and HA nodes.
Procedure
Create an instance template.
$ gcloud compute instance-templates create InstanceTemplateName --can-ip-forward --machine-type n1-standard-2 --image ConfiguredImageFromSnapshot --service-account ServiceAccountEmailAddress
Example:
[admin@localhost ~] $ gcloud compute instance-templates create rhel-81-instance-template --can-ip-forward --machine-type n1-standard-2 --image rhel-81-gcp-image --service-account account@project-name-on-gcp.iam.gserviceaccount.com
Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/global/instanceTemplates/rhel-81-instance-template].
NAME                       MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
rhel-81-instance-template  n1-standard-2               2018-07-25T11:09:30.506-07:00
Create multiple nodes in one zone.
# gcloud compute instances create NodeName01 NodeName02 --source-instance-template InstanceTemplateName --zone RegionZone --network=NetworkName --subnet=SubnetName
Example:
[admin@localhost ~] $ gcloud compute instances create rhel81-node-01 rhel81-node-02 rhel81-node-03 --source-instance-template rhel-81-instance-template --zone us-west1-b --network=projectVPC --subnet=range0
Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/zones/us-west1-b/instances/rhel81-node-01].
Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/zones/us-west1-b/instances/rhel81-node-02].
Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/zones/us-west1-b/instances/rhel81-node-03].
NAME            ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
rhel81-node-01  us-west1-b  n1-standard-2               10.10.10.4   192.230.25.81   RUNNING
rhel81-node-02  us-west1-b  n1-standard-2               10.10.10.5   192.230.81.253  RUNNING
rhel81-node-03  us-east1-b  n1-standard-2               10.10.10.6   192.230.102.15  RUNNING
7.11. Installing HA packages and agents
Complete the following steps on all nodes.
Procedure
- In the Google Cloud Console, select Compute Engine and then select VM instances.
- Select the instance, click the arrow next to SSH, and select the View gcloud command option.
- Paste this command at a command prompt for passwordless access to the instance.
- Enable sudo account access and register with Red Hat Subscription Manager.
- Enable a Subscription Pool ID (or use the --auto-attach command).
Disable all repositories.
# subscription-manager repos --disable=*
Enable the following repositories.
# subscription-manager repos --enable=rhel-8-server-rpms
# subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
Install pcs, pacemaker, the fence agents, and the resource agents.
# yum install -y pcs pacemaker fence-agents-gce resource-agents-gcp
Update all packages.
# yum update -y
7.12. Configuring HA services
Complete the following steps on all nodes to configure HA services.
Procedure
The user hacluster was created during the pcs and pacemaker installation in the previous step. Create a password for the user hacluster on all cluster nodes. Use the same password for all nodes.
# passwd hacluster
If the firewalld service is installed, add the HA service.
# firewall-cmd --permanent --add-service=high-availability
# firewall-cmd --reload
Start the pcsd service and enable it to start on boot.
# systemctl start pcsd.service
# systemctl enable pcsd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
Verification
Ensure the pcsd service is running.
# systemctl status pcsd.service
pcsd.service - PCS GUI and remote configuration interface
Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-06-25 19:21:42 UTC; 15s ago
Docs: man:pcsd(8)
      man:pcs(8)
Main PID: 5901 (pcsd)
CGroup: /system.slice/pcsd.service
        └─5901 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null &
- Edit the /etc/hosts file. Add RHEL host names and internal IP addresses for all nodes, as shown in the example below.
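For example, using the internal IP addresses from the node creation output earlier in this chapter, the /etc/hosts entries might look like the following; substitute your own host names and addresses.
10.10.10.4 rhel81-node-01
10.10.10.5 rhel81-node-02
10.10.10.6 rhel81-node-03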
Additional resources
7.13. Creating a cluster
Complete the following steps to create the cluster of nodes.
Procedure
On one of the nodes, authenticate the pcs user. Specify the name of each node in the cluster in the command.
# pcs host auth hostname1 hostname2 hostname3
Username: hacluster
Password:
hostname1: Authorized
hostname2: Authorized
hostname3: Authorized
Create the cluster.
# pcs cluster setup cluster-name hostname1 hostname2 hostname3
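For example, using the node names created earlier in this chapter and the cluster name shown in the fencing verification output, the command might look like the following; the names are illustrative.
# pcs cluster setup gcp-cluster rhel81-node-01 rhel81-node-02 rhel81-node-03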
Verification
Run the following command to enable nodes to join the cluster automatically when started.
# pcs cluster enable --all
Start the cluster.
# pcs cluster start --all
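You can then confirm that all nodes show as online before continuing; the exact output varies with your configuration.
# pcs status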
7.14. Creating a fencing device
Complete the following steps to create a fencing device.
Note that for most default configurations, the GCP instance names and the RHEL host names are identical.
Procedure
Obtain GCP instance names. Note that the output of the following command also shows the internal ID for the instance.
# fence_gce --zone us-west1-b --project=rhel-ha-on-gcp -o list
Example:
[root@rhel81-node-01 ~]# fence_gce --zone us-west1-b --project=rhel-ha-testing-on-gcp -o list
4435801234567893181,InstanceName-3
4081901234567896811,InstanceName-1
7173601234567893341,InstanceName-2
Create a fence device.
# pcs stonith create FenceDeviceName fence_gce zone=Region-Zone project=MyProject
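For example, using the zone and project from the previous step, the command might look like the following. The device name us-west1-b-fence matches the fence resource shown in the verification output below; the values are illustrative.
# pcs stonith create us-west1-b-fence fence_gce zone=us-west1-b project=rhel-ha-testing-on-gcp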
Verification
Verify that the fence devices started.
# pcs status
Example:
[root@rhel81-node-01 ~]# pcs status
Cluster name: gcp-cluster
Stack: corosync
Current DC: rhel81-node-02 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Fri Jul 27 12:53:25 2018
Last change: Fri Jul 27 12:51:43 2018 by root via cibadmin on rhel81-node-01

3 nodes configured
3 resources configured

Online: [ rhel81-node-01 rhel81-node-02 rhel81-node-03 ]

Full list of resources:

us-west1-b-fence    (stonith:fence_gce):    Started rhel81-node-01

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
7.15. Configuring GCP node authorization
Configure cloud SDK tools to use your account credentials to access GCP.
Procedure
Enter the following command on each node to initialize each node with your project ID and account credentials.
# gcloud-ra init
7.16. Configuring the gcp-vpc-move-vip resource agent
The gcp-vpc-move-vip resource agent attaches a secondary IP address (alias IP) to a running instance. This is a floating IP address that can be passed between different nodes in the cluster.
To show more information about this resource:
# pcs resource describe gcp-vpc-move-vip
You can configure the resource agent to use a primary subnet address range or a secondary subnet address range:
Primary subnet address range
Complete the following steps to configure the resource for the primary VPC subnet.
Procedure
Create the aliasip resource. Include an unused internal IP address. Include the CIDR block in the command.
# pcs resource create aliasip gcp-vpc-move-vip alias_ip=UnusedIPaddress/CIDRblock
Example:
[root@rhel81-node-01 ~]# pcs resource create aliasip gcp-vpc-move-vip alias_ip=10.10.10.200/32
Create an IPaddr2 resource for managing the IP on the node.
# pcs resource create vip IPaddr2 nic=interface ip=AliasIPaddress cidr_netmask=32
Example:
[root@rhel81-node-01 ~]# pcs resource create vip IPaddr2 nic=eth0 ip=10.10.10.200 cidr_netmask=32
Group the network resources under vipgrp.
# pcs resource group add vipgrp aliasip vip
Verification
Verify that the resources have started and are grouped under vipgrp.
# pcs status
Verify that the resource can move to a different node.
# pcs resource move vip Node
Example:
[root@rhel81-node-01 ~]# pcs resource move vip rhel81-node-03
Verify that the vip successfully started on a different node.
# pcs status
Secondary subnet address range
Complete the following steps to configure the resource for a secondary subnet address range.
Prerequisites
Procedure
Create a secondary subnet address range.
# gcloud-ra compute networks subnets update SubnetName --region RegionName --add-secondary-ranges SecondarySubnetName=SecondarySubnetRange
Example:
# gcloud-ra compute networks subnets update range0 --region us-west1 --add-secondary-ranges range1=10.10.20.0/24
Create the aliasip resource. Use an unused internal IP address in the secondary subnet address range. Include the CIDR block in the command.
# pcs resource create aliasip gcp-vpc-move-vip alias_ip=UnusedIPaddress/CIDRblock
Example:
[root@rhel81-node-01 ~]# pcs resource create aliasip gcp-vpc-move-vip alias_ip=10.10.20.200/32
Create an IPaddr2 resource for managing the IP on the node.
# pcs resource create vip IPaddr2 nic=interface ip=AliasIPaddress cidr_netmask=32
Example:
[root@rhel81-node-01 ~]# pcs resource create vip IPaddr2 nic=eth0 ip=10.10.20.200 cidr_netmask=32
Group the network resources under vipgrp.
# pcs resource group add vipgrp aliasip vip
Verification
Verify that the resources have started and are grouped under vipgrp.
# pcs status
Verify that the resource can move to a different node.
# pcs resource move vip Node
Example:
[root@rhel81-node-01 ~]# pcs resource move vip rhel81-node-03
Verify that the vip successfully started on a different node.
# pcs status