
Deploying Red Hat Enterprise Linux 8 on public cloud platforms

Red Hat Enterprise Linux 8

Creating custom Red Hat Enterprise Linux images and configuring a Red Hat High Availability cluster for public cloud platforms

Red Hat Customer Content Services

Abstract

You can create and deploy custom Red Hat Enterprise Linux images to various cloud platforms, including Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
You can also create and configure a Red Hat High Availability cluster on each cloud platform. This document describes two choices for creating images: Cloud Access images and on-demand (marketplace) images. It includes procedures for creating HA clusters, including installing required packages and agents, configuring fencing, and installing network resource agents.
Each cloud provider has its own chapter that describes creating and deploying a custom image. There is also a separate chapter for configuring HA clusters for each cloud provider.

Providing feedback on Red Hat documentation

We appreciate your input on our documentation. Please let us know how we could make it better. To do so:

  • For simple comments on specific passages:

    1. Make sure you are viewing the documentation in the Multi-page HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document.
    2. Use your mouse cursor to highlight the part of text that you want to comment on.
    3. Click the Add Feedback pop-up that appears below the highlighted text.
    4. Follow the displayed instructions.
  • For submitting more complex feedback, create a Bugzilla ticket:

    1. Go to the Bugzilla website.
    2. As the Component, use Documentation.
    3. Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation.
    4. Click Submit Bug.

Chapter 1. Deploying a Red Hat Enterprise Linux 8 image as a virtual machine on Microsoft Azure

You have a number of options for deploying a Red Hat Enterprise Linux (RHEL) 8 image on Azure. This chapter discusses your options for choosing an image and lists or refers to system requirements for your host system and virtual machine (VM). This chapter also provides procedures for creating a custom image, uploading it to Azure, and launching an Azure VM instance.

This chapter refers to the Azure documentation in a number of places. For many procedures, see the referenced Azure documentation for additional detail.

Note

For a list of Red Hat products that you can use securely on Azure, see Red Hat on Microsoft Azure.

Prerequisites

  • Sign up for a Red Hat Customer Portal account.
  • Sign up for a Microsoft Azure account.
  • Enable your subscriptions in the Red Hat Cloud Access program. The Red Hat Cloud Access program allows you to move your Red Hat subscriptions from physical or on-premises systems to Azure with full support from Red Hat.

1.1. Red Hat Enterprise Linux image options on Azure

The following table lists image choices and notes the differences in the image options.

Table 1.1. Image options

Image option: Choose to deploy a Red Hat Gold Image.
Subscriptions: Leverage your existing Red Hat subscriptions.
Sample scenario: Enable subscriptions through the Red Hat Cloud Access program, and then choose a Red Hat Gold Image on Azure. See the Red Hat Cloud Access Reference Guide for details on Gold Images and how to access them on Azure.
Considerations: The subscription includes the Red Hat product cost, and you pay Microsoft for all other instance costs. Red Hat Gold Images are called "Cloud Access" images because you leverage your existing Red Hat subscriptions. Red Hat provides support directly for Cloud Access images.

Image option: Choose to deploy a custom image that you move to Azure.
Subscriptions: Leverage your existing Red Hat subscriptions.
Sample scenario: Enable subscriptions through the Red Hat Cloud Access program, upload your custom image, and attach your subscriptions.
Considerations: The subscription includes the Red Hat product cost, and you pay Microsoft for all other instance costs. Custom images that you move to Azure are "Cloud Access" images because you leverage your existing Red Hat subscriptions. Red Hat provides support directly for Cloud Access images.

Image option: Choose to deploy an existing Azure image that includes RHEL.
Subscriptions: The Azure images include a Red Hat product.
Sample scenario: Choose a RHEL image when you create a VM using the Azure console, or choose a VM from the Azure Marketplace.
Considerations: You pay Microsoft hourly on a pay-as-you-go model. Such images are called "on-demand." Azure provides support for on-demand images through a support agreement. Red Hat provides updates to the images. Azure makes the updates available through the Red Hat Update Infrastructure (RHUI).

Note

You can create a custom image for Azure using Red Hat Image Builder. See Composing a Customized RHEL System Image for more information.
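
For example, Image Builder can produce an Azure-compatible .vhd image directly from a blueprint by using the composer-cli tool. The blueprint name below is an assumption for illustration; substitute the name of a blueprint that you have already pushed.

    # composer-cli compose start <blueprint-name> vhd
    # composer-cli compose status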

The remainder of this chapter includes information and procedures pertaining to Red Hat Enterprise Linux custom images.

1.2. Understanding base images

This section includes information on using preconfigured base images and their configuration settings.

1.2.1. Using a custom base image

To manually configure a VM, you start with a base (starter) VM image. Once you have created the base VM image, you can modify configuration settings and add the packages the VM requires to operate on the cloud. You can make additional configuration changes for your specific application after you upload the image.

To prepare a KVM cloud image of RHEL, follow the instructions below. To prepare a Hyper-V cloud image of RHEL, see the Microsoft Documentation.

The recommended base VM image to use for all public cloud platforms is the Red Hat Enterprise Linux 8 KVM Guest Image, which you download from the Red Hat Customer Portal. The KVM Guest Image is preconfigured with the following cloud configuration settings.

  • The root account is disabled. You temporarily enable root account access to make configuration changes and install packages that the cloud may require. This guide provides instructions for enabling root account access.
  • The image has cloud-init installed and enabled. cloud-init is a service that handles provisioning of the VM (or instance) at initial boot.

You can choose to use a custom Red Hat Enterprise Linux ISO image; however, when using a custom ISO image, you may need to make additional configuration changes.

Additional resources

Red Hat Enterprise Linux

1.2.2. Required system packages

The procedures in this chapter assume you are using a host system running Red Hat Enterprise Linux. To successfully complete the procedures, your host system must have the following packages installed.

Table 1.2. System packages

Package: libvirt
Repository: rhel-8-for-x86_64-appstream-rpms
Description: Open source API, daemon, and management tool for managing platform virtualization

Package: virt-install
Repository: rhel-8-for-x86_64-appstream-rpms
Description: A command-line utility for building VMs

Package: libguestfs
Repository: rhel-8-for-x86_64-appstream-rpms
Description: A library for accessing and modifying VM file systems

Package: libguestfs-tools
Repository: rhel-8-for-x86_64-appstream-rpms
Description: System administration tools for VMs; includes the guestfish utility
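
For example, on a registered RHEL 8 host with the AppStream repository enabled, you can install all of the packages in the table with a single command:

    # yum install -y libvirt virt-install libguestfs libguestfs-tools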

1.2.3. Azure virtual machine configuration settings

Azure VMs must have the following configuration settings. Some of these settings are enabled during the initial VM creation. Other settings are set when provisioning the VM image for Azure. Keep these settings in mind as you move through the procedures, and refer back to them if you need to.

Table 1.3. VM configuration settings

Setting: ssh
Recommendation: ssh must be enabled to provide remote access to your Azure VMs.

Setting: dhcp
Recommendation: The primary virtual adapter should be configured for dhcp (IPv4 only).

Setting: Swap Space
Recommendation: Do not create a dedicated swap file or swap partition. You can configure swap space with the Windows Azure Linux Agent (WALinuxAgent).

Setting: NIC
Recommendation: Choose virtio for the primary virtual network adapter.

Setting: encryption
Recommendation: For custom images, use Network Bound Disk Encryption (NBDE) for full disk encryption on Azure.

1.2.4. Creating a base image from a KVM Guest Image

Red Hat and the open source community continually optimize the KVM Guest Image for virtualized environments. Once you have configured the image, you can use the image as a template for creating additional VM instances.

Procedure

  1. Download the latest Red Hat Enterprise Linux 8 KVM Guest Image from the Red Hat Customer Portal.
  2. Ensure that you have enabled your host machine for virtualization. See Enabling virtualization in RHEL 8 for information and procedures.
  3. Create and start a basic Red Hat Enterprise Linux VM. See Creating virtual machines for instructions.

    1. If you use the command line to create your VM, ensure that you set the default memory and CPUs to the capacity you want for the VM. Set your virtual network interface to virtio.

      A basic command-line sample follows.

      virt-install --name kvmtest --memory 2048 --vcpus 2 --disk rhel-8.0-x86_64-kvm.qcow2,bus=virtio --import --os-variant=rhel8.0
    2. If you use the web console to create your VM, follow the procedure in Creating virtual machines using the web console, with these caveats:

      • Do not check Immediately Start VM.
      • Change your Memory size to your preferred settings.
      • Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and changed your vCPUs to the capacity settings you want for the VM.
  4. Shut down the new VM after a login prompt appears.
  5. Set up root access to the VM. From your system, use the virt-customize command to generate a root password for the VM.

    # virt-customize -a <guest-image-path> --root-password password:<PASSWORD>

    Example:

    # virt-customize -a /var/lib/libvirt/images/rhel-guest-image-8.0-120.x86_64.qcow2 --root-password password:redhat!
    [   0.0] Examining the guest ...
    [ 103.0] Setting a random seed
    [ 103.0] Setting passwords
    [ 112.0] Finishing off
  6. Verify root access by starting the RHEL VM and logging in as root.
  7. Once you log in as root, you can configure the image.
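
For example, assuming the VM name kvmtest from the earlier virt-install sample, you can start the VM and attach to its console to perform the root login check described above:

    # virsh start kvmtest
    # virsh console kvmtest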

1.2.5. Creating a base image from an ISO image

The following procedure lists the steps and initial configuration requirements for creating a custom ISO image. Once you have configured the image, you can use the image as a template for creating additional VM instances.

Procedure

  1. Download the latest Red Hat Enterprise Linux 8 Binary DVD ISO image from the Red Hat Customer Portal.
  2. Ensure that you have enabled your host machine for virtualization. See Enabling virtualization in RHEL 8 for information and procedures.
  3. Create and start a basic Red Hat Enterprise Linux VM. See Creating virtual machines for instructions.

    1. If you use the command line to create your VM, ensure that you set the default memory and CPUs to the capacity you want for the VM. Set your virtual network interface to virtio.

      A basic command-line sample follows.

      virt-install --name isotest --memory 2048 --vcpus 2 --disk size=8,bus=virtio --location rhel-8.0-x86_64-dvd.iso --os-variant=rhel8.0
    2. If you use the web console to create your VM, follow the procedure in Creating virtual machines using the web console, with these caveats:

      • Do not check Immediately Start VM.
      • Change your Memory and Storage Size to your preferred settings.
      • Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and changed your vCPUs to the capacity settings you want for the VM.
  4. Review the following additional installation selections and modifications.

    • Select Minimal Install with the standard RHEL option.
    • For Installation Destination, select Custom Storage Configuration. Use the following configuration information to make your selections.

      • Verify at least 500 MB for /boot.
      • For file system, use xfs, ext4, or ext3 for both boot and root partitions.
      • Remove swap space. Swap space is configured on the physical blade server in Azure by the WALinuxAgent.
    • On the Installation Summary screen, select Network and Host Name. Switch Ethernet to On.
  5. When the install starts:

    • Create a root password.
    • Create an administrative user account.
  6. When installation is complete, reboot the VM and log in to the root account.
  7. Once you are logged in as root, you can configure the image.

1.3. Configuring the base image for Microsoft Azure

The base image requires configuration changes to serve as your RHEL 8 VM image in Azure. The following sections provide the additional configuration changes that Azure requires.

1.3.1. Installing Hyper-V device drivers

Microsoft provides network and storage device drivers as part of their Linux Integration Services for Hyper-V package. You may need to install Hyper-V device drivers on the VM image prior to provisioning it as an Azure VM. Use the lsinitrd | grep hv command to verify that the drivers are installed.

Procedure

  1. Enter the following grep command to determine if the required Hyper-V device drivers are installed.

    # lsinitrd | grep hv

    In the example below, all required drivers are installed.

    # lsinitrd | grep hv
    drwxr-xr-x   2 root     root            0 Aug 12 14:21 usr/lib/modules/3.10.0-932.el7.x86_64/kernel/drivers/hv
    -rw-r--r--   1 root     root        31272 Aug 11 08:45 usr/lib/modules/3.10.0-932.el7.x86_64/kernel/drivers/hv/hv_vmbus.ko.xz
    -rw-r--r--   1 root     root        25132 Aug 11 08:46 usr/lib/modules/3.10.0-932.el7.x86_64/kernel/drivers/net/hyperv/hv_netvsc.ko.xz
    -rw-r--r--   1 root     root         9796 Aug 11 08:45 usr/lib/modules/3.10.0-932.el7.x86_64/kernel/drivers/scsi/hv_storvsc.ko.xz

    If any of the drivers are missing, complete the remaining steps.

    Note

    An hv_vmbus driver may exist in the environment. Even if this driver is present, complete the following steps.

  2. Create a file named hv.conf in /etc/dracut.conf.d.
  3. Add the following driver parameters to the hv.conf file.

    add_drivers+=" hv_vmbus "
    add_drivers+=" hv_netvsc "
    add_drivers+=" hv_storvsc "
    Note

    Note the spaces before and after the quotes, for example, add_drivers+=" hv_vmbus ". This ensures that unique drivers are loaded in the event that other Hyper-V drivers already exist in the environment.

  4. Regenerate the initramfs image.

    # dracut -f -v --regenerate-all

Verification steps

  1. Reboot the machine.
  2. Run the lsinitrd | grep hv command to verify that the drivers are installed.

1.3.2. Making additional configuration changes

The VM requires further configuration changes to operate in Azure. Perform the following procedure to make the additional changes.

Procedure

  1. If necessary, power on the VM.
  2. Register the VM and enable the Red Hat Enterprise Linux 8 repository.

    # subscription-manager register --auto-attach

Stopping and removing cloud-init

  1. Stop the cloud-init service (if present).

    # systemctl stop cloud-init
  2. Remove the cloud-init software.

    # yum remove cloud-init

Completing other VM changes

  1. Edit the /etc/ssh/sshd_config file and enable password authentication.

    PasswordAuthentication yes
  2. Set a generic host name.

    # hostnamectl set-hostname localhost.localdomain
  3. Edit (or create) the /etc/sysconfig/network-scripts/ifcfg-eth0 file. Use only the parameters listed below.

    Note

    The ifcfg-eth0 file does not exist on the RHEL 8 DVD ISO image and must be created.

    DEVICE="eth0"
    ONBOOT="yes"
    BOOTPROTO="dhcp"
    TYPE="Ethernet"
    USERCTL="yes"
    PEERDNS="yes"
    IPV6INIT="no"
  4. Remove all persistent network device rules (if present).

    # rm -f /etc/udev/rules.d/70-persistent-net.rules
    # rm -f /etc/udev/rules.d/75-persistent-net-generator.rules
    # rm -f /etc/udev/rules.d/80-net-name-slot-rules
  5. Set ssh to start automatically.

    # systemctl enable sshd
    # systemctl is-enabled sshd
  6. Modify the kernel boot parameters.

    1. Add crashkernel=256M to the start of the GRUB_CMDLINE_LINUX line in the /etc/default/grub file. If crashkernel=auto is present, change it to crashkernel=256M.
    2. Add the following lines to the end of the GRUB_CMDLINE_LINUX line (if not present).

      earlyprintk=ttyS0
      console=ttyS0
      rootdelay=300
    3. Remove the following options (if present).

      rhgb
      quiet
  7. Regenerate the grub.cfg file.

    # grub2-mkconfig -o /boot/grub2/grub.cfg
  8. Install and enable the Windows Azure Linux Agent (WALinuxAgent). Red Hat Enterprise Linux 8 Application Stream (AppStream) includes the WALinuxAgent. See Using AppStream for more information.

    # yum install WALinuxAgent -y
    # systemctl enable waagent
  9. Edit the following lines in the /etc/waagent.conf file to configure swap space for provisioned VMs. Set the swap space to an amount appropriate for your provisioned VMs.

    Provisioning.DeleteRootPassword=n
    ResourceDisk.Filesystem=ext4
    ResourceDisk.EnableSwap=y
    ResourceDisk.SwapSizeMB=2048

Preparing to provision

  1. Unregister the VM from Red Hat Subscription Manager.

    # subscription-manager unregister
  2. Prepare the VM for Azure provisioning by cleaning up the existing provisioning details. Azure reprovisions the VM when it is deployed. This command generates warnings, which is expected.

    # waagent -force -deprovision
  3. Clean the shell history and shut down the VM.

    # export HISTSIZE=0
    # poweroff

1.4. Converting the image to a fixed VHD format

All Microsoft Azure VM images must be in a fixed VHD format. The image must be aligned on a 1 MB boundary before it is converted to VHD. This section describes how to convert the image from qcow2 to a fixed VHD format and align the image, if necessary. Once you have converted the image, you can upload it to Azure.

Procedure

  1. Convert the image from qcow2 to raw format.

    $ qemu-img convert -f qcow2 -O raw <image-name>.qcow2 <image-name>.raw
  2. Create a shell script using the contents below.

    #!/bin/bash
    # Check whether the raw image size is aligned on a 1 MB boundary.
    # If it is not, print the size rounded up to the next 1 MB boundary.
    MB=$((1024 * 1024))
    size=$(qemu-img info -f raw --output json "$1" | gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
    rounded_size=$((($size/$MB + 1) * $MB))
    if [ $(($size % $MB)) -eq 0 ]
    then
     echo "Your image is already aligned. You do not need to resize."
     exit 1
    fi
    echo "rounded size = $rounded_size"
    export rounded_size
  3. Run the script. This example uses the name align.sh.

    $ sh align.sh <image-xxx>.raw
    • If the message "Your image is already aligned. You do not need to resize." displays, proceed to the following step.
    • If a value displays, your image is not aligned. Complete the steps in Aligning the image below before converting the file.
  4. Use the following command to convert the file to a fixed VHD format.

    The sample uses qemu-img version 2.12.0.

    $ qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx>.raw <image-xxx>.vhd

    Once converted, the VHD file is ready to upload to Azure.

Aligning the image

Complete the following steps only if the raw file is not aligned.

  1. Resize the raw file using the rounded value displayed when you ran the verification script.

    $ qemu-img resize -f raw <image-xxx>.raw <rounded-value>
  2. Convert the raw image file to a VHD format.

    The sample uses qemu-img version 2.12.0.

    $ qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx>.raw <image-xxx>.vhd

    Once converted, the VHD file is ready to upload to Azure.

1.5. Installing the Azure CLI

Complete the following steps to install the Azure command line interface (Azure CLI 2.1). Azure CLI 2.1 is a Python-based utility that creates and manages VMs in Azure.

Prerequisites

  • You need to have an account with Microsoft Azure before you can use the Azure CLI.
  • The Azure CLI installation requires Python 3.x.

Procedure

  1. Import the Microsoft repository key.

    $ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
  2. Create a local Azure CLI repository entry.

    $ sudo sh -c 'echo -e "[azure-cli]\nname=Azure CLI\nbaseurl=https://packages.microsoft.com/yumrepos/azure-cli\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/azure-cli.repo'
  3. Update the yum package index.

    $ yum check-update
  4. Check your Python version (python3 --version) and install Python 3.x, if necessary.

    $ sudo yum install python3
  5. Install the Azure CLI.

    $ sudo yum install -y azure-cli
  6. Run the Azure CLI.

    $ az
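
    To confirm the installation, you can also check the reported version of the CLI and its components:

    $ az --version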

1.6. Creating resources in Azure

Complete the following procedure to create the Azure resources that you need before you can upload the VHD file and create the Azure image.

Procedure

  1. Enter the following command to authenticate your system with Azure and log in.

    $ az login
    Note

    If a browser is available in your environment, the CLI opens your browser to the Azure sign-in page. See Sign in with Azure CLI for more information and options.

  2. Create a resource group in an Azure region.

    $ az group create --name <resource-group> --location <azure-region>

    Example:

    [clouduser@localhost]$ az group create --name azrhelclirsgrp --location southcentralus
    {
      "id": "/subscriptions//resourceGroups/azrhelclirsgrp",
      "location": "southcentralus",
      "managedBy": null,
      "name": "azrhelclirsgrp",
      "properties": {
        "provisioningState": "Succeeded"
      },
      "tags": null
    }
  3. Create a storage account. See SKU Types for more information about valid SKU values.

    $ az storage account create -l <azure-region> -n <storage-account-name> -g <resource-group> --sku <sku_type>

    Example:

    [clouduser@localhost]$ az storage account create -l southcentralus -n azrhelclistact -g azrhelclirsgrp --sku Standard_LRS
    {
      "accessTier": null,
      "creationTime": "2017-04-05T19:10:29.855470+00:00",
      "customDomain": null,
      "encryption": null,
      "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Storage/storageAccounts/azrhelclistact",
      "kind": "StorageV2",
      "lastGeoFailoverTime": null,
      "location": "southcentralus",
      "name": "azrhelclistact",
      "primaryEndpoints": {
        "blob": "https://azrhelclistact.blob.core.windows.net/",
        "file": "https://azrhelclistact.file.core.windows.net/",
        "queue": "https://azrhelclistact.queue.core.windows.net/",
        "table": "https://azrhelclistact.table.core.windows.net/"
    },
    "primaryLocation": "southcentralus",
    "provisioningState": "Succeeded",
    "resourceGroup": "azrhelclirsgrp",
    "secondaryEndpoints": null,
    "secondaryLocation": null,
    "sku": {
      "name": "Standard_LRS",
      "tier": "Standard"
    },
    "statusOfPrimary": "available",
    "statusOfSecondary": null,
    "tags": {},
      "type": "Microsoft.Storage/storageAccounts"
    }
  4. Get the storage account connection string.

    $ az storage account show-connection-string -n <storage-account-name> -g <resource-group>

    Example:

    [clouduser@localhost]$ az storage account show-connection-string -n azrhelclistact -g azrhelclirsgrp
    {
      "connectionString": "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...=="
    }
  5. Export the connection string by copying the connection string and pasting it into the following command. This string connects your system to the storage account.

    $ export AZURE_STORAGE_CONNECTION_STRING="<storage-connection-string>"

    Example:

    [clouduser@localhost]$ export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...=="
  6. Create the storage container.

    $ az storage container create -n <container-name>

    Example:

    [clouduser@localhost]$ az storage container create -n azrhelclistcont
    {
      "created": true
    }
  7. Create a virtual network.

    $ az network vnet create -g <resource-group> --name <vnet-name> --subnet-name <subnet-name>

    Example:

    [clouduser@localhost]$ az network vnet create --resource-group azrhelclirsgrp --name azrhelclivnet1 --subnet-name azrhelclisubnet1
    {
      "newVNet": {
        "addressSpace": {
          "addressPrefixes": [
          "10.0.0.0/16"
          ]
      },
      "dhcpOptions": {
        "dnsServers": []
      },
      "etag": "W/\"\"",
      "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1",
      "location": "southcentralus",
      "name": "azrhelclivnet1",
      "provisioningState": "Succeeded",
      "resourceGroup": "azrhelclirsgrp",
      "resourceGuid": "0f25efee-e2a6-4abe-a4e9-817061ee1e79",
      "subnets": [
        {
          "addressPrefix": "10.0.0.0/24",
          "etag": "W/\"\"",
          "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1/subnets/azrhelclisubnet1",
          "ipConfigurations": null,
          "name": "azrhelclisubnet1",
          "networkSecurityGroup": null,
          "provisioningState": "Succeeded",
          "resourceGroup": "azrhelclirsgrp",
          "resourceNavigationLinks": null,
          "routeTable": null
        }
      ],
      "tags": {},
      "type": "Microsoft.Network/virtualNetworks",
      "virtualNetworkPeerings": null
      }
    }

1.7. Uploading and creating an Azure image

Complete the following steps to upload the VHD file to your container and create an Azure custom image.

Note

The exported storage connection string does not persist after a system reboot. If any of the commands in the following steps fail, export the connection string again.

Procedure

  1. Upload the VHD file to the storage container. It may take several minutes. To get a list of storage containers, enter az storage container list.

    $ az storage blob upload --account-name <storage-account-name> --container-name <container-name> --type page --file <path-to-vhd> --name <image-name>.vhd

    Example:

    [clouduser@localhost]$ az storage blob upload --account-name azrhelclistact --container-name azrhelclistcont --type page --file rhel-image-8.vhd --name rhel-image-8.vhd
    Percent complete: %100.0
  2. Get the URL for the uploaded VHD file to use in the following step.

    $ az storage blob url -c <container-name> -n <image-name>.vhd

    Example:

    [clouduser@localhost]$ az storage blob url -c azrhelclistcont -n rhel-image-8.vhd
    "https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd"
  3. Create the Azure custom image.

    $ az image create -n <image-name> -g <resource-group> -l <azure-region> --source <URL> --os-type linux
    Note

    The default hypervisor generation of the virtual machine is V1. You can optionally specify a V2 hypervisor generation by including the option --hyper-v-generation V2. Generation 2 VMs use a UEFI-based boot architecture. See Support for generation 2 VMs on Azure for information on generation 2 VMs.

    The command may return the error "Only blobs formatted as VHDs can be imported." This error may mean that the image was not aligned to the nearest 1 MB boundary before it was converted to VHD.

    Example:

    [clouduser@localhost]$ az image create -n rhel8 -g azrhelclirsgrp2 -l southcentralus --source https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd --os-type linux

1.8. Creating and starting the VM in Azure

The following steps provide the minimum command options to create a managed-disk Azure VM from the image. See az vm create for additional options.

Procedure

  1. Enter the following command to create the VM.

    Note

    The option --generate-ssh-keys creates a private/public key pair. Private and public key files are created in ~/.ssh on your system. The public key is added to the authorized_keys file on the VM for the user specified by the --admin-username option. See Other authentication methods for additional information.

    $ az vm create -g <resource-group> -l <azure-region> -n <vm-name> --vnet-name <vnet-name> --subnet <subnet-name> --size Standard_A2 --os-disk-name <simple-name> --admin-username <administrator-name> --generate-ssh-keys --image <path-to-image>

    Example:

    [clouduser@localhost]$ az vm create -g azrhelclirsgrp2 -l southcentralus -n rhel-azure-vm-1 --vnet-name azrhelclivnet1 --subnet azrhelclisubnet1  --size Standard_A2 --os-disk-name vm-1-osdisk --admin-username clouduser --generate-ssh-keys --image rhel8
    
    {
      "fqdns": "",
      "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Compute/virtualMachines/rhel-azure-vm-1",
      "location": "southcentralus",
      "macAddress": "",
      "powerState": "VM running",
      "privateIpAddress": "10.0.0.4",
      "publicIpAddress": "<public-IP-address>",
      "resourceGroup": "azrhelclirsgrp2"

    Note the publicIpAddress. You need this address to log in to the VM in the following step.

  2. Start an SSH session and log in to the VM.

    [clouduser@localhost]$ ssh -i /home/clouduser/.ssh/id_rsa clouduser@<public-IP-address>
    The authenticity of host '<public-IP-address>' can't be established.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '<public-IP-address>' (ECDSA) to the list of known hosts.
    
    [clouduser@rhel-azure-vm-1 ~]$

If you see a user prompt, you have successfully deployed your Azure VM.

You can now go to the Microsoft Azure portal and check the audit logs and properties of your resources. You can manage your VMs directly in this portal. If you are managing multiple VMs, you should use the Azure CLI. The Azure CLI provides a powerful interface to your resources in Azure. Enter az --help in the CLI or see the Azure CLI command reference to learn more about the commands you use to manage your VMs in Microsoft Azure.
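
For example, assuming the resource group and VM name used earlier in this chapter, the following commands list your VMs and stop and restart one of them:

$ az vm list -g azrhelclirsgrp2 --output table
$ az vm stop -g azrhelclirsgrp2 -n rhel-azure-vm-1
$ az vm start -g azrhelclirsgrp2 -n rhel-azure-vm-1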

1.9. Other authentication methods

While recommended for increased security, using the Azure-generated key pair is not required. The following examples show two methods for SSH authentication.

Example 1: These command options provision a new VM without generating a public key file. They allow SSH authentication using a password.

$ az vm create -g <resource-group> -l <azure-region> -n <vm-name> --vnet-name <vnet-name> --subnet <subnet-name> --size Standard_A2 --os-disk-name <simple-name> --authentication-type password --admin-username <administrator-name> --admin-password <ssh-password> --image <path-to-image>
$ ssh <admin-username>@<public-ip-address>

Example 2: These command options provision a new Azure VM and allow SSH authentication using an existing public key file.

$ az vm create -g <resource-group> -l <azure-region> -n <vm-name> --vnet-name <vnet-name> --subnet <subnet-name> --size Standard_A2 --os-disk-name <simple-name> --admin-username <administrator-name> --ssh-key-value <path-to-existing-ssh-key> --image <path-to-image>
$ ssh -i <path-to-existing-ssh-key> <admin-username>@<public-ip-address>

1.10. Attaching Red Hat subscriptions

Complete the following steps to attach the subscriptions you previously enabled through the Red Hat Cloud Access program.

Prerequisites

You must have enabled your subscriptions.

Procedure

  1. Register your system.

    # subscription-manager register --auto-attach
  2. Attach your subscriptions.
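
    If --auto-attach does not attach the subscription you want, you can list the available subscriptions and attach one by its pool ID; substitute the pool ID reported by the first command into the second:

    # subscription-manager list --available
    # subscription-manager attach --pool=<pool-ID>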

Chapter 2. Configuring a Red Hat High Availability cluster on Microsoft Azure

This chapter includes information and procedures for configuring a Red Hat High Availability (HA) cluster on Azure using Azure virtual machine (VM) instances as cluster nodes. The procedures in this chapter assume you are creating a custom image for Azure. You have a number of options for obtaining the RHEL 8 images you use for your cluster. See Red Hat Enterprise Linux Image Options on Azure for information on image options for Azure.

The chapter includes prerequisite procedures for setting up your environment for Azure. Once you have set up your environment, you can create and configure Azure VM instances.

The chapter also includes procedures specific to the creation of HA clusters, which transform individual nodes into a cluster of HA nodes on Azure. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing Azure network resource agents.

The chapter refers to the Azure documentation in a number of places. For many procedures, see the referenced Azure documentation for more information.

Prerequisites

2.1. Creating resources in Azure

Complete the following procedure to create a resource group, storage account, virtual network, and availability set in an Azure region. You need these resources to complete subsequent tasks in this chapter.

Procedure

  1. Authenticate your system with Azure and log in.

    $ az login
    Note

    If a browser is available in your environment, the CLI opens your browser to the Azure sign-in page.

    Example:

    [clouduser@localhost]$ az login
    To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code FDMSCMETZ to authenticate.
      [
        {
          "cloudName": "AzureCloud",
          "id": "Subscription ID",
          "isDefault": true,
          "name": "MySubscriptionName",
          "state": "Enabled",
          "tenantId": "Tenant ID",
          "user": {
            "name": "clouduser@company.com",
            "type": "user"
          }
        }
      ]
  2. Create a resource group in an Azure region.

    $ az group create --name resource-group --location azure-region

    Example:

    [clouduser@localhost]$ az group create --name azrhelclirsgrp --location southcentralus
    {
      "id": "/subscriptions//resourceGroups/azrhelclirsgrp",
      "location": "southcentralus",
      "managedBy": null,
      "name": "azrhelclirsgrp",
      "properties": {
        "provisioningState": "Succeeded"
      },
      "tags": null
    }
  3. Create a storage account.

    $ az storage account create -l azure-region -n storage-account-name -g resource-group --sku sku_type --kind StorageV2

    Example:

    [clouduser@localhost]$ az storage account create -l southcentralus -n azrhelclistact -g azrhelclirsgrp --sku Standard_LRS --kind StorageV2
    {
      "accessTier": null,
      "creationTime": "2017-04-05T19:10:29.855470+00:00",
      "customDomain": null,
      "encryption": null,
      "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Storage/storageAccounts/azrhelclistact",
      "kind": "StorageV2",
      "lastGeoFailoverTime": null,
      "location": "southcentralus",
      "name": "azrhelclistact",
      "primaryEndpoints": {
        "blob": "https://azrhelclistact.blob.core.windows.net/",
        "file": "https://azrhelclistact.file.core.windows.net/",
        "queue": "https://azrhelclistact.queue.core.windows.net/",
        "table": "https://azrhelclistact.table.core.windows.net/"
    },
    "primaryLocation": "southcentralus",
    "provisioningState": "Succeeded",
    "resourceGroup": "azrhelclirsgrp",
    "secondaryEndpoints": null,
    "secondaryLocation": null,
    "sku": {
      "name": "Standard_LRS",
      "tier": "Standard"
    },
    "statusOfPrimary": "available",
    "statusOfSecondary": null,
    "tags": {},
      "type": "Microsoft.Storage/storageAccounts"
    }
  4. Get the storage account connection string.

    $ az storage account show-connection-string -n storage-account-name -g resource-group

    Example:

    [clouduser@localhost]$ az storage account show-connection-string -n azrhelclistact -g azrhelclirsgrp
    {
      "connectionString": "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...=="
    }
  5. Export the connection string by copying the connection string and pasting it into the following command. This string connects your system to the storage account.

    $ export AZURE_STORAGE_CONNECTION_STRING="storage-connection-string"

    Example:

    [clouduser@localhost]$ export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...=="
  6. Create the storage container.

    $ az storage container create -n container-name

    Example:

    [clouduser@localhost]$ az storage container create -n azrhelclistcont
    {
      "created": true
    }
  7. Create a virtual network. All cluster nodes must be in the same virtual network.

    $ az network vnet create -g resource-group --name vnet-name --subnet-name subnet-name

    Example:

    [clouduser@localhost]$ az network vnet create --resource-group azrhelclirsgrp --name azrhelclivnet1 --subnet-name azrhelclisubnet1
    {
      "newVNet": {
        "addressSpace": {
          "addressPrefixes": [
          "10.0.0.0/16"
          ]
      },
      "dhcpOptions": {
        "dnsServers": []
      },
      "etag": "W/\"\"",
      "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1",
      "location": "southcentralus",
      "name": "azrhelclivnet1",
      "provisioningState": "Succeeded",
      "resourceGroup": "azrhelclirsgrp",
      "resourceGuid": "0f25efee-e2a6-4abe-a4e9-817061ee1e79",
      "subnets": [
        {
          "addressPrefix": "10.0.0.0/24",
          "etag": "W/\"\"",
          "id": "/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1/subnets/azrhelclisubnet1",
          "ipConfigurations": null,
          "name": "azrhelclisubnet1",
          "networkSecurityGroup": null,
          "provisioningState": "Succeeded",
          "resourceGroup": "azrhelclirsgrp",
          "resourceNavigationLinks": null,
          "routeTable": null
        }
      ],
      "tags": {},
      "type": "Microsoft.Network/virtualNetworks",
      "virtualNetworkPeerings": null
      }
    }
  8. Create an availability set. All cluster nodes must be in the same availability set.

    $ az vm availability-set create --name MyAvailabilitySet --resource-group MyResourceGroup

    Example:

    [clouduser@localhost]$ az vm availability-set create --name rhelha-avset1 --resource-group azrhelclirsgrp
    {
      "additionalProperties": {},
        "id": "/subscriptions/.../resourceGroups/azrhelclirsgrp/providers/Microsoft.Compute/availabilitySets/rhelha-avset1",
        "location": "southcentralus",
        "name": “rhelha-avset1",
        "platformFaultDomainCount": 2,
        "platformUpdateDomainCount": 5,
    
    ...omitted

2.2. Required system packages for HA

The procedure assumes you are creating a VM image for Azure HA using Red Hat Enterprise Linux. To successfully complete the procedure, you need to have the packages listed in the following table installed.

Table 2.1. System packages

Package: libvirt
Repository: rhel-8-for-x86_64-appstream-rpms
Description: Open source API, daemon, and management tool for managing platform virtualization

Package: virt-install
Repository: rhel-8-for-x86_64-appstream-rpms
Description: A command-line utility for building VMs

Package: libguestfs
Repository: rhel-8-for-x86_64-appstream-rpms
Description: A library for accessing and modifying virtual machine file systems

Package: libguestfs-tools
Repository: rhel-8-for-x86_64-appstream-rpms
Description: System administration tools for virtual machines; includes the guestfish utility

2.3. Azure virtual machine configuration settings

Azure VMs must have the following configuration settings. Some of these settings are enabled during the initial VM creation. Other settings are set when provisioning the VM image for Azure. Keep these settings in mind as you move through the procedures, and refer back to them if you need to.

Table 2.2. VM configuration settings

Setting: ssh
Recommendation: ssh must be enabled to provide remote access to your Azure VMs.

Setting: dhcp
Recommendation: The primary virtual adapter should be configured for dhcp (IPv4 only).

Setting: Swap Space
Recommendation: Do not create a dedicated swap file or swap partition. You can configure swap space with the Windows Azure Linux Agent (WALinuxAgent).

Setting: NIC
Recommendation: Choose virtio for the primary virtual network adapter.

Setting: encryption
Recommendation: For custom images, use Network Bound Disk Encryption (NBDE) for full disk encryption on Azure.

2.4. Installing Hyper-V device drivers

Microsoft provides network and storage device drivers as part of their Linux Integration Services for Hyper-V package. You may need to install Hyper-V device drivers on the VM image prior to provisioning it as an Azure VM. Use the lsinitrd | grep hv command to verify that the drivers are installed.

Procedure

  1. Enter the following grep command to determine if the required Hyper-V device drivers are installed.

    # lsinitrd | grep hv

    In the example below, all required drivers are installed.

    # lsinitrd | grep hv
    drwxr-xr-x   2 root     root            0 Aug 12 14:21 usr/lib/modules/3.10.0-932.el7.x86_64/kernel/drivers/hv
    -rw-r--r--   1 root     root        31272 Aug 11 08:45 usr/lib/modules/3.10.0-932.el7.x86_64/kernel/drivers/hv/hv_vmbus.ko.xz
    -rw-r--r--   1 root     root        25132 Aug 11 08:46 usr/lib/modules/3.10.0-932.el7.x86_64/kernel/drivers/net/hyperv/hv_netvsc.ko.xz
    -rw-r--r--   1 root     root         9796 Aug 11 08:45 usr/lib/modules/3.10.0-932.el7.x86_64/kernel/drivers/scsi/hv_storvsc.ko.xz

    If any of the drivers are missing, complete the remaining steps.

    Note

    An hv_vmbus driver may exist in the environment. Even if this driver is present, complete the following steps.

  2. Create a file named hv.conf in /etc/dracut.conf.d.
  3. Add the following driver parameters to the hv.conf file.

    add_drivers+=" hv_vmbus "
    add_drivers+=" hv_netvsc "
    add_drivers+=" hv_storvsc "
    Note

    Note the spaces before and after the quotes, for example, add_drivers+=" hv_vmbus ". This ensures that unique drivers are loaded in the event that other Hyper-V drivers already exist in the environment.

  4. Regenerate the initramfs image.

    # dracut -f -v --regenerate-all

Verification steps

  1. Reboot the machine.
  2. Run the lsinitrd | grep hv command to verify that the drivers are installed.

2.5. Making additional configuration changes

The VM requires further configuration changes to operate in Azure. Perform the following procedure to make the additional changes.

Procedure

  1. If necessary, power on the VM.
  2. Register the VM and enable the Red Hat Enterprise Linux 8 repository.

    # subscription-manager register --auto-attach

Stopping and removing cloud-init

  1. Stop the cloud-init service (if present).

    # systemctl stop cloud-init
  2. Remove the cloud-init software.

    # yum remove cloud-init

Completing other VM changes

  1. Edit the /etc/ssh/sshd_config file and enable password authentication.

    PasswordAuthentication yes
  2. Set a generic host name.

    # hostnamectl set-hostname localhost.localdomain
  3. Edit (or create) the /etc/sysconfig/network-scripts/ifcfg-eth0 file. Use only the parameters listed below.

    Note

    The ifcfg-eth0 file does not exist on the RHEL 8 DVD ISO image and must be created.

    DEVICE="eth0"
    ONBOOT="yes"
    BOOTPROTO="dhcp"
    TYPE="Ethernet"
    USERCTL="yes"
    PEERDNS="yes"
    IPV6INIT="no"
  4. Remove all persistent network device rules (if present).

    # rm -f /etc/udev/rules.d/70-persistent-net.rules
    # rm -f /etc/udev/rules.d/75-persistent-net-generator.rules
    # rm -f /etc/udev/rules.d/80-net-name-slot-rules
  5. Set ssh to start automatically.

    # systemctl enable sshd
    # systemctl is-enabled sshd
  6. Modify the kernel boot parameters.

    1. Add crashkernel=256M to the start of the GRUB_CMDLINE_LINUX line in the /etc/default/grub file. If crashkernel=auto is present, change it to crashkernel=256M.
    2. Add the following lines to the end of the GRUB_CMDLINE_LINUX line (if not present).

      earlyprintk=ttyS0
      console=ttyS0
      rootdelay=300
    3. Remove the following options (if present).

      rhgb
      quiet
  7. Regenerate the grub.cfg file.

    # grub2-mkconfig -o /boot/grub2/grub.cfg
  8. Install and enable the Windows Azure Linux Agent (WALinuxAgent). Red Hat Enterprise Linux 8 Application Stream (AppStream) includes the WALinuxAgent. See Using AppStream for more information.

    # yum install WALinuxAgent -y
    # systemctl enable waagent
  9. Edit the following lines in the /etc/waagent.conf file to configure swap space for provisioned VMs. Set the swap space to an amount appropriate for your provisioned VMs.

    Provisioning.DeleteRootPassword=n
    ResourceDisk.Filesystem=ext4
    ResourceDisk.EnableSwap=y
    ResourceDisk.SwapSizeMB=2048

Preparing to provision

  1. Unregister the VM from Red Hat Subscription Manager.

    # subscription-manager unregister
  2. Prepare the VM for Azure provisioning by cleaning up the existing provisioning details. Azure reprovisions the VM when it is deployed. This command generates warnings, which is expected.

    # waagent -force -deprovision
  3. Clean the shell history and shut down the VM.

    # export HISTSIZE=0
    # poweroff

2.6. Creating an Azure Active Directory Application

Complete the following procedure to create an Azure AD application. The Azure AD application authorizes and automates access for HA operations for all nodes in the cluster.

Prerequisites

You need to install the Azure Command Line Interface (CLI).

Procedure

  1. Ensure you are an Administrator or Owner for the Microsoft Azure subscription. You need this authorization to create an Azure AD application.
  2. Log in to your Azure account.

    $ az login
  3. Enter the following command to create the Azure AD Application. To use your own password, add the --password option to the command. Ensure that you create a strong password.

    $ az ad sp create-for-rbac --name FencingApplicationName --role owner --scopes "/subscriptions/SubscriptionID/resourceGroups/MyResourceGroup"

    Example:

    [clouduser@localhost ~] $ az ad sp create-for-rbac --name FencingApp --role owner --scopes "/subscriptions/2586c64b-xxxxxx-xxxxxxx-xxxxxxx/resourceGroups/azrhelclirsgrp"
    Retrying role assignment creation: 1/36
    Retrying role assignment creation: 2/36
    Retrying role assignment creation: 3/36
    {
      "appId": "1a3dfe06-df55-42ad-937b-326d1c211739",
      "displayName": "FencingApp",
      "name": "http://FencingApp",
      "password": "43a603f0-64bb-482e-800d-402efe5f3d47",
      "tenant": "77ecefb6-xxxxxxxxxx-xxxxxxx-757a69cb9485"
    }
  4. Save the following information before proceeding. You need this information to set up the fencing agent.

    • Azure AD Application ID
    • Azure AD Application Password
    • Tenant ID
    • Microsoft Azure Subscription ID

2.7. Converting the image to a fixed VHD format

All Microsoft Azure VM images must be in a fixed VHD format. The image must be aligned on a 1 MB boundary before it is converted to VHD. This section describes how to convert the image from qcow2 to a fixed VHD format and align the image, if necessary. Once you have converted the image, you can upload it to Azure.

Procedure

  1. Convert the image from qcow2 to raw format.

    $ qemu-img convert -f qcow2 -O raw <image-name>.qcow2 <image-name>.raw
  2. Create a shell script using the contents below.

    #!/bin/bash
    # Check whether the raw image size is aligned on a 1 MB boundary.
    # If it is not, print the size rounded up to the next 1 MB boundary.
    MB=$((1024 * 1024))
    size=$(qemu-img info -f raw --output json "$1" | gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
    rounded_size=$((($size/$MB + 1) * $MB))
    if [ $(($size % $MB)) -eq 0 ]
    then
     echo "Your image is already aligned. You do not need to resize."
     exit 1
    fi
    echo "rounded size = $rounded_size"
    export rounded_size
  3. Run the script. This example uses the name align.sh.

    $ sh align.sh <image-xxx>.raw
    • If the message "Your image is already aligned. You do not need to resize." displays, proceed to the following step.
    • If a value displays, your image is not aligned. Complete the steps in Aligning the image below before converting the file.
  4. Use the following command to convert the file to a fixed VHD format.

    The sample uses qemu-img version 2.12.0.

    $ qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx>.raw <image-xxx>.vhd

    Once converted, the VHD file is ready to upload to Azure.

Aligning the image

Complete the following steps only if the raw file is not aligned.

  1. Resize the raw file using the rounded value displayed when you ran the verification script.

    $ qemu-img resize -f raw <image-xxx>.raw <rounded-value>
  2. Convert the raw image file to a VHD format.

    The sample uses qemu-img version 2.12.0.

    $ qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx>.raw <image-xxx>.vhd

    Once converted, the VHD file is ready to upload to Azure.

2.8. Uploading and creating an Azure image

Complete the following steps to upload the VHD file to your container and create an Azure custom image.

Note

The exported storage connection string does not persist after a system reboot. If any of the commands in the following steps fail, export the connection string again.

Procedure

  1. Upload the VHD file to the storage container. It may take several minutes. To get a list of storage containers, enter az storage container list.

    $ az storage blob upload --account-name <storage-account-name> --container-name <container-name> --type page --file <path-to-vhd> --name <image-name>.vhd

    Example:

    [clouduser@localhost]$ az storage blob upload --account-name azrhelclistact --container-name azrhelclistcont --type page --file rhel-image-8.vhd --name rhel-image-8.vhd
    Percent complete: %100.0
  2. Get the URL for the uploaded VHD file to use in the following step.

    $ az storage blob url -c <container-name> -n <image-name>.vhd

    Example:

    [clouduser@localhost]$ az storage blob url -c azrhelclistcont -n rhel-image-8.vhd
    "https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd"
  3. Create the Azure custom image.

    $ az image create -n <image-name> -g <resource-group> -l <azure-region> --source <URL> --os-type linux
    Note

    The default hypervisor generation of the virtual machine is V1. You can optionally specify a V2 hypervisor generation by including the option --hyper-v-generation V2. Generation 2 VMs use a UEFI-based boot architecture. See Support for generation 2 VMs on Azure for information on generation 2 VMs.

    The command may return the error "Only blobs formatted as VHDs can be imported." This error may mean that the image was not aligned to the nearest 1 MB boundary before it was converted to VHD.

    Example:

    [clouduser@localhost]$ az image create -n rhel8 -g azrhelclirsgrp2 -l southcentralus --source https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd --os-type linux

2.9. Installing Red Hat HA packages and agents

Complete the following steps on all nodes.

Procedure

  1. Launch an SSH terminal session and connect to the VM using the administrator name and public IP address.

    $ ssh administrator@PublicIP

    To get the public IP address for an Azure VM, open the VM properties in the Azure portal or enter the following Azure CLI command.

    $ az vm list -g <resource-group> -d --output table

    Example:

    [clouduser@localhost ~] $ az vm list -g azrhelclirsgrp -d --output table
    Name    ResourceGroup           PowerState      PublicIps        Location
    ------  ----------------------  --------------  -------------    --------------
    node01  azrhelclirsgrp          VM running      192.98.152.251    southcentralus
  2. Register the VM with Red Hat.

    $ sudo -i
    # subscription-manager register --auto-attach
    Note

    If --auto-attach fails, manually register the VM to your subscription.

  3. Disable all repositories.

    # subscription-manager repos --disable=*
  4. Enable the RHEL 8 BaseOS, AppStream, and High Availability repositories.

    # subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms
    # subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms
    # subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms
  5. Update all packages.

    # yum update -y
  6. Install the Red Hat High Availability Add-On software packages, along with all available fencing agents from the High Availability channel.

    # yum install pcs pacemaker fence-agents-azure-arm
  7. The user hacluster was created during the pcs and pacemaker installation in the previous step. Create a password for hacluster on all cluster nodes. Use the same password for all nodes.

    # passwd hacluster
  8. Add the high availability service to the RHEL Firewall if firewalld.service is installed.

    # firewall-cmd --permanent --add-service=high-availability
    # firewall-cmd --reload
  9. Start the pcs service and enable it to start on boot.

    # systemctl start pcsd.service
    # systemctl enable pcsd.service
    
    Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.

Verification step

Ensure the pcs service is running.

# systemctl status pcsd.service
pcsd.service - PCS GUI and remote configuration interface
Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2018-02-23 11:00:58 EST; 1min 23s ago
Docs: man:pcsd(8)
          man:pcs(8)
Main PID: 46235 (pcsd)
  CGroup: /system.slice/pcsd.service
          └─46235 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null &

2.10. Creating a cluster

Complete the following steps to create the cluster of nodes.

Procedure

  1. On one of the nodes, enter the following command to authenticate the pcs user hacluster. In the command, specify the name of each node in the cluster.

    # pcs host auth  hostname1 hostname2 hostname3
    Username: hacluster
    Password:
    hostname1: Authorized
    hostname2: Authorized
    hostname3: Authorized

    Example:

    [root@node01 clouduser]# pcs host auth node01 node02 node03
    Username: hacluster
    Password:
    node01: Authorized
    node02: Authorized
    node03: Authorized
  2. Create the cluster.

    # pcs cluster setup cluster-name hostname1 hostname2 hostname3

    Example:

    [root@node01 clouduser]# pcs cluster setup newcluster node01 node02 node03
    
    ...omitted
    
    Synchronizing pcsd certificates on nodes node01, node02, node03...
    node02: Success
    node03: Success
    node01: Success
    Restarting pcsd on the nodes in order to reload the certificates...
    node02: Success
    node03: Success
    node01: Success

Verification steps

  1. Enable the cluster.

    [root@node01 clouduser]# pcs cluster enable --all
  2. Start the cluster.

    [root@node01 clouduser]# pcs cluster start --all

    Example:

    [root@node01 clouduser]# pcs cluster enable --all
    node02: Cluster Enabled
    node03: Cluster Enabled
    node01: Cluster Enabled
    
    [root@node01 clouduser]# pcs cluster start --all
    node02: Starting Cluster...
    node03: Starting Cluster...
    node01: Starting Cluster...
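
As an additional check, you can display the full cluster and resource state from any node once the cluster has started:

    [root@node01 clouduser]# pcs status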

2.11. Fencing overview

If communication with a single node in the cluster fails, then other nodes in the cluster must be able to restrict or release access to resources that the failed cluster node may have access to. This cannot be accomplished by contacting the cluster node itself as the cluster node may not be responsive. Instead, you must provide an external method, which is called fencing with a fence agent.

A node that is unresponsive may still be accessing data. The only way to be certain that your data is safe is to fence the node using STONITH. STONITH is an acronym for "Shoot The Other Node In The Head," and it protects your data from being corrupted by rogue nodes or concurrent access. Using STONITH, you can be certain that a node is truly offline before allowing the data to be accessed from another node.

2.12. Creating a fencing device

Complete the following steps to configure fencing. Run these commands from any node in the cluster.

Prerequisites

You need to set the cluster property stonith-enabled to true.
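
For example, you can set (or confirm) this property from any node in the cluster:

# pcs property set stonith-enabled=true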

Procedure

  1. Identify the Azure node name for each RHEL VM. You use the Azure node names to configure the fence device.

    # fence_azure_arm -l AD-Application-ID -p AD-Password --resourceGroup MyResourceGroup --tenantId Tenant-ID --subscriptionId Subscription-ID -o list

    Example:

    [root@node01 clouduser]# fence_azure_arm -l e04a6a49-9f00-xxxx-xxxx-a8bdda4af447 -p z/a05AwCN0IzAjVwXXXXXXXEWIoeVp0xg7QT//JE= --resourceGroup azrhelclirsgrp --tenantId 77ecefb6-cff0-XXXX-XXXX-757XXXX9485 --subscriptionId XXXXXXXX-38b4-4527-XXXX-012d49dfc02c -o list
    node01,
    node02,
    node03,
  2. View the options for the Azure ARM STONITH agent.

    # pcs stonith describe fence_azure_arm

    Example:

    # pcs stonith describe fence_azure_arm
    Stonith options:
    password: Authentication key
    password_script: Script to run to retrieve password
    Warning

    For fence agents that provide a method option, do not specify a value of cycle as it is not supported and can cause data corruption.

    Some fence devices can fence only a single node, while other devices can fence multiple nodes. The parameters you specify when you create a fencing device depend on what your fencing device supports and requires.

    You can use the pcmk_host_list parameter when creating a fencing device to specify all of the machines that are controlled by that fencing device.

    You can use the pcmk_host_map parameter when creating a fencing device to map host names to the specifications that the fence device understands.

  3. Create a fence device. A more complete sample command is provided after this procedure.

    # pcs stonith create clusterfence fence_azure_arm
  4. Test the fencing agent for one of the other nodes.

    # pcs stonith fence azurenodename

    Example (pcs status output after fencing node02):

    [root@node01 clouduser]# pcs status
    Cluster name: newcluster
    Stack: corosync
    Current DC: node01 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
    Last updated: Fri Feb 23 11:44:35 2018
    Last change: Fri Feb 23 11:21:01 2018 by root via cibadmin on node01
    
    3 nodes configured
    1 resource configured
    
    Online: [ node01 node03 ]
    OFFLINE: [ node02 ]
    
    Full list of resources:
    
      clusterfence  (stonith:fence_azure_arm):  Started node01
    
    Daemon Status:
      corosync: active/disabled
      pacemaker: active/disabled
      pcsd: active/enabled
  5. Start the node that was fenced in the previous step.

    # pcs cluster start hostname
  6. Check the status to verify the node started.

    # pcs status

    Example:

    [root@node01 clouduser]# pcs status
    Cluster name: newcluster
    Stack: corosync
    Current DC: node01 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
    Last updated: Fri Feb 23 11:34:59 2018
    Last change: Fri Feb 23 11:21:01 2018 by root via cibadmin on node01
    
    3 nodes configured
    1 resource configured
    
    Online: [ node01 node02 node03 ]
    
    Full list of resources:
    
    clusterfence    (stonith:fence_azure_arm):  Started node01
    
    Daemon Status:
      corosync: active/disabled
      pacemaker: active/disabled
      pcsd: active/enabled
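
A more complete fence_azure_arm invocation typically passes the same Azure credentials used to list the node names in step 1, plus a pcmk_host_map entry that maps each RHEL host name to its Azure VM name. The following is a sketch with placeholder values; confirm the exact option names against the output of pcs stonith describe fence_azure_arm before running it.

# pcs stonith create clusterfence fence_azure_arm username=AD-Application-ID password=AD-Password resourceGroup=MyResourceGroup tenantId=Tenant-ID subscriptionId=Subscription-ID pcmk_host_map="node01:node01;node02:node02;node03:node03"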

2.13. Creating an Azure internal load balancer

The Azure internal load balancer removes cluster nodes that do not answer health probe requests.

Perform the following procedure to create an Azure internal load balancer. Each step references a specific Microsoft procedure and includes the settings for customizing the load balancer for HA.

Prerequisites

You need access to the Azure control panel.

Procedure

  1. Create a Basic load balancer. Select Internal load balancer, the Basic SKU, and Dynamic for the type of IP address assignment.
  2. Create a back-end address pool. Associate the back-end pool with the availability set that you created when creating the Azure resources for HA. Do not set any target network IP configurations.
  3. Create a health probe. For the health probe, select TCP and enter port 61000. You can use any TCP port number that does not interfere with another service. For certain HA product applications (for example, SAP HANA and SQL Server), you may need to work with Microsoft to identify the correct port to use.
  4. Create a load balancer rule. To create the load balancing rule, the default values are prepopulated. Ensure that Floating IP (direct server return) is set to Enabled.
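
If you prefer the Azure CLI, the following is a rough equivalent of steps 1, 3, and 4 (all resource names and the application port are placeholders, and flag availability can vary with the CLI version); the back-end pool association described in step 2 is configured separately on the cluster nodes' network interfaces.

$ az network lb create --resource-group MyResourceGroup --name rhel-ha-lb --sku Basic --vnet-name MyVnet --subnet MySubnet --frontend-ip-name rhel-ha-frontend --backend-pool-name rhel-ha-backend
$ az network lb probe create --resource-group MyResourceGroup --lb-name rhel-ha-lb --name rhel-ha-probe --protocol tcp --port 61000
$ az network lb rule create --resource-group MyResourceGroup --lb-name rhel-ha-lb --name rhel-ha-rule --protocol tcp --frontend-port application-port --backend-port application-port --frontend-ip-name rhel-ha-frontend --backend-pool-name rhel-ha-backend --probe-name rhel-ha-probe --floating-ip true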

2.14. Configuring the load balancer resource agent

After you have created the health probe, you must configure the load balancer resource agent. This resource agent runs a service that answers health probe requests from the Azure load balancer and removes cluster nodes that do not answer requests.

Procedure

  1. Install the nmap-ncat and resource-agents packages on all nodes.

    # yum install nmap-ncat resource-agents

    Perform the following steps on a single node.

  2. Create the pcs resources and group. Use your load balancer FrontendIP for the IPaddr2 address.

    # pcs resource create resource-name IPaddr2 ip="10.0.0.7" --group cluster-resources-group
  3. Configure the load balancer resource agent.

    # pcs resource create resource-loadbalancer-name azure-lb port=port-number --group cluster-resources-group
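
    For example, using the health probe port 61000 configured in the previous section, and resource names that match the verification output below (an illustrative sketch):

    # pcs resource create vip_azure IPaddr2 ip="10.0.0.7" --group g_azure
    # pcs resource create lb_azure azure-lb port=61000 --group g_azure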

Verification step

Run pcs status to see the results.

[root@node01 clouduser]# pcs status

Example:

Cluster name: clusterfence01
Stack: corosync
Current DC: node02 (version 1.1.16-12.el7_4.7-94ff4df) - partition with quorum
Last updated: Tue Jan 30 12:42:35 2018
Last change: Tue Jan 30 12:26:42 2018 by root via cibadmin on node01

3 nodes configured
3 resources configured

Online: [ node01 node02 node03 ]

Full list of resources:

clusterfence (stonith:fence_azure_arm):      Started node01
Resource Group: g_azure
    vip_azure  (ocf::heartbeat:IPaddr2):       Started node02
    lb_azure   (ocf::heartbeat:azure-lb):      Started node02

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

Chapter 3. Deploying a Red Hat Enterprise Linux image as an EC2 instance on Amazon Web Services

You have a number of options for deploying a Red Hat Enterprise Linux (RHEL) 8 image as an EC2 instance on Amazon Web Services (AWS). This chapter discusses your options for choosing an image and lists or refers to system requirements for your host system and virtual machine (VM). The chapter also provides procedures for creating a custom image, uploading it to EC2, and launching an EC2 instance.

This chapter refers to the Amazon documentation in a number of places. For many procedures, see the referenced Amazon documentation for additional detail.

Note

For a list of Red Hat products that you can use securely on AWS, see Red Hat on Amazon Web Services.

Prerequisites

  • You need a Red Hat Customer Portal account to complete the procedures in this chapter.
  • Create an account with AWS. See Setting Up with Amazon EC2 for more information.
  • Enable your Red Hat subscriptions through the Red Hat Cloud Access program. The Red Hat Cloud Access program allows you to move your Red Hat subscriptions from physical or on-premise systems onto AWS with full support from Red Hat.

3.1. Red Hat Enterprise Linux Image options on AWS

The following table lists image choices and notes the differences in the image options.

Table 3.1. Image options

Image optionSubscriptionsSample scenarioConsiderations

Choose to deploy a Red Hat Gold Image.

Leverage your existing Red Hat subscriptions.

Enable subscriptions through the Red Hat Cloud Access program, and then choose a Red Hat Gold Image on AWS.

The subscription includes the Red Hat product cost; you pay Amazon for all other instance costs.

Red Hat Gold Images are called "Cloud Access" images because you leverage your existing Red Hat subscriptions. Red Hat provides support directly for Cloud Access images.

Choose to deploy a custom image that you move to AWS.

Leverage your existing Red Hat subscriptions.

Enable subscriptions through the Red Hat Cloud Access program, upload your custom image, and attach your subscriptions.

The subscription includes the Red Hat product cost; you pay Amazon for all other instance costs.

Custom images that you move to AWS are "Cloud Access" images because you leverage your existing Red Hat subscriptions. Red Hat provides support directly for Cloud Access images.

Choose to deploy an existing Amazon image that includes RHEL.

The AWS EC2 images include a Red Hat product.

Choose a Red Hat Enterprise Linux image when you launch an instance on the AWS Management Console, or choose an image from the AWS Marketplace.

You pay Amazon hourly on a pay-as-you-go model. Such images are called "on-demand" images. Amazon provides support for on-demand images.

Red Hat provides updates to the images. AWS makes the updates available through the Red Hat Update Infrastructure (RHUI).

Note

You can create a custom image for AWS using Red Hat Image Builder. See Composing a Customized RHEL System Image for more information.

Important

You cannot convert an on-demand instance to a Red Hat Cloud Access instance. To change from an on-demand image to a Red Hat Cloud Access (BYOS) image, create a new Red Hat Cloud Access instance and migrate data from your on-demand instance. Cancel your on-demand instance after you migrate your data to avoid double billing.

The remainder of this chapter includes information and procedures pertaining to custom images.

3.2. Understanding base images

This section includes information on using preconfigured base images and their configuration settings.

3.2.1. Using a custom base image

To manually configure a VM, you start with a base (starter) VM image. Once you have created the base VM image, you can modify configuration settings and add the packages the VM requires to operate on the cloud. You can make additional configuration changes for your specific application after you upload the image.

The recommended base VM image is the Red Hat Enterprise Linux 8 KVM Guest Image, which you download from the Red Hat Customer Portal. The KVM Guest Image is preconfigured with the following cloud configuration settings.

  • The root account is disabled. You temporarily enable root account access to make configuration changes and install packages that the cloud may require. This guide provides instructions for enabling root account access.
  • A user account named cloud-user is preconfigured on the image. The cloud-user account has sudo access.
  • The image has cloud-init installed and enabled. cloud-init is a service that handles provisioning of the VM (or instance) at initial boot.

You can choose to use a custom Red Hat Enterprise Linux ISO image; however, when using a custom ISO image, you may need to make additional configuration changes.

Additional resources

Red Hat Enterprise Linux

3.2.2. Virtual machine configuration settings

Cloud VMs must have the following configuration settings.

Table 3.2. VM configuration settings

SettingRecommendation

ssh

ssh must be enabled to provide remote access to your VMs.

dhcp

The primary virtual adapter should be configured for dhcp.

3.3. Creating a base image from a KVM Guest Image

Follow the procedures in this section to create a base image from a KVM Guest Image.

Prerequisites

Enable virtualization for your Red Hat Enterprise Linux 8 host machine.

3.3.1. Downloading the KVM Guest Image

Procedure

  1. Download the latest Red Hat Enterprise Linux KVM Guest Image from the Red Hat Customer Portal.
  2. Move the image to /var/lib/libvirt/images.

3.3.2. Creating the VM from the KVM Guest Image

Procedure

  1. Ensure that you have enabled your host machine for virtualization. See Enabling virtualization in RHEL 8 for information and procedures.
  2. Create and start a basic Red Hat Enterprise Linux VM. See Creating virtual machines for instructions.

    1. If you use the command line to create your VM, ensure that you set the default memory and CPUs to the capacity you want for the VM. Set your virtual network interface to virtio.

      A basic command-line sample follows.

      virt-install --name kvmtest --memory 2048 --vcpus 2 --disk rhel-8.0-x86_64-kvm.qcow2,bus=virtio --import --os-variant=rhel8.0
    2. If you use the web console to create your VM, follow the procedure in Creating virtual machines using the web console, with these caveats:

      • Do not check Immediately Start VM.
      • Change your Memory size to your preferred settings.
      • Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and change your vCPUs to the capacity settings you want for the VM.
  3. Shut down the new VM after a login prompt appears.

3.3.3. Setting up root access to your KVM Guest Image

You need root access to make additional configuration changes to your image. You can also use root as one method of accessing your image once you have uploaded the image to the cloud. Perform the following procedure to enable root access to your VM.

Procedure

  1. From your host system, use the virt-customize command to generate a root password for the VM.

    # virt-customize -a <guest-image-path> --root-password password:<PASSWORD>

    Example:

    # virt-customize -a /var/lib/libvirt/images/rhel-guest-image-8.0-120.x86_64.qcow2 --root-password password:redhat!
    [   0.0] Examining the guest ...
    [ 103.0] Setting a random seed
    [ 103.0] Setting passwords
    [ 112.0] Finishing off
  2. Use the virt-edit command to edit the cloud.cfg file on your VM. Within the file, enable root login and password authentication by setting disable_root to 0 and ssh_pwauth to 1. A sample of the resulting entries is shown after this procedure.

    # virt-edit -a <guest-image-path> /etc/cloud/cloud.cfg
  3. Verify root access by starting the RHEL VM and logging in as root.
  4. Configure the image.
  5. Important: This step is only for VMs you intend to upload to AWS. Install the nvme, xen-netfront, and xen-blkfront drivers, which are required for RHEL 8.x images on AWS.

     # dracut -f --add-drivers "nvme xen-netfront xen-blkfront"

    Including these drivers removes the possibility of a dracut time-out.

    Alternatively, you can add the drivers to /etc/dracut.conf.d/ and then enter dracut -f to overwrite the existing initramfs file. A sample configuration file is shown after this procedure.

  6. Power down the VM.
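
After the edit in step 2, the relevant entries in /etc/cloud/cloud.cfg should look similar to the following (the rest of the file is unchanged):

disable_root: 0
ssh_pwauth: 1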
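
A minimal sketch of the dracut configuration approach mentioned in step 5: create a file such as /etc/dracut.conf.d/10-aws-drivers.conf (the file name is arbitrary) with the following line, and then run dracut -f to rebuild the initramfs.

add_drivers+=" nvme xen-netfront xen-blkfront "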

3.4. Creating a base VM from an ISO image

Follow the procedures in this section to create a base image from an ISO image.

Prerequisites

Enable virtualization for your Red Hat Enterprise Linux 8 host machine.

3.4.1. Downloading the ISO image

Procedure

  1. Download the latest Red Hat Enterprise Linux ISO image from the Red Hat Customer Portal.
  2. Move the image to /var/lib/libvirt/images.

3.4.2. Creating a VM from the ISO image

Procedure

  1. Ensure that you have enabled your host machine for virtualization. See Enabling virtualization in RHEL 8 for information and procedures.
  2. Create and start a basic Red Hat Enterprise Linux VM. See Creating virtual machines for instructions.

    1. If you use the command line to create your VM, ensure that you set the default memory and CPUs to the capacity you want for the VM. Set your virtual network interface to virtio.

      A basic command-line sample follows.

      virt-install --name isotest --memory 2048 --vcpus 2 --disk size=8,bus=virtio --location rhel-8.0-x86_64-dvd.iso --os-variant=rhel8.0
    2. If you use the web console to create your VM, follow the procedure in Creating virtual machines using the web console, with these caveats:

      • Do not check Immediately Start VM.
      • Change your Memory and Storage Size to your preferred settings.
      • Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and change your vCPUs to the capacity settings you want for the VM.

3.4.3. Completing the RHEL installation

Perform the following steps to complete the installation and to enable root access once the VM launches.

Procedure

  1. Choose the language you want to use during the installation process.
  2. On the Installation Summary view:

    1. Click Software Selection and check Minimal Install.
    2. Click Done.
    3. Click Installation Destination and check Custom under Storage Configuration.

      • Verify at least 500 MB for /boot. You can use the remaining space for root /.
      • Standard partitions are recommended, but you can use Logical Volume Management (LVM).
      • You can use xfs, ext4, or ext3 for File System.
      • Click Done when you are finished with changes.
  3. Click Begin Installation.
  4. Set a Root Password. Create other users as applicable.
  5. Reboot the VM and log in as root once the installation completes.
  6. Configure the image.

    Note

    Ensure that the cloud-init package is installed and enabled.

  7. Important: This step is only for VMs you intend to upload to AWS. Install the nvme, xen-netfront, and xen-blkfront drivers, which are required for RHEL 8.x images on AWS.

     # dracut -f --add-drivers "nvme xen-netfront xen-blkfront"

    Including these drivers removes the possibility of a dracut time-out.

    Alternatively, you can add the drivers to /etc/dracut.conf.d/ and then enter dracut -f to overwrite the existing initramfs file.

  8. Power down the VM.

3.5. Uploading the Red Hat Enterprise Linux image to AWS

Follow the procedures in this section to upload your image to AWS.

3.5.1. Installing the AWS CLI

Many of the procedures in this chapter include using the AWS CLI. Complete the following steps to install the AWS CLI.

Prerequisites

You need to have created and have access to an AWS Access Key ID and an AWS Secret Access Key. See Quickly Configuring the AWS CLI for information and instructions.

Procedure

  1. Install Python 3 and the pip tool.

    # yum install python3
    # yum install python3-pip
  2. Install the AWS command line tools with the pip command.

    # pip3 install awscli
  3. Run the aws --version command to verify that you installed the AWS CLI.

    $ aws --version
    aws-cli/1.16.182 Python/2.7.5 Linux/3.10.0-957.21.3.el7.x86_64 botocore/1.12.172
  4. Configure the AWS command line client according to your AWS access details.

    $ aws configure
    AWS Access Key ID [None]:
    AWS Secret Access Key [None]:
    Default region name [None]:
    Default output format [None]:

3.5.2. Creating an S3 bucket

Importing to AWS requires an Amazon S3 bucket. An Amazon S3 bucket is an Amazon resource where you store objects. As part of the process for uploading your image, you create an S3 bucket and then move your image to the bucket. Complete the following steps to create a bucket.

Procedure

  1. Launch the Amazon S3 Console.
  2. Click Create Bucket. The Create Bucket dialog appears.
  3. In the Name and region view:

    1. Enter a Bucket name.
    2. Enter a Region.
    3. Click Next.
  4. In the Configure options view, select desired options and click Next.
  5. In the Set permissions view, change or accept the default options and click Next.
  6. Review your bucket configuration.
  7. Click Create bucket.

    Note

    Alternatively, you can use the AWS CLI to create a bucket. For example, aws s3 mb s3://my-new-bucket creates an S3 bucket named my-new-bucket. See the AWS CLI Command Reference for information on the mb command.

3.5.3. Creating the vmimport role

Perform the following procedure to create the vmimport role, which is required by VM import. See VM Import Service Role in the Amazon documentation for more information.

Procedure

  1. Create a file named trust-policy.json and include the following policy. Save the file on your system and note its location.

    {
       "Version": "2012-10-17",
       "Statement": [
          {
             "Effect": "Allow",
             "Principal": { "Service": "vmie.amazonaws.com" },
             "Action": "sts:AssumeRole",
             "Condition": {
                "StringEquals":{
                   "sts:Externalid": "vmimport"
                }
             }
          }
       ]
    }
  2. Use the create-role command to create the vmimport role. Specify the full path to the location of the trust-policy.json file. Prefix file:// to the path. A sample follows.

    aws iam create-role --role-name vmimport --assume-role-policy-document file:///home/sample/ImportService/trust-policy.json
  3. Create a file named role-policy.json and include the following policy. Replace s3-bucket-name with the name of your S3 bucket.

    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Effect":"Allow",
             "Action":[
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:ListBucket"
             ],
             "Resource":[
                "arn:aws:s3:::s3-bucket-name",
                "arn:aws:s3:::s3-bucket-name/*"
             ]
          },
          {
             "Effect":"Allow",
             "Action":[
                "ec2:ModifySnapshotAttribute",
                "ec2:CopySnapshot",
                "ec2:RegisterImage",
                "ec2:Describe*"
             ],
             "Resource":"*"
          }
       ]
    }
  4. Use the put-role-policy command to attach the policy to the role you created. Specify the full path of the role-policy.json file. A sample follows.

    aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file:///home/sample/ImportService/role-policy.json

3.5.4. Converting and pushing your image to S3

Complete the following procedure to convert and push your image to S3. The samples are representative; they convert an image formatted in the qcow2 file format to raw format. Amazon accepts images in OVA, VHD, VHDX, VMDK, and raw formats. See How VM Import/Export Works for more information on image formats that Amazon accepts.

Procedure

  1. Run the qemu-img command to convert your image. A sample follows.

    qemu-img convert -f qcow2 -O raw rhel-8.1-x86_64-kvm.qcow2 rhel-8.1-x86_64-kvm.raw
  2. Push the image to S3.

    aws s3 cp rhel-8.1-x86_64-kvm.raw s3://s3-bucket-name
    Note

    This procedure could take a few minutes. After completion, you can check that your image uploaded successfully to your S3 bucket using the AWS S3 Console.
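
    Alternatively, you can verify the upload from the command line, assuming the same bucket name:

    $ aws s3 ls s3://s3-bucket-name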

3.5.5. Importing your image as a snapshot

Perform the following procedure to import an image as a snapshot.

Procedure

  1. Create a file to specify a bucket and path for your image. Name the file containers.json. In the sample that follows, replace s3-bucket-name with your bucket name and s3-key with your key. You can get the key for the image using the Amazon S3 Console.

    {
        "Description": "rhel-8.1-x86_64-kvm.raw",
        "Format": "raw",
        "UserBucket": {
            "S3Bucket": "s3-bucket-name",
            "S3Key": "s3-key"
        }
    }
  2. Import the image as a snapshot. This example uses a public Amazon S3 file; you can use the Amazon S3 Console to change permissions settings on your bucket.

    aws ec2 import-snapshot --disk-container file://containers.json

    The terminal displays a message such as the following. Note the ImportTaskID within the message.

    {
        "SnapshotTaskDetail": {
            "Status": "active",
            "Format": "RAW",
            "DiskImageSize": 0.0,
            "UserBucket": {
                "S3Bucket": "s3-bucket-name",
                "S3Key": "rhel-8.1-x86_64-kvm.raw"
            },
            "Progress": "3",
            "StatusMessage": "pending"
        },
        "ImportTaskId": "import-snap-06cea01fa0f1166a8"
    }
  3. Track the progress of the import using the describe-import-snapshot-tasks command. Include the ImportTaskID.

    aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-06cea01fa0f1166a8

    The returned message shows the current status of the task. When complete, Status shows completed. Within the status, note the snapshot ID.
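
    A completed task looks similar to the following; the values are illustrative, and the SnapshotId is the ID you use in the next section.

    {
        "ImportSnapshotTasks": [
            {
                "ImportTaskId": "import-snap-06cea01fa0f1166a8",
                "SnapshotTaskDetail": {
                    "DiskImageSize": 3530821632.0,
                    "Format": "RAW",
                    "SnapshotId": "snap-0e718930bd72bcda0",
                    "Status": "completed",
                    "UserBucket": {
                        "S3Bucket": "s3-bucket-name",
                        "S3Key": "rhel-8.1-x86_64-kvm.raw"
                    }
                }
            }
        ]
    }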

3.5.6. Creating an AMI from the uploaded snapshot

Within EC2, you must choose an Amazon Machine Image (AMI) when launching an instance. Perform the following procedure to create an AMI from your uploaded snapshot.

Procedure

  1. Go to the AWS EC2 Dashboard.
  2. Under Elastic Block Store, select Snapshots.
  3. Search for your snapshot ID (for example, snap-0e718930bd72bcda0).
  4. Right-click on the snapshot and select Create image.
  5. Name your image.
  6. Under Virtualization type, choose Hardware-assisted virtualization.
  7. Click Create. In the note regarding image creation, there is a link to your image.
  8. Click on the image link. Your image shows up under Images>AMIs.

    Note

    Alternatively, you can use the AWS CLI register-image command to create an AMI from a snapshot. See register-image for more information. An example follows.

    $ aws ec2 register-image --name "myimagename" --description "myimagedescription" --architecture x86_64  --virtualization-type hvm --root-device-name "/dev/sda1" --block-device-mappings "{\"DeviceName\": \"/dev/sda1\",\"Ebs\": {\"SnapshotId\": \"snap-0ce7f009b69ab274d\"}}" --ena-support

    You must specify the root device volume /dev/sda1 as your root-device-name. For conceptual information on device mapping for AWS, see Example block device mapping.

3.5.7. Launching an instance from the AMI

Perform the following procedure to launch and configure an instance from the AMI.

Procedure

  1. From the AWS EC2 Dashboard, select Images and then AMIs.
  2. Right-click on your image and select Launch.
  3. Choose an Instance Type that meets or exceeds the requirements of your workload.

    Refer to Amazon EC2 Instance Types for information on instance types.

  4. Click Next: Configure Instance Details.

    1. Enter the Number of instances you want to create.
    2. For Network, select the VPC you created when setting up your AWS environment. Select a subnet for the instance or create a new subnet.
    3. Select Enable for Auto-assign Public IP.

      Note

      These are the minimum configuration options necessary to create a basic instance. Review additional options based on your application requirements.

  5. Click Next: Add Storage. Verify that the default storage is sufficient.
  6. Click Next: Add Tags.

    Note

    Tags can help you manage your AWS resources. See Tagging Your Amazon EC2 Resources for information on tagging.

  7. Click Next: Configure Security Group. Select the security group you created when setting up your AWS environment.
  8. Click Review and Launch. Verify your selections.
  9. Click Launch. You are prompted to select an existing key pair or create a new key pair. Select the key pair you created when setting up your AWS environment.

    Note

    Verify that the permissions for your private key are correct. Use the command options chmod 400 <keyname>.pem to change the permissions, if necessary.

  10. Click Launch Instances.
  11. Click View Instances. You can name the instance(s).

    You can now launch an SSH session to your instance(s) by selecting an instance and clicking Connect. Use the example provided for A standalone SSH client.

    Note

    Alternatively, you can launch an instance using the AWS CLI. See Launching, Listing, and Terminating Amazon EC2 Instances in the Amazon documentation for more information.
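
    A minimal run-instances sketch follows; all identifiers are placeholders, and your workload may require additional options.

    $ aws ec2 run-instances --image-id ami-1234567890abcdef0 --count 1 --instance-type t2.micro --key-name KeyName --security-group-ids sg-0123456789abcdef0 --subnet-id subnet-0123456789abcdef0 --associate-public-ip-address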

3.5.8. Attaching Red Hat subscriptions

Complete the following steps to attach the subscriptions you previously enabled through the Red Hat Cloud Access program.

Prerequisites

You must have enabled your subscriptions.

Procedure

  1. Register your system.

    # subscription-manager register --auto-attach
  2. Attach your subscriptions.
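
    If registration did not automatically attach a subscription, you can attach one explicitly. For example (the pool ID is a placeholder):

    # subscription-manager list --available
    # subscription-manager attach --pool=Pool-ID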

Chapter 4. Configuring a Red Hat High Availability cluster on AWS

This chapter includes information and procedures for configuring a Red Hat High Availability (HA) cluster on Amazon Web Services (AWS) using EC2 instances as cluster nodes. Note that you have a number of options for obtaining the Red Hat Enterprise Linux (RHEL) images you use for your cluster. For information on image options for AWS, see Red Hat Enterprise Linux Image Options on AWS.

The chapter includes prerequisite procedures for setting up your environment for AWS. Once you have set up your environment, you can create and configure EC2 instances.

The chapter also includes procedures specific to the creation of HA clusters, which transform individual nodes into a cluster of HA nodes on AWS. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing AWS network resource agents.

The chapter refers to the Amazon documentation in a number of places. For many procedures, see the referenced Amazon documentation for more information.

Prerequisites

4.1. Creating the AWS Access Key and AWS Secret Access Key

You need to create an AWS Access Key and AWS Secret Access Key before you install the AWS CLI. The fencing and resource agent APIs use the AWS Access Key and Secret Access Key to connect to each node in the cluster.

Complete the following steps to create these keys.

Prerequisites

Your IAM user account must have Programmatic access. For more information see Setting up the AWS Environment.

Procedure

  1. Launch the AWS Console.
  2. Click on your AWS Account ID to display the drop-down menu and select My Security Credentials.
  3. Click Users.
  4. Select the user and open the Summary screen.
  5. Click the Security credentials tab.
  6. Click Create access key.
  7. Download the .csv file (or save both keys). You need to enter these keys when creating the fencing device.

4.2. Installing the AWS CLI

Many of the procedures in this chapter include using the AWS CLI. Complete the following steps to install the AWS CLI.

Prerequisites

You need to have created and have access to an AWS Access Key ID and an AWS Secret Access Key. See Quickly Configuring the AWS CLI for information and instructions.

Procedure

  1. Install Python 3 and the pip tool.

    # yum install python3
    # yum install python3-pip
  2. Install the AWS command line tools with the pip command.

    # pip3 install awscli
  3. Run the aws --version command to verify that you installed the AWS CLI.

    $ aws --version
    aws-cli/1.16.182 Python/2.7.5 Linux/3.10.0-957.21.3.el7.x86_64 botocore/1.12.172
  4. Configure the AWS command line client according to your AWS access details.

    $ aws configure
    AWS Access Key ID [None]:
    AWS Secret Access Key [None]:
    Default region name [None]:
    Default output format [None]:

4.3. Creating an HA EC2 instance

Complete the following steps to create the instances that you then use as your HA cluster nodes. Note that you have a number of options for obtaining the RHEL images you use for your cluster. See Red Hat Enterprise Linux Image Options on AWS for information on image options for AWS.

You can create and upload a custom image that you then use for your cluster nodes, or you could choose a Gold Image (Cloud Access image) or an on-demand image.

Prerequisites

You need to have set up an AWS environment. See Setting Up with Amazon EC2 for more information.

Procedure

  1. From the AWS EC2 Dashboard, select Images and then AMIs.
  2. Right-click on your image and select Launch.
  3. Choose an Instance Type that meets or exceeds the requirements of your workload. Depending on your HA application, each instance may need to have higher capacity.

    See Amazon EC2 Instance Types for information on instance types.

  4. Click Next: Configure Instance Details.

    1. Enter the Number of instances you want to create for the cluster. The examples in this chapter use three cluster nodes.

      Note

      Do not launch into an Auto Scaling Group.

    2. For Network, select the VPC you created in Set up the AWS environment. Select a subnet for the instance or create a new subnet.
    3. Select Enable for Auto-assign Public IP. These are the minimum selections you need to make for Configure Instance Details. Depending on your specific HA application, you may need to make additional selections.

      Note

      These are the minimum configuration options necessary to create a basic instance. Review additional options based on your HA application requirements.

  5. Click Next: Add Storage and verify that the default storage is sufficient. You do not need to modify these settings unless your HA application requires other storage options.
  6. Click Next: Add Tags.

    Note

    Tags can help you manage your AWS resources. See Tagging Your Amazon EC2 Resources for information on tagging.

  7. Click Next: Configure Security Group. Select the existing security group you created in Setting up the AWS environment.
  8. Click Review and Launch and verify your selections.
  9. Click Launch. You are prompted to select an existing key pair or create a new key pair. Select the key pair you created when Setting up the AWS environment.
  10. Click Launch Instances.
  11. Click View Instances. You can name the instance(s).

    Note

    Alternatively, you can launch instances using the AWS CLI. See Launching, Listing, and Terminating Amazon EC2 Instances in the Amazon documentation for more information.

4.4. Configuring the private key

Complete the following configuration tasks so that you can use the private SSH key file (.pem) in an SSH session.

Procedure

  1. Move the key file from the Downloads directory to your Home directory or to your ~/.ssh directory.
  2. Enter the following command to change the permissions of the key file so that only the root user can read it.

    # chmod 400 KeyName.pem

4.5. Connecting to an instance

Complete the following steps on all nodes to connect to an instance.

Procedure

  1. Launch the AWS Console and select the EC2 instance.
  2. Click Connect and select A standalone SSH client.
  3. From your SSH terminal session, connect to the instance using the AWS example provided in the pop-up window. Add the correct path to your KeyName.pem file if the path is not shown in the example.
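
    For example, a connection from the command line looks similar to the following; the key file name and address are placeholders, and the user name depends on how you configured the image (for example, ec2-user, cloud-user, or root).

    $ ssh -i ~/.ssh/KeyName.pem ec2-user@instance-public-IP-or-DNS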

4.6. Installing the HA packages and agents

Complete the following steps on all nodes to install the HA packages and agents.

Procedure

  1. Enter the following command to remove the AWS Red Hat Update Infrastructure (RHUI) client. Because you are going to use a Red Hat Cloud Access subscription, you should not use AWS RHUI in addition to your subscription.

    $ sudo -i
    # yum -y remove rh-amazon-rhui-client*
  2. Register the VM with Red Hat.

    # subscription-manager register --auto-attach
  3. Disable all repositories.

    # subscription-manager repos --disable=*
  4. Enable the RHEL 8 Server and RHEL 8 Server HA repositories.

    # subscription-manager repos --enable=rhel-8-server-rpms
    # subscription-manager repos --enable=rhel-ha-for-rhel-8-server-rpms
  5. Update the RHEL AWS instance.

    # yum update -y
  6. Install the Red Hat High Availability Add-On software packages, along with all available fencing agents from the High Availability channel.

    # yum install pcs pacemaker fence-agents-aws
  7. The user hacluster was created during the pcs and pacemaker installation in the previous step. Create a password for hacluster on all cluster nodes. Use the same password for all nodes.

    # passwd hacluster
  8. Add the high availability service to the RHEL Firewall if firewalld.service is installed.

    # firewall-cmd --permanent --add-service=high-availability
    # firewall-cmd --reload
  9. Start the pcs service and enable it to start on boot.

    # systemctl start pcsd.service
    # systemctl enable pcsd.service
  10. Edit /etc/hosts and add RHEL host names and internal IP addresses. See How should the /etc/hosts file be set up on RHEL cluster nodes? for details.
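
    For example, using the node names and internal IP addresses from the examples in this chapter, the /etc/hosts entries might look similar to the following.

    10.0.0.46   ip-10-0-0-46
    10.0.0.48   ip-10-0-0-48
    10.0.0.58   ip-10-0-0-58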

Verification step

Ensure the pcs service is running.

# systemctl status pcsd.service

pcsd.service - PCS GUI and remote configuration interface
Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2018-03-01 14:53:28 UTC; 28min ago
Docs: man:pcsd(8)
man:pcs(8)
Main PID: 5437 (pcsd)
CGroup: /system.slice/pcsd.service
     └─5437 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null &
Mar 01 14:53:27 ip-10-0-0-48.ec2.internal systemd[1]: Starting PCS GUI and remote configuration interface…
Mar 01 14:53:28 ip-10-0-0-48.ec2.internal systemd[1]: Started PCS GUI and remote configuration interface.

4.7. Creating a cluster

Complete the following steps to create the cluster of nodes.

Procedure

  1. On one of the nodes, enter the following command to authenticate the pcs user hacluster. In the command, specify the name of each node in the cluster.

    # pcs host auth  hostname1 hostname2 hostname3
    Username: hacluster
    Password:
    hostname1: Authorized
    hostname2: Authorized
    hostname3: Authorized

    Example:

    [root@node01 clouduser]# pcs host auth node01 node02 node03
    Username: hacluster
    Password:
    node01: Authorized
    node02: Authorized
    node03: Authorized
  2. Create the cluster.

    # pcs cluster setup cluster-name hostname1 hostname2 hostname3

    Example:

    [root@node01 clouduser]# pcs cluster setup newcluster node01 node02 node03
    
    ...omitted
    
    Synchronizing pcsd certificates on nodes node01, node02, node03...
    node02: Success
    node03: Success
    node01: Success
    Restarting pcsd on the nodes in order to reload the certificates...
    node02: Success
    node03: Success
    node01: Success

Verification steps

  1. Enable the cluster.

    [root@node01 clouduser]# pcs cluster enable --all
  2. Start the cluster.

    [root@node01 clouduser]# pcs cluster start --all

    Example:

    [root@node01 clouduser]# pcs cluster enable --all
    node02: Cluster Enabled
    node03: Cluster Enabled
    node01: Cluster Enabled
    
    [root@node01 clouduser]# pcs cluster start --all
    node02: Starting Cluster...
    node03: Starting Cluster...
    node01: Starting Cluster...

4.8. Configuring fencing

Complete the following steps to configure fencing.

Procedure

  1. Enter the following AWS metadata query to get the Instance ID for each node. You need these IDs to configure the fence device. See Instance Metadata and User Data for additional information.

    # echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)

    Example:

    [root@ip-10-0-0-48 ~]# echo $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    i-07f1ac63af0ec0ac6
  2. Enter the following command to configure the fence device. Use pcmk_host_map to map the RHEL host name to the Instance ID. Use the AWS Access Key and AWS Secret Access Key you previously set up.

    # pcs stonith create name fence_aws access_key=access-key secret_key=secret-access-key region=region pcmk_host_map="rhel-hostname-1:Instance-ID-1;rhel-hostname-2:Instance-ID-2;rhel-hostname-3:Instance-ID-3" power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4

    Example:

    [root@ip-10-0-0-48 ~]# pcs stonith create clusterfence fence_aws access_key=AKIAI*******6MRMJA secret_key=a75EYIG4RVL3h*******K7koQ8dzaDyn5yoIZ/ region=us-east-1 pcmk_host_map="ip-10-0-0-48:i-07f1ac63af0ec0ac6;ip-10-0-0-46:i-063fc5fe93b4167b2;ip-10-0-0-58:i-08bd39eb03a6fd2c7" power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4
  3. Test the fencing agent for one of the other nodes.

    # pcs stonith fence awsnodename
    Note

    The command response may take several minutes to display. If you watch the active terminal session for the node being fenced, you see that the terminal connection is immediately terminated after you enter the fence command.

    Example:

    [root@ip-10-0-0-48 ~]# pcs stonith fence ip-10-0-0-58
    Node: ip-10-0-0-58 fenced

Verification steps

  1. Check the status to verify that the node is fenced.

    # pcs status

    Example:

    [root@ip-10-0-0-48 ~]# pcs status
    Cluster name: newcluster
    Stack: corosync
    Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
    Last updated: Fri Mar  2 19:55:41 2018
    Last change: Fri Mar  2 19:24:59 2018 by root via cibadmin on ip-10-0-0-46
    
    3 nodes configured
    1 resource configured
    
    Online: [ ip-10-0-0-46 ip-10-0-0-48 ]
    OFFLINE: [ ip-10-0-0-58 ]
    
    Full list of resources:
    clusterfence  (stonith:fence_aws):    Started ip-10-0-0-46
    
    Daemon Status:
    corosync: active/disabled
    pacemaker: active/disabled
    pcsd: active/enabled
  2. Start the node that was fenced in the previous step.

    # pcs cluster start awshostname
  3. Check the status to verify the node started.

    # pcs status

    Example:

    [root@ip-10-0-0-48 ~]# pcs status
    Cluster name: newcluster
    Stack: corosync
    Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
    Last updated: Fri Mar  2 20:01:31 2018
    Last change: Fri Mar  2 19:24:59 2018 by root via cibadmin on ip-10-0-0-48
    
    3 nodes configured
    1 resource configured
    
    Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ]
    
    Full list of resources:
    
      clusterfence  (stonith:fence_aws):    Started ip-10-0-0-46
    
    Daemon Status:
      corosync: active/disabled
      pacemaker: active/disabled
      pcsd: active/enabled

4.9. Installing the AWS CLI on cluster nodes

Previously, you installed the AWS CLI on your host system. You now need to install the AWS CLI on cluster nodes before you configure the network resource agents.

Complete the following procedure on each cluster node.

Prerequisites

You must have created an AWS Access Key and AWS Secret Access Key. See Creating the AWS Access Key and AWS Secret Access Key for more information.

Procedure

  1. Perform the procedure Installing the AWS CLI.
  2. Enter the following command to verify that the AWS CLI is configured properly. The instance IDs and instance names are displayed.

    Example:

    [root@ip-10-0-0-48 ~]# aws ec2 describe-instances --output text --query 'Reservations[*].Instances[*].[InstanceId,Tags[?Key==`Name`].Value]'
    i-07f1ac63af0ec0ac6
    ip-10-0-0-48
    i-063fc5fe93b4167b2
    ip-10-0-0-46
    i-08bd39eb03a6fd2c7
    ip-10-0-0-58

4.10. Installing network resource agents

For HA operations to work, the cluster uses AWS networking resource agents to enable failover functionality. If a node does not respond to a heartbeat check in a set time, the node is fenced and operations fail over to an additional node in the cluster. Network resource agents need to be configured for this to work.

Add the two resources to the same group to enforce order and colocation constraints.

Create a secondary private IP resource and virtual IP resource

Complete the following procedure to add a secondary private IP address and create a virtual IP. You can complete this procedure from any node in the cluster.

Procedure

  1. Enter the following command to view the AWS Secondary Private IP Address resource agent (awsvip) description. This shows the options and default operations for this agent.

    # pcs resource describe awsvip
  2. Enter the following command to create the Secondary Private IP address using an unused private IP address in the VPC CIDR block.

    # pcs resource create privip awsvip secondary_private_ip=Unused-IP-Address --group group-name

    Example:

    [root@ip-10-0-0-48 ~]# pcs resource create privip awsvip secondary_private_ip=10.0.0.68 --group networking-group
  3. Create a virtual IP resource. This is a VPC IP address that can be rapidly remapped from the fenced node to the failover node, masking the failure of the fenced node within the subnet.

    # pcs resource create vip IPaddr2 ip=secondary-private-IP --group group-name

    Example:

    root@ip-10-0-0-48 ~]# pcs resource create vip IPaddr2 ip=10.0.0.68 --group networking-group

Verification step

Enter the pcs status command to verify that the resources are running.

# pcs status

Example:

[root@ip-10-0-0-48 ~]# pcs status
Cluster name: newcluster
Stack: corosync
Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
Last updated: Fri Mar  2 22:34:24 2018
Last change: Fri Mar  2 22:14:58 2018 by root via cibadmin on ip-10-0-0-46

3 nodes configured
3 resources configured

Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ]

Full list of resources:

clusterfence    (stonith:fence_aws):    Started ip-10-0-0-46
 Resource Group: networking-group
     privip (ocf::heartbeat:awsvip):    Started ip-10-0-0-48
     vip    (ocf::heartbeat:IPaddr2):   Started ip-10-0-0-58

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

Create an elastic IP address

An elastic IP address is a public IP address that can be rapidly remapped from the fenced node to the failover node, masking the failure of the fenced node.

Note that this is different from the virtual IP resource created earlier. The elastic IP address is used for public-facing Internet connections instead of subnet connections.

  1. Add the new resource to the same group that was previously created, to enforce order and colocation constraints. This is done with the --group option in step 4.
  2. Enter the following AWS CLI command to create an elastic IP address.

    [root@ip-10-0-0-48 ~]# aws ec2 allocate-address --domain vpc --output text
    eipalloc-4c4a2c45   vpc 35.169.153.122
  3. Enter the following command to view the AWS Secondary Elastic IP Address resource agent (awseip) description. This shows the options and default operations for this agent.

    # pcs resource describe awseip
  4. Create the Secondary Elastic IP address resource using the elastic IP address and allocation ID returned in step 2.

    # pcs resource create elastic awseip elastic_ip=Elastic-IP-Address allocation_id=Elastic-IP-Allocation-ID --group networking-group

    Example:

    # pcs resource create elastic awseip elastic_ip=35.169.153.122 allocation_id=eipalloc-4c4a2c45 --group networking-group

Verification step

Enter the pcs status command to verify that the resource is running.

# pcs status

Example:

[root@ip-10-0-0-58 ~]# pcs status
Cluster name: newcluster
Stack: corosync
Current DC: ip-10-0-0-58 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum
Last updated: Mon Mar  5 16:27:55 2018
Last change: Mon Mar  5 15:57:51 2018 by root via cibadmin on ip-10-0-0-46

3 nodes configured
4 resources configured

Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ]

Full list of resources:

 clusterfence   (stonith:fence_aws):    Started ip-10-0-0-46
 Resource Group: networking-group
     privip (ocf::heartbeat:awsvip):  Started ip-10-0-0-48
     vip    (ocf::heartbeat:IPaddr2):    Started ip-10-0-0-48
     elastic (ocf::heartbeat:awseip):    Started ip-10-0-0-48

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

Test the elastic IP address

Enter the following commands to verify that the virtual IP (awsvip) and elastic IP (awseip) resources are working.

Procedure

  1. Launch an SSH session from your local workstation to the elastic IP address previously created.

    $ ssh -l ec2-user -i ~/.ssh/<KeyName>.pem elastic-IP

    Example:

    $ ssh -l ec2-user -i ~/.ssh/cluster-admin.pem 35.169.153.122
  2. Verify that the host you connected to via SSH is the host associated with the elastic resource created.

Chapter 5. Deploying a Red Hat Enterprise Linux image as a Google Compute Engine instance on Google Cloud Platform

You have a number of options for deploying a Red Hat Enterprise Linux (RHEL) 8 image as a Google Compute Engine (GCE) instance on Google Cloud Platform (GCP). This chapter discusses your options for choosing an image and lists or refers to system requirements for your host system and VM. The chapter provides procedures for creating a custom image, uploading to GCE, and launching an instance.

This chapter refers to the Google documentation in a number of places. For many procedures, see the referenced Google documentation for additional detail.

Note

For a list of Red Hat product certifications for GCP, see Red Hat on Google Cloud Platform.

Prerequisites

  • You need a Red Hat Customer Portal account to complete the procedures in this chapter.
  • Create an account with GCP to access the Google Cloud Platform Console. See Google Cloud for more information.
  • Enable your Red Hat subscriptions through the Red Hat Cloud Access program. The Red Hat Cloud Access program allows you to move your Red Hat subscriptions from physical or on-premise systems onto GCP with full support from Red Hat.

5.1. Red Hat Enterprise Linux image options on GCP

The following table lists image choices and the differences in the image options.

Table 5.1. Image options

Image optionSubscriptionsSample scenarioConsiderations

Choose to deploy a custom image that you move to GCP.

Leverage your existing Red Hat subscriptions.

Enable subscriptions through the Red Hat Cloud Access program, upload your custom image, and attach your subscriptions.

The subscription includes the Red Hat product cost; you pay Google for all other instance costs.

Custom images that you move to GCP are called "Cloud Access" images because you leverage your existing Red Hat subscriptions. Red Hat provides support directly for Cloud Access images.

Choose to deploy an existing GCP image that includes RHEL.

The GCP images include a Red Hat product.

Choose a RHEL image when you launch an instance on the GCP Compute Engine, or choose an image from the Google Cloud Platform Marketplace.

You pay GCP hourly on a pay-as-you-go model. Such images are called "on-demand" images. GCP offers support for on-demand images through a support agreement.

Note

You can create a custom image for GCP using Red Hat Image Builder. See Composing a Customized RHEL System Image for more information.

Important

You cannot convert an on-demand instance to a Red Hat Cloud Access instance. To change from an on-demand image to a Red Hat Cloud Access (BYOS) image, create a new Red Hat Cloud Access instance and migrate data from your on-demand instance. Cancel your on-demand instance after you migrate your data to avoid double billing.

The remainder of this chapter includes information and procedures pertaining to custom images.

5.2. Understanding base images

This section includes information on using preconfigured base images and their configuration settings.

5.2.1. Using a custom base image

To manually configure a VM, you start with a base (starter) VM image. Once you have created the base VM image, you can modify configuration settings and add the packages the VM requires to operate on the cloud. You can make additional configuration changes for your specific application after you upload the image.

The recommended base VM image is the Red Hat Enterprise Linux 8 KVM Guest Image, which you download from the Red Hat Customer Portal. The KVM Guest Image is preconfigured with the following cloud configuration settings.

  • The root account is disabled. You temporarily enable root account access to make configuration changes and install packages that the cloud may require. This guide provides instructions for enabling root account access.
  • A user account named cloud-user is preconfigured on the image. The cloud-user account has sudo access.
  • The image has cloud-init installed and enabled. cloud-init is a service that handles provisioning of the VM (or instance) at initial boot.

You can choose to use a custom Red Hat Enterprise Linux ISO image; however, when using a custom ISO image, you may need to make additional configuration changes.

Additional resources

Red Hat Enterprise Linux

5.2.2. Virtual machine configuration settings

Cloud VMs must have the following configuration settings.

Table 5.2. VM configuration settings

SettingRecommendation

ssh

ssh must be enabled to provide remote access to your VMs.

dhcp

The primary virtual adapter should be configured for dhcp.

5.3. Creating a base image from a KVM Guest Image

Follow the procedures in this section to create a base image from a KVM Guest Image.

Prerequisites

Enable virtualization for your Red Hat Enterprise Linux 8 host machine.

5.3.1. Downloading the KVM Guest Image

Procedure

  1. Download the latest Red Hat Enterprise Linux KVM Guest Image from the Red Hat Customer Portal.
  2. Move the image to /var/lib/libvirt/images.

5.3.2. Creating the VM from the KVM Guest Image

Procedure

  1. Ensure that you have enabled your host machine for virtualization. See Enabling virtualization in RHEL 8 for information and procedures.
  2. Create and start a basic Red Hat Enterprise Linux VM. See Creating virtual machines for instructions.

    1. If you use the command line to create your VM, ensure that you set the default memory and CPUs to the capacity you want for the VM. Set your virtual network interface to virtio.

      A basic command-line sample follows.

      virt-install --name kvmtest --memory 2048 --vcpus 2 --disk rhel-8.0-x86_64-kvm.qcow2,bus=virtio --import --os-variant=rhel8.0
    2. If you use the web console to create your VM, follow the procedure in Creating virtual machines using the web console, with these caveats:

      • Do not check Immediately Start VM.
      • Change your Memory size to your preferred settings.
      • Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and change your vCPUs to the capacity settings you want for the VM.
  3. Shut down the new VM after a login prompt appears.

5.3.3. Setting up root access to your KVM Guest Image

You need root access to make additional configuration changes to your image. You can also use root as one method of accessing your image once you have uploaded the image to the cloud. Perform the following procedure to enable root access to your VM.

Procedure

  1. From your host system, use the virt-customize command to generate a root password for the VM.

    # virt-customize -a <guest-image-path> --root-password password:<PASSWORD>

    Example:

    # virt-customize -a /var/lib/libvirt/images/rhel-guest-image-8.0-120.x86_64.qcow2 --root-password password:redhat!
    [   0.0] Examining the guest ...
    [ 103.0] Setting a random seed
    [ 103.0] Setting passwords
    [ 112.0] Finishing off
  2. Use the virt-edit command to edit the cloud.cfg file on your VM. Within the file, enable root login and password authentication by setting disable_root to 0 and ssh_pwauth to 1.

    # virt-edit -a <guest-image-path> /etc/cloud/cloud.cfg
  3. Verify root access by starting the RHEL VM and logging in as root.
  4. Configure the image.
  5. Important: This step is only for VMs you intend to upload to AWS. Install the nvme, xen-netfront, and xen-blkfront drivers, which are required for RHEL 8.x images on AWS.

     # dracut -f --add-drivers "nvme xen-netfront xen-blkfront"

    Including these drivers removes the possibility of a dracut time-out.

    Alternatively, you can add the drivers to /etc/dracut.conf.d/ and then enter dracut -f to overwrite the existing initramfs file.

  6. Power down the VM.

5.4. Creating a base VM from an ISO image

Follow the procedures in this section to create a base image from an ISO image.

Prerequisites

Enable virtualization for your Red Hat Enterprise Linux 8 host machine.

5.4.1. Downloading the ISO image

Procedure

  1. Download the latest Red Hat Enterprise Linux ISO image from the Red Hat Customer Portal.
  2. Move the image to /var/lib/libvirt/images.

5.4.2. Creating a VM from the ISO image

Procedure

  1. Ensure that you have enabled your host machine for virtualization. See Enabling virtualization in RHEL 8 for information and procedures.
  2. Create and start a basic Red Hat Enterprise Linux VM. See Creating virtual machines for instructions.

    1. If you use the command line to create your VM, ensure that you set the default memory and CPUs to the capacity you want for the VM. Set your virtual network interface to virtio.

      A basic command-line sample follows.

      virt-install --name isotest --memory 2048 --vcpus 2 --disk size=8,bus=virtio --location rhel-8.0-x86_64-dvd.iso --os-variant=rhel8.0
    2. If you use the web console to create your VM, follow the procedure in Creating virtual machines using the web console, with these caveats:

      • Do not check Immediately Start VM.
      • Change your Memory and Storage Size to your preferred settings.
      • Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and change your vCPUs to the capacity settings you want for the VM.

5.4.3. Completing the RHEL installation

Perform the following steps to complete the installation and to enable root access once the VM launches.

Procedure

  1. Choose the language you want to use during the installation process.
  2. On the Installation Summary view:

    1. Click Software Selection and check Minimal Install.
    2. Click Done.
    3. Click Installation Destination and check Custom under Storage Configuration.

      • Verify at least 500 MB for /boot. You can use the remaining space for root /.
      • Standard partitions are recommended, but you can use Logical Volume Management (LVM).
      • You can use xfs, ext4, or ext3 for File System.
      • Click Done when you are finished with changes.
  3. Click Begin Installation.
  4. Set a Root Password. Create other users as applicable.
  5. Reboot the VM and log in as root once the installation completes.
  6. Configure the image.

    Note

    Ensure that the cloud-init package is installed and enabled.

  7. Important: This step is only for VMs you intend to upload to AWS. Install the nvme, xen-netfront, and xen-blkfront drivers, which are required for RHEL 8.x images on AWS.

     # dracut -f --add-drivers "nvme xen-netfront xen-blkfront"

    Including these drivers removes the possibility of a dracut time-out.

    Alternatively, you can add the drivers to /etc/dracut.conf.d/ and then enter dracut -f to overwrite the existing initramfs file.

  8. Power down the VM.

5.5. Uploading the RHEL image to GCP

Follow the procedures in this section to upload your image to GCP.

5.5.1. Creating a new project on GCP

Complete the following steps to create a new project on GCP.

Prerequisites

You must have created an account with GCP. If you have not, see Google Cloud for more information.

Procedure

  1. Launch the GCP Console.
  2. Click the drop-down to the right of Google Cloud Platform.
  3. From the pop-up, click NEW PROJECT.
  4. From the New Project window, enter a name for your new project.
  5. Check the Organization. Click the drop-down menu to change the organization, if necessary.
  6. Confirm the Location of your parent organization or folder. Click Browse to search for and change this value, if necessary.
  7. Click CREATE to create your new GCP project.

    Note

    Once you have installed the Google Cloud SDK, you can use the gcloud projects create CLI command to create a project. A simple example follows.

    gcloud projects create my-gcp-project3 --name project3

    The example creates a project with the project ID my-gcp-project3 and the project name project3. See gcloud project create for more information.

Additional resources

Creating and Managing Resources

5.5.2. Installing the Google Cloud SDK

Complete the following steps to install the Google Cloud SDK.

Prerequisites

Procedure

  1. Follow the GCP instructions for downloading and extracting the Google Cloud SDK archive. See the GCP document Quickstart for Linux for details.
  2. Follow the same instructions for initializing the Google Cloud SDK.

    Note

    Once you have initialized the Google Cloud SDK, you can use the gcloud CLI commands to perform tasks and obtain information about your project and instances. For example, you can display project information with the gcloud compute project-info describe --project <project-name> command.

5.5.3. Creating SSH keys for Google Compute Engine

Perform the following procedure to generate and register SSH keys with GCE so that you can SSH directly into an instance using its public IP address.

Procedure

  1. Use the ssh-keygen command to generate an SSH key pair for use with GCE.

    # ssh-keygen -t rsa -f ~/.ssh/google_compute_engine
  2. From the GCP Console Dashboard page, click the Navigation menu to the left of the Google Cloud Console banner and select Compute Engine and then select Metadata.
  3. Click SSH Keys and then click Edit.
  4. Paste the contents of the ~/.ssh/google_compute_engine.pub file and click Save.

    Note

    If the Red Hat image you configured was a KVM Guest Image, the user name for your key must be cloud-user, which is the default user.

    You can now connect to your instance using standard SSH.

    # ssh -i ~/.ssh/google_compute_engine <username>@<instance_external_ip>
Note

You can run the gcloud compute config-ssh command to populate your config file with aliases for your instances. The aliases allow simple SSH connections by instance name. For information on the gcloud compute config-ssh command, see gcloud compute config-ssh.
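A minimal sketch, reusing the instance and project names from the examples in this chapter, follows.

# gcloud compute config-ssh

# ssh myinstance3.us-central1-a.my-gcp-project3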

5.5.4. Creating a storage bucket in GCP Storage

Importing to GCP requires a GCP Storage Bucket. Complete the following steps to create a bucket.

Procedure

  1. If you are not already logged in to GCP, log in with the following command.

    # gcloud auth login
  2. Create a storage bucket.

    # gsutil mb gs://bucket_name
    Note

    Alternatively, you can use the Google Cloud Console to create a bucket. See Create a bucket for information.

Additional resources

Create a bucket

5.5.5. Converting and uploading your image to your GCP Bucket

Complete the following procedure to convert and upload your image to your GCP Bucket. The samples are representative; they convert a qcow2 image to raw format and then tar that image for upload.

Procedure

  1. Run the qemu-img command to convert your image. The converted image must have the name disk.raw.

    # qemu-img convert -f qcow2 -O raw gc-iso-dvd.qcow2 disk.raw
  2. Tar the image.

    # tar --format=oldgnu -Sczf disk.raw.tar.gz disk.raw
  3. Upload the image to the bucket that you created previously. The upload can take a few minutes.

    # gsutil cp disk.raw.tar.gz gs://bucket_name
  4. From the Google Cloud Platform home screen, click the collapsed menu icon and select Storage and then select Browser.
  5. Click the name of your bucket.

    The tarred image is listed under your bucket name.

    Note

    You can also upload your image using the GCP Console. To do so, click the name of your bucket and then click Upload files.

5.5.6. Creating an image from the object in the GCP bucket

Perform the following procedure to create an image from the object in your GCP bucket.

Procedure

  1. Run the following command to create an image for GCE. Specify the name of the image you are creating, the bucket name, and the name of the tarred image.

    # gcloud compute images create my-image-name --source-uri gs://my-bucket-name/disk.raw.tar.gz
    Note

    Alternatively, you can use the Google Cloud Console to create an image. See Creating, deleting, and deprecating custom images for information.

  2. Optionally, find the image in the GCP Console.

    1. Click the Navigation menu to the left of the Google Cloud Console banner.
    2. Select Compute Engine and then Images.

5.5.7. Creating a Google Compute Engine instance from an image

Complete the following steps to configure a GCE VM instance using the GCP Console.

Note

The following procedure provides instructions for creating a basic VM instance using the GCP Console. See Creating and starting a VM instance for more information on GCE VM instances and their configuration options.

Procedure

  1. From the GCP Console Dashboard page, click the Navigation menu to the left of the Google Cloud Console banner and select Compute Engine and then select Images.
  2. Select your image.
  3. Click Create Instance.
  4. On the Create an instance page, enter a Name for your instance.
  5. Choose a Region and Zone.
  6. Choose a Machine configuration that meets or exceeds the requirements of your workload.
  7. Ensure that Boot disk specifies the name of your image.
  8. Optionally, under Firewall, select Allow HTTP traffic or Allow HTTPS traffic.
  9. Click Create.

    Note

    These are the minimum configuration options necessary to create a basic instance. Review additional options based on your application requirements.

  10. Find your image under VM instances.
  11. From the GCP Console Dashboard, click the Navigation menu to the left of the Google Cloud Console banner and select Compute Engine and then select VM instances.

    Note

    Alternatively, you can use the gcloud compute instances create CLI command to create a GCE VM instance from an image. A simple example follows.

    gcloud compute instances create myinstance3 --zone=us-central1-a --image test-iso2-image

    The example creates a VM instance named myinstance3 in zone us-central1-a based upon the existing image test-iso2-image. See gcloud compute instances create for more information.

5.5.8. Connecting to your instance

Perform the following procedure to connect to your GCE instance using its public IP address.

Procedure

  1. Run the following command to ensure that your instance is running. The command lists information about your GCE instance, including whether the instance is running, and, if so, the public IP address of the running instance.

    # gcloud compute instances list
  2. Connect to your instance using standard SSH. The example uses the google_compute_engine key created earlier.

    Note

    If the Red Hat image you configured was a KVM Guest Image, use cloud-user, which is the default user name.

    # ssh -i ~/.ssh/google_compute_engine <user_name>@<instance_external_ip>
    Note

    GCP offers a number of ways to SSH into your instance. See Connecting to instances for more information. You can also connect to your instance using the root account and password you set previously.

5.5.9. Attaching Red Hat subscriptions

Complete the following steps to attach the subscriptions you previously enabled through the Red Hat Cloud Access program.

Prerequisites

You must have enabled your subscriptions.

Procedure

  1. Register your system.

    # subscription-manager register --auto-attach
  2. Attach your subscriptions.
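
    If the subscription was not attached automatically, you can list the available subscriptions and attach one by its pool ID. The pool ID below is a placeholder.

    # subscription-manager list --available
    # subscription-manager attach --pool=<pool_ID>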

Chapter 6. Configuring a Red Hat High Availability cluster on Google Cloud Platform

This chapter includes information and procedures for configuring a Red Hat High Availability (HA) cluster on Google Cloud Platform (GCP) using Google Compute Engine (GCE) virtual machine (VM) instances as cluster nodes.

The chapter includes prerequisite procedures for setting up your environment for GCP. Once you have set up your environment, you can create and configure VM instances.

The chapter also includes procedures specific to the creation of HA clusters, which transform individual nodes into a cluster of HA nodes on GCP. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing network resource agents.

Prerequisites

  • You must be enrolled in the Red Hat Cloud Access program and have unused RHEL subscriptions. The attached subscription must include access to the following repositories for each GCP instance.

    • Red Hat Enterprise Linux 8 Server: rhel-8-server-rpms/8Server/x86_64
    • Red Hat Enterprise Linux 8 Server (High Availability): rhel-8-server-ha-rpms/8Server/x86_64
  • You must belong to an active GCP project and have sufficient permissions to create resources in the project.
  • Your project should have a service account that belongs to a VM instance and not an individual user. See Using the Compute Engine Default Service Account for information about using the default service account instead of creating a separate service account.

If you or your project administrator create a custom service account, the service account should be configured for the following roles.

  • Cloud Trace Agent
  • Compute Admin
  • Compute Network Admin
  • Cloud Datastore User
  • Logging Admin
  • Monitoring Editor
  • Monitoring Metric Writer
  • Service Account Administrator
  • Storage Admin

6.1. Required system packages

The procedures in this chapter assume you are using a host system running Red Hat Enterprise Linux. To successfully complete the procedures, your host system must have the following packages installed.

Table 6.1. System packages

PackageRepositoryDescription

libvirt

rhel-8-for-x86_64-appstream-rpms

Open source API, daemon, and management tool for managing platform virtualization

virt-install

rhel-8-for-x86_64-appstream-rpms

A command-line utility for building VMs

libguestfs

rhel-8-for-x86_64-appstream-rpms

A library for accessing and modifying VM file systems

libguestfs-tools

rhel-8-for-x86_64-appstream-rpms

System administration tools for VMs; includes the guestfish utility

6.2. Red Hat Enterprise Linux image options on GCP

The following table lists image choices and the differences in the image options.

Table 6.2. Image options

Image optionSubscriptionsSample scenarioConsiderations

Choose to deploy a custom image that you move to GCP.

Leverage your existing Red Hat subscriptions.

Enable subscriptions through the Red Hat Cloud Access program, upload your custom image, and attach your subscriptions.

The subscription includes the Red Hat product cost; you pay all other instance costs.

Custom images that you move to GCP are called "Cloud Access" images because you leverage your existing Red Hat subscriptions. Red Hat provides support directly for Cloud Access images.

Choose to deploy an existing GCP image that includes RHEL.

The GCP images include a Red Hat product.

Choose a RHEL image when you launch an instance on the GCP Compute Engine, or choose an image from the Google Cloud Platform Marketplace.

You pay GCP hourly on a pay-as-you-go model. Such images are called "on-demand" images. GCP offers support for on-demand images through a support agreement.

Note

You can create a custom image for GCP using Red Hat Image Builder. See Composing a Customized RHEL System Image for more information.

Important

You cannot convert an on-demand instance to a Red Hat Cloud Access instance. To change from an on-demand image to a Red Hat Cloud Access (BYOS) image, create a new Red Hat Cloud Access instance and migrate data from your on-demand instance. Cancel your on-demand instance after you migrate your data to avoid double billing.

The remainder of this chapter includes information and procedures pertaining to custom images.

6.3. Installing the Google Cloud SDK

Complete the following steps to install the Google Cloud SDK.

Prerequisites

Procedure

  1. Follow the GCP instructions for downloading and extracting the Google Cloud SDK archive. See the GCP document Quickstart for Linux for details.
  2. Follow the same instructions for initializing the Google Cloud SDK.

    Note

    Once you have initialized the Google Cloud SDK, you can use the gcloud CLI commands to perform tasks and obtain information about your project and instances. For example, you can display project information with the gcloud compute project-info describe --project <project-name> command.

6.4. Creating a GCP image bucket

This section describes the minimum requirements for creating a multi-regional bucket in your default location.

Prerequisites

GCP storage utility (gsutil)

Procedure

  1. If you are not already logged in to Google Cloud Platform, log in with the following command.

    # gcloud auth login
  2. Create a storage bucket.

    $ gsutil mb gs://BucketName

    Example:

    $ gsutil mb gs://rhel-ha-bucket

Additional resources

Make buckets

6.5. Creating a custom virtual private cloud network and subnet

Complete the following steps to create a custom virtual private cloud (VPC) network and subnet.

Procedure

  1. Launch the GCP Console.
  2. Select VPC networks under Networking in the left navigation pane.
  3. Click Create VPC Network.
  4. Enter a name for the VPC network.
  5. Under New subnet, create a custom subnet in the region where you want to create the cluster.
  6. Click Create.
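
    Note

    Alternatively, you can create the network and subnet from the command line. The following commands are a sketch; the network name, subnet name, region, and IP range are placeholders that you replace with your own values.

    $ gcloud compute networks create NetworkName --subnet-mode=custom
    $ gcloud compute networks subnets create SubnetName --network=NetworkName --region=RegionName --range=10.10.10.0/24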

6.6. Preparing and importing a base GCP image

Complete the following steps to prepare the image for GCP. The following procedures assume you have created an image from a KVM Guest Image.

See Create a VM from a KVM Guest image for more information.

Procedure

  1. Enter the following command to convert the file. Images uploaded to GCP must be in raw format and named disk.raw.

    $ qemu-img convert -f qcow2 ImageName.qcow2 -O raw disk.raw
  2. Enter the following command to compress the raw file. Images uploaded to GCP must be compressed.

    $ tar -Sczf ImageName.tar.gz disk.raw
  3. Import the compressed image to the bucket created earlier.

    $ gsutil cp ImageName.tar.gz gs://BucketName

6.7. Creating and configuring a base GCP instance

Complete the following steps to create and configure a GCP instance that complies with GCP operating and security requirements.

Procedure

  1. Enter the following command to create an image from the compressed file in the bucket.

    $ gcloud compute images create BaseImageName --source-uri gs://BucketName/BaseImageName.tar.gz

    Example:

    [admin@localhost ~] $ gcloud compute images create rhel-76-server --source-uri gs://user-rhelha/rhel-server-76.tar.gz
    Created [https://www.googleapis.com/compute/v1/projects/MyProject/global/images/rhel-server-76].
    NAME            PROJECT                 FAMILY  DEPRECATED  STATUS
    rhel-76-server  rhel-ha-testing-on-gcp                      READY
  2. Enter the following command to create a template instance from the image. The minimum size required for a base RHEL instance is n1-standard-2. See gcloud compute instances create for additional configuration options.

    $ gcloud compute instances create BaseInstanceName --can-ip-forward --machine-type n1-standard-2 --image BaseImageName --service-account ServiceAccountEmail

    Example:

    [admin@localhost ~] $ gcloud compute instances create rhel-76-server-base-instance --can-ip-forward --machine-type n1-standard-2 --image rhel-76-server --service-account account@project-name-on-gcp.iam.gserviceaccount.com
    Created [https://www.googleapis.com/compute/v1/projects/rhel-ha-testing-on-gcp/zones/us-east1-b/instances/rhel-76-server-base-instance].
    NAME   ZONE   MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
    rhel-76-server-base-instance  us-east1-b  n1-standard-2               10.10.10.3   192.227.54.211  RUNNING
  3. Connect to the instance with an SSH terminal session.

    $ ssh root@PublicIPaddress
  4. Update the RHEL software.

    1. Register with Red Hat Subscription Manager (RHSM).
    2. Attach a subscription pool ID (or use the --auto-attach option). A sample command sequence is shown after this sub-procedure.
    3. Disable all repositories.

      # subscription-manager repos --disable=*
    4. Enable the following repository.

      # subscription-manager repos --enable=rhel-8-server-rpms
    5. Run yum update.

      # yum update -y
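
    The registration and attach operations in sub-steps 1 and 2 can be performed with commands similar to the following; the pool ID is a placeholder.

      # subscription-manager register
      # subscription-manager attach --pool=<pool_ID>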
  5. Install the GCP Linux Guest Environment on the running instance (in-place installation).

    See Install the guest environment in-place for instructions.

  6. Select the CentOS/RHEL option.
  7. Copy the command script and paste it at the command prompt to run the script immediately.
  8. Make the following configuration changes to the instance. These changes are based on GCP recommendations for custom images. See gcloud compute images list for more information.

    1. Edit the /etc/chrony.conf file and remove all NTP servers.
    2. Add the following NTP server.

      server metadata.google.internal iburst    # Google NTP server
    3. Remove any persistent network device rules.

      # rm -f /etc/udev/rules.d/70-persistent-net.rules
      
      # rm -f /etc/udev/rules.d/75-persistent-net-generator.rules
    4. Set the network service to start automatically.

      # chkconfig network on
    5. Set the ssh service to start automatically.

      # systemctl enable sshd
      # systemctl is-enabled sshd
    6. Enter the following command to set the time zone to UTC.

      # ln -sf /usr/share/zoneinfo/UTC /etc/localtime
    7. (Optional) Edit the /etc/ssh/ssh_config file and add the following lines to the end of the file. This keeps your SSH session alive during longer periods of inactivity.

      # Server times out connections after several minutes of inactivity.
      # Keep alive ssh connections by sending a packet every 7 minutes.
      ServerAliveInterval 420
    8. Edit the /etc/ssh/sshd_config file and make the following changes, if necessary. The ClientAliveInterval 420 setting is optional; this keeps your SSH session alive during longer periods of inactivity.

      PermitRootLogin no
      PasswordAuthentication no
      AllowTcpForwarding yes
      X11Forwarding no
      PermitTunnel no
      # Compute times out connections after 10 minutes of inactivity.
      # Keep ssh connections alive by sending a packet every 7 minutes.
      ClientAliveInterval 420
  9. Disable password access by editing the /etc/cloud/cloud.cfg file. Change the ssh_pwauth value from 1 to 0.

    ssh_pwauth: 0
    Important

    Previously, you enabled password access to allow SSH session access to configure the instance. You must disable password access. All SSH session access must be passwordless.

  10. Enter the following command to unregister the instance from the subscription manager.

    # subscription-manager unregister
  11. Enter the following command to clean the shell history. Keep the instance running for the next procedure.

    # export HISTSIZE=0

6.8. Creating a snapshot image

Complete the following steps to preserve the instance configuration settings and create a snapshot.

Procedure

  1. On the running instance, enter the following command to synchronize data to disk.

    # sync
  2. On your host system, enter the following command to create the snapshot.

    $ gcloud compute disks snapshot InstanceName --snapshot-names SnapshotName
  3. On your host system, enter the following command to create the configured image from the snapshot.

    $ gcloud compute images create ConfiguredImageFromSnapshot --source-snapshot SnapshotName

6.9. Creating an HA node template instance and HA nodes

Once you have configured an image from the snapshot, you can create a node template. Use this template to create all HA nodes. Complete the following steps to create the template and HA nodes.

Procedure

  1. Enter the following command to create an instance template.

    $ gcloud compute instance-templates create InstanceTemplateName --can-ip-forward --machine-type n1-standard-2  --image ConfiguredImageFromSnapshot --service-account ServiceAccountEmailAddress

    Example:

    [admin@localhost ~] $ gcloud compute instance-templates create rhel-81-instance-template --can-ip-forward --machine-type n1-standard-2 --image rhel-81-gcp-image --service-account account@project-name-on-gcp.iam.gserviceaccount.com
    Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/global/instanceTemplates/rhel-81-instance-template].
    NAME  MACHINE_TYPE   PREEMPTIBLE  CREATION_TIMESTAMP
    rhel-81-instance-template   n1-standard-2          2018-07-25T11:09:30.506-07:00
  2. Enter the following command to create multiple nodes in one zone.

    # gcloud compute instances create NodeName01 NodeName02 --source-instance-template InstanceTemplateName --zone RegionZone --network=NetworkName --subnet=SubnetName

    Example:

    [admin@localhost ~] $ gcloud compute instances create rhel81-node-01 rhel81-node-02 rhel81-node-03 --source-instance-template rhel-81-instance-template --zone us-west1-b --network=projectVPC --subnet=range0
    Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/zones/us-west1-b/instances/rhel81-node-01].
    Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/zones/us-west1-b/instances/rhel81-node-02].
    Created [https://www.googleapis.com/compute/v1/projects/project-name-on-gcp/zones/us-west1-b/instances/rhel81-node-03].
    NAME            ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
    rhel81-node-01  us-west1-b  n1-standard-2               10.10.10.4   192.230.25.81   RUNNING
    rhel81-node-02  us-west1-b  n1-standard-2               10.10.10.5   192.230.81.253  RUNNING
    rhel81-node-03  us-west1-b  n1-standard-2               10.10.10.6   192.230.102.15  RUNNING

6.10. Installing HA packages and agents

Complete the following steps on all nodes.

Procedure

  1. In the Google Cloud Console, select Compute Engine and then select VM instances.
  2. Select the instance, click the arrow next to SSH, and select the View gcloud command option.
  3. Paste this command at a command prompt for passwordless access to the instance.
  4. Enable sudo account access and register with Red Hat Subscription Manager.
  5. Attach a subscription pool ID (or use the --auto-attach option).
  6. Disable all repositories.

    # subscription-manager repos --disable=*
  7. Enable the following repositories.

    # subscription-manager repos --enable=rhel-8-server-rpms
    # subscription-manager repos --enable=rhel-ha-for-rhel-8-server-rpms
  8. Install pcs, pacemaker, the fence agents, and the resource agents.

    # yum install -y pcs pacemaker fence-agents-gce resource-agents-gcp
  9. Update all packages.

    # yum update -y

6.11. Configuring HA services

Complete the following steps on all nodes to configure HA services.

Procedure

  1. The user hacluster was created during the pcs and pacemaker installation in the previous step. Create a password for the user hacluster on all cluster nodes. Use the same password for all nodes.

    # passwd hacluster
  2. If the firewalld service is installed, enter the following command to add the HA service.

    # firewall-cmd --permanent --add-service=high-availability
    
    # firewall-cmd --reload
  3. Enter the following command to start the pcs service and enable it to start on boot.

    # systemctl start pcsd.service
    
    # systemctl enable pcsd.service
    
    Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.

Verification steps

  1. Ensure the pcs service is running.

    # systemctl status pcsd.service
    
    pcsd.service - PCS GUI and remote configuration interface
    Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled)
    Active: active (running) since Mon 2018-06-25 19:21:42 UTC; 15s ago
    Docs: man:pcsd(8)
    man:pcs(8)
    Main PID: 5901 (pcsd)
    CGroup: /system.slice/pcsd.service
    └─5901 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null &
  2. Edit the /etc/hosts file. Add RHEL host names and internal IP addresses for all nodes.
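
    For example, using the node names and internal IP addresses from the instance-creation example earlier in this chapter, the added lines might look like the following.

     10.10.10.4 rhel81-node-01
     10.10.10.5 rhel81-node-02
     10.10.10.6 rhel81-node-03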

6.12. Creating a cluster

Complete the following steps to create the cluster of nodes.

Procedure

  1. On one of the nodes, enter the following command to authenticate the pcs user. Specify the name of each node in the cluster in the command.

    # pcs host auth hostname1 hostname2 hostname3
    Username: hacluster
    Password:
    hostname1: Authorized
    hostname2: Authorized
    hostname3: Authorized
  2. Enter the following command to create the cluster.

    # pcs cluster setup cluster-name hostname1 hostname2 hostname3
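
    A representative example, using the node names from this chapter and the cluster name shown in the pcs status output later in this chapter, follows.

    [root@rhel81-node-01 ~]# pcs cluster setup gcp-cluster rhel81-node-01 rhel81-node-02 rhel81-node-03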

Verification steps

  1. Run the following command to enable nodes to join the cluster automatically when started.

    # pcs cluster enable --all
  2. Enter the following command to start the cluster.

    # pcs cluster start --all

6.13. Creating a fencing device

For most default configurations, the GCP instance names and the RHEL host names are identical.

Complete the following steps to create a fencing device.

Procedure

  1. Enter the following command to get GCP instance names. Note that the output also shows the internal ID for the instance.

    # fence_gce --zone us-west1-b --project=rhel-ha-on-gcp -o list

    Example:

    [root@rhel81-node-01 ~]# fence_gce --zone us-west1-b --project=rhel-ha-testing-on-gcp -o list
    44358**********3181,InstanceName-3
    40819**********6811,InstanceName-1
    71736**********3341,InstanceName-2
  2. Enter the following command to create a fence device.

    # pcs stonith create FenceDeviceName fence_gce zone=RegionZone project=MyProject
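
    A representative example, reusing the zone and project from the previous step and the fence device name shown in the pcs status output below, follows.

    [root@rhel81-node-01 ~]# pcs stonith create us-west1-b-fence fence_gce zone=us-west1-b project=rhel-ha-testing-on-gcp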

Verification step

Verify that the fence devices started.

# pcs status

Example:

[root@rhel81-node-01 ~]# pcs status
Cluster name: gcp-cluster
Stack: corosync
Current DC: rhel81-node-02 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Fri Jul 27 12:53:25 2018
Last change: Fri Jul 27 12:51:43 2018 by root via cibadmin on rhel81-node-01

3 nodes configured
3 resources configured

Online: [ rhel81-node-01 rhel81-node-02 rhel81-node-03 ]

Full list of resources:

us-west1-b-fence    (stonith:fence_gce):    Started rhel81-node-01

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled

6.14. Configuring GCP node authorization

Configure cloud SDK tools to use your account credentials to access GCP.

Procedure

Enter the following command on each node to initialize each node with your project ID and account credentials.

# gcloud-ra init

6.15. Configuring the gcp-vpc-move-vip resource agent

The gcp-vpc-move-vip resource agent attaches a secondary IP address (alias IP) to a running instance. This is a floating IP address that can be passed between different nodes in the cluster.

Enter the following command to show more information about this resource.

# pcs resource describe gcp-vpc-move-vip

You can configure the resource agent to use a primary subnet address range or a secondary subnet address range. This section includes procedures for both.

Primary subnet address range

Complete the following steps to configure the resource for the primary VPC subnet.

Procedure

  1. Enter the following command to create the aliasip resource. Include an unused internal IP address. Include the CIDR block in the command.

    # pcs resource create aliasip gcp-vpc-move-vip  alias_ip=UnusedIPaddress/CIDRblock

    Example:

    [root@rhel81-node-01 ~]# pcs resource create aliasip gcp-vpc-move-vip alias_ip=10.10.10.200/32
  2. Enter the following command to create an IPaddr2 resource for managing the IP on the node.

    # pcs resource create vip IPaddr2 nic=interface ip=AliasIPaddress cidr_netmask=32

    Example:

    [root@rhel81-node-01 ~]# pcs resource create vip IPaddr2 nic=eth0 ip=10.10.10.200 cidr_netmask=32
  3. Enter the following command to group the network resources under vipgrp.

    # pcs resource group add vipgrp aliasip vip

Verification steps

  1. Enter the following command to verify that the resources have started and are grouped under vipgrp.

    [root@rhel81-node-01 ~]# pcs status
  2. Enter the following command to verify that the resource can move to a different node.

    # pcs resource move vip Node

    Example:

    [root@rhel81-node-01 ~]# pcs resource move vip rhel81-node-03
  3. Enter the following command to verify that the vip successfully started on a different node.

    [root@rhel81-node-01 ~]# pcs status

Secondary subnet address range

Complete the following steps to configure the resource for a secondary subnet address range.

Procedure

  1. Enter the following command to create a secondary subnet address range.

    # gcloud-ra compute networks subnets update SubnetName --region RegionName --add-secondary-ranges SecondarySubnetName=SecondarySubnetRange

    Example:

    # gcloud-ra compute networks subnets update range0 --region us-west1 --add-secondary-ranges range1=10.10.20.0/24
  2. Enter the following command to create the aliasip resource. Specify an unused internal IP address from the secondary subnet address range. Include the CIDR block in the command.

    # pcs resource create aliasip gcp-vpc-move-vip alias_ip=UnusedIPaddress/CIDRblock

    Example:

    [root@rhel81-node-01 ~]# pcs resource create aliasip gcp-vpc-move-vip alias_ip=10.10.20.200/32
  3. Enter the following command to create an IPaddr2 resource for managing the IP on the node.

    # pcs resource create vip IPaddr2 nic=interface ip=AliasIPaddress cidr_netmask=32

    Example:

    [root@rhel81-node-01 ~]# pcs resource create vip IPaddr2 nic=eth0 ip=10.10.20.200 cidr_netmask=32
  4. Group the network resources under vipgrp.

    # pcs resource group add vipgrp aliasip vip

Verification steps

  1. Enter the following command to verify that the resources have started and are grouped under vipgrp.

    [root@rhel81-node-01 ~]# pcs status
  2. Enter the following command to verify that the resource can move to a different node.

    # pcs resource move vip Node

    Example:

    [root@rhel81-node-01 ~]# pcs resource move vip rhel81-node-03
  3. Enter the following command to verify that the vip successfully started on a different node.

    [root@rhel81-node-01 ~]# pcs status

Legal Notice

Copyright © 2020 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.