Chapter 1. Add Compute and Storage Resources

Red Hat Hyperconverged Infrastructure (RHHI) can be scaled in multiples of three nodes to a maximum of nine nodes.

1.1. Scaling RHHI deployments

1.1.1. Before you begin

  • Be aware that the only supported method of scaling Red Hat Hyperconverged Infrastructure (RHHI) is to create additional volumes that span the new nodes. Expanding existing volumes across the new nodes is not supported.
  • Arbitrated replicated volumes are not supported for scaling.
  • If your existing deployment uses certificates signed by a Certificate Authority for encryption, prepare the certificates that will be required for the new nodes.

1.1.2. Scaling RHHI by adding additional volumes on new nodes

  1. Install the three physical machines

    Follow the instructions in Deploying Red Hat Hyperconverged Infrastructure: https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure/1.1/html/deploying_red_hat_hyperconverged_infrastructure/install-host-physical-machines.

    Note

    Only one arbitrated replicated volume is supported per deployment.

  2. Configure key-based SSH authentication

    Follow the instructions in Deploying Red Hat Hyperconverged Infrastructure to configure key-based SSH authentication from one node to all nodes: https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure/1.1/html/deploying_red_hat_hyperconverged_infrastructure/task-configure-key-based-ssh-auth

  3. Automatically configure new nodes

    1. Create an add_nodes.conf file based on the template provided in Section B.3, “Example gdeploy configuration file for scaling to additional nodes”.
    2. Run gdeploy using the add_nodes.conf file:

      # gdeploy -c add_nodes.conf
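After gdeploy completes, each new node should appear as a connected peer in the trusted storage pool. The following sketch is illustrative only: it defines a small helper that counts connected peers in `gluster peer status` output, relying on the "State: Peer in Cluster (Connected)" line that the command prints for healthy peers.

```shell
# Illustrative helper: count connected peers in `gluster peer status` output.
# The matched line is the state string gluster prints for a healthy peer.
count_connected_peers() {
    grep -c 'State: Peer in Cluster (Connected)'
}

# On an existing node, after gdeploy has run:
#   gluster peer status | count_connected_peers
# On any node of a six-node pool, the expected count is 5 (all peers but itself).
```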
  4. (Optional) If encryption is enabled

    1. Ensure that the following files are present at the locations shown on all nodes.

      /etc/ssl/glusterfs.key
      The node’s private key.
      /etc/ssl/glusterfs.pem
      The certificate signed by the Certificate Authority, which becomes the node’s certificate.
      /etc/ssl/glusterfs.ca
      The Certificate Authority’s certificate. For self-signed configurations, this file contains the concatenated certificates of all nodes.
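To confirm that all three files are in place before proceeding, a quick check such as the following can help. The function name and its optional prefix argument are purely illustrative; the paths are the ones listed above. Run it with no argument on each node.

```shell
# Illustrative check: confirm the three TLS files gluster expects are present.
# The optional prefix argument exists only so the check can be pointed at a
# test directory instead of the real filesystem root.
check_gluster_certs() {
    prefix="${1:-}"
    for f in /etc/ssl/glusterfs.key /etc/ssl/glusterfs.pem /etc/ssl/glusterfs.ca; do
        if [ ! -f "${prefix}${f}" ]; then
            echo "missing: ${prefix}${f}"
            return 1
        fi
    done
    echo "all certificate files present"
}
```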
    2. Enable management encryption.

      Create the /var/lib/glusterd/secure-access file on each node.

      # touch /var/lib/glusterd/secure-access
    3. Restart the glusterd service.

      # systemctl restart glusterd
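Substeps 2 and 3 can also be applied in a single pass over the new nodes from the node that has key-based SSH access (configured earlier). This is a hedged sketch: the hostnames are hypothetical, and the `echo` prefix makes it a dry run that only prints the commands. Remove `echo` to execute them.

```shell
# Hedged sketch: create the secure-access flag and restart glusterd on each
# new node in one loop. Hostnames are hypothetical; key-based SSH from this
# node is assumed. The echo prefix makes this a dry run.
for h in host4.example.com host5.example.com host6.example.com; do
    echo ssh root@"$h" "touch /var/lib/glusterd/secure-access && systemctl restart glusterd"
done
```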
    4. Update the auth.ssl-allow parameter for all volumes.

      Use the following command on any existing node to obtain the existing settings:

      # gluster volume get engine auth.ssl-allow

      Set auth.ssl-allow to the old value with the new IP addresses appended.

      # gluster volume set <vol_name> auth.ssl-allow "<old_hosts>;<new_hosts>"
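The value passed to gluster volume set is simply the old semicolon-separated list with the new addresses appended. A minimal sketch of that string manipulation, using hypothetical addresses:

```shell
# Hypothetical addresses for illustration. Substitute the real output of
#   gluster volume get <vol_name> auth.ssl-allow
# for old_hosts, and the new nodes' addresses for new_hosts.
old_hosts="192.0.2.1;192.0.2.2;192.0.2.3"
new_hosts="192.0.2.4;192.0.2.5;192.0.2.6"
combined="${old_hosts};${new_hosts}"
echo "$combined"
# Then apply it (cluster command, shown commented out):
#   gluster volume set <vol_name> auth.ssl-allow "$combined"
```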
  5. Disable multipath for each node’s storage devices

    1. Add the following lines to the beginning of the /etc/multipath.conf file.

      # VDSM REVISION 1.3
      # VDSM PRIVATE
    2. Add Red Hat Gluster Storage devices to the blacklist definition in the /etc/multipath.conf file.

      blacklist {
          devnode "^sd[a-z]"
      }
    3. Restart multipathd.

      # systemctl restart multipathd
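Taken together, the two edits above give an /etc/multipath.conf that begins as follows. This is an illustrative assembly of the fragments already shown; the `^sd[a-z]` pattern blacklists all sd* devices, so narrow it if only some devices back Gluster bricks.

```
# VDSM REVISION 1.3
# VDSM PRIVATE

blacklist {
    devnode "^sd[a-z]"
}
```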
  6. In Red Hat Virtualization Manager, add the new hosts to the existing cluster

    For details on adding a host to a cluster, follow the instructions in Adding a Host to the Red Hat Virtualization Manager in the Red Hat Virtualization Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/administration_guide/sect-host_tasks.

    Ensure that you configure the following settings:

    • Select Hosted Engine and the deploy action.
    • Uncheck Automatically configure firewall.
    • Enable Power management settings.
  7. Attach the gluster network to the new hosts

    1. Click the Hosts tab and select the host.
    2. Click the Network Interfaces subtab and then click Setup Host Networks.
    3. Drag and drop the newly created network to the correct interface.
    4. Ensure that the Verify connectivity checkbox is checked.
    5. Ensure that the Save network configuration checkbox is checked.
    6. Click OK to save.
    7. Verify the health of the network.

      Click the Hosts tab and select the host.

      Click the Network Interfaces subtab and check the state of the host’s network.

      If the network interface enters an "Out of sync" state or does not have an IPv4 Address, click the Management tab that corresponds to the host and click Refresh Capabilities.

  8. Create new bricks

    1. Click the Hosts tab.
    2. Select a host, and then select the Storage Devices subtab.
    3. Select a storage device from the list. Click Create Brick.
    4. In the Create Brick window, verify that the RAID Type is correct, and enter the following details. Note that these details must match those of the underlying storage.

      • Brick name
      • Mount point
      • Number of physical disks in RAID volume
    5. Click OK.

      A new thinly provisioned logical volume is created from the specified storage device.

  9. Create a new volume

    1. Click the Volumes tab.
    2. Click New. The New volume window opens.
    3. Specify values for the following fields:

      • Data Center
      • Volume Cluster
      • Name
    4. Set Type to Replicate.
    5. Click the Add Bricks button and select the bricks that comprise this volume.
    6. Check the Optimize for virt-store checkbox.
    7. Set the following volume options:

      • Set cluster.granular-entry-heal to on.
      • Set network.remote-dio to off.
      • Set performance.strict-o-direct to on.
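If you prefer to set these options from the command line rather than the UI, the same three gluster volume set calls can be scripted. In this sketch the volume name is a hypothetical placeholder, and the `echo` prefix keeps it a dry run; remove `echo` to actually apply the options.

```shell
# Hedged sketch: the three volume options from the step above, applied via the
# CLI. VOL is a hypothetical volume name.
VOL="newvol"
for opt in "cluster.granular-entry-heal on" \
           "network.remote-dio off" \
           "performance.strict-o-direct on"; do
    # $opt is intentionally unquoted so the key and value split into
    # separate arguments; the echo prefix makes this a dry run.
    echo gluster volume set "$VOL" $opt
done
```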
  10. Start the new volume

    In the Volumes tab, select the volume to start and click Start.

  11. Create a new storage domain

    1. Click the Storage tab and then click New Domain.
    2. Provide a Name for the domain.
    3. Set the Domain function to Data.
    4. Set the Storage Type to GlusterFS.
    5. Check the Use managed gluster volume option.

      A list of volumes available in the cluster appears.

    6. Click OK.