Deploying Red Hat Enterprise Linux based RHHI for Virtualization on a single node

Red Hat Hyperconverged Infrastructure for Virtualization 1.7

Create a hyperconverged configuration with a single server

Laura Bailey

Abstract

Read this for information about deploying a single self-contained Red Hat Hyperconverged Infrastructure for Virtualization server.

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Chapter 1. Workflow for deploying a single hyperconverged host

  1. Verify that your planned deployment meets Support Requirements, with exceptions described in Chapter 2, Additional requirements for single node deployments.
  2. Install the hyperconverged host machine.
  3. Configure key-based SSH without a password from the node to itself.
  4. Browse to the Web Console and deploy a single hyperconverged node.
  5. Browse to the Red Hat Virtualization Administration Console and configure Red Hat Gluster Storage as a Red Hat Virtualization storage domain.

Chapter 2. Additional requirements for single node deployments

Red Hat Hyperconverged Infrastructure for Virtualization is supported for deployment on a single node provided that all Support Requirements are met, with the following additions and exceptions.

A single node deployment requires a physical machine with:

  • 1 Network Interface Controller
  • at least 12 cores
  • at least 64GB RAM
  • at most 48TB storage

Single node deployments cannot be scaled, and are not highly available.
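
As a quick check that a candidate machine meets these requirements, you can inspect its core count, memory, and disk sizes with standard Red Hat Enterprise Linux tools; a minimal sketch:

    # lscpu | grep '^CPU(s):'
    # free -g | awk '/^Mem:/ {print $2 " GiB RAM"}'
    # lsblk -d -o NAME,SIZE,TYPE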

Chapter 3. Install host physical machine

Your physical machine needs an operating system and access to the appropriate software repositories in order to be used as a hyperconverged host.

3.1. Installing a Red Hat Enterprise Linux host

Red Hat Enterprise Linux hosts add virtualization capabilities to the Red Hat Enterprise Linux operating system. This allows the host to be used as a hyperconverged host in Red Hat Hyperconverged Infrastructure for Virtualization.

Prerequisites

  • Ensure that your physical machine meets the requirements outlined in Physical machines.

Procedure

  1. Download the Red Hat Enterprise Linux 7 ISO image from the Customer Portal:

    1. Log in to the Customer Portal at https://access.redhat.com.
    2. Click Downloads in the menu bar.
    3. Beside Red Hat Enterprise Linux, click Versions 7 and below.
    4. In the Version drop-down menu, select version 7.6.
    5. Go to Red Hat Enterprise Linux 7.6 Binary DVD and click Download Now.
    6. Create a bootable media device. See Making Media in the Red Hat Enterprise Linux Installation Guide for more information.
  2. Start the machine you are installing as a Red Hat Enterprise Linux host, and boot from the prepared installation media.
  3. From the boot menu, select Install Red Hat Enterprise Linux 7.6 and press Enter.

    Note

    You can also press the Tab key to edit the kernel parameters. Kernel parameters must be separated by a space, and you can boot the system using the specified kernel parameters by pressing the Enter key. Press the Esc key to clear any changes to the kernel parameters and return to the boot menu.

  4. Select a language, and click Continue.
  5. Select a time zone from the Date & Time screen and click Done.

    Important

    Red Hat recommends using Coordinated Universal Time (UTC) on all hosts. This helps ensure that data collection and connectivity are not impacted by variation in local time, such as during daylight savings time.

  6. Select a keyboard layout from the Keyboard screen and click Done.
  7. Specify the installation location from the Installation Destination screen.

    Important
    • Red Hat strongly recommends using the Automatically configure partitioning option.
    • All disks are selected by default, so deselect disks that you do not want to use as installation locations.
    • At-rest encryption is not supported. Do not enable encryption.
    • Red Hat recommends increasing the size of /var/log to at least 15GB to provide sufficient space for the additional logging requirements of Red Hat Gluster Storage.

      Follow the instructions in Growing a logical volume using the Web Console to increase the size of this partition after installing the operating system. A command-line alternative is sketched after this procedure.

    Click Done.

  8. Select the Ethernet network from the Network & Host Name screen.

    1. Click Configure… → General and select the Automatically connect to this network when it is available check box.
  9. Optionally configure Language Support, Security Policy, and Kdump. See Installing Using Anaconda in the Red Hat Enterprise Linux 7 Installation Guide for more information on each of the sections in the Installation Summary screen.
  10. Click Begin Installation.
  11. Set a root password and, optionally, create an additional user while Red Hat Enterprise Linux installs.

    Warning

    Red Hat strongly recommends not creating untrusted users on the hyperconverged host, as this can lead to exploitation of local security vulnerabilities.

  12. Click Reboot to complete the installation.
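
If you prefer the command line to the Web Console for growing /var/log, the same change can be made with LVM tools after the installation completes. This is only a sketch: it assumes /var/log is on its own logical volume named var_log in a volume group named rhel, and both names depend on your partitioning.

    # lvextend -r -L 15G /dev/rhel/var_log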

3.2. Enabling software repositories

  1. Register your machine to Red Hat Network.

    # subscription-manager register --username=<username> --password=<password>
  2. Attach the pool. An example of finding the pool ID is shown after this procedure.

    # subscription-manager attach --pool=<pool_ID>
  3. Disable all existing repositories.

    # subscription-manager repos --disable="*"
  4. Enable the additional channels required for Red Hat Enterprise Linux hosts.

    # subscription-manager repos --enable=rhel-7-server-rpms \
        --enable=rh-gluster-3-for-rhel-7-server-rpms \
        --enable=rhel-7-server-rhv-4-mgmt-agent-rpms \
        --enable=rhel-7-server-ansible-2.9-rpms
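
If you do not know which pool ID to attach, you can list the subscriptions available to the registered system and confirm the enabled repositories afterwards; for example:

    # subscription-manager list --available
    # yum repolist enabled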

3.3. Install and configure RHHI for Virtualization requirements

  1. Install the packages required for RHHI for Virtualization.

    # yum install glusterfs-server vdsm-gluster ovirt-hosted-engine-setup cockpit-ovirt-dashboard gluster-ansible-roles
  2. Start and enable the Web Console.

    # systemctl start cockpit
    # systemctl enable cockpit
  3. Configure the firewall to allow Web Console traffic.

    1. Open ports for the cockpit service.

      # firewall-cmd --add-service=cockpit
      # firewall-cmd --add-service=cockpit --permanent
    2. Verify that the cockpit service is allowed by the firewall.

      Ensure that cockpit appears in the output of the following command.

      # firewall-cmd --list-services | grep cockpit
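
As a quick check that the Web Console is reachable after these steps, confirm that the cockpit service is active and listening on its default port, 9090:

    # systemctl status cockpit
    # ss -tlnp | grep 9090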

Chapter 4. Configure key based SSH authentication without a password

Configure key-based SSH authentication without a password for the root user, from the host to the FQDNs of both the storage and management interfaces on that same host.

4.1. Generating SSH key pairs without a password

Generating a public/private key pair lets you use key-based SSH authentication. Generating a key pair that does not use a password makes it simpler to use Ansible to automate deployment and configuration processes.

Procedure

  1. Log in to the hyperconverged host as the root user.
  2. Generate an SSH key that does not use a password.

    1. Start the key generation process.

      # ssh-keygen -t rsa
      Generating public/private rsa key pair.
    2. Enter a location for the key.

      The default location, shown in parentheses, is used if no other input is provided.

      Enter file in which to save the key (/home/username/.ssh/id_rsa): <location>/<keyname>
    3. Specify and confirm an empty passphrase by pressing Enter twice.

      Enter passphrase (empty for no passphrase):
      Enter same passphrase again:

      The private key is saved in <location>/<keyname>. The public key is saved in <location>/<keyname>.pub.

      Your identification has been saved in <location>/<keyname>.
      Your public key has been saved in <location>/<keyname>.pub.
      The key fingerprint is SHA256:8BhZageKrLXM99z5f/AM9aPo/KAUd8ZZFPcPFWqK6+M root@server1.example.com
      The key's randomart image is:
      +---[RSA 2048]---+
      |      . .      +=|
      | . . . =      o.o|
      |  + . * .    o...|
      | = . . *  . + +..|
      |. + . . So o * ..|
      |   . o . .+ =  ..|
      |      o oo ..=. .|
      |        ooo...+  |
      |        .E++oo   |
      +----[SHA256]-----+
      Warning

      Your identification in this output is your private key. Never share your private key. Possession of your private key allows someone else to impersonate you on any system that has your public key.
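
If you are scripting host preparation, the same key pair can be generated non-interactively by supplying the empty passphrase and file name on the command line; a minimal sketch, assuming the default /root/.ssh/id_rsa location:

    # ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa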

4.2. Copying SSH keys

To access a host using your private key, that host needs a copy of your public key.

Prerequisites

  • Generate a public/private key pair.

Procedure

  1. Log in to the host as the root user.
  2. Copy your public key to the host itself, using both the front-end and the back-end FQDNs.

    # ssh-copy-id -i <location>/<keyname>.pub <user>@<hostname>

    Enter the password for <user>@<hostname> when prompted.

    Warning

    Make sure that you use the file that ends in .pub. Never share your private key. Possession of your private key allows someone else to impersonate you on any system that has your public key.

    For example, if you are logged in as the root user on server1.example.com, you would run the following commands to copy the key to both the front-end and the back-end FQDNs of that host:

    # ssh-copy-id -i <location>/<keyname>.pub root@server1front.example.com
    # ssh-copy-id -i <location>/<keyname>.pub root@server1back.example.com
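
    To confirm that key-based authentication is working, open a test connection to each FQDN; if no password prompt appears, the key was copied correctly:

    # ssh -i <location>/<keyname> root@server1front.example.com exit
    # ssh -i <location>/<keyname> root@server1back.example.com exit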

Chapter 5. Configuring a single node RHHI for Virtualization deployment

5.1. Configuring Red Hat Gluster Storage on a single node

Important

Ensure that disks specified as part of this deployment process do not have any partitions or labels.

  1. Log into the Web Console

    Browse to the Web Console management interface of the hyperconverged host, for example, https://node1.example.com:9090/, and log in with the credentials you created in the previous section.

  2. Start the deployment wizard

    1. Click Virtualization → Hosted Engine and click Start underneath Hyperconverged.

      Hosted Engine Setup screen with Start buttons underneath the Hosted Engine and Hyperconverged options

      The Gluster Configuration window opens.

    2. Click the Run Gluster Wizard for Single Node button.

      Selecting the type of hyperconverged deployment in the Web Console

      The Gluster Deployment window opens in single node mode.

  3. Specify hyperconverged host

    Specify the back-end FQDN on the storage network of the hyperconverged host and click Next.

    The Hosts tab of the single node deployment wizard
  4. Specify packages

    No additional packages are required for Red Hat Hyperconverged Infrastructure for Virtualization, but you can use this tab to install any additional packages you require on the host.

    The Packages tab of the Gluster Deployment window with the default blank fields shown
  5. Specify volumes

    Specify the volumes to create.

    The Volumes tab of the single node deployment wizard
    Name
    Specify the name of the volume to be created.
    Volume Type
    Specify a Distribute volume type. Only distributed volumes are supported for single node deployments.
    Brick Dirs
    The directory that contains this volume’s bricks.
    Add Volume
    To add more volumes, click Add Volume to create a blank entry, and specify the name of the volume to add. Red Hat recommends using a brick path of the form /gluster_bricks/<volname>/<volname>.
  6. Specify bricks

    Specify the bricks to create.

    The Bricks tab of the single node deployment wizard
    RAID
    Specify the RAID configuration to use. This should match the RAID configuration of your host. Supported values are raid5, raid6, and jbod. Setting this option ensures that your storage is correctly tuned for your RAID configuration.
    Stripe Size
    Specify the RAID stripe size in KB. Do not enter units, only the number. This can be ignored for jbod configurations.
    Disk Count
    Specify the number of data disks in a RAID volume. This can be ignored for jbod configurations.
    LV Name
    The name of the logical volume to be created. This is pre-filled with the name that you specified on the previous page of the wizard.
    Device
    Specify the raw device you want to use. Red Hat recommends an unpartitioned device.
    Size
    Specify the size of the logical volume to create in GB. Do not enter units, only the number. This number should be the same for all bricks in a replicated set. Arbiter bricks can be smaller than other bricks in their replication set.
    Mount Point
    The mount point for the logical volume. This is pre-filled with the brick directory that you specified on the previous page of the wizard.
    Thinp
    This option is enabled and volumes are thinly provisioned by default, except for the engine volume, which must be thickly provisioned.
    Enable Dedupe & Compression
    Specify whether to provision the volume using VDO for compression and deduplication at deployment time.
    Logical Size (GB)
    Specify the logical size of the VDO volume. This can be up to 10 times the size of the physical volume, with an absolute maximum logical size of 4 PB.
    Configure LV Cache
    Optionally, check this checkbox to configure a small, fast SSD device as a logical volume cache for a larger, slower logical volume. Add the device path to the SSD field, the size to the LV Size (GB) field, and set the Cache Mode used by the device.
    Warning

    To avoid data loss when using write-back mode, Red Hat recommends using two separate SSD/NVMe devices. Configuring the two devices in a RAID-1 configuration (via software or hardware) significantly reduces the potential for data loss from lost writes.

    For further information about lvmcache configuration, see Red Hat Enterprise Linux 7 LVM Administration.

    1. (Optional) If your system has multipath devices, additional configuration is required.

      1. To use multipath devices

        If you want to use multipath devices in your RHHI for Virtualization deployment, use multipath WWIDs to specify the device. For example, use /dev/mapper/3600508b1001caab032303683327a6a2e instead of /dev/sdb.

      2. To disable multipath device use

        If multipath devices exist in your environment, but you do not want to use them for your RHHI for Virtualization deployment, blacklist the devices.

        1. Create a custom multipath configuration file.

          # mkdir /etc/multipath/conf.d
          # touch /etc/multipath/conf.d/99-custom-multipath.conf
        2. Add the following content to the file, replacing <device> with the name of the device to blacklist:

          blacklist {
            devnode "<device>"
          }

          For example, to blacklist the /dev/sdb device, add the following:

          blacklist {
            devnode "sdb"
          }
        3. Restart multipathd.

          # systemctl restart multipathd
        4. Verify that your disks no longer have multipath names by using the lsblk command.

          If multipath names are still present, reboot hosts.

  7. Review and edit configuration

    Review the contents of the generated configuration file. Click Edit to modify the file and Save to keep your changes.

    Click Deploy when you are satisfied with the configuration file.

    The Review tab of the single node deployment wizard
  8. Wait for deployment to complete

    You can watch the deployment process in the text field.

    The window displays Successfully deployed gluster when complete.

    Click Continue to Hosted Engine Deployment and continue the deployment process with the instructions in Section 5.2, “Deploy the Hosted Engine on a single node using the Web Console”.

Important

If deployment fails, click Clean up to remove any potentially incorrect changes to the system.

Alternatively, click the Redeploy button. This returns you to the Review and edit configuration tab so that you can correct any issues in the generated configuration file before reattempting deployment.
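
Once the window reports a successful deployment, you can optionally confirm from the host that the Gluster volumes and their underlying logical volumes were created; a minimal sketch (volume and logical volume names depend on what you entered in the wizard):

    # gluster volume info
    # lvs -o lv_name,vg_name,lv_size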

5.2. Deploy the Hosted Engine on a single node using the Web Console

This section shows you how to deploy the Hosted Engine on a single node using the Web Console. Following this process results in Red Hat Virtualization Manager running in a virtual machine on your node, and managing that virtual machine. It also configures a Default cluster consisting only of that node, and enables Red Hat Gluster Storage functionality and the virtual-host tuned performance profile for the single-node cluster.

Prerequisites

  • The RHV-M Appliance is installed during the deployment process; however, if required, you can install it on the deployment host before starting the installation:
# yum install rhvm-appliance

Manually installing the Manager virtual machine is not supported.

  • Configure Red Hat Gluster Storage on a single node
  • Gather the information you need for Hosted Engine deployment

    Have the following information ready before you start the deployment process.

    • IP address for a pingable gateway to the hyperconverged host
    • IP address of the front-end management network
    • Fully-qualified domain name (FQDN) for the Hosted Engine virtual machine
    • MAC address that resolves to the static FQDN and IP address of the Hosted Engine
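
It can also help to confirm in advance that the Hosted Engine FQDN resolves to the address you intend to use and that the gateway is reachable; the name and address below are illustrative:

    # getent hosts engine.example.com
    # ping -c 3 <gateway_IP_address>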

Procedure

  1. Open the Hosted Engine Deployment wizard

    If you continued directly from the end of Configure Red Hat Gluster Storage on a single node, the wizard is already open.

    Otherwise:

    1. Click Virtualization → Hosted Engine.
    2. Click Start underneath Hyperconverged.
    3. Click Use existing configuration.

      Important

      If the previous deployment attempt failed, click Clean up instead of Use existing configuration to discard the previous attempt and start from scratch.

  2. Specify virtual machine details

    The VM tab of the Hosted Engine Deployment window with example values entered in all fields.
    1. Enter the following details:

      Engine VM FQDN
      The fully qualified domain name to be used for the Hosted Engine virtual machine, for example, engine.example.com.
      MAC Address

      The MAC address associated with the Engine VM FQDN.

      Important

      The pre-populated MAC address must be replaced.

      Network Configuration

      Choose either DHCP or Static from the Network Configuration drop-down list.

      • If you choose DHCP, you must have a DHCP reservation for the Hosted Engine virtual machine so that its host name resolves to the address received from DHCP. Specify its MAC address in the MAC Address field.
      • If you choose Static, enter the following details:

        • VM IP Address - The IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Hosted Engine virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).
        • Gateway Address
        • DNS Servers
      Bridge Interface
      Select the Bridge Interface from the drop-down list.
      Root password
      The root password to be used for the Hosted Engine virtual machine.
      Root SSH Access
      Specify whether to allow Root SSH Access. The default value of Root SSH Access is Yes.
      Number of Virtual CPUs
      Enter the Number of Virtual CPUs for the virtual machine.
      Memory Size (MiB)

      Enter the Memory Size (MiB). The available memory is displayed next to the input field.

      Note

      Red Hat recommends retaining the default values for Root SSH Access, Number of Virtual CPUs, and Memory Size.

    2. Optionally expand the Advanced fields.

      The advanced options for Hosted engine Deployment window.
      Root SSH Public Key
      Enter a Root SSH Public Key to use for root access to the Hosted Engine virtual machine.
      Edit Hosts File
      Select or clear the Edit Hosts File check box to specify whether to add entries for the Hosted Engine virtual machine and the base host to the virtual machine’s /etc/hosts file. You must ensure that the host names are resolvable.
      Bridge Name
      Change the management Bridge Name, or accept the default ovirtmgmt.
      Gateway Address
      Enter the Gateway Address for the management bridge.
      Host FQDN
      Enter the Host FQDN of the first host to add to the Manager. This is the front-end FQDN of the base host you are running the deployment on.
      Network Test
      If you have a static network configuration or are using an isolated environment with addresses defined in /etc/hosts, set Network Test to Ping.
    3. Click Next. Your FQDNs are validated before the next screen appears.
  3. Specify virtualization management details

    1. Enter the password to be used by the admin account in the Administration Portal. You can also specify an email address for notifications, which requires further configuration after deployment; see Chapter 8, Post-deployment configuration suggestions.

      The Engine tab of the Hosted Engine Deployment window with example values entered in all fields.
    2. Click Next.
  4. Review virtual machine configuration

    1. Ensure that the details listed on this tab are correct. Click Back to correct any incorrect information.

      The Prepare VM tab of the Hosted Engine Deployment window with configuration details displayed for review.
    2. Click Prepare VM.
    3. Wait for virtual machine preparation to complete.

      The Prepare VM tab of the Hosted Engine Deployment window showing, 'Execution completed successfully. Please proceed to the next step.'

      If preparation does not occur successfully, see Viewing Hosted Engine deployment errors.

    4. Click Next.
  5. Specify storage for the Hosted Engine virtual machine

    1. Specify the back-end address and location of the engine volume.

      The Storage tab of the Hosted Engine Deployment window with the engine volume specified as hosted engine virtual machine storage.
    2. Click Next.
  6. Finalize Hosted Engine deployment

    1. Review your deployment details and verify that they are correct.

      Note

      The responses you provided during configuration are saved to an answer file to help you reinstall the hosted engine if necessary. The answer file is created at /etc/ovirt-hosted-engine/answers.conf by default. This file should not be modified manually without assistance from Red Hat Support.

      The Finish tab of the Hosted Engine Deployment window with details of the Hosted Engine’s storage displayed.
    2. Click Finish Deployment.
  7. Wait for deployment to complete

    This can take some time, depending on your configuration details.

    The window displays the following when complete.

    The Finish tab of the Hosted Engine Deployment window showing Hosted Engine deployment complete.
    Important

    If deployment does not complete successfully, see Viewing Hosted Engine deployment errors.

    Click Close.

  8. Verify hosted engine deployment

    Browse to the Administration Portal (for example, http://engine.example.com/ovirt-engine) and verify that you can log in using the administrative credentials you configured earlier. Click Dashboard and look for your hosts, storage domains, and virtual machines.

    The Administration Portal dashboard after deployment.
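
You can also check the state of the Hosted Engine virtual machine from the host itself, using the hosted-engine tool installed as part of ovirt-hosted-engine-setup:

    # hosted-engine --vm-status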

Chapter 6. Configure Red Hat Gluster Storage as a Red Hat Virtualization storage domain

The hosted engine storage domain is imported automatically, but other storage domains must be added before they can be used.

  1. Click the Storage tab and then click New Domain.
  2. Select GlusterFS as the Storage Type and provide a Name for the domain.
  3. Check the Use managed gluster volume option and select the volume to use.
  4. Click OK to save.
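
If you prefer not to use the managed gluster volume option, the storage path can instead be entered manually; a hypothetical example for a volume named data served from this deployment's back-end FQDN:

    server1back.example.com:/data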

Chapter 7. Verify your deployment

After deployment is complete, verify that your deployment has completed successfully.

  1. Browse to the Administration Portal, for example, http://engine.example.com/ovirt-engine.

    Administration Console Login

    Login page for the Administration Console

  2. Log in using the administrative credentials added during hosted engine deployment.

    When login is successful, the Dashboard appears.

    Administration Console Dashboard

  3. Verify that your cluster is available.

    Administration Console Dashboard - Clusters

    The cluster widget with one cluster showing

  4. Verify that one host is available.

    The hosts widget with one host showing

    1. Click Compute → Hosts.
    2. Verify that your host is listed with a Status of Up.
  5. Verify that all storage domains are available.

    1. Click Storage → Domains.
    2. Verify that the Active icon is shown in the first column.

      Administration Console - Storage Domains

      Administration Console storage domain dashboard
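
In addition to these Administration Portal checks, you can confirm on the host that the Gluster volumes backing your storage domains are started; the engine volume name below assumes the default created during deployment:

    # gluster volume list
    # gluster volume status engine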

Chapter 8. Post-deployment configuration suggestions

Depending on your requirements, you may want to perform some additional configuration on your newly deployed Red Hat Hyperconverged Infrastructure for Virtualization. This section contains suggested next steps for additional configuration.

Details on these processes are available in Maintaining Red Hat Hyperconverged Infrastructure for Virtualization.

8.1. Configure notifications

See Configuring Event Notifications in the Administration Portal to configure email notifications.

8.2. Configure fencing for high availability

Fencing allows a cluster to enforce performance and availability policies and react to unexpected host failures by automatically rebooting hyperconverged hosts.

See Configure High Availability using fencing policies for further information.

8.3. Configure backup and recovery options

Red Hat recommends configuring at least basic disaster recovery capabilities on all production deployments.

See Configuring backup and recovery options in Maintaining Red Hat Hyperconverged Infrastructure for Virtualization for more information.

Chapter 9. Next steps

Part I. Reference material

Appendix A. Glossary of terms

A.1. Virtualization terms

Administration Portal
A web user interface provided by Red Hat Virtualization Manager, based on the oVirt engine web user interface. It allows administrators to manage and monitor cluster resources like networks, storage domains, and virtual machine templates.
Hosted Engine
The instance of Red Hat Virtualization Manager that manages RHHI for Virtualization.
Hosted Engine virtual machine
The virtual machine that acts as Red Hat Virtualization Manager. The Hosted Engine virtual machine runs on a virtualization host that is managed by the instance of Red Hat Virtualization Manager that is running on the Hosted Engine virtual machine.
Manager node
A virtualization host that runs Red Hat Virtualization Manager directly, rather than running it in a Hosted Engine virtual machine.
Red Hat Enterprise Linux host
A physical machine installed with Red Hat Enterprise Linux plus additional packages to provide the same capabilities as a Red Hat Virtualization host. Support for this type of host is limited.
Red Hat Virtualization
An operating system and management interface for virtualizing resources, processes, and applications for Linux and Microsoft Windows workloads.
Red Hat Virtualization host
A physical machine installed with Red Hat Virtualization that provides the physical resources to support the virtualization of resources, processes, and applications for Linux and Microsoft Windows workloads. This is the only type of host supported with RHHI for Virtualization.
Red Hat Virtualization Manager
A server that runs the management and monitoring capabilities of Red Hat Virtualization.
Self-Hosted Engine node
A virtualization host that contains the Hosted Engine virtual machine. All hosts in a RHHI for Virtualization deployment are capable of becoming Self-Hosted Engine nodes, but there is only one Self-Hosted Engine node at a time.
storage domain
A named collection of images, templates, snapshots, and metadata. A storage domain can be comprised of block devices or file systems. Storage domains are attached to data centers in order to provide access to the collection of images, templates, and so on to hosts in the data center.
virtualization host
A physical machine with the ability to virtualize physical resources, processes, and applications for client access.
VM Portal
A web user interface provided by Red Hat Virtualization Manager. It allows users to manage and monitor virtual machines.

A.2. Storage terms

brick
An exported directory on a server in a trusted storage pool.
cache logical volume
A small, fast logical volume used to improve the performance of a large, slow logical volume.
geo-replication
One way asynchronous replication of data from a source Gluster volume to a target volume. Geo-replication works across local and wide area networks as well as the Internet. The target volume can be a Gluster volume in a different trusted storage pool, or another type of storage.
gluster volume
A logical group of bricks that can be configured to distribute, replicate, or disperse data according to workload requirements.
logical volume management (LVM)
A method of combining physical disks into larger virtual partitions. Physical volumes are placed in volume groups to form a pool of storage that can be divided into logical volumes as needed.
Red Hat Gluster Storage
An operating system based on Red Hat Enterprise Linux with additional packages that provide support for distributed, software-defined storage.
source volume
The Gluster volume that data is being copied from during geo-replication.
storage host
A physical machine that provides storage for client access.
target volume
The Gluster volume or other storage volume that data is being copied to during geo-replication.
thin provisioning
Provisioning storage such that only the space that is required is allocated at creation time, with further space being allocated dynamically according to need over time.
thick provisioning
Provisioning storage such that all space is allocated at creation time, regardless of whether that space is required immediately.
trusted storage pool
A group of Red Hat Gluster Storage servers that recognise each other as trusted peers.

A.3. Hyperconverged Infrastructure terms

Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization
RHHI for Virtualization is a single product that provides both virtual compute and virtual storage resources. Red Hat Virtualization and Red Hat Gluster Storage are installed in a converged configuration, where the services of both products are available on each physical machine in a cluster.
hyperconverged host
A physical machine that provides physical storage, which is virtualized and consumed by virtualized processes and applications run on the same host. All hosts installed with RHHI for Virtualization are hyperconverged hosts.
Web Console
The web user interface for deploying, managing, and monitoring RHHI for Virtualization. The Web Console is provided by the Web Console service and plugins for Red Hat Virtualization Manager.

Legal Notice

Copyright © 2020 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
All other trademarks are the property of their respective owners.