Deploying Red Hat Hyperconverged Infrastructure
Instructions for deploying Red Hat Hyperconverged Infrastructure
Laura Bailey
lbailey@redhat.com
Chapter 1. Known Issues
This section documents unexpected behavior known to affect Red Hat Hyperconverged Infrastructure (RHHI).
- BZ#1395087 - Virtual machines pause indefinitely when network unavailable
- When the network for Gluster and migration traffic becomes unavailable, virtual machines performing I/O become inaccessible, and cannot complete migration to another node until the hypervisor reboots. This is expected behavior for current fencing and migration methods. There is currently no workaround for this issue.
- BZ#1401969 - Arbiter brick becomes heal source
- When data bricks in an arbiter volume are taken offline and brought back online one at a time, the arbiter brick is incorrectly identified as the source of correct data when healing the other bricks. This results in virtual machines being paused, because arbiter bricks contain only metadata. There is currently no workaround for this issue.
- BZ#1412930 - Excessive logging when storage unavailable when TLS/SSL enabled
- When Transport Layer Security (TLS/SSL) is enabled on Red Hat Gluster Storage volumes and a Red Hat Gluster Storage server becomes unavailable, a large number of connection error messages are logged until the server becomes available again. This occurs because repeated reconnection attempts are not logged at a lower log level than the initial failure. There is currently no workaround for this issue.
- BZ#1413845 - Hosted Engine does not migrate when management network not available
- If the management network becomes unavailable during migration, the Hosted Engine virtual machine restarts but the Hosted Engine itself does not. To work around this issue, manually restart the ovirt-engine service.
- BZ#1425767 - Sanity check script does not fail
- The sanity check script sometimes returns zero (success) even when disks do not exist or are not empty. Because the sanity check appears to succeed, gdeploy attempts to create physical volumes and fails. To work around this issue, ensure that the disk value in the gdeploy configuration file is correct and that the disk has no partitions or labels, then retry deployment.
- BZ#1432326 - Associating a network with a host makes network out of sync
- When the Gluster network is associated with a Red Hat Gluster Storage node’s network interface, the Gluster network enters an out of sync state. To work around this issue, click the Management tab that corresponds to the node and click Refresh Capabilities.
- BZ#1434105 - Live Storage Migration failure
- Live migration from a Gluster-based storage domain fails when I/O operations are still in progress during migration. There is currently no workaround for this issue.
- BZ#1437799 - ISO upload to Gluster storage domain using SSH fails
- When uploading an ISO image to a Gluster-based storage domain using SSH, the ovirt-iso-uploader tool uses the wrong path and fails with an error like the following:
OSError: [Errno 2] No such file or directory: '/ISO_Volume'
ERROR: Unable to copy RHGSS-3.1.3-RHEL-7-20160616.2-RHGSS-x86_64-dvd1.iso to ISO storage domain on ISODomain1.
ERROR: Error message is "unable to test the available space on /ISO_Volume"
To work around this issue, enable NFS access on the Gluster volume and upload using NFS instead of SSH.
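As a sketch, the workaround amounts to re-enabling Gluster's built-in NFS server on the volume and then uploading over NFS; the volume, domain, and image names below are placeholders, not values from this deployment:

```shell
# Re-enable Gluster NFS on the volume backing the ISO domain (placeholder name)
gluster volume set ISO_Volume nfs.disable off
# Upload the image over NFS instead of SSH (placeholder domain and image names)
ovirt-iso-uploader upload -i ISODomain1 example.iso
```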
- BZ#1439069 - Reinstalling a node overwrites /etc/ovirt-hosted-engine/hosted-engine.conf
- If a node is reinstalled after the primary Red Hat Gluster Storage server has been replaced, the contents of the /etc/ovirt-hosted-engine/hosted-engine.conf file are overwritten with details of the old primary host. This results in a non-operational state in the cluster. To work around this issue, move the reinstalled node to Maintenance mode and update the contents of /etc/ovirt-hosted-engine/hosted-engine.conf to point to the replacement primary server. Then reboot and reactivate the reinstalled node to bring it online and mount all volumes.
- BZ#1443169 - Deploying Hosted Engine fails during bridge configuration
- When setting up the self-hosted engine on a system with a bridged network configuration, setup fails after the restart of the firewalld service. To work around this problem, remove all *.bak files from the /etc/sysconfig/network-scripts/ directory before deploying the self-hosted engine.
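The workaround can be performed with a quick check and cleanup; this is a sketch, assuming the standard network-scripts directory:

```shell
# List any leftover backup files from previous network configuration attempts
ls /etc/sysconfig/network-scripts/*.bak 2>/dev/null
# Remove them before deploying the self-hosted engine (-f ignores a no-match glob)
rm -f /etc/sysconfig/network-scripts/*.bak
```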
Part I. Plan
Chapter 2. Architecture
Red Hat Hyperconverged Infrastructure (RHHI) combines compute, storage, networking, and management capabilities in one deployment.
RHHI is deployed across three physical machines to create a discrete cluster or pod using Red Hat Gluster Storage 3.2 and Red Hat Virtualization 4.1.
The dominant use case for this deployment is in remote office branch office (ROBO) environments, where a remote office synchronizes data to a central data center on a regular basis, but does not require connectivity to the central data center to function.
The following diagram shows the basic architecture of a single cluster.
Chapter 3. Support Requirements
Review this section to ensure that your planned deployment meets the requirements for support by Red Hat.
3.1. Operating System
Red Hat Hyperconverged Infrastructure (RHHI) is supported only on Red Hat Virtualization Host 4.1. Use Red Hat Virtualization Host 4.1 as a base for all other configuration.
See the Red Hat Virtualization Planning and Prerequisites Guide for details on requirements of Red Hat Virtualization: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/planning_and_prerequisites_guide/requirements
3.2. Physical Machines
Red Hat Hyperconverged Infrastructure (RHHI) requires at least 3 physical machines. Scaling to 6 physical machines or 9 physical machines is also supported.
Each physical machine must have the following capabilities.
- at least 2 NICs per physical machine, for separation of data and management traffic (see Section 3.4, “Networking” for details)
for small deployments:
- at least 12 cores
- at least 64GB RAM
- at most 48TB storage
for medium deployments:
- at least 12 cores
- at least 128GB RAM
- at most 64TB storage
for large deployments:
- at least 16 cores
- at least 256GB RAM
- at most 80TB storage
3.3. Virtual Machines
Each virtual machine can have at most 4 virtual CPUs and 2TB virtual disk space.
The supported number of virtual machines depends on their size and resource usage.
3.4. Networking
Two separate networks are required so that client and management traffic in the cluster are separated.
- Front-end network
- This network is used as the bridge for oVirt management traffic (the ovirtmgmt bridge).
- IP addresses assigned to the front-end network should be from the same subnet, and should be from a different subnet to back-end IP addresses.
- Back-end network
This is used for storage and migration traffic between storage peers.
- Red Hat recommends a 10Gbps network for the back-end network.
- Red Hat Gluster Storage requires a latency of no more than 5 milliseconds between peers.
- IP addresses assigned to the back-end network should be from the same subnet, and should be from a different subnet to front-end IP addresses.
All host FQDNs and the Hosted Engine virtual machine’s FQDN should be forward and reverse resolvable by DNS.
A DHCP server is required when selecting DHCP network configuration for the Hosted Engine virtual machine.
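Forward and reverse resolution can be checked from any host before deployment; this is a sketch, and the FQDNs below are placeholders for your real host and engine names:

```shell
# Check forward resolution, then reverse-resolve the returned address
check_fqdn() {
  local fqdn=$1 addr rev
  addr=$(getent hosts "$fqdn" | awk '{print $1; exit}')
  if [ -z "$addr" ]; then
    echo "$fqdn: forward lookup FAILED"
    return 1
  fi
  rev=$(getent hosts "$addr" | awk '{print $2; exit}')
  echo "$fqdn -> $addr -> ${rev:-NO REVERSE RECORD}"
}

for h in node1.example.com node2.example.com node3.example.com engine.example.com; do
  check_fqdn "$h"
done
```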
3.5. Storage
3.5.1. Architecture
A single Red Hat Gluster Storage cluster can have 3–4 volumes.
- 1 engine volume for the Hosted Engine
- 1 vmstore volume for virtual machine boot disk images
- 1 optional data volume for other virtual machine disk images
- 1 shared_storage volume for geo-replication metadata
A Red Hat Gluster Storage cluster can contain at most 1 geo-replicated volume.
Red Hat further recommends providing at least one hot spare drive local to each server.
3.5.2. RAID
RAID configuration limits depend on the technology in use.
- SAS/SATA 7k disks are supported with RAID6 (at most 10+2)
SAS 10k and 15k disks are supported with the following:
- RAID5 (at most 7+1)
- RAID6 (at most 10+2)
RAID cards must use a flash-backed write cache.
3.5.3. JBOD
JBOD configurations require architecture review. Contact your Red Hat representative for details.
3.5.4. Volume Types
Red Hat Hyperconverged Infrastructure (RHHI) supports only replicated or arbitrated replicated volume types.
The arbitrated replicated volume type carries the following additional limitations:
- Arbitrated replicated volumes are supported on the first three nodes only.
- Arbitrated replicated volumes must use the replica 3 arbiter 1 configuration. Arbiter bricks do not store file data; they store only file names, structure, and metadata.
This means that a three-way arbitrated replicated volume requires about 75% of the storage space that a three-way replicated volume would require to achieve the same level of consistency.
However, because the arbiter brick stores only metadata, a three-way arbitrated replicated volume only provides the availability of a two-way replicated volume.
For further details, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#Creating_Arbitrated_Replicated_Volumes
3.6. Support limitations
- One arbitrated replicated volume is supported as part of the initial deployment of Red Hat Hyperconverged Infrastructure (RHHI). Expanding this arbitrated replicated volume is not supported. Adding additional arbitrated replicated volumes is not supported.
- Arbitrated replicated volumes are currently supported only in a replica 3 arbiter 1 configuration.
For further details about these volume types, see the Red Hat Gluster Storage Administration Guide.
3.7. Disaster Recovery
Red Hat strongly recommends configuring a disaster recovery solution. For details on configuring geo-replication as a disaster recovery solution, see Maintaining Red Hat Hyperconverged Infrastructure: https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure/1.0/html/maintaining_red_hat_hyperconverged_infrastructure/configure_disaster_recovery_using_geo_replication.
Be aware of the following support limitations when configuring geo-replication:
- Red Hat Hyperconverged Infrastructure (RHHI) supports only one geo-replicated volume; Red Hat recommends backing up the volume that stores the data of your virtual machines.
- The source and destination volumes for geo-replication must be managed by different instances of Red Hat Virtualization Manager.
Part II. Deploy
Chapter 4. Deployment Workflow
The workflow for deploying Red Hat Hyperconverged Infrastructure (RHHI) is as follows:
- Verify that your planned deployment meets Chapter 3, Support Requirements.
- Install the physical machines that will act as virtualization hosts (Chapter 5, Install Host Physical Machines).
- Configure public key based SSH authentication to enable automated configuration of the hosts (Chapter 6, Configure Public Key based SSH Authentication).
- Configure Red Hat Gluster Storage for the Hosted Engine on the hosts using the Cockpit UI (Chapter 7, Configure Red Hat Gluster Storage for Hosted Engine using the Cockpit UI).
- Deploy the Hosted Engine using the Cockpit UI (Chapter 8, Deploy the Hosted Engine using the Cockpit UI).
- Configure the Red Hat Gluster Storage volumes as Red Hat Virtualization storage domains using the Red Hat Virtualization management UI (Chapter 9, Configure Red Hat Gluster Storage as a Red Hat Virtualization storage domain).
Chapter 5. Install Host Physical Machines
Install Red Hat Virtualization Host 4.1 on your three physical machines. See the following section for details about installing a virtualization host: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/installation_guide/red_hat_virtualization_hosts.
Chapter 6. Configure Public Key based SSH Authentication
From the first virtualization host, configure key-based SSH authentication for the root user to all three virtualization hosts.
Ensure that you use the hostnames or IP addresses associated with the back-end management network.
See the Red Hat Enterprise Linux 7 Installation Guide for more details: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/s1-ssh-configuration.html#s2-ssh-configuration-keypairs.
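As a sketch, the key generation and distribution from the first host looks like the following; the back-end hostnames are placeholders for your environment:

```shell
# Generate a key pair on the first host if one does not already exist
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# Copy the public key to all three hosts, including this one,
# using the back-end network hostnames
for host in node1-backend.example.com node2-backend.example.com node3-backend.example.com; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@"$host"
done
```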
Chapter 7. Configure Red Hat Gluster Storage for Hosted Engine using the Cockpit UI
Ensure that disks specified as part of this deployment process do not have any partitions or labels.
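You can inspect a disk for existing partitions or labels and, only if you are certain it holds no needed data, erase them; this is a sketch, assuming /dev/sdb is the disk intended for deployment:

```shell
# Inspect the disk for existing partitions and filesystem signatures
lsblk /dev/sdb
blkid /dev/sdb* || true
# DESTRUCTIVE: erase all signatures so the disk is clean for gdeploy
wipefs -a /dev/sdb
```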
Log into the Cockpit UI
Browse to the Cockpit management interface of the first virtualization host, for example, https://node1.example.com:9090/, and log in with the credentials you created in Chapter 5, Install Host Physical Machines.
Start the deployment wizard
Click Virtualization > Hosted Engine, select Hosted Engine with Gluster, and click Start. The deployment wizard appears.
Deployment wizard: Specify storage hosts
Specify the back-end gluster network addresses (not the management network addresses) of the three virtualization hosts. List first the virtualization host that can SSH to the others using key pairs, as it is the host that will run gdeploy and the hosted engine.
Note: If you plan to create an arbitrated replicated volume, ensure that you specify the host with the arbiter brick as Host3 on this screen.
Click Next.
Deployment wizard: Specify packages
There is no need to install packages. Delete any values from these fields and uncheck the boxes.
Click Next.
Deployment wizard: Specify volumes
Specify the volumes to be created.
- Name
- Specify the name of the volume to be created.
- Volume Type
- Specify a Replicate volume type. Only replicated volumes are supported for this release.
- Arbiter
- Specify whether to create the volume with an arbiter brick. If this box is checked, the third disk stores only metadata.
- Brick Dirs
- The directory that contains this volume’s bricks.
The default values are correct for most installations.
Deployment wizard: Specify bricks
Specify the bricks to be created.
- LV Name
- Specify the name of the logical volume to be created.
- Device
- Specify the raw device you want to use. Red Hat recommends an unpartitioned device.
- Size
- Specify the size of the logical volume to create in GB. Do not enter units, only the number.
- Mount Point
- Specify the mount point for the logical volume. This should match the brick directory that you specified on the previous page of the wizard.
- Thinp
- Specify whether to provision the volume thinly or not. Note that thick provisioning is recommended for the engine volume.
- RAID
- Specify the RAID configuration to use. This should match the RAID configuration of your host. Supported values are raid5, raid6, and jbod.
- Stripe Size
- Specify the RAID stripe size in KB. Do not enter units, only the number. This can be ignored for JBOD configurations.
- Disk Count
- Specify the number of data disks in a RAID volume. This can be ignored for JBOD configurations.
Deployment wizard: Review and edit configuration
- Click Edit to begin editing the generated gdeployConfig.conf file.
(Optional) Configure Transport Layer Security (TLS/SSL)
This can be configured during or after deployment. If you want to configure TLS/SSL encryption as part of deployment, see one of the following sections:
Review the configuration file
When you are satisfied with the configuration details, click Save and then click Deploy.
Wait for deployment to complete
You can watch the deployment process in the text field as the gdeploy process runs using the generated configuration file.
In case of deployment failure
If deployment fails, click the Redeploy button. This returns you to the Review and edit configuration tab so that you can correct any issues in the generated configuration file before reattempting deployment.
It may be necessary to clean up previous deployment attempts before you try again. Follow the steps in Chapter 11, Cleaning up Automated Red Hat Gluster Storage Deployment Errors to clean up previous attempts.
The deployment script completes and displays Successfully deployed gluster.
Chapter 8. Deploy the Hosted Engine using the Cockpit UI
This section shows you how to deploy the Hosted Engine using the Cockpit UI. Following this process results in Red Hat Virtualization Manager running as a virtual machine on the first physical machine in your deployment. It also configures a Default cluster comprising the three physical machines, and enables Red Hat Gluster Storage functionality and the virtual-host tuned performance profile for each machine in the cluster.
Navigate to the Hosted Engine deployment wizard
After completing Chapter 7, Configure Red Hat Gluster Storage for Hosted Engine using the Cockpit UI, click Continue to Hosted Engine Deployment to go to the wizard.
Agree to the installation
Continuing will configure this host for serving as hypervisor and create a VM where you have to install the engine afterwards. Are you sure you want to continue?
Type Yes in the field provided. Click Next and wait for the environment to be set up.
Answer deployment questions when prompted
Provide answers to the hosted engine deployment wizard prompts to install and configure the hosted engine. You can press Ctrl+D at any time to halt the process.
Table 8.1. Hosted Engine deployment wizard prompts
Each prompt is followed by the action to take.
Do you wish to install ovirt-engine-appliance rpm?
Enter Yes.
Do you want to configure this host and its cluster for gluster?
Enter Yes.
iptables was detected on your computer, do you wish setup to configure it?
Enter No.
Please indicate a pingable gateway IP address
Enter the IP address of the front-end gateway server.
Please indicate a nic to set ovirtmgmt bridge on
Enter the front-end management network address.
The following appliance have been found on your system: [1] The RHV-M Appliance image (OVA) - 4.1.20170328.1.el7ev [2] Directly select an OVA file. Please select an appliance
Enter the appropriate number for the hosted engine appliance (usually 1).
Would you like to use cloud-init to customize the appliance on first boot
Enter Yes.
Would you like to generate on-fly a cloud-init ISO image (of no-cloud type) or do you have an existing one?
Enter Generate.
Please provide the FQDN you would like to use for the engine appliance. Note: This will be the FQDN of the engine VM you are now going to launch, it should not point to the base host or to any other existing machine. Engine VM FQDN:
Enter the fully-qualified domain name that you want to use for your hosted engine.
Please provide the domain name you would like to use for the engine appliance. Engine VM domain:
Verify that the detected domain name is correct.
Automatically execute engine-setup on the engine appliance on first boot
Enter Yes.
Enter root password that will be used for the engine appliance
Enter the password to be used for remotely logging in to the hosted engine.
Confirm appliance root password
Re-enter the password to be used for remotely logging in to the hosted engine.
Enter ssh public key for the root user that will be used for the engine appliance
Leave this field empty.
Do you want to enable ssh access for the root user
Enter Yes.
Please specify the size of the VM disk in GB
Use the default value.
Please specify the memory size of the VM in MB
Use the default value.
The following CPU types are supported by this host:
Please specify the CPU type to be used by the VM
Use the default value.
Please specify the number of virtual CPUs for the VM
Use the default value.
You may specify a unicast MAC address for the VM or accept a randomly generated default
Enter the MAC address that resolves to the fully-qualified domain name specified for the hosted engine.
How should the engine VM network be configured
Enter DHCP.
Add lines for the appliance itself and for this host to /etc/hosts on the engine VM? (Note: ensuring that this host could resolve the engine VM hostname is still up to you)
Enter No.
Enter engine admin password (for the RHV UI)
Enter the password that the Red Hat Virtualization Manager administrator will use.
Confirm engine admin password
Re-enter the password that the Red Hat Virtualization Manager administrator will use.
Please provide the name of the SMTP server through which we will send notifications
Use the default value.
Please provide the TCP port number of the SMTP server
Use the default value.
Please provide the email address from which notifications will be sent
Use the default value.
Please provide a comma-separated list of email addresses which will get notifications
Enter any email addresses that you want to receive notifications from the hosted engine. The default (root@localhost) is adequate.
Confirm installation settings
Please confirm installation settings
Review the configuration values and verify that they are correct. When satisfied, enter Yes in the field provided and click Next.
Note: There is currently no way to edit configuration at this point in the process; if your configuration details are incorrect, you need to start this process again to correct them, or continue the process and correct them during deployment.
Wait for deployment to complete
This will take some time (about 30 minutes).
When deployment is complete, the following message is displayed:
Hosted Engine Setup successfully completed!
You can now log in to Red Hat Virtualization Manager to complete configuration.
If deployment did not complete, see Chapter 10, Handling Hosted Engine Deployment Errors.
Chapter 9. Configure Red Hat Gluster Storage as a Red Hat Virtualization storage domain
9.1. Create the logical network for gluster traffic
Log in to the engine
Browse to the engine and log in using the administrative credentials you configured in Chapter 8, Deploy the Hosted Engine using the Cockpit UI.
Create a logical network for gluster traffic
- Click the Networks tab and then click New. The New Logical Network wizard appears.
- On the General tab of the wizard, provide a Name for the new logical network, and uncheck the VM Network checkbox.
- On the Cluster tab of the wizard, uncheck the Required checkbox.
- Click OK to create the new logical network.
Enable the new logical network for gluster
- Click the Networks tab and select the new logical network.
- Click the Clusters sub-tab and then click Manage Network. The Manage Network dialogue appears.
- In the Manage Network dialogue, check the Migration Network and Gluster Network checkboxes.
- Click OK to save.
Attach the gluster network to the host
- Click the Hosts tab and select the host.
- Click the Network Interfaces subtab and then click Setup Host Networks.
- Drag and drop the newly created network to the correct interface.
- Ensure that the Verify connectivity checkbox is checked.
- Ensure that the Save network configuration checkbox is checked.
- Click OK to save.
Verify the health of the network
Click the Hosts tab and select the host. Click the Networks subtab and check the state of the host’s network.
If the network interface enters an "Out of sync" state or does not have an IPv4 Address, click the Management tab that corresponds to the host and click Refresh Capabilities.
9.2. Create the master storage domain
- Click the Storage tab and then click New Domain.
- Select GlusterFS as the Storage Type and provide a Name for the domain.
Check the Use managed gluster volume option.
A list of volumes available in the cluster appears.
Select the vmstore volume and add the following to the Mount Options:
backup-volfile-servers=server2:server3
- Click OK to save.
The hosted engine storage domain is imported automatically after master storage domain creation.
Wait until the Hosted Engine virtual machine and its storage domain are available before continuing with other tasks; this ensures that the Hosted Engine tab is available for use.
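The backup-volfile-servers mount option lets the client fetch the volume configuration from the other peers if the first server is unavailable. For illustration, a manual mount of the same volume with this option would look like the following; the server names are placeholders:

```shell
mkdir -p /mnt/vmstore
mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/vmstore /mnt/vmstore
```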
9.3. Add remaining virtualization hosts to the hosted engine
Follow these steps in Red Hat Virtualization Manager for each of the other hosts.
- Click the Hosts tab and then click New to open the New Host dialog.
- Provide a Name, Address, and Password for the new host.
- Uncheck the Automatically configure host firewall checkbox, as firewall rules are already configured by gdeploy.
- In the Hosted Engine tab of the New Host dialog, set the value of Choose hosted engine deployment action to deploy.
- Click Deploy.
Attach the gluster network to the host
- Click the Hosts tab and select the host.
- Click the Network Interfaces subtab and then click Setup Host Networks.
- Drag and drop the newly created network to the correct interface.
- Ensure that the Verify connectivity checkbox is checked.
- Ensure that the Save network configuration checkbox is checked.
- Click OK to save.
In the General subtab for this host, verify that the value of Hosted Engine HA is Active, with a positive integer as a score.
Important: If Score is listed as N/A, you may have forgotten to select the deploy action for Choose hosted engine deployment action.
- Select the host and click Management > Maintenance > OK to place this host in Maintenance mode.
- Click Installation > Reinstall to open the Reinstall window.
- On the General tab, uncheck the Automatically Configure Host firewall checkbox.
- On the Hosted Engine tab, set the value of Choose hosted engine deployment action to deploy.
- Click OK to re-deploy the host.
Verify the health of the network
Click the Hosts tab and select the host. Click the Networks subtab and check the state of the host’s network.
Important: If the network interface enters an "Out of sync" state or does not have an IPv4 Address, right-click on the host and click Management > Refresh Capabilities.
See the Red Hat Virtualization 4.1 Self-Hosted Engine Guide for further details: https://access.redhat.com/documentation/en/red-hat-virtualization/4.1/paged/self-hosted-engine-guide/chapter-7-installing-additional-hosts-to-a-self-hosted-environment
Part III. Troubleshoot
Chapter 10. Handling Hosted Engine Deployment Errors
If an error occurs during hosted engine deployment, deployment pauses, and the following message is displayed:
Please check Engine VM configuration
Make a selection from the options below:
(1) Continue setup - Engine VM configuration has been fixed
(2) Abort setup
Error details are then listed in red. You can log in to the hosted engine and attempt to correct the errors to continue setup. Note that you will need to stop and start the ovirt-engine service to apply your corrections before selecting Continue setup.
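After correcting the errors on the Hosted Engine virtual machine, restart the service before selecting Continue setup; for example, from an SSH session on the engine VM:

```shell
# Stop and start the engine service to apply corrections
systemctl stop ovirt-engine
systemctl start ovirt-engine
# Confirm the service is active before continuing setup
systemctl status ovirt-engine --no-pager
```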
Chapter 11. Cleaning up Automated Red Hat Gluster Storage Deployment Errors
- Create a volume_cleanup.conf file based on the volume_cleanup.conf file in Appendix B, Example cleanup configuration files for gdeploy.
Run gdeploy using the volume_cleanup.conf file.
# gdeploy -c volume_cleanup.conf
- Create a lv_cleanup.conf file based on the lv_cleanup.conf file in Appendix B, Example cleanup configuration files for gdeploy.
Run gdeploy using the lv_cleanup.conf file.
# gdeploy -c lv_cleanup.conf
Check mount configurations on all hosts
Check the /etc/fstab file on all hosts, and remove any lines that correspond to XFS mounts of automatically created bricks.
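A sketch of this check, assuming the default /gluster_bricks mount point prefix used by the generated configuration (adjust the pattern to match your brick directories):

```shell
# Show fstab lines that mount automatically created bricks
grep -n 'gluster_bricks' /etc/fstab || echo "no brick mounts found"
# Back up fstab, then remove those lines
sed -i.bak '/gluster_bricks/d' /etc/fstab
```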
Part IV. Verify
Chapter 12. Verify your deployment
When deployment is complete, use Red Hat Virtualization Manager to check the status of the deployment.
Create a virtual machine to verify that your deployment is operating as expected. For details, see the Red Hat Virtualization Virtual Machine Management Guide: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/virtual_machine_management_guide/.
Part V. Reference Material
Appendix A. Configuring encryption during deployment
A.1. Configuring TLS/SSL during deployment using a Certificate Authority signed certificate
A.1.1. Before you begin
Ensure that you have appropriate certificates signed by a Certificate Authority before proceeding. Obtaining certificates is outside the scope of this document.
A.1.2. Configuring TLS/SSL encryption using a CA-signed certificate
Ensure that the following files exist in the following locations on all nodes.
- /etc/ssl/glusterfs.key
- The node’s private key.
- /etc/ssl/glusterfs.pem
- The certificate signed by the Certificate Authority, which becomes the node’s certificate.
- /etc/ssl/glusterfs.ca
- The Certificate Authority’s certificate. For self-signed configurations, this file contains the concatenated certificates of all nodes.
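Before enabling encryption, it is worth confirming on each node that the certificate validates against the CA bundle; a minimal check:

```shell
# Should print: /etc/ssl/glusterfs.pem: OK
openssl verify -CAfile /etc/ssl/glusterfs.ca /etc/ssl/glusterfs.pem
```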
Enable management encryption.
Create the /var/lib/glusterd/secure-access file on each node.
# touch /var/lib/glusterd/secure-access
Configure encryption.
Add the following lines to each volume listed in the configuration file generated as part of Chapter 7, Configure Red Hat Gluster Storage for Hosted Engine using the Cockpit UI. This creates and configures TLS/SSL based encryption between gluster volumes using CA-signed certificates as part of the deployment process.
key=client.ssl,server.ssl,auth.ssl-allow
value=on,on,"host1;host2;host3"
Ensure that you save the generated file after editing.
A.2. Configuring TLS/SSL encryption during deployment using a self signed certificate
Add the following lines to the configuration file generated in Chapter 7, Configure Red Hat Gluster Storage for Hosted Engine using the Cockpit UI to create and configure TLS/SSL based encryption between gluster volumes using self signed certificates as part of the deployment process. Certificates generated by gdeploy are valid for one year.
In the configuration for the first volume, add lines for the enable_ssl and ssl_clients parameters and their values:
[volume1]
enable_ssl=yes
ssl_clients=<Gluster_Network_IP1>,<Gluster_Network_IP2>,<Gluster_Network_IP3>
In the configuration for subsequent volumes, add the following lines to define values for the client.ssl, server.ssl, and auth.ssl-allow parameters:
[volumeX]
key=client.ssl,server.ssl,auth.ssl-allow
value=on,on,"<Gluster_Network_IP1>;<Gluster_Network_IP2>;<Gluster_Network_IP3>"
Appendix B. Example cleanup configuration files for gdeploy
In the event that deployment fails, it is necessary to clean up the previous deployment attempts before retrying the deployment. The following two example files can be run with gdeploy to clean up previously failed deployment attempts so that deployment can be reattempted.
volume_cleanup.conf
[hosts]
<Gluster_Network_NodeA>
<Gluster_Network_NodeB>
<Gluster_Network_NodeC>

[volume1]
action=delete
volname=engine

[volume2]
action=delete
volname=vmstore

[volume3]
action=delete
volname=data

[peer]
action=detach
lv_cleanup.conf
[hosts]
<Gluster_Network_NodeA>
<Gluster_Network_NodeB>
<Gluster_Network_NodeC>

[backend-reset]
pvs=sdb,sdc
unmount=yes
Appendix C. Understanding the generated gdeploy configuration file
Gdeploy automatically provisions one or more machines with Red Hat Gluster Storage based on a configuration file.
The Cockpit UI provides a wizard that allows users to generate a gdeploy configuration file that is suitable for performing the base-level deployment of Red Hat Hyperconverged Infrastructure.
This section explains the gdeploy configuration file that would be generated if the following configuration details were specified in the Cockpit UI:
- 3 hosts with IP addresses 192.168.0.101, 192.168.0.102, and 192.168.0.103
- No additional packages or repositories.
- Arbiter configuration for all volumes.
- 12 bricks that are configured with RAID 6 with a stripe size of 256 KB.
This results in a gdeploy configuration file with the following sections.
For further details on any of the sections defined here, see the Red Hat Gluster Storage Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/administration_guide/chap-red_hat_storage_volumes#chap-Red_Hat_Storage_Volumes-gdeploy_configfile.
[hosts] section
[hosts]
192.168.0.101
192.168.0.102
192.168.0.103

The [hosts] section defines the IP addresses of the three physical machines to be configured according to this configuration file.
[script1] section
[script1]
action=execute
ignore_script_errors=no
file=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 192.168.0.101,192.168.0.102,192.168.0.103

The [script1] section specifies a script to run to verify that all hosts are configured correctly, so that gdeploy can run without error.
Underlying storage configuration
[disktype]
raid6

[diskcount]
12

[stripesize]
256

The [disktype] section specifies the hardware configuration of the underlying storage for all hosts.
The [diskcount] section specifies the number of disks in the RAID storage. This can be omitted for JBOD configurations.
The [stripesize] section specifies the RAID storage stripe size in kilobytes. This can also be omitted for JBOD configurations.
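For comparison, a JBOD deployment would need only the disk type, since diskcount and stripesize do not apply. A sketch of the equivalent fragment:

```
[disktype]
jbod
```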
Enable and restart NTPD
[service1]
action=enable
service=ntpd

[service2]
action=restart
service=ntpd
These service sections enable and restart the Network Time Protocol daemon, NTPD, on all hosts.
Create physical volume on all hosts
[pv1]
action=create
devices=sdb
ignore_pv_errors=no

The [pv1] section creates a physical volume on the sdb device of all hosts.
Create volume group on all hosts
[vg1]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

The [vg1] section creates a volume group on the previously created physical volume on all hosts.
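The [pv1] and [vg1] sections are roughly equivalent to running the following LVM commands on each host; this is shown for illustration only, as gdeploy performs these steps itself:

```
# Create a physical volume on the sdb device
pvcreate /dev/sdb

# Create a volume group containing that physical volume
vgcreate gluster_vg_sdb /dev/sdb
```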
Create the logical volume thin pool
[lv1:{192.168.0.101,192.168.0.102}]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
poolmetadatasize=16GB
size=1000GB

[lv2:192.168.0.103]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
poolmetadatasize=16GB
size=20GB

The [lv1:*] section creates a 1000 GB thin pool on the first two hosts, with a metadata pool size of 16 GB.
The [lv2:*] section creates a 20 GB thin pool on the third host, also with a metadata pool size of 16 GB. This thin pool holds the arbiter bricks.
The chunksize variable is also available, but should be used with caution. chunksize defines the size of the chunks used for snapshots, cache pools, and thin pools. By default this is specified in kilobytes. For RAID 5 and 6 volumes, gdeploy calculates the default chunksize by multiplying the stripe size by the disk count.
Red Hat recommends using at least the default chunksize. If the chunksize is too small and your volume runs out of space for metadata, the volume cannot create data. Red Hat recommends monitoring your logical volumes so that they can be expanded, or more storage created, before metadata volumes become completely full.
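As a sanity check of the default calculation described above, the chunksize for this example configuration can be computed directly. A minimal sketch; the function name is illustrative and not part of gdeploy:

```python
def default_chunksize_kb(stripesize_kb, diskcount):
    """Default chunksize for RAID 5/6 volumes: stripe size multiplied by disk count."""
    return stripesize_kb * diskcount

# 256 KB stripe size and 12 disks, as in this example configuration
print(default_chunksize_kb(256, 12))  # 3072 KB
```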
Create underlying engine storage
[lv3:{192.168.0.101,192.168.0.102}]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=100GB
lvtype=thick

[lv4:192.168.0.103]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=10GB
lvtype=thick

The [lv3:*] section creates a 100 GB thick provisioned logical volume called gluster_lv_engine on the first two hosts. This volume is configured to mount on /gluster_bricks/engine.
The [lv4:*] section creates a 10 GB thick provisioned logical volume for the engine on the third host. This volume is configured to mount on /gluster_bricks/engine.
Create underlying data and virtual machine boot disk storage
[lv5:{192.168.0.101,192.168.0.102}]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB

[lv6:192.168.0.103]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=10GB

[lv7:{192.168.0.101,192.168.0.102}]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB

[lv8:192.168.0.103]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=10GB

The [lv5:*] and [lv7:*] sections create 500 GB thin provisioned logical volumes as bricks for the data and vmstore volumes on the first two hosts.
The [lv6:*] and [lv8:*] sections create 10 GB thin provisioned logical volumes as arbiter bricks for the data and vmstore volumes on the third host.
The data bricks are configured to mount on /gluster_bricks/data, and the vmstore bricks are configured to mount on /gluster_bricks/vmstore.
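Note that on the first two hosts, the virtual sizes of the two thin volumes together match the 1000 GB thin pool created in the [lv1:*] section, so the pool can back both volumes even when fully written. A quick check of that sizing assumption:

```python
# Thin pool size from the [lv1:*] section, in GB
thinpool_gb = 1000

# Virtual sizes of the thin volumes carved from it ([lv5:*] and [lv7:*])
data_gb = 500
vmstore_gb = 500

# The pool is sized so both volumes can be fully written without overcommit
print(data_gb + vmstore_gb <= thinpool_gb)  # True
```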
Configure SELinux file system labels
[selinux]
yes

The [selinux] section specifies that the storage created should be configured with SELinux file system labels appropriate for Gluster storage.
Start glusterd
[service3]
action=start
service=glusterd
slice_setup=yes

The [service3] section starts the glusterd service and configures a control group to ensure glusterd cannot consume all system resources. See the Red Hat Enterprise Linux Resource Management Guide for details: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Resource_Management_Guide/index.html.
Configure the firewall
[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
services=glusterfs

The [firewalld] section opens the ports required to allow Gluster traffic.
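For reference, the same rules could be applied manually with firewall-cmd on each host; this is a sketch, assuming firewalld is running (gdeploy applies these rules for you):

```
# Open the Gluster-related ports permanently; the shell expands the
# brace list into one --add-port option per entry
firewall-cmd --permanent --add-port={111/tcp,2049/tcp,54321/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp}

# Allow the predefined glusterfs service
firewall-cmd --permanent --add-service=glusterfs

# Reload so the permanent rules take effect
firewall-cmd --reload
```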
Disable gluster hooks
[script2]
action=execute
file=/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh

The [script2] section disables Gluster hooks that can interfere with the Hyperconverged Infrastructure.
Create gluster volumes
[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,features.shard-block-size
value=virt,36,36,30,on,off,enable,64MB
brick_dirs=192.168.0.101:/gluster_bricks/engine/engine,192.168.0.102:/gluster_bricks/engine/engine,192.168.0.103:/gluster_bricks/engine/engine
ignore_volume_errors=no
arbiter_count=1

[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,features.shard-block-size
value=virt,36,36,30,on,off,enable,64MB
brick_dirs=192.168.0.101:/gluster_bricks/data/data,192.168.0.102:/gluster_bricks/data/data,192.168.0.103:/gluster_bricks/data/data
ignore_volume_errors=no
arbiter_count=1

[volume3]
action=create
volname=vmstore
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal,features.shard-block-size
value=virt,36,36,30,on,off,enable,64MB
brick_dirs=192.168.0.101:/gluster_bricks/vmstore/vmstore,192.168.0.102:/gluster_bricks/vmstore/vmstore,192.168.0.103:/gluster_bricks/vmstore/vmstore
ignore_volume_errors=no
arbiter_count=1

The [volume*] sections configure three arbitrated replicated Red Hat Gluster Storage volumes: engine, data, and vmstore. Each volume has one arbiter brick on the third host.
The key and value parameters are used to set the following options:
- group=virt
- storage.owner-uid=36
- storage.owner-gid=36
- network.ping-timeout=30
- performance.strict-o-direct=on
- network.remote-dio=off
- cluster.granular-entry-heal=enable
- features.shard-block-size=64MB
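After deployment, the same options can be inspected or changed with the gluster CLI; a sketch for the engine volume, run on any host and shown for illustration only:

```
# Show the volume configuration, including reconfigured options
gluster volume info engine

# Example: set the ping timeout option from the key/value lists above
gluster volume set engine network.ping-timeout 30
```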