Chapter 4. Recommendations
The configuration described in this section is not required, but may improve the stability or performance of your deployment.
4.1. General recommendations
- Take a full backup as soon as deployment is complete, and store the backup in a separate location. Take regular backups thereafter. See Configuring backup and recovery options for details.
- Avoid running any service that your deployment depends on as a virtual machine in the same RHHI for Virtualization environment. If you must run a required service in the same deployment, carefully plan your deployment to minimize the downtime of the virtual machine running the required service.
- Ensure that hyperconverged hosts have sufficient entropy. Failures can occur when the value in /proc/sys/kernel/random/entropy_avail is less than 200. To increase entropy, install the rng-tools package and follow the steps in https://access.redhat.com/solutions/1395493.
- Document your environment so that everyone who works with it is aware of its current state and required procedures.
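The entropy check described above can be scripted as follows. This is a minimal sketch: the threshold comes from the recommendation, but the messages and the structure of the check are illustrative, and the remediation commands (shown as comments) assume a RHEL host.

```shell
# low_entropy returns success (0) when the supplied pool size is below
# the 200-bit threshold at which failures can occur.
low_entropy() { [ "$1" -lt 200 ]; }

# Read the current pool size (0 if the file is unreadable).
entropy=$(cat /proc/sys/kernel/random/entropy_avail 2>/dev/null || echo 0)

if low_entropy "$entropy"; then
    # Remediation per https://access.redhat.com/solutions/1395493:
    #   yum install -y rng-tools
    #   systemctl enable --now rngd
    echo "entropy pool low: $entropy"
else
    echo "entropy pool OK: $entropy"
fi
```

Running the check before and after enabling rngd shows whether the remediation took effect.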
4.2. Security recommendations
- Do not disable any security features (such as HTTPS, SELinux, and the firewall) on the hosts or virtual machines.
- Register all hosts and Red Hat Enterprise Linux virtual machines to either the Red Hat Content Delivery Network or Red Hat Satellite in order to receive the latest security updates and errata.
- Create individual administrator accounts, instead of allowing many people to use the default admin account, for proper activity tracking.
- Limit access to the hosts and create separate logins. Do not create a single root login for everyone to use. See Managing user accounts in the web console in the Red Hat Enterprise Linux 8 documentation.
- Do not create untrusted users on hosts.
- Avoid installing additional packages such as analyzers, compilers, or other components that add unnecessary security risk.
4.3. Host recommendations
- Standardize the hosts in the same cluster. This includes having consistent hardware models and firmware versions. Mixing different server hardware within the same cluster can result in inconsistent performance from host to host.
- Configure fencing devices at deployment time. Fencing devices are required for high availability.
- Use separate hardware switches for fencing traffic. If monitoring and fencing go over the same switch, that switch becomes a single point of failure for high availability.
4.4. Networking recommendations
- Bond network interfaces, especially on production hosts. Bonding improves the overall availability of service, as well as network bandwidth. See Network Bonding in the Administration Guide.
- For optimal performance and simplified troubleshooting, use VLANs to separate different traffic types and make the best use of 10 GbE or 40 GbE networks.
- If the underlying switches support jumbo frames, set the MTU to the maximum size (for example, 9000) that the underlying switches support. This setting enables optimal throughput, with higher bandwidth and reduced CPU usage, for most applications. The default MTU is determined by the minimum size supported by the underlying switches. If you have LLDP enabled, you can see the MTU supported by the peer of each host in the NIC’s tool tip in the Setup Host Networks window.
- 1 GbE networks should only be used for management traffic. Use 10 GbE or 40 GbE for virtual machines and Ethernet-based storage.
- If additional physical interfaces are added to a host for storage use, uncheck VM network so that the VLAN is assigned directly to the physical interface.
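If your switches support jumbo frames, the MTU can be raised on a per-connection basis with nmcli, for example as follows. This is a sketch: the connection name bond0 and the 9000-byte value are assumptions, so match them to your own environment and to what your switches actually support.

```shell
# Raise the MTU on the connection carrying storage traffic to 9000
# bytes (jumbo frames), then re-activate the connection to apply it.
nmcli connection modify bond0 802-3-ethernet.mtu 9000
nmcli connection up bond0
```

Verify the change with `ip link show bond0`, and confirm end-to-end with a non-fragmenting ping (for example, `ping -M do -s 8972 <peer>`) before relying on the larger MTU.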
4.4.1. Recommended practices for configuring host networks
If your network environment is complex, you may need to configure a host network manually before adding the host to Red Hat Virtualization Manager.
Red Hat recommends the following practices for configuring a host network:
- Configure the network with the Web Console. Alternatively, you can use nmtui or nmcli.
- If a network is not required for a self-hosted engine deployment or for adding a host to the Manager, configure the network in the Administration Portal after adding the host to the Manager. See Creating a New Logical Network in a Data Center or Cluster.
- Use the following naming conventions:
  - VLAN devices: physical_device.VLAN_ID (for example, eth0.50)
  - VLANs on bond interfaces: bondnumber.VLAN_ID (for example, bond0.50)
- Use network bonding. Network teaming is not supported.
- Use recommended bonding modes:
  - For the bridged network used as the virtual machine logical network (ovirtmgmt), see Which bonding modes work when used with a bridge that virtual machine guests or containers connect to?.
  - For any other logical network, any supported bonding mode can be used.
  - Red Hat Virtualization’s default bonding mode is (Mode 4) Dynamic Link Aggregation. If your switch does not support Link Aggregation Control Protocol (LACP), use (Mode 1) Active-Backup. See Bonding Modes for details.
- Configure a VLAN on a physical NIC as in the following example (although nmcli is used, you can use any tool). The addresses shown are placeholder documentation addresses; substitute your own.

  # nmcli connection add type vlan con-name vlan50 ifname eth0.50 dev eth0 id 50
  # nmcli con mod vlan50 +ipv4.dns 192.0.2.53 +ipv4.addresses 192.0.2.1/24 +ipv4.gateway 192.0.2.254
- Configure a VLAN on a bond as in the following example (although nmcli is used, you can use any tool). The addresses shown are placeholder documentation addresses; substitute your own.

  # nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,miimon=100" ipv4.method disabled ipv6.method ignore
  # nmcli connection add type ethernet con-name eth0 ifname eth0 master bond0 slave-type bond
  # nmcli connection add type ethernet con-name eth1 ifname eth1 master bond0 slave-type bond
  # nmcli connection add type vlan con-name vlan50 ifname bond0.50 dev bond0 id 50
  # nmcli con mod vlan50 +ipv4.dns 192.0.2.53 +ipv4.addresses 192.0.2.1/24 +ipv4.gateway 192.0.2.254
- Do not disable firewalld.
- Customize the firewall rules in the Administration Portal after adding the host to the Manager. See Configuring Host Firewall Rules.
4.5. Self-hosted engine recommendations
- Create a separate data center and cluster for the Red Hat Virtualization Manager and other infrastructure-level services, if the environment is large enough to allow it. Although the Manager virtual machine can run on hosts in a regular cluster, separation from production virtual machines helps facilitate backup schedules, performance, availability, and security.
- A storage domain dedicated to the Manager virtual machine is created during self-hosted engine deployment. Do not use this storage domain for any other virtual machines.
- All self-hosted engine nodes should have the same CPU family so that the Manager virtual machine can safely migrate between them. If you intend to use hosts with different CPU families, begin the installation with the lowest (oldest) one.
- If the Manager virtual machine shuts down or needs to be migrated, there must be enough memory on a self-hosted engine node for the Manager virtual machine to restart on or migrate to it.
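The CPU-family recommendation above can be checked with a small comparison helper such as the following. This is an illustrative sketch, not part of the product: the function simply compares model strings, and the host names in the comment are hypothetical.

```shell
# same_model succeeds (exit 0) only when all supplied CPU model
# strings are identical, indicating a safe migration domain for the
# Manager virtual machine.
same_model() {
    first=$1
    for m in "$@"; do
        [ "$m" = "$first" ] || return 1
    done
}

# In practice, collect each node's model string with, for example
# (hypothetical host names):
#   ssh node1.example.com lscpu | awk -F: '/Model name/ {print $2}'
# and pass the results to same_model:
same_model "Intel Xeon Gold 6230" "Intel Xeon Gold 6230"
```

A nonzero exit status means at least one node reports a different model, so migration of the Manager virtual machine between those nodes may not be safe.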