Chapter 5. Installing the Red Hat Virtualization Manager
5.1. Manually installing the RHV-M Appliance
When you deploy the self-hosted engine, the following sequence of events takes place:
- The installer installs the RHV-M Appliance to the deployment host.
- The appliance installs the Manager virtual machine.
- The appliance installs the Manager on the Manager virtual machine.
However, you can install the appliance manually on the deployment host beforehand. Because the appliance is large, network connectivity issues might cause the appliance installation to take a long time, or to fail.
On Red Hat Enterprise Linux hosts, reset the virt module:
# dnf module reset virt
Note
If this module is already enabled in the Advanced Virtualization stream, this step is not necessary, but it has no negative impact.
You can see the value of the stream by entering:
# dnf module list virt
Enable the virt module in the Advanced Virtualization stream with the following command:
For RHV 4.4.2:
# dnf module enable virt:8.2
For RHV 4.4.3 to 4.4.5:
# dnf module enable virt:8.3
For RHV 4.4.6 to 4.4.10:
# dnf module enable virt:av
For RHV 4.4.11 and later:
# dnf module enable virt:rhel
Note
Starting with RHEL 8.6, the Advanced Virtualization packages use the standard virt:rhel module. For RHEL 8.4 and 8.5, only one Advanced Virtualization stream is used, virt:av.
Synchronize installed packages to update them to the latest available versions:
# dnf distro-sync --nobest
Install the RHV-M Appliance to the host manually:
# dnf install rhvm-appliance
Now, when you deploy the self-hosted engine, the installer detects that the appliance is already installed.
5.2. Enabling and configuring the firewall
firewalld must be installed and running before you run the self-hosted deployment script. You must also have an active zone with an interface configured.
The firewalld service is provided by the firewalld package, which is installed by default, so you do not need to perform any additional installation steps. Unmask and start firewalld:
# systemctl unmask firewalld
# systemctl start firewalld
To ensure firewalld starts automatically at system start, enter the following command as root:
# systemctl enable firewalld
Ensure that firewalld is running:
# systemctl status firewalld
Ensure that your management interface is in an active firewall zone:
# firewall-cmd --get-active-zones
Now you are ready to deploy the self-hosted engine.
5.3. Deploying the self-hosted engine using the command line
You can deploy a self-hosted engine from the command line. After installing the setup package, you run the command
hosted-engine --deploy, and a script collects the details of your environment and uses them to configure the host and the Manager.
You can customize the Manager virtual machine during deployment, either manually, by pausing the deployment, or using automation.
Setting the variable he_pause_host to true pauses the deployment after installing the Manager and adding the deployment host to the Manager.
Setting the variable he_pause_before_engine_setup to true pauses the deployment before installing the Manager and, when restoring from a backup, before restoring the Manager.
When either the he_pause_host or the he_pause_before_engine_setup variable is set to true, a lock file with the suffix _he_setup_lock is created under /tmp on the deployment host. You can then manually customize the virtual machine as needed. The deployment continues after you delete the lock file, or after 24 hours, whichever comes first.
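The lock-file mechanism described above can be exercised with a short sketch. This is illustrative only: the snippet creates a stand-in lock file so it is self-contained, whereas in a real deployment the installer creates the file (with a run-specific name, so only the suffix is matched):

```shell
# Stand-in for the lock file the installer creates under /tmp
# (the real file name varies per run; only the _he_setup_lock suffix is fixed).
touch /tmp/demo_he_setup_lock

# Deleting the lock file lets the paused deployment continue.
rm -f /tmp/*_he_setup_lock
```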
Adding an Ansible playbook to one of the deployment hook directories on the deployment host automatically runs the playbook during deployment.
Upgrade the appliance content to the latest product version before the Manager is installed on it:
- To do this manually, pause the deployment using he_pause_before_engine_setup and update the packages on the Manager virtual machine.
- To do this automatically, use a hook playbook that runs before the Manager installation.
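As a sketch of what such a customization playbook might look like, here is a hypothetical minimal example; the play name, target, and task are illustrative assumptions, not taken from the product:

```yaml
---
# Hypothetical customization playbook run by the deployment hooks.
- name: Example engine VM customization
  hosts: localhost
  tasks:
    - name: Install an extra package (illustrative)
      ansible.builtin.dnf:
        name: vim-enhanced
        state: present
```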
- FQDNs prepared for your Manager and the host. Forward and reverse lookup records must both be set in the DNS.
- When using a block storage domain, either FCP or iSCSI, a single target LUN is the only supported setup for a self-hosted engine.
- Optional: If you want to customize the Manager virtual machine during deployment using automation, an Ansible playbook must be added. See Customizing the Engine virtual machine using automation during deployment.
The self-hosted engine setup script requires ssh public key access using 2048-bit RSA keys from the engine virtual machine to the root account of its bare metal host. In
/etc/ssh/sshd_config, these values must be set as follows:
PubkeyAcceptedKeyTypes must allow 2048-bit RSA keys or stronger.
By default, this setting uses the system-wide crypto policies. For more information, see the crypto-policies manual page.
RHVH hosts that are registered with the Manager in earlier versions require RSA 2048 for backward compatibility until all the keys are migrated.
RHVH hosts registered with later versions use the strongest algorithm that is supported by both the Manager and RHVH. The PubkeyAcceptedKeyTypes setting helps determine which algorithm is used.
PermitRootLogin is set to yes.
PubkeyAuthentication is set to yes.
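Taken together, these requirements correspond roughly to the following /etc/ssh/sshd_config fragment. The PubkeyAcceptedKeyTypes value shown is an example that permits RSA 2048-bit keys alongside stronger algorithms; by default the system-wide crypto policies govern it:

```
# Example /etc/ssh/sshd_config values for self-hosted engine deployment.
# The PubkeyAcceptedKeyTypes line is illustrative; the default crypto
# policies may already be sufficient.
PermitRootLogin yes
PubkeyAuthentication yes
PubkeyAcceptedKeyTypes ssh-rsa,rsa-sha2-256,rsa-sha2-512
```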
Install the deployment tool:
# dnf install ovirt-hosted-engine-setup
Use the tmux window manager to run the script, to avoid losing the session in case of network or terminal disruption.
Install and run tmux:
# dnf -y install tmux
# tmux
Start the deployment script:
# hosted-engine --deploy
Alternatively, to pause the deployment after adding the deployment host to the Manager, use the command line option
# hosted-engine --deploy --ansible-extra-vars=he_pause_host=true
Note
To escape the script at any time, use the Ctrl+D keyboard combination to abort the deployment. In the event of session timeout or connection disruption, run tmux attach to recover the deployment session.
When prompted, enter Yes to begin the deployment:
Continuing will configure this host for serving as hypervisor and will create a local VM with a running engine. The locally running engine will be used to configure a new storage domain and create a VM there. At the end the disk of the local VM will be moved to the shared storage. Are you sure you want to continue? (Yes, No)[Yes]:
Configure the network. Check that the gateway shown is correct and press Enter. Enter a pingable address on the same subnet so the script can check the host’s connectivity.
Please indicate a pingable gateway IP address [X.X.X.X]:
The script detects possible NICs to use as a management bridge for the environment. Enter one of them or press Enter to accept the default.
Please indicate a nic to set ovirtmgmt bridge on: (ens1, ens0) [ens1]:
Specify how to check network connectivity. The default is dns.
Please specify which way the network connectivity should be checked (ping, dns, tcp, none) [dns]:
- ping: Attempts to ping the gateway.
- dns: Checks the connection to the DNS server.
- tcp: Creates a TCP connection to a host and port combination. You need to specify a destination IP address and port. Once the connection is successfully created, the network is considered to be alive. Ensure that the given host is able to accept incoming TCP connections on the given port.
- none: The network is always considered connected.
Enter a name for the data center in which to deploy the host for the self-hosted engine. The default name is Default.
Please enter the name of the data center where you want to deploy this hosted-engine host. Data center [Default]:
Enter a name for the cluster in which to deploy the host for the self-hosted engine. The default name is Default.
Please enter the name of the cluster where you want to deploy this hosted-engine host. Cluster [Default]:
If you want to use a custom appliance image for the virtual machine installation, specify the path to the OVA archive. Otherwise, leave this field empty to use the RHV-M Appliance.
If you want to deploy with a custom engine appliance image, please specify the path to the OVA archive you would like to use. Entering no value will use the image from the rhvm-appliance rpm, installing it if needed. Appliance image path :
Enter the CPU and memory configuration for the Manager virtual machine:
Please specify the number of virtual CPUs for the VM. The default is the appliance OVF value :
Please specify the memory size of the VM in MB. The default is the maximum available :
Specify the FQDN for the Manager virtual machine, such as manager.example.com:
Please provide the FQDN you would like to use for the engine. Note: This will be the FQDN of the engine VM you are now going to launch, it should not point to the base host or to any other existing machine. Engine VM FQDN :
Specify the domain of the Manager virtual machine. For example, if the FQDN is manager.example.com, then enter example.com.
Please provide the domain name you would like to use for the engine appliance. Engine VM domain: [example.com]
Create the root password for the Manager, and reenter it to confirm:
Enter root password that will be used for the engine appliance:
Confirm appliance root password:
Optional: Enter an SSH public key to enable you to log in to the Manager virtual machine as the root user without entering a password, and specify whether to enable SSH access for the root user:
You may provide an SSH public key, that will be added by the deployment script to the authorized_keys file of the root user in the engine appliance. This should allow you passwordless login to the engine machine after deployment. If you provide no key, authorized_keys will not be touched.
SSH public key :
Do you want to enable ssh access for the root user (yes, no, without-password) [yes]:
Optional: You can apply the DISA STIG security profile on the Manager virtual machine. The DISA STIG profile is the default OpenSCAP profile.
Do you want to apply a default OpenSCAP security profile? (Yes, No) [No]:
Enter a MAC address for the Manager virtual machine, or accept a randomly generated one. If you want to provide the Manager virtual machine with an IP address via DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script will not configure the DHCP server for you.
You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:3d:34:47]:
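Because the deployment script does not configure the DHCP server, you create any reservation yourself. For example, if your environment uses ISC dhcpd, a reservation for the Manager virtual machine might look like the following sketch (the host block name and IP address are assumptions, not part of the product):

```
# Hypothetical dhcpd.conf reservation for the Manager VM (values are examples)
host rhvm {
    hardware ethernet 00:16:3e:3d:34:47;
    fixed-address 10.1.1.50;
}
```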
Enter the Manager virtual machine’s networking details:
How should the engine VM network be configured (DHCP, Static)[DHCP]?
If you specified Static, enter the IP address of the Manager virtual machine:
Important
- The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Manager virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).
- For IPv6, Red Hat Virtualization supports only static addressing.
Please enter the IP address to be used for the engine VM [x.x.x.x]:
Please provide a comma-separated list (max 3) of IP addresses of domain name servers for the engine VM
Engine VM DNS (leave it empty to skip):
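Before entering a static address, you can confirm that the planned Manager IP falls inside the host's subnet. A minimal sketch, assuming python3 is available (as on RHEL 8 hosts) and using example addresses:

```shell
# Check that a planned static engine IP is inside the host subnet.
# Both values below are illustrative.
host_net="10.1.1.0/24"
engine_ip="10.1.1.50"
if python3 -c "import ipaddress, sys
sys.exit(0 if ipaddress.ip_address('$engine_ip') in ipaddress.ip_network('$host_net') else 1)"; then
    echo "engine IP is inside the host subnet"
else
    echo "engine IP is OUTSIDE the host subnet"
fi
```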
Specify whether to add entries for the Manager virtual machine and the base host to the virtual machine's /etc/hosts file. You must ensure that the host names are resolvable.
Add lines for the appliance itself and for this host to /etc/hosts on the engine VM? Note: ensuring that this host could resolve the engine VM hostname is still up to you. Add lines to /etc/hosts? (Yes, No)[Yes]:
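If you answer Yes, the entries added to /etc/hosts on the Manager virtual machine would look something like the following sketch (the addresses and names are examples):

```
# Example entries added to /etc/hosts on the engine VM
10.1.1.10   host.example.com     host
10.1.1.50   manager.example.com  manager
```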
Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications. Alternatively, press Enter to accept the defaults:
Please provide the name of the SMTP server through which we will send notifications [localhost]:
Please provide the TCP port number of the SMTP server :
Please provide the email address from which notifications will be sent [root@localhost]:
Please provide a comma-separated list of email addresses which will get notifications [root@localhost]:
Create a password for the admin@internal user to access the Administration Portal, and reenter it to confirm:
Enter engine admin password:
Confirm engine admin password:
Specify the hostname of the deployment host:
Please provide the hostname of this host on the management network [hostname.example.com]:
The script creates the virtual machine. By default, the script first downloads and installs the RHV-M Appliance, which increases the installation time.
Optional: If you set the variable
he_pause_host: true, the deployment pauses after adding the deployment host to the Manager. You can now log in from the deployment host to the Manager virtual machine to customize it. You can log in with either the FQDN or the IP address of the Manager. For example, if the FQDN of the Manager is manager.example.com, enter:
$ ssh root@manager.example.com
Tip
In the installation log, the IP address is shown as local_vm_ip. Use the most recent instance of the installation log on the deployment host.
- Customize the Manager virtual machine as needed.
- When you are done, log in to the Administration Portal using a browser with the Manager FQDN and make sure that the host’s state is Up.
- Delete the lock file and the deployment script automatically continues, configuring the Manager virtual machine.
Select the type of storage to use:
Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
For NFS, enter the version, full address and path to the storage, and any mount options:
Please specify the nfs version you would like to use (auto, v3, v4, v4_1)[auto]:
Please specify the full shared storage connection path to use (example: host:/path): storage.example.com:/hosted_engine/nfs
If needed, specify additional mount options for the connection to the hosted-engine storage domain :
For iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.
Note
Please specify the iSCSI portal IP address:
Please specify the iSCSI portal port :
Please specify the iSCSI discover user:
Please specify the iSCSI discover password:
Please specify the iSCSI portal login user:
Please specify the iSCSI portal login password:
The following targets have been found:
    iqn.2017-10.com.redhat.example:he
    TPGT: 1, portals:
        192.168.1.xxx:3260
        192.168.2.xxx:3260
        192.168.3.xxx:3260
Please select a target (1) : 1
The following luns have been found on the requested target:
    360003ff44dc75adcb5046390a16b4beb 199GiB MSFT Virtual HD
    status: free, paths: 1 active
Please select the destination LUN (1) :
For Gluster storage, enter the full address and path to the storage, and any mount options:
Important
Only replica 1 and replica 3 Gluster storage are supported. Ensure you configure the volume as follows:
gluster volume set VOLUME_NAME group virt
gluster volume set VOLUME_NAME performance.strict-o-direct on
gluster volume set VOLUME_NAME network.remote-dio off
gluster volume set VOLUME_NAME storage.owner-uid 36
gluster volume set VOLUME_NAME storage.owner-gid 36
gluster volume set VOLUME_NAME network.ping-timeout 30
Please specify the full shared storage connection path to use (example: host:/path): storage.example.com:/hosted_engine/gluster_volume If needed, specify additional mount options for the connection to the hosted-engine storage domain :
For Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide.
The following luns have been found on the requested target:
    3514f0c5447600351 30GiB XtremIO XtremApp
    status: used, paths: 2 active
    3514f0c5447600352 30GiB XtremIO XtremApp
    status: used, paths: 2 active
Please select the destination LUN (1, 2) :
Enter the disk size of the Manager virtual machine:
Please specify the size of the VM disk in GB :
When the deployment completes successfully, one data center, cluster, host, storage domain, and the Manager virtual machine are already running. You can log in to the Administration Portal to add any other resources.
- Optional: Install and configure Red Hat Single Sign On so that you can add additional users to the environment. For more information, see Installing and Configuring Red Hat Single Sign-On in the Administration Guide.
- Optional: Deploy Grafana so you can monitor and display reports from your RHV environment. For more information, see Configuring Grafana in the Administration Guide.
The Manager virtual machine, the host running it, and the self-hosted engine storage domain are flagged with a gold crown in the Administration Portal.
Both the Manager’s I/O scheduler and the hypervisor that hosts the Manager reorder I/O requests. This double reordering might delay I/O requests to the storage layer, impacting performance.
Depending on your data center, you might improve performance by changing the I/O scheduler to
none. For more information, see Available disk schedulers in Monitoring and managing system status and performance for RHEL.
The next step is to enable the Red Hat Virtualization Manager repositories.
5.4. Enabling the Red Hat Virtualization Manager Repositories
Log in to the Manager machine and register it with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories.
Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:
# subscription-manager register
Note
If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager.
Find the Red Hat Virtualization Manager subscription pool and record the pool ID:
# subscription-manager list --available
Use the pool ID to attach the subscription to the system:
# subscription-manager attach --pool=pool_id
Note
To view currently attached subscriptions:
# subscription-manager list --consumed
To list all enabled repositories:
# dnf repolist
Configure the repositories:
# subscription-manager repos \
    --disable='*' \
    --enable=rhel-8-for-x86_64-baseos-eus-rpms \
    --enable=rhel-8-for-x86_64-appstream-eus-rpms \
    --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \
    --enable=fast-datapath-for-rhel-8-x86_64-rpms \
    --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \
    --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \
    --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
Set the RHEL version to 8.6:
# subscription-manager release --set=8.6
Enable the pki-deps module:
# dnf module -y enable pki-deps
Enable version 12 of the postgresql module:
# dnf module -y enable postgresql:12
Enable version 14 of the nodejs module:
# dnf module -y enable nodejs:14
- Update the Self-Hosted Engine using the procedure Updating a Self-Hosted Engine in the Upgrade Guide.
For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components
Log in to the Administration Portal, where you can add hosts and storage to the environment.
5.5. Connecting to the Administration Portal
Access the Administration Portal using a web browser.
In a web browser, navigate to
https://manager-fqdn/ovirt-engine, replacing manager-fqdn with the FQDN that you provided during installation.
Note
You can access the Administration Portal using alternate host names or IP addresses. To do so, you need to add a configuration file under /etc/ovirt-engine/engine.conf.d/. For example:
# vi /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf
SSO_ALTERNATE_ENGINE_FQDNS="alias1.example.com alias2.example.com"
The list of alternate host names needs to be separated by spaces. You can also add the IP address of the Manager to the list, but using IP addresses instead of DNS-resolvable host names is not recommended.
- Click Administration Portal. An SSO login page displays. SSO login enables you to log in to the Administration and VM Portal at the same time.
- Enter your User Name and Password. If you are logging in for the first time, use the user name admin along with the password that you specified during installation.
- Select the Domain to authenticate against. If you are logging in using the internal admin user name, select the internal domain.
- Click Log In.
- You can view the Administration Portal in multiple languages. The default selection is chosen based on the locale settings of your web browser. If you want to view the Administration Portal in a language other than the default, select your preferred language from the drop-down list on the welcome page.
To log out of the Red Hat Virtualization Administration Portal, click your user name in the header bar and click Sign Out. You are logged out of all portals and the Manager welcome screen displays.