Chapter 5. Installing the Red Hat Virtualization Manager

5.1. Manually installing the RHV-M Appliance

When you deploy the self-hosted engine, the following sequence of events takes place:

  1. The installer installs the RHV-M Appliance to the deployment host.
  2. The appliance installs the Manager virtual machine.
  3. The appliance installs the Manager on the Manager virtual machine.

However, because the appliance is large, network connectivity issues might cause the appliance installation to take a long time, or possibly fail. To avoid this, you can install the appliance manually on the deployment host beforehand.

Procedure

  1. On Red Hat Enterprise Linux hosts:

    1. Reset the virt module:

      # dnf module reset virt
      Note

      If this module is already enabled in the Advanced Virtualization stream, this step is not necessary, but it has no negative impact.

      You can see the value of the stream by entering:

      # dnf module list virt
    2. Enable the virt module in the Advanced Virtualization stream with the following command:

      • For RHV 4.4.2:

        # dnf module enable virt:8.2
      • For RHV 4.4.3 to 4.4.5:

        # dnf module enable virt:8.3
      • For RHV 4.4.6 or later:

        # dnf module enable virt:av
        Note

        Starting with RHEL 8.4, only one Advanced Virtualization stream is used, virt:av.

  2. Synchronize installed packages to update them to the latest available versions:

    # dnf distro-sync --nobest
  3. Install the RHV-M Appliance to the host manually:

    # dnf install rhvm-appliance

Now, when you deploy the self-hosted engine, the installer detects that the appliance is already installed.
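
If you are unsure whether the appliance is already present, you can check for the package before starting deployment. A minimal sketch; the package name comes from the procedure above:

```shell
# Report whether the RHV-M Appliance package is already installed, so the
# deployment script can skip the large download. Safe to run repeatedly.
if rpm -q rhvm-appliance >/dev/null 2>&1; then
    status="installed"
else
    status="missing"    # install it with: dnf install rhvm-appliance
fi
echo "rhvm-appliance: $status"
```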

5.2. Enabling and configuring the firewall

firewalld must be installed and running before you run the self-hosted engine deployment script. You must also have an active zone with an interface configured.

Prerequisites

  • firewalld is installed. The hosted-engine-setup package requires firewalld, so no additional steps are necessary.

Procedure

  1. Start firewalld:

    # systemctl unmask firewalld
    # systemctl start firewalld

    To ensure firewalld starts automatically at system start, enter the following command as root:

    # systemctl enable firewalld
  2. Ensure that firewalld is running:

    # systemctl status firewalld
  3. Verify that your management interface is in an active firewall zone:

    # firewall-cmd --get-active-zones
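
    To script this check, you can parse the command's output, which prints each active zone name followed by an indented interfaces: line. A sketch against sample output (the zone and interface names are placeholders):

```shell
# Extract interface names from `firewall-cmd --get-active-zones` style output.
# The sample mimics the command's two-line-per-zone format.
sample='public
  interfaces: ens1 ens2'

ifaces=$(echo "$sample" | awk '/interfaces:/ { for (i = 2; i <= NF; i++) print $i }')
echo "$ifaces"    # one interface name per line
```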

Now you are ready to deploy the self-hosted engine.

5.3. Deploying the self-hosted engine using the command line

You can deploy a self-hosted engine from the command line. After installing the setup package, you run the command hosted-engine --deploy, and a script collects the details of your environment and uses them to configure the host and the Manager.

You can customize the Manager virtual machine during deployment, either manually by pausing the deployment, or by using automation.

  • Setting the variable he_pause_host to true pauses deployment after installing the Manager and adding the deployment host to the Manager.
  • Setting the variable he_pause_before_engine_setup to true pauses the deployment before installing the Manager and before restoring the Manager when using he_restore_from_file.

    Note

    When the he_pause_host or he_pause_before_engine_setup variable is set to true, a lock file with the suffix _he_setup_lock is created in /tmp on the deployment host. You can then manually customize the virtual machine as needed. The deployment continues after you delete the lock file, or after 24 hours, whichever comes first.

  • An Ansible playbook placed in any of the following directories on the deployment host runs automatically during deployment. Add the playbook under one of the following directories under /usr/share/ansible/collections/ansible_collections/redhat/rhv/roles/hosted_engine_setup/hooks/:

    • enginevm_before_engine_setup
    • enginevm_after_engine_setup
    • after_add_host
    • after_setup
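
    For example, a playbook placed in the enginevm_before_engine_setup directory runs on the Manager virtual machine before engine-setup. A minimal, hypothetical hook whose only task prints a debug message; run as root on the deployment host:

```shell
# Install a minimal hook playbook; anything in this directory runs
# automatically before engine-setup. The playbook body is a hypothetical
# example: its only task prints a debug message.
HOOKS=/usr/share/ansible/collections/ansible_collections/redhat/rhv/roles/hosted_engine_setup/hooks
mkdir -p "$HOOKS/enginevm_before_engine_setup"
cat > "$HOOKS/enginevm_before_engine_setup/99-example.yml" <<'EOF'
- name: Hypothetical customization hook
  hosts: all
  tasks:
    - name: Announce that the hook ran
      ansible.builtin.debug:
        msg: "enginevm_before_engine_setup hook executed"
EOF
```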

Prerequisites

  • FQDNs prepared for your Manager and the host. Forward and reverse lookup records must both be set in the DNS.
  • When using a block storage domain, either FCP or iSCSI, a single target LUN is the only supported setup for a self-hosted engine.
  • (Optional) If you want to customize the Manager virtual machine during deployment using automation, an Ansible playbook must be added. See Appendix B, Customizing the Manager virtual machine using automation during deployment.

Procedure

  1. Install the deployment tool:

    # dnf install ovirt-hosted-engine-setup
  2. Run the script inside the tmux window manager so that the session is not lost if the network or terminal is disrupted.

    Install and run tmux:

    # dnf -y install tmux
    # tmux
  3. Start the deployment script:

    # hosted-engine --deploy

    Alternatively, to pause the deployment after adding the deployment host to the Manager, use the command line option --ansible-extra-vars=he_pause_host=true:

    # hosted-engine --deploy --ansible-extra-vars=he_pause_host=true
    Note

    To abort the deployment at any time, press Ctrl+D. If the session times out or the connection is disrupted, run tmux attach to recover the deployment session.

  4. When prompted, enter Yes to begin the deployment:

    Continuing will configure this host for serving as hypervisor and will create a local VM with a running engine.
    The locally running engine will be used to configure a new storage domain and create a VM there.
    At the end the disk of the local VM will be moved to the shared storage.
    Are you sure you want to continue? (Yes, No)[Yes]:
  5. Configure the network. Check that the gateway shown is correct and press Enter. Enter a pingable address on the same subnet so the script can check the host’s connectivity.

    Please indicate a pingable gateway IP address [X.X.X.X]:
  6. The script detects possible NICs to use as a management bridge for the environment. Enter one of them or press Enter to accept the default.

    Please indicate a nic to set ovirtmgmt bridge on: (ens1, ens0) [ens1]:
  7. Specify how to check network connectivity. The default is dns.

    Please specify which way the network connectivity should be checked (ping, dns, tcp, none) [dns]:
    ping
    Attempts to ping the gateway.
    dns
    Checks the connection to the DNS server.
    tcp
    Creates a TCP connection to a host and port combination. You need to specify a destination IP address and port. Once the connection is successfully created, the network is considered to be alive. Ensure that the given host is able to accept incoming TCP connections on the given port.
    none
    The network is always considered connected.
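
    The tcp check amounts to opening a plain TCP connection to the host and port you specify and reporting whether it succeeded. A sketch using bash's /dev/tcp device (the address and port are placeholders):

```shell
# Probe a host:port the same way the tcp connectivity check does: try to
# open a TCP connection and report the result. 127.0.0.1:22 is a placeholder.
if timeout 3 bash -c 'exec 3<>/dev/tcp/127.0.0.1/22' 2>/dev/null; then
    result="reachable"
else
    result="unreachable"
fi
echo "$result"
```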
  8. Enter a name for the data center in which to deploy the host for the self-hosted engine. The default name is Default.

    Please enter the name of the datacenter where you want to deploy this hosted-engine host. [Default]:
  9. Enter a name for the cluster in which to deploy the host for the self-hosted engine. The default name is Default.

    Please enter the name of the cluster where you want to deploy this hosted-engine host. [Default]:
  10. If you want to use a custom appliance for the virtual machine installation, enter the path to the OVA archive. Otherwise, leave this field empty to use the RHV-M Appliance.

    If you want to deploy with a custom engine appliance image,
    please specify the path to the OVA archive you would like to use
    (leave it empty to skip, the setup will use RHV-M Appliance rpm installing it if missing):
  11. Enter the CPU and memory configuration for the Manager virtual machine:

    Please specify the number of virtual CPUs for the VM (Defaults to appliance OVF value): [4]:
    Please specify the memory size of the VM in MB (Defaults to maximum available): [7267]:
  12. Specify the FQDN for the Manager virtual machine, such as manager.example.com:

    Please provide the FQDN you would like to use for the engine appliance.
    Note: This will be the FQDN of the engine VM you are now going to launch,
    it should not point to the base host or to any other existing machine.
    Engine VM FQDN:
  13. Specify the domain of the Manager virtual machine. For example, if the FQDN is manager.example.com, then enter example.com.

    Please provide the domain name you would like to use for the engine appliance.
    Engine VM domain: [example.com]
  14. Create the root password for the Manager, and reenter it to confirm:

    Enter root password that will be used for the engine appliance:
    Confirm appliance root password:
  15. Optionally, enter an SSH public key to enable you to log in to the Manager virtual machine as the root user without entering a password, and specify whether to enable SSH access for the root user:

    Enter ssh public key for the root user that will be used for the engine appliance (leave it empty to skip):
    Do you want to enable ssh access for the root user (yes, no, without-password) [yes]:
  16. Enter a MAC address for the Manager virtual machine, or accept a randomly generated one. If you want to provide the Manager virtual machine with an IP address via DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script will not configure the DHCP server for you.

    You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:3d:34:47]:
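
    If you supply your own MAC address, it must be unicast: the least-significant bit of the first octet must be 0. A quick bash sanity check (the sample addresses are illustrative):

```shell
# Return success if the MAC address is unicast (even first octet);
# multicast addresses have the low bit of the first octet set.
is_unicast_mac() {
    local first_octet=$(( 16#${1%%:*} ))    # first two hex digits as an integer
    (( (first_octet & 1) == 0 ))
}

is_unicast_mac "00:16:3e:3d:34:47" && echo "unicast"      # like the generated default
is_unicast_mac "01:00:5e:00:00:01" || echo "multicast"    # IPv4 multicast prefix
```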
  17. Enter the Manager virtual machine’s networking details:

    How should the engine VM network be configured (DHCP, Static)[DHCP]?

    If you specified Static, enter the IP address of the Manager virtual machine:

    Important
    • The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Manager virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).
    • For IPv6, Red Hat Virtualization supports only static addressing.
    Please enter the IP address to be used for the engine VM [x.x.x.x]:
    Please provide a comma-separated list (max 3) of IP addresses of domain name servers for the engine VM
    Engine VM DNS (leave it empty to skip):
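
    The same-subnet requirement above can be verified with a little integer arithmetic. A bash sketch (the addresses and the /24 prefix are examples):

```shell
# Convert a dotted-quad address to an integer, then compare network
# portions under the subnet mask. Usage: in_subnet <ip> <network> <prefix>.
ip_to_int() {
    local IFS=.
    set -- $1
    echo $(( (10#$1 << 24) | (10#$2 << 16) | (10#$3 << 8) | 10#$4 ))
}

in_subnet() {
    local mask=$(( (0xffffffff << (32 - $3)) & 0xffffffff ))
    [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

in_subnet 10.1.1.42 10.1.1.0 24 && echo "same subnet"
in_subnet 10.1.2.42 10.1.1.0 24 || echo "different subnet"
```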
  18. Specify whether to add entries for the Manager virtual machine and the base host to the virtual machine’s /etc/hosts file. You must ensure that the host names are resolvable.

    Add lines for the appliance itself and for this host to /etc/hosts on the engine VM?
    Note: ensuring that this host could resolve the engine VM hostname is still up to you (Yes, No)[No]
  19. Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications. Alternatively, press Enter to accept the defaults:

    Please provide the name of the SMTP server through which we will send notifications [localhost]:
    Please provide the TCP port number of the SMTP server [25]:
    Please provide the email address from which notifications will be sent [root@localhost]:
    Please provide a comma-separated list of email addresses which will get notifications [root@localhost]:
  20. Create a password for the admin@internal user to access the Administration Portal and reenter it to confirm:

    Enter engine admin password:
    Confirm engine admin password:

    The script creates the virtual machine. By default, the script first downloads and installs the RHV-M Appliance, which increases the installation time.

  21. (Optional) If you set he_pause_host to true, the deployment pauses after adding the deployment host to the Manager. You can now log in from the deployment host to the Manager virtual machine to customize it. You can log in with either the FQDN or the IP address of the Manager. For example, if the FQDN of the Manager is manager.example.com:

    $ ssh root@manager.example.com
    Tip

    In the installation log, the IP address is in local_vm_ip. The installation log is the most recent instance of /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-bootstrap_local_vm*.

    1. Customize the Manager virtual machine as needed.
    2. When you are done, log in to the Administration Portal using a browser with the Manager FQDN and make sure that the host’s state is Up.
    3. Delete the lock file and the deployment script automatically continues, configuring the Manager virtual machine.
  22. Select the type of storage to use:

    Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
    • For NFS, enter the version, full address and path to the storage, and any mount options:

      Please specify the nfs version you would like to use (auto, v3, v4, v4_1)[auto]:
      Please specify the full shared storage connection path to use (example: host:/path): storage.example.com:/hosted_engine/nfs
      If needed, specify additional mount options for the connection to the hosted-engine storage domain []:
    • For iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.

      Note

      To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. See Red Hat Enterprise Linux DM Multipath for details. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.

      Please specify the iSCSI portal IP address:
      Please specify the iSCSI portal port [3260]:
      Please specify the iSCSI discover user:
      Please specify the iSCSI discover password:
      Please specify the iSCSI portal login user:
      Please specify the iSCSI portal login password:
      
      The following targets have been found:
      	[1]	iqn.2017-10.com.redhat.example:he
      		TPGT: 1, portals:
      			192.168.1.xxx:3260
      			192.168.2.xxx:3260
      			192.168.3.xxx:3260
      
      Please select a target (1) [1]: 1
      
      The following luns have been found on the requested target:
        [1] 360003ff44dc75adcb5046390a16b4beb   199GiB  MSFT   Virtual HD
            status: free, paths: 1 active
      
      Please select the destination LUN (1) [1]:
    • For Gluster storage, enter the full address and path to the storage, and any mount options:

      Important

      Only replica 1 and replica 3 Gluster storage are supported. Ensure you configure the volume as follows:

      gluster volume set VOLUME_NAME group virt
      gluster volume set VOLUME_NAME performance.strict-o-direct on
      gluster volume set VOLUME_NAME network.remote-dio off
      gluster volume set VOLUME_NAME storage.owner-uid 36
      gluster volume set VOLUME_NAME storage.owner-gid 36
      gluster volume set VOLUME_NAME network.ping-timeout 30
      Please specify the full shared storage connection path to use (example: host:/path): storage.example.com:/hosted_engine/gluster_volume
      If needed, specify additional mount options for the connection to the hosted-engine storage domain []:
    • For Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide.

      The following luns have been found on the requested target:
      [1] 3514f0c5447600351   30GiB   XtremIO XtremApp
      		status: used, paths: 2 active
      
      [2] 3514f0c5447600352   30GiB   XtremIO XtremApp
      		status: used, paths: 2 active
      
      Please select the destination LUN (1, 2) [1]:
  23. Enter the disk size of the Manager virtual machine:

    Please specify the size of the VM disk in GB: [50]:

    When the deployment completes successfully, one data center, cluster, host, storage domain, and the Manager virtual machine are already running. You can log in to the Administration Portal to add any other resources.

  24. Optionally, add a directory server using the ovirt-engine-extension-aaa-ldap-setup interactive setup script so you can add additional users to the environment. For more information, see Configuring an External LDAP Provider in the Administration Guide.
  25. Optionally, deploy Grafana so you can monitor and display reports from your RHV environment. For more information, see Configuring Grafana in the Administration Guide.

The Manager virtual machine, the host running it, and the self-hosted engine storage domain are flagged with a gold crown in the Administration Portal.

Note

Both the Manager’s I/O scheduler and the hypervisor that hosts the Manager reorder I/O requests. This double reordering might delay I/O requests to the storage layer, impacting performance.

Depending on your data center, you might improve performance by changing the I/O scheduler to none. For more information, see Available disk schedulers in Monitoring and managing system status and performance for RHEL.

The next step is to enable the Red Hat Virtualization Manager repositories.

5.4. Enabling the Red Hat Virtualization Manager Repositories

Log in to the Manager machine and register it with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories.

Procedure

  1. Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:

    # subscription-manager register
    Note

    If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager.

  2. Find the Red Hat Virtualization Manager subscription pool and record the pool ID:

    # subscription-manager list --available
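
    The pool ID can be extracted from the listing with a short pipeline. A sketch against sample output; the Pool ID value below is a placeholder, and the real listing prints one Pool ID: field per subscription:

```shell
# Pull the Pool ID field from `subscription-manager list --available` style
# output. The sample value below is a placeholder, not a real pool ID.
sample='Subscription Name:   Red Hat Virtualization Manager
Pool ID:             8a85f98144844aa1014488d1dcb12bc3'

pool_id=$(echo "$sample" | awk '/^Pool ID:/ { print $3 }')
echo "$pool_id"
```

    The captured value can then be passed to the attach command in the next step.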
  3. Use the pool ID to attach the subscription to the system:

    # subscription-manager attach --pool=pool_id
    Note

    To view currently attached subscriptions:

    # subscription-manager list --consumed

    To list all enabled repositories:

    # dnf repolist
  4. Configure the repositories:

    # subscription-manager repos \
        --disable='*' \
        --enable=rhel-8-for-x86_64-baseos-rpms \
        --enable=rhel-8-for-x86_64-appstream-rpms \
        --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \
        --enable=fast-datapath-for-rhel-8-x86_64-rpms \
        --enable=jb-eap-7.3-for-rhel-8-x86_64-rpms
  5. Enable the pki-deps module:

    # dnf module -y enable pki-deps
  6. Enable version 12 of the postgresql module:

    # dnf module -y enable postgresql:12
  7. Synchronize installed packages to update them to the latest available versions:

    # dnf distro-sync --nobest

    Additional resources

    For information about modules and module streams, see Installing, managing, and removing user-space components in the RHEL documentation.

Next, log in to the Administration Portal, where you can add hosts and storage to the environment.

5.5. Connecting to the Administration Portal

Access the Administration Portal using a web browser.

  1. In a web browser, navigate to https://manager-fqdn/ovirt-engine, replacing manager-fqdn with the FQDN that you provided during installation.

    Note

    You can access the Administration Portal using alternate host names or IP addresses. To do so, you need to add a configuration file under /etc/ovirt-engine/engine.conf.d/. For example:

    # vi /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf
    SSO_ALTERNATE_ENGINE_FQDNS="alias1.example.com alias2.example.com"

    The list of alternate host names needs to be separated by spaces. You can also add the IP address of the Manager to the list, but using IP addresses instead of DNS-resolvable host names is not recommended.

  2. Click Administration Portal. An SSO login page displays. SSO login enables you to log in to the Administration Portal and the VM Portal at the same time.
  3. Enter your User Name and Password. If you are logging in for the first time, use the user name admin along with the password that you specified during installation.
  4. Select the Domain to authenticate against. If you are logging in using the internal admin user name, select the internal domain.
  5. Click Log In.
  6. You can view the Administration Portal in multiple languages. The default selection is chosen based on the locale settings of your web browser. If you want to view the Administration Portal in a language other than the default, select your preferred language from the drop-down list on the welcome page.

To log out of the Red Hat Virtualization Administration Portal, click your user name in the header bar and click Sign Out. You are logged out of all portals and the Manager welcome screen displays.