Chapter 2. Deploying the Self-Hosted Engine

You can deploy a self-hosted engine from the command line, or through the Cockpit user interface. Cockpit is available by default on Red Hat Virtualization Hosts, and can be installed on Red Hat Enterprise Linux hosts. Both methods use Ansible to automate most of the process.

Self-hosted engine installation uses the RHV-M Appliance to create the Manager virtual machine. The appliance is installed during the deployment process; however, you can install it on the host before starting the deployment if required:

# yum install rhvm-appliance
Important

See Self-Hosted Engine Recommendations in the Planning and Prerequisites Guide for specific recommendations about the self-hosted engine environment.

If you plan to use bonded interfaces for high availability or VLANs to separate different types of traffic (for example, for storage or management connections), you should configure them before deployment. See Networking Recommendations in the Planning and Prerequisites Guide.
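
For example, an active-backup bond with a VLAN on top of it can be created with nmcli before you start the deployment. This is only a minimal sketch: the interface names (em1, em2), the bond mode, and the VLAN ID 50 are placeholders for your own values, not values required by Red Hat Virtualization.

# nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,miimon=100"
# nmcli connection add type bond-slave con-name bond0-em1 ifname em1 master bond0
# nmcli connection add type bond-slave con-name bond0-em2 ifname em2 master bond0
# nmcli connection add type vlan con-name bond0.50 ifname bond0.50 dev bond0 id 50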

If you want to deploy the self-hosted engine with Red Hat Gluster Storage as part of a Red Hat Hyperconverged Infrastructure (RHHI) environment, see the Deploying Red Hat Hyperconverged Infrastructure Guide for more information.

2.1. Deploying the Self-Hosted Engine Using Cockpit

You can deploy a self-hosted engine through Cockpit using a simplified wizard to collect the details of your environment. This is the recommended method.

Cockpit is enabled by default on Red Hat Virtualization Hosts. If you are using a Red Hat Enterprise Linux host, see Installing Cockpit on Red Hat Enterprise Linux Hosts in the Installation Guide.
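
On a Red Hat Enterprise Linux host, the installation typically looks like the following sketch, assuming the required repositories are already enabled and that firewalld provides the cockpit service definition; see the Installation Guide for the authoritative steps.

# yum install cockpit-ovirt-dashboard
# systemctl enable --now cockpit.socket
# firewall-cmd --permanent --add-service=cockpit
# firewall-cmd --reload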

Prerequisites

  • A fresh installation of Red Hat Virtualization Host or Red Hat Enterprise Linux 7, with the required repositories enabled. See Installing Red Hat Virtualization Host or Enabling the Red Hat Enterprise Linux Host Repositories in the Installation Guide.
  • A fully qualified domain name prepared for your Manager and the host. Forward and reverse lookup records must both be set in the DNS. A verification example is sketched after this list.
  • A directory of at least 5 GB on the host, for the RHV-M Appliance. The deployment process will check if /var/tmp has enough space to extract the appliance files. If not, you can specify a different directory or mount external storage. The VDSM user and KVM group must have read, write, and execute permissions on the directory.
  • Prepared storage for a data storage domain dedicated to the Manager virtual machine. This storage domain is created during the self-hosted engine deployment, and must be at least 74 GiB. Highly available storage is recommended. For more information on preparing storage for your deployment, see the Storage chapter of the Administration Guide; an NFS example is sketched after this list.
  • Prepared storage for additional storage domains, as necessary for the size and design of your environment.

    Important

    If you are using iSCSI storage, the self-hosted engine storage domain must use its own iSCSI target. Any additional storage domains must use a different iSCSI target.

    Warning

    Creating additional data storage domains in the same data center as the self-hosted engine storage domain is highly recommended. If you deploy the self-hosted engine in a data center with only one active data storage domain, and that storage domain is corrupted, you will not be able to add new storage domains or remove the corrupted storage domain; you will have to redeploy the self-hosted engine.
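
Before starting the deployment, you can verify the name resolution and /var/tmp prerequisites from the host: the first two commands perform the forward and reverse lookups for the Manager, and the third shows the space available for extracting the appliance. The FQDN and IP address are placeholders for your own values, and dig requires the bind-utils package.

# dig +short manager.example.com
# dig +short -x 10.1.1.10
# df -h /var/tmp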
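
If the Manager storage domain will be served over NFS, the export must be owned by the vdsm user and kvm group (UID and GID 36). A minimal sketch of preparing such an export on the storage server follows; the path and export options are placeholders, and the Storage chapter of the Administration Guide remains the authoritative procedure.

# mkdir -p /exports/hosted_engine
# chown 36:36 /exports/hosted_engine
# chmod 0755 /exports/hosted_engine
# echo '/exports/hosted_engine *(rw)' >> /etc/exports
# exportfs -ra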

Procedure

  1. Log in to Cockpit at https://HostIPorFQDN:9090 and click Virtualization → Hosted Engine.
  2. Click Start under the Hosted Engine option.
  3. Enter the details for the Manager virtual machine:

    1. Enter the Engine VM FQDN. This is the FQDN for the Manager virtual machine, not the base host.
    2. Enter a MAC Address for the Manager virtual machine, or accept a randomly generated one.
    3. Choose either DHCP or Static from the Network Configuration drop-down list.

      • If you choose DHCP, you must have a DHCP reservation for the Manager virtual machine so that its host name resolves to the address received from DHCP. Specify its MAC address in the MAC Address field. An example reservation is sketched after this procedure.
      • If you choose Static, enter the following details:

        • VM IP Address - The IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Manager virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).
        • Gateway Address
        • DNS Servers
    4. Select the Bridge Interface from the drop-down list.
    5. Enter and confirm the virtual machine’s Root Password.
    6. Specify whether to allow Root SSH Access.
    7. Enter the Number of Virtual CPUs for the virtual machine.
    8. Enter the Memory Size (MiB). The available memory is displayed next to the input field.
  4. Optionally expand the Advanced fields:

    1. Enter a Root SSH Public Key to use for root access to the Manager virtual machine.
    2. Select or clear the Edit Hosts File check box to specify whether to add entries for the Manager virtual machine and the base host to the virtual machine’s /etc/hosts file. You must ensure that the host names are resolvable.
    3. Change the management Bridge Name, or accept the default ovirtmgmt.
    4. Enter the Gateway Address for the management bridge.
    5. Enter the Host FQDN of the first host to add to the Manager. This is the FQDN of the base host you are running the deployment on.
  5. Click Next.
  6. Enter and confirm the Admin Portal Password for the admin@internal user.
  7. Configure event notifications:

    1. Enter the Server Name and Server Port Number of the SMTP server.
    2. Enter the Sender E-Mail Address.
    3. Enter the Recipient E-Mail Addresses.
  8. Click Next.
  9. Review the configuration of the Manager and its virtual machine. If the details are correct, click Prepare VM.
  10. When the virtual machine installation is complete, click Next.
  11. Select the Storage Type from the drop-down list, and enter the details for the self-hosted engine storage domain:

    • For NFS:

      1. Enter the full address and path to the storage in the Storage Connection field.
      2. If required, enter any Mount Options.
      3. Enter the Disk Size (GiB).
      4. Select the NFS Version from the drop-down list.
      5. Enter the Storage Domain Name.
    • For iSCSI:

      1. Enter the Portal IP Address, Portal Port, Portal Username, and Portal Password.
      2. Click Retrieve Target List and select a target. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.

        Note

        To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. See Red Hat Enterprise Linux DM Multipath for details. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options. A minimal example of enabling multipathing is sketched after this procedure.

      3. Enter the Disk Size (GiB).
      4. Enter the Discovery Username and Discovery Password.
    • For Fibre Channel:

      1. Enter the LUN ID. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide.
      2. Enter the Disk Size (GiB).
    • For Red Hat Gluster Storage:

      1. Enter the full address and path to the storage in the Storage Connection field.
      2. If required, enter any Mount Options.
      3. Enter the Disk Size (GiB).
  12. Click Next.
  13. Review the storage configuration. If the details are correct, click Finish Deployment.
  14. When the deployment is complete, click Close.

    One data center, cluster, host, storage domain, and the Manager virtual machine are already running. You can log in to the Administration Portal to add any other resources.

  15. Enable the required repositories on the Manager virtual machine. See Enabling the Red Hat Virtualization Manager Repositories in the Installation Guide.
  16. Optionally, add a directory server using the ovirt-engine-extension-aaa-ldap-setup interactive setup script so you can add additional users to the environment. For more information, see Configuring an External LDAP Provider in the Administration Guide. An installation example is sketched after this procedure.
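
If you chose DHCP for the Manager virtual machine in step 3, the reservation might look like the following sketch on an ISC dhcpd server; the host name, MAC address, and IP address are placeholders, and other DHCP servers use their own syntax.

host rhvm {
    hardware ethernet 00:16:3e:12:34:56;
    fixed-address 10.1.1.10;
}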
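
If you need more than one iSCSI target in step 11, enable multipathing on the host before starting the deployment. A minimal sketch, assuming the default multipath configuration is acceptable for your storage; see Red Hat Enterprise Linux DM Multipath for anything beyond the defaults.

# yum install device-mapper-multipath
# mpathconf --enable --with_multipathd y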
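
For step 16, the interactive script is provided by a separate package that you install on the Manager virtual machine before running it:

# yum install ovirt-engine-extension-aaa-ldap-setup
# ovirt-engine-extension-aaa-ldap-setup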

The self-hosted engine’s status is displayed in Cockpit’s Virtualization → Hosted Engine tab. The Manager virtual machine, the host running it, and the self-hosted engine storage domain are flagged with a gold crown in the Administration Portal.
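
You can also check the status from the command line on the host. The hosted-engine tool reports whether the Manager virtual machine is up, which host is running it, and each host’s high-availability score:

# hosted-engine --vm-status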