Chapter 16. Backups and Migration

16.1. Backing Up and Restoring the Red Hat Virtualization Manager

16.1.1. Backing up Red Hat Virtualization Manager - Overview

Use the engine-backup tool to take regular backups of the Red Hat Virtualization Manager. The tool backs up the engine database and configuration files into a single file and can be run without interrupting the ovirt-engine service.
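
For example, a minimal cron entry can schedule a nightly full backup. This is only a sketch: the backup and log paths, the schedule, and any rotation of old backup files are assumptions you should adapt to your environment.

# crontab -e

0 3 * * * engine-backup --mode=backup --scope=all --file=/var/lib/ovirt-engine-backup/engine-$(date +\%F).tar --log=/var/log/ovirt-engine-backup/engine-backup-$(date +\%F).log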

16.1.2. Syntax for the engine-backup Command

The engine-backup command works in one of two basic modes:

# engine-backup --mode=backup
# engine-backup --mode=restore

These two modes are further extended by a set of parameters that allow you to specify the scope of the backup and different credentials for the engine database. Run engine-backup --help for a full list of parameters and their function.

Basic Options

--mode
Specifies whether the command performs a backup operation or a restore operation. Two options are available: backup and restore. This is a required parameter.
--file
Specifies the path and name of the file in which the backup is saved in backup mode, or from which the backup is read in restore mode. This is a required parameter in both backup mode and restore mode.
--log
Specifies the path and name of a file into which logs of the backup or restore operation are to be written. This parameter is required in both backup mode and restore mode.
--scope

Specifies the scope of the backup or restore operation. There are four options: all, which backs up or restores all databases and configuration data; files, which backs up or restores only files on the system; db, which backs up or restores only the Manager database; and dwhdb, which backs up or restores only the Data Warehouse database. The default scope is all.

The --scope parameter can be specified multiple times in the same engine-backup command.
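
For example, the following command (using the documentation's placeholder file names) combines the files and db scopes in a single backup:

# engine-backup --mode=backup --scope=files --scope=db --file=file_name --log=log_file_name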

Manager Database Options

The following options are only available when using the engine-backup command in restore mode. The option syntax below applies to restoring the Manager database. The same options exist for restoring the Data Warehouse database. See engine-backup --help for the Data Warehouse option syntax.

--provision-db
Creates a PostgreSQL database for the Manager database backup to be restored to. This is a required parameter when restoring a backup on a remote host or fresh installation that does not have a PostgreSQL database already configured.
--change-db-credentials
Allows you to restore the Manager database using credentials other than those stored in the backup itself. See engine-backup --help for the additional parameters required by this option.
--restore-permissions or --no-restore-permissions

Restores (or does not restore) the permissions of database users. One of these parameters is required when restoring a backup.

Note

If a backup contains grants for extra database users, restoring the backup with the --restore-permissions and --provision-db (or --provision-dwh-db) options will create the extra users with random passwords. You must change these passwords manually if the extra users require access to the restored system. See https://access.redhat.com/articles/2686731.
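
As a sketch of that manual step (extra_user and new_password are placeholders), you can set a new password from the PostgreSQL command line on the restored Manager:

# su - postgres -c 'scl enable rh-postgresql10 -- psql'
postgres=# alter role extra_user encrypted password 'new_password';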

16.1.3. Creating a Backup with the engine-backup Command

The Red Hat Virtualization Manager can be backed up using the engine-backup command while the Manager is active. Specify one of the following values with the --scope option to select which backup to perform:

  • all: A full backup of all databases and configuration files on the Manager
  • files: A backup of only the files on the system
  • db: A backup of only the Manager database
  • dwhdb: A backup of only the Data Warehouse database
Important

To restore a database to a fresh installation of Red Hat Virtualization Manager, a database backup alone is not sufficient; the Manager also requires access to the configuration files. Any backup that specifies a scope other than the default, all, must be accompanied by the files scope, or a filesystem backup.

Example Usage of the engine-backup Command

  1. Log in to the machine running the Red Hat Virtualization Manager.
  2. Create a backup:

    Example 16.1. Creating a Full Backup

    # engine-backup --scope=all --mode=backup --file=file_name --log=log_file_name

    Example 16.2. Creating a Manager Database Backup

    # engine-backup --scope=files --scope=db --mode=backup --file=file_name --log=log_file_name

    Replace the db option with dwhdb to back up the Data Warehouse database.

    A tar file containing a backup is created using the path and file name provided.

The tar files containing the backups can now be used to restore the environment.
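
Because each backup is an ordinary tar file, you can optionally list its contents to verify that it was written correctly before relying on it (file_name is the documentation's placeholder):

# tar -tvf file_name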

16.1.4. Restoring a Backup with the engine-backup Command

Restoring a backup with the engine-backup command involves more steps than creating one, and the exact steps depend on the restoration destination. For example, the engine-backup command can restore backups to a fresh installation of Red Hat Virtualization, on top of an existing installation of Red Hat Virtualization, and using either local or remote databases.

Important

Backups can only be restored to environments of the same major release as that of the backup. For example, a backup of a Red Hat Virtualization version 4.2 environment can only be restored to another Red Hat Virtualization version 4.2 environment. To view the version of Red Hat Virtualization contained in a backup file, unpack the backup file and read the value in the version file located in the root directory of the unpacked files.
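
For example, to check the version without performing a restore (the directory name is an arbitrary example), unpack the backup into a temporary location and read the version file:

# mkdir /tmp/engine-backup-check
# tar -xf file_name -C /tmp/engine-backup-check
# cat /tmp/engine-backup-check/version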

16.1.5. Restoring a Backup to a Fresh Installation

The engine-backup command can be used to restore a backup to a fresh installation of the Red Hat Virtualization Manager. The following procedure must be performed on a machine on which the base operating system has been installed and the required packages for the Red Hat Virtualization Manager have been installed, but the engine-setup command has not yet been run. This procedure assumes that the backup file or files can be accessed from the machine on which the backup is to be restored.

Restoring a Backup to a Fresh Installation

  1. Log in to the Manager machine. If you are restoring the engine database to a remote host, log in to that host and perform the relevant steps there. Likewise, if you are also restoring the Data Warehouse database to a remote host, log in to that host and perform the relevant steps there.
  2. Restore a complete backup or a database-only backup.

    • Restore a complete backup:

      # engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db --restore-permissions

      If Data Warehouse is also being restored as part of the complete backup, provision the additional database:

      # engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db --provision-dwh-db --restore-permissions
    • Restore a database-only backup by restoring the configuration files and database backup:

      # engine-backup --mode=restore --scope=files --scope=db --file=file_name --log=log_file_name --provision-db --restore-permissions

      The example above restores a backup of the Manager database.

      # engine-backup --mode=restore --scope=files --scope=dwhdb --file=file_name --log=log_file_name --provision-dwh-db --restore-permissions

      The example above restores a backup of the Data Warehouse database.

      If successful, the following output displays:

      You should now run engine-setup.
      Done.
  3. Run the following command and follow the prompts to configure the restored Manager:

    # engine-setup

The Red Hat Virtualization Manager has been restored to the version preserved in the backup. To change the fully qualified domain name of the new Red Hat Virtualization system, see Section 22.1.1, “The oVirt Engine Rename Tool”.

16.1.6. Restoring a Backup to Overwrite an Existing Installation

The engine-backup command can restore a backup to a machine on which the Red Hat Virtualization Manager has already been installed and set up. This is useful when you have taken a backup of an environment, performed changes on that environment, and then want to undo the changes by restoring the environment from the backup.

Changes made to the environment since the backup was taken, such as adding or removing a host, will not appear in the restored environment. You must redo these changes.

Procedure

  1. Log in to the Manager machine.
  2. Remove the configuration files and clean the database associated with the Manager:

    # engine-cleanup

    The engine-cleanup command only cleans the Manager database; it does not drop the database or delete the user that owns that database.

  3. Restore a full backup or a database-only backup. You do not need to create a new database or specify the database credentials because the user and database already exist.

    • Restore a full backup:

      # engine-backup --mode=restore --file=file_name --log=log_file_name --restore-permissions
    • Restore a database-only backup by restoring the configuration files and the database backup:

      # engine-backup --mode=restore --scope=files --scope=db --scope=dwhdb --file=file_name --log=log_file_name --restore-permissions
      Note

      To restore only the Manager database (for example, if the Data Warehouse database is located on another machine), you can omit the --scope=dwhdb parameter.

      If successful, the following output displays:

      You should now run engine-setup.
      Done.
  4. Reconfigure the Manager:

    # engine-setup

16.1.7. Restoring a Backup with Different Credentials

The engine-backup command can restore a backup to a machine on which the Red Hat Virtualization Manager has already been installed and set up, but where the credentials of the database in the backup differ from those of the database on the machine where the backup is being restored. This is useful when you have taken a backup of an installation and want to restore it on a different system.

Important

When restoring a backup to overwrite an existing installation, you must run the engine-cleanup command to clean up the existing installation before using the engine-backup command. The engine-cleanup command only cleans the engine database, and does not drop the database or delete the user that owns that database. So you do not need to create a new database or specify the database credentials. However, if the credentials for the owner of the engine database are not known, you must change them before you can restore the backup.

Restoring a Backup with Different Credentials

  1. Log in to the Red Hat Virtualization Manager machine.
  2. Run the following command and follow the prompts to remove the Manager’s configuration files and to clean the Manager’s database:

    # engine-cleanup
  3. Change the password for the owner of the engine database if the credentials of that user are not known:

    1. Enter the postgresql command line:

      # su - postgres -c 'scl enable rh-postgresql10 -- psql'
    2. Change the password of the user that owns the engine database:

      postgres=# alter role user_name encrypted password 'new_password';

      Repeat this for the user that owns the ovirt_engine_history database if necessary.

  4. Restore a complete backup or a database-only backup with the --change-db-credentials parameter to pass the credentials of the new database. The database_location for a database local to the Manager is localhost.

    Note

    The following examples use a --*password option for each database without specifying a password, which causes the command to prompt interactively for each password. Alternatively, you can use the --*passfile=password_file option for each database to pass the passwords to the engine-backup tool securely, without interactive prompts; a passfile sketch follows these restore examples.

    • Restore a complete backup:

      # engine-backup --mode=restore --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password --no-restore-permissions

      If Data Warehouse is also being restored as part of the complete backup, include the revised credentials for the additional database:

      # engine-backup --mode=restore --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password --change-dwh-db-credentials --dwh-db-host=database_location --dwh-db-name=database_name --dwh-db-user=ovirt_engine_history --dwh-db-password --no-restore-permissions
    • Restore a database-only backup by restoring the configuration files and the database backup:

      # engine-backup --mode=restore --scope=files --scope=db --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password --no-restore-permissions

      The example above restores a backup of the Manager database.

      # engine-backup --mode=restore --scope=files --scope=dwhdb --file=file_name --log=log_file_name --change-dwh-db-credentials --dwh-db-host=database_location --dwh-db-name=database_name --dwh-db-user=ovirt_engine_history --dwh-db-password --no-restore-permissions

      The example above restores a backup of the Data Warehouse database.

      If successful, the following output displays:

      You should now run engine-setup.
      Done.
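
    The following is a minimal sketch of the passfile approach mentioned in the note above. It assumes the password file contains only the database password; the file name and permissions shown are examples, not requirements:

      # echo 'new_password' > /root/engine-db-pass
      # chmod 600 /root/engine-db-pass
      # engine-backup --mode=restore --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-passfile=/root/engine-db-pass --no-restore-permissions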
  5. Run the following command and follow the prompts to reconfigure the firewall and ensure the ovirt-engine service is correctly configured:

    # engine-setup

16.1.8. Backing up and Restoring a Self-Hosted Engine

You can back up a self-hosted engine and restore it in a new self-hosted environment. Use this procedure for tasks such as migrating the environment to a new self-hosted engine storage domain with a different storage type.

When you specify a backup file during deployment, the backup is restored on a new Manager virtual machine, with a new self-hosted engine storage domain. The old Manager is removed, and the old self-hosted engine storage domain is renamed and can be manually removed after you confirm that the new environment is working correctly. Deploying on a fresh host is highly recommended; if the host used for deployment existed in the backed up environment, it will be removed from the restored database to avoid conflicts in the new environment.

The backup and restore operation involves the following key actions:

  1. Back up the original Manager.
  2. Restore the backup on a new self-hosted engine.
  3. Enable the Red Hat Virtualization Manager repositories on the new Manager virtual machine.
  4. Reinstall the self-hosted engine nodes.
  5. Remove the old self-hosted engine storage domain.

This procedure assumes that you have access and can make changes to the original Manager.

Prerequisites

  • A fully qualified domain name prepared for your Manager and the host. Forward and reverse lookup records must both be set in the DNS. The new Manager must have the same fully qualified domain name as the original Manager.
  • The original Manager must be updated to the latest minor version. The Manager version in the backup file must match the version of the new Manager. See Updating the Red Hat Virtualization Manager in the Upgrade Guide.
  • There must be at least one regular host in the environment. This host (and any other regular hosts) will remain active to host the SPM role and any running virtual machines. If a regular host is not already the SPM, move the SPM role before creating the backup by selecting a regular host and clicking Management → Select as SPM.

    If no regular hosts are available, you must add one before you begin.

16.1.8.1. Backing up the Original Manager

Back up the original Manager using the engine-backup command, and copy the backup file to a separate location so that it can be accessed at any point during the process.

For more information about engine-backup --mode=backup options, see Backing Up and Restoring the Red Hat Virtualization Manager in the Administration Guide.

Procedure

  1. Log in to one of the self-hosted engine nodes and move the environment to global maintenance mode:

    # hosted-engine --set-maintenance --mode=global
  2. Log in to the original Manager and stop the ovirt-engine service:

    # systemctl stop ovirt-engine
    # systemctl disable ovirt-engine
    Note

    Though stopping the original Manager from running is not obligatory, it is recommended as it ensures no changes are made to the environment after the backup is created. Additionally, it prevents the original Manager and the new Manager from simultaneously managing existing resources.

  3. Run the engine-backup command, specifying the name of the backup file to create, and the name of the log file to create to store the backup log:

    # engine-backup --mode=backup --file=file_name --log=log_file_name
  4. Copy the files to an external server. In the following example, storage.example.com is the fully qualified domain name of a network storage server that will store the backup until it is needed, and /backup/ is any designated folder or path.

    # scp -p file_name log_file_name storage.example.com:/backup/
  5. If you do not require the Manager machine for other purposes, unregister it from Red Hat Subscription Manager:

    # subscription-manager unregister
  6. Log in to one of the self-hosted engine nodes and shut down the original Manager virtual machine:

    # hosted-engine --vm-shutdown

After backing up the Manager, deploy a new self-hosted engine and restore the backup on the new virtual machine.
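
Before you begin the new deployment, you can optionally confirm on a self-hosted engine node that global maintenance mode is still enabled and that the original Manager virtual machine is down:

# hosted-engine --vm-status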

16.1.8.2. Restoring the Backup on a New Self-Hosted Engine

Run the hosted-engine script on a new host, and use the --restore-from-file=path/to/file_name option to restore the Manager backup during the deployment.

Important

If you are using iSCSI storage, and your iSCSI target filters connections according to the initiator’s ACL, the deployment may fail with a STORAGE_DOMAIN_UNREACHABLE error. To prevent this, you must update your iSCSI configuration before beginning the self-hosted engine deployment:

  • If you are redeploying on an existing host, you must update the host’s iSCSI initiator settings in /etc/iscsi/initiatorname.iscsi. The initiator IQN must be the same as was previously mapped on the iSCSI target, or updated to a new IQN, if applicable.
  • If you are deploying on a fresh host, you must update the iSCSI target configuration to accept connections from that host.

Note that the IQN can be updated on the host side (iSCSI initiator), or on the storage side (iSCSI target).
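
As a sketch of the host-side update (the IQN shown is only a placeholder and must match what the iSCSI target expects), edit the initiator name file and restart the iscsid service so the change takes effect:

# vi /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:example-host
# systemctl restart iscsid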

Procedure

  1. Copy the backup file to the new host. In the following example, host.example.com is the FQDN for the host, and /backup/ is any designated folder or path.

    # scp -p file_name host.example.com:/backup/
  2. Log in to the new host. If you are deploying on Red Hat Virtualization Host, the self-hosted engine deployment tool is available by default. If you are deploying on Red Hat Enterprise Linux, you must install the package:

    # yum install ovirt-hosted-engine-setup
  3. Red Hat recommends using the screen window manager to run the script to avoid losing the session in case of network or terminal disruption. Install and run screen:

    # yum install screen
    # screen

    In the event of session timeout or connection disruption, run screen -d -r to recover the deployment session.

  4. Run the hosted-engine script, specifying the path to the backup file:

    # hosted-engine --deploy --restore-from-file=backup/file_name

    To escape the script at any time, use CTRL+D to abort deployment.

  5. Select Yes to begin the deployment.
  6. Configure the network. The script detects possible NICs to use as a management bridge for the environment.
  7. If you want to use a custom appliance for the virtual machine installation, enter the path to the OVA archive. Otherwise, leave this field empty to use the RHV-M Appliance.
  8. Specify the FQDN for the Manager virtual machine.
  9. Enter the root password for the Manager.
  10. Enter an SSH public key that will allow you to log in to the Manager as the root user, and specify whether to enable SSH access for the root user.
  11. Enter the virtual machine’s CPU and memory configuration.
  12. Enter a MAC address for the Manager virtual machine, or accept a randomly generated one. If you want to provide the Manager virtual machine with an IP address via DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script will not configure the DHCP server for you.
  13. Enter the virtual machine’s networking details. If you specify Static, enter the IP address of the Manager.

    Important

    The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Manager virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).

  14. Specify whether to add entries for the Manager virtual machine and the base host to the virtual machine’s /etc/hosts file. You must ensure that the host names are resolvable.
  15. Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications.
  16. Enter a password for the admin@internal user to access the Administration Portal.

    The script creates the virtual machine. This can take some time if the RHV-M Appliance needs to be installed.

  17. Select the type of storage to use:

    • For NFS, enter the version, full address and path to the storage, and any mount options.

      Warning

      Do not use the old self-hosted engine storage domain’s mount point for the new storage domain, as you risk losing virtual machine data.

    • For iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.

      Note

      To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. See Red Hat Enterprise Linux DM Multipath for details. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.

    • For Gluster storage, enter the full address and path to the storage, and any mount options.

      Warning

      Do not use the old self-hosted engine storage domain’s mount point for the new storage domain, as you risk losing virtual machine data.

      Important

      Only replica 3 Gluster storage is supported. Ensure you have the following configuration:

      • In the /etc/glusterfs/glusterd.vol file on all three Gluster servers, set rpc-auth-allow-insecure to on.

        option rpc-auth-allow-insecure on
      • Configure the volume as follows:

        gluster volume set _volume_ cluster.quorum-type auto
        gluster volume set _volume_ network.ping-timeout 10
        gluster volume set _volume_ auth.allow \*
        gluster volume set _volume_ group virt
        gluster volume set _volume_ storage.owner-uid 36
        gluster volume set _volume_ storage.owner-gid 36
        gluster volume set _volume_ server.allow-insecure on
    • For Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide.
  18. Enter the Manager disk size.

    The script continues until the deployment is complete.

  19. The deployment process changes the Manager’s SSH keys. To allow client machines to access the new Manager without SSH errors, remove the original Manager’s entry from the .ssh/known_hosts file on any client machines that accessed the original Manager.
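
    For example, run the following on each such client machine (manager.example.com is a placeholder for the Manager’s FQDN):

      # ssh-keygen -R manager.example.com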

When the deployment is complete, log in to the new Manager virtual machine and enable the required repositories.

16.1.8.3. Enabling the Red Hat Virtualization Manager Repositories

Register the system with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable Manager repositories.

Procedure

  1. Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:

    # subscription-manager register
    Note

    If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager.

  2. Find the Red Hat Virtualization Manager subscription pool and record the pool ID:

    # subscription-manager list --available
  3. Use the pool ID to attach the subscription to the system:

    # subscription-manager attach --pool=pool_id
    Note

    To view currently attached subscriptions:

    # subscription-manager list --consumed

    To list all enabled repositories:

    # yum repolist
  4. Configure the repositories:

    # subscription-manager repos \
        --disable='*' \
        --enable=rhel-7-server-rpms \
        --enable=rhel-7-server-supplementary-rpms \
        --enable=rhel-7-server-rhv-4.3-manager-rpms \
        --enable=rhel-7-server-rhv-4-manager-tools-rpms \
        --enable=rhel-7-server-ansible-2.9-rpms \
        --enable=jb-eap-7.2-for-rhel-7-server-rpms

The Manager and its resources are now running in the new self-hosted environment. The self-hosted engine nodes must be reinstalled in the Manager to update their self-hosted engine configuration. Standard hosts are not affected. Perform the following procedure for each self-hosted engine node.

16.1.8.4. Reinstalling Hosts

Reinstall Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts from the Administration Portal. The procedure includes stopping and restarting the host.

Prerequisites

  • If migration is enabled at cluster level, virtual machines are automatically migrated to another host in the cluster; as a result, it is recommended that host reinstalls are performed at a time when the host’s usage is relatively low.
  • Ensure that the cluster has sufficient memory reserve in order for its hosts to perform maintenance. If a cluster lacks sufficient memory, the virtual machine migration operation will hang and then fail. You can reduce the memory usage of this operation by shutting down some or all virtual machines before moving the host to maintenance.
  • Ensure that the cluster contains more than one host before performing a reinstall. Do not attempt to reinstall all the hosts at the same time, as one host must remain available to perform Storage Pool Manager (SPM) tasks.

Procedure

  1. Click Compute → Hosts and select the host.
  2. Click Management → Maintenance.
  3. Click Installation → Reinstall to open the Install Host window.
  4. Click the Hosted Engine tab and select DEPLOY from the drop-down list.
  5. Click OK to reinstall the host.

Once successfully reinstalled, the host displays a status of Up. Any virtual machines that were migrated off the host can now be migrated back to it.

Important

After a Red Hat Virtualization Host is successfully registered to the Red Hat Virtualization Manager and then reinstalled, it may erroneously appear in the Administration Portal with the status of Install Failed. Click Management → Activate, and the host will change to an Up status and be ready for use.

After reinstalling the self-hosted engine nodes, you can check the status of the new environment by running the following command on one of the nodes:

# hosted-engine --vm-status

During the restoration, the old self-hosted engine storage domain was renamed, but was not removed from the new environment in case the restoration was faulty. After confirming that the environment is running normally, you can remove the old self-hosted engine storage domain.

16.1.8.5. Removing a Storage Domain

You have a storage domain in your data center that you want to remove from the virtualized environment.

Procedure

  1. Click Storage → Domains.
  2. Move the storage domain to maintenance mode and detach it:

    1. Click the storage domain’s name to open the details view.
    2. Click the Data Center tab.
    3. Click Maintenance, then click OK.
    4. Click Detach, then click OK.
  3. Click Remove.
  4. Optionally select the Format Domain, i.e. Storage Content will be lost! check box to erase the content of the domain.
  5. Click OK.

The storage domain is permanently removed from the environment.

16.1.9. Recovering a Self-Hosted Engine from an Existing Backup

If a self-hosted engine is unavailable due to problems that cannot be repaired, you can restore it in a new self-hosted environment using a backup taken before the problem began, if one is available.

When you specify a backup file during deployment, the backup is restored on a new Manager virtual machine, with a new self-hosted engine storage domain. The old Manager is removed, and the old self-hosted engine storage domain is renamed and can be manually removed after you confirm that the new environment is working correctly. Deploying on a fresh host is highly recommended; if the host used for deployment existed in the backed up environment, it will be removed from the restored database to avoid conflicts in the new environment.

Restoring a self-hosted engine involves the following key actions:

  1. Restore the backup on a new self-hosted engine.
  2. Enable the Red Hat Virtualization Manager repositories on the new Manager virtual machine.
  3. Reinstall the self-hosted engine nodes.
  4. Remove the old self-hosted engine storage domain.

This procedure assumes that you do not have access to the original Manager, and that the new host can access the backup file.

Prerequisites

  • A fully qualified domain name prepared for your Manager and the host. Forward and reverse lookup records must both be set in the DNS. The new Manager must have the same fully qualified domain name as the original Manager.

16.1.9.1. Restoring the Backup on a New Self-Hosted Engine

Run the hosted-engine script on a new host, and use the --restore-from-file=path/to/file_name option to restore the Manager backup during the deployment.

Important

If you are using iSCSI storage, and your iSCSI target filters connections according to the initiator’s ACL, the deployment may fail with a STORAGE_DOMAIN_UNREACHABLE error. To prevent this, you must update your iSCSI configuration before beginning the self-hosted engine deployment:

  • If you are redeploying on an existing host, you must update the host’s iSCSI initiator settings in /etc/iscsi/initiatorname.iscsi. The initiator IQN must be the same as was previously mapped on the iSCSI target, or updated to a new IQN, if applicable.
  • If you are deploying on a fresh host, you must update the iSCSI target configuration to accept connections from that host.

Note that the IQN can be updated on the host side (iSCSI initiator), or on the storage side (iSCSI target).
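
As a sketch of the host-side update (the IQN shown is only a placeholder and must match what the iSCSI target expects), edit the initiator name file and restart the iscsid service so the change takes effect:

# vi /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:example-host
# systemctl restart iscsid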

Procedure

  1. Copy the backup file to the new host. In the following example, host.example.com is the FQDN for the host, and /backup/ is any designated folder or path.

    # scp -p file_name host.example.com:/backup/
  2. Log in to the new host. If you are deploying on Red Hat Virtualization Host, the self-hosted engine deployment tool is available by default. If you are deploying on Red Hat Enterprise Linux, you must install the package:

    # yum install ovirt-hosted-engine-setup
  3. Red Hat recommends using the screen window manager to run the script to avoid losing the session in case of network or terminal disruption. Install and run screen:

    # yum install screen
    # screen

    In the event of session timeout or connection disruption, run screen -d -r to recover the deployment session.

  4. Run the hosted-engine script, specifying the path to the backup file:

    # hosted-engine --deploy --restore-from-file=backup/file_name

    To escape the script at any time, use CTRL+D to abort deployment.

  5. Select Yes to begin the deployment.
  6. Configure the network. The script detects possible NICs to use as a management bridge for the environment.
  7. If you want to use a custom appliance for the virtual machine installation, enter the path to the OVA archive. Otherwise, leave this field empty to use the RHV-M Appliance.
  8. Specify the FQDN for the Manager virtual machine.
  9. Enter the root password for the Manager.
  10. Enter an SSH public key that will allow you to log in to the Manager as the root user, and specify whether to enable SSH access for the root user.
  11. Enter the virtual machine’s CPU and memory configuration.
  12. Enter a MAC address for the Manager virtual machine, or accept a randomly generated one. If you want to provide the Manager virtual machine with an IP address via DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script will not configure the DHCP server for you.
  13. Enter the virtual machine’s networking details. If you specify Static, enter the IP address of the Manager.

    Important

    The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Manager virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).

  14. Specify whether to add entries for the Manager virtual machine and the base host to the virtual machine’s /etc/hosts file. You must ensure that the host names are resolvable.
  15. Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications.
  16. Enter a password for the admin@internal user to access the Administration Portal.

    The script creates the virtual machine. This can take some time if the RHV-M Appliance needs to be installed.

  17. Select the type of storage to use:

    • For NFS, enter the version, full address and path to the storage, and any mount options.

      Warning

      Do not use the old self-hosted engine storage domain’s mount point for the new storage domain, as you risk losing virtual machine data.

    • For iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.

      Note

      To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. See Red Hat Enterprise Linux DM Multipath for details. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.

    • For Gluster storage, enter the full address and path to the storage, and any mount options.

      Warning

      Do not use the old self-hosted engine storage domain’s mount point for the new storage domain, as you risk losing virtual machine data.

      Important

      Only replica 3 Gluster storage is supported. Ensure you have the following configuration:

      • In the /etc/glusterfs/glusterd.vol file on all three Gluster servers, set rpc-auth-allow-insecure to on.

        option rpc-auth-allow-insecure on
      • Configure the volume as follows:

        gluster volume set _volume_ cluster.quorum-type auto
        gluster volume set _volume_ network.ping-timeout 10
        gluster volume set _volume_ auth.allow \*
        gluster volume set _volume_ group virt
        gluster volume set _volume_ storage.owner-uid 36
        gluster volume set _volume_ storage.owner-gid 36
        gluster volume set _volume_ server.allow-insecure on
    • For Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide.
  18. Enter the Manager disk size.

    The script continues until the deployment is complete.

  19. The deployment process changes the Manager’s SSH keys. To allow client machines to access the new Manager without SSH errors, remove the original Manager’s entry from the .ssh/known_hosts file on any client machines that accessed the original Manager.
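
    For example, run the following on each such client machine (manager.example.com is a placeholder for the Manager’s FQDN):

      # ssh-keygen -R manager.example.com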

When the deployment is complete, log in to the new Manager virtual machine and enable the required repositories.

16.1.9.2. Enabling the Red Hat Virtualization Manager Repositories

Register the system with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable Manager repositories.

Procedure

  1. Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:

    # subscription-manager register
    Note

    If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager.

  2. Find the Red Hat Virtualization Manager subscription pool and record the pool ID:

    # subscription-manager list --available
  3. Use the pool ID to attach the subscription to the system:

    # subscription-manager attach --pool=pool_id
    Note

    To view currently attached subscriptions:

    # subscription-manager list --consumed

    To list all enabled repositories:

    # yum repolist
  4. Configure the repositories:

    # subscription-manager repos \
        --disable='*' \
        --enable=rhel-7-server-rpms \
        --enable=rhel-7-server-supplementary-rpms \
        --enable=rhel-7-server-rhv-4.3-manager-rpms \
        --enable=rhel-7-server-rhv-4-manager-tools-rpms \
        --enable=rhel-7-server-ansible-2.9-rpms \
        --enable=jb-eap-7.2-for-rhel-7-server-rpms

The Manager and its resources are now running in the new self-hosted environment. The self-hosted engine nodes must be reinstalled in the Manager to update their self-hosted engine configuration. Standard hosts are not affected. Perform the following procedure for each self-hosted engine node.

16.1.9.3. Reinstalling Hosts

Reinstall Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts from the Administration Portal. The procedure includes stopping and restarting the host.

Prerequisites

  • If migration is enabled at cluster level, virtual machines are automatically migrated to another host in the cluster; as a result, it is recommended that host reinstalls are performed at a time when the host’s usage is relatively low.
  • Ensure that the cluster has sufficient memory reserve in order for its hosts to perform maintenance. If a cluster lacks sufficient memory, the virtual machine migration operation will hang and then fail. You can reduce the memory usage of this operation by shutting down some or all virtual machines before moving the host to maintenance.
  • Ensure that the cluster contains more than one host before performing a reinstall. Do not attempt to reinstall all the hosts at the same time, as one host must remain available to perform Storage Pool Manager (SPM) tasks.

Procedure

  1. Click Compute → Hosts and select the host.
  2. Click Management → Maintenance.
  3. Click Installation → Reinstall to open the Install Host window.
  4. Click the Hosted Engine tab and select DEPLOY from the drop-down list.
  5. Click OK to reinstall the host.

Once successfully reinstalled, the host displays a status of Up. Any virtual machines that were migrated off the host can now be migrated back to it.

Important

After a Red Hat Virtualization Host is successfully registered to the Red Hat Virtualization Manager and then reinstalled, it may erroneously appear in the Administration Portal with the status of Install Failed. Click Management → Activate, and the host will change to an Up status and be ready for use.

After reinstalling the self-hosted engine nodes, you can check the status of the new environment by running the following command on one of the nodes:

# hosted-engine --vm-status

During the restoration, the old self-hosted engine storage domain was renamed, but was not removed from the new environment in case the restoration was faulty. After confirming that the environment is running normally, you can remove the old self-hosted engine storage domain.

16.1.9.4. Removing a Storage Domain

You have a storage domain in your data center that you want to remove from the virtualized environment.

Procedure

  1. Click Storage → Domains.
  2. Move the storage domain to maintenance mode and detach it:

    1. Click the storage domain’s name to open the details view.
    2. Click the Data Center tab.
    3. Click Maintenance, then click OK.
    4. Click Detach, then click OK.
  3. Click Remove.
  4. Optionally select the Format Domain, i.e. Storage Content will be lost! check box to erase the content of the domain.
  5. Click OK.

The storage domain is permanently removed from the environment.

16.1.10. Overwriting a Self-Hosted Engine from an Existing Backup

If a self-hosted engine is accessible, but is experiencing an issue such as database corruption, or a configuration error that is difficult to roll back, you can restore the environment to a previous state using a backup taken before the problem began, if one is available.

Restoring a self-hosted engine’s previous state involves the following steps:

  1. Enable global maintenance mode.
  2. Restore the backup to overwrite the existing installation.
  3. Disable global maintenance mode.

For more information about engine-backup --mode=restore options, see Section 16.1, “Backing Up and Restoring the Red Hat Virtualization Manager”.

16.1.10.1. Enabling Global Maintenance Mode

You must place the self-hosted engine environment in global maintenance mode before performing any setup or upgrade tasks on the Manager virtual machine.

Procedure

  1. Log in to one of the self-hosted engine nodes and enable global maintenance mode:

    # hosted-engine --set-maintenance --mode=global
  2. Confirm that the environment is in maintenance mode before proceeding:

    # hosted-engine --vm-status

    You should see a message indicating that the cluster is in maintenance mode.

16.1.10.2. Restoring a Backup to Overwrite an Existing Installation

The engine-backup command can restore a backup to a machine on which the Red Hat Virtualization Manager has already been installed and set up. This is useful when you have taken a backup of an environment, performed changes on that environment, and then want to undo the changes by restoring the environment from the backup.

Changes made to the environment since the backup was taken, such as adding or removing a host, will not appear in the restored environment. You must redo these changes.

Procedure

  1. Log in to the Manager machine.
  2. Remove the configuration files and clean the database associated with the Manager:

    # engine-cleanup

    The engine-cleanup command only cleans the Manager database; it does not drop the database or delete the user that owns that database.

  3. Restore a full backup or a database-only backup. You do not need to create a new database or specify the database credentials because the user and database already exist.

    • Restore a full backup:

      # engine-backup --mode=restore --file=file_name --log=log_file_name --restore-permissions
    • Restore a database-only backup by restoring the configuration files and the database backup:

      # engine-backup --mode=restore --scope=files --scope=db --scope=dwhdb --file=file_name --log=log_file_name --restore-permissions
      Note

      To restore only the Manager database (for example, if the Data Warehouse database is located on another machine), you can omit the --scope=dwhdb parameter.

      If successful, the following output displays:

      You should now run engine-setup.
      Done.
  4. Reconfigure the Manager:

    # engine-setup

16.1.10.3. Disabling Global Maintenance Mode

Procedure

  1. Log in to the Manager virtual machine and shut it down.
  2. Log in to one of the self-hosted engine nodes and disable global maintenance mode:

    # hosted-engine --set-maintenance --mode=none

    When you exit global maintenance mode, ovirt-ha-agent starts the Manager virtual machine, and then the Manager automatically starts. It can take up to ten minutes for the Manager to start.

  3. Confirm that the environment is running:

    # hosted-engine --vm-status

    The listed information includes Engine status. The value of Engine status should be:

    {"health": "good", "vm": "up", "detail": "Up"}
    Note

    When the virtual machine is still booting and the Manager hasn’t started yet, the Engine status is:

    {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}

    If this happens, wait a few minutes and try again.

When the environment is running again, you can start any virtual machines that were stopped, and check that the resources in the environment are behaving as expected.