6.2. Restoring the Backup on a New Self-Hosted Engine
Run the hosted-engine script on a new host, and use the --restore-from-file=path/to/file_name option to restore the Manager backup during the deployment.
If you are using iSCSI storage, and your iSCSI target filters connections according to the initiator’s ACL, the deployment may fail with a
STORAGE_DOMAIN_UNREACHABLE error. To prevent this, you must update your iSCSI configuration before beginning the self-hosted engine deployment:
- If you are redeploying on an existing host, you must update the host’s iSCSI initiator settings in /etc/iscsi/initiatorname.iscsi. The initiator IQN must be the same as the one previously mapped on the iSCSI target, or updated to a new IQN, if applicable.
- If you are deploying on a fresh host, you must update the iSCSI target configuration to accept connections from that host.
Note that the IQN can be updated on the host side (iSCSI initiator), or on the storage side (iSCSI target).
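As a sketch of the host-side (iSCSI initiator) update, assuming the target’s ACL expects the hypothetical IQN iqn.1994-05.com.redhat:myhost:

```shell
# Inspect the current initiator IQN (the name shown is an example):
cat /etc/iscsi/initiatorname.iscsi
# InitiatorName=iqn.1994-05.com.redhat:oldname

# Replace it with the IQN your iSCSI target's ACL expects (hypothetical name),
# keeping a backup of the original file, then restart iscsid so the change takes effect:
sed -i.bak 's|^InitiatorName=.*|InitiatorName=iqn.1994-05.com.redhat:myhost|' /etc/iscsi/initiatorname.iscsi
systemctl restart iscsid
```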
Copy the backup file to the new host. In the following example, host.example.com is the FQDN for the host, and /backup/ is any designated folder or path.
# scp -p file_name host.example.com:/backup/
Log in to the new host. If you are deploying on Red Hat Virtualization Host, the self-hosted engine deployment tool is available by default. If you are deploying on Red Hat Enterprise Linux, you must install the package:
# yum install ovirt-hosted-engine-setup
Red Hat recommends using the screen window manager to run the script, to avoid losing the session in case of network or terminal disruption. Install and run screen:
# yum install screen
# screen
In the event of session timeout or connection disruption, run screen -d -r to recover the deployment session.
Run the hosted-engine script, specifying the path to the backup file:
# hosted-engine --deploy --restore-from-file=backup/file_name
To escape the script at any time, use CTRL+D to abort deployment.
- Select Yes to begin the deployment.
- Configure the network. The script detects possible NICs to use as a management bridge for the environment.
- If you want to use a custom appliance for the virtual machine installation, enter the path to the OVA archive. Otherwise, leave this field empty to use the RHV-M Appliance.
- Specify the FQDN for the Manager virtual machine.
- Enter the root password for the Manager.
- Enter an SSH public key that will allow you to log in to the Manager as the root user, and specify whether to enable SSH access for the root user.
- Enter the virtual machine’s CPU and memory configuration.
- Enter a MAC address for the Manager virtual machine, or accept a randomly generated one. If you want to provide the Manager virtual machine with an IP address via DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script will not configure the DHCP server for you.
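As an illustration, if your site happens to use ISC dhcpd, a reservation for the Manager virtual machine might look like the following (the host label, MAC address, and IP address are all placeholders):

```
# /etc/dhcp/dhcpd.conf -- hypothetical reservation; substitute your own values
host rhvm-manager {
  hardware ethernet 52:54:00:ab:cd:ef;
  fixed-address 10.1.1.10;
}
```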
Enter the virtual machine’s networking details. If you specify Static, enter the IP address of the Manager.
Important
The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Manager virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).
Specify whether to add entries for the Manager virtual machine and the base host to the virtual machine’s /etc/hosts file. You must ensure that the host names are resolvable.
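For illustration only, with hypothetical addresses and FQDNs, the added entries might look like this:

```shell
# Append example entries (addresses and FQDNs are placeholders) to the VM's /etc/hosts:
cat >> /etc/hosts <<'EOF'
10.1.1.10   manager.example.com manager
10.1.1.2    host.example.com host
EOF
```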
- Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications.
Enter a password for the admin@internal user to access the Administration Portal.
The script creates the virtual machine. This can take some time if the RHV-M Appliance needs to be installed.
Select the type of storage to use:
For NFS, enter the version, full address and path to the storage, and any mount options.
Warning
Do not use the old self-hosted engine storage domain’s mount point for the new storage domain, as you risk losing virtual machine data.
For iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.
For Gluster storage, enter the full address and path to the storage, and any mount options.
Warning
Do not use the old self-hosted engine storage domain’s mount point for the new storage domain, as you risk losing virtual machine data.
Important
Only replica 3 Gluster storage is supported. Ensure you have the following configuration:
In the /etc/glusterfs/glusterd.vol file on all three Gluster servers, set
option rpc-auth-allow-insecure on
Configure the volume as follows:
gluster volume set _volume_ cluster.quorum-type auto
gluster volume set _volume_ network.ping-timeout 10
gluster volume set _volume_ auth.allow \*
gluster volume set _volume_ group virt
gluster volume set _volume_ storage.owner-uid 36
gluster volume set _volume_ storage.owner-gid 36
gluster volume set _volume_ server.allow-insecure on
- For Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide.
Enter the Manager disk size.
The script continues until the deployment is complete.
The deployment process changes the Manager’s SSH keys. To allow client machines to access the new Manager without SSH errors, remove the original Manager’s entry from the .ssh/known_hosts file on any client machines that accessed the original Manager.
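One way to remove the stale entry on a client is ssh-keygen -R, which deletes matching host lines and saves a backup copy (the FQDN below is a placeholder):

```shell
# Remove the old Manager's host key entry from the client's known_hosts file
# (manager.example.com is a placeholder FQDN; a backup is written as known_hosts.old):
ssh-keygen -R manager.example.com
```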
When the deployment is complete, log in to the new Manager virtual machine and enable the required repositories.