Installing Red Hat Enterprise Virtualization Environments
Legal Notice
Abstract
- Preface
- I. Before you Begin
- 1. Introduction
- 1.1. Red Hat Enterprise Virtualization Architecture
- 1.2. Red Hat Enterprise Virtualization System Components
- 1.3. Red Hat Enterprise Virtualization Resources
- 1.4. Red Hat Enterprise Virtualization API Support Statement
- 1.5. Introduction to Virtual Machines
- 1.6. Supported Virtual Machine Operating Systems
- 1.7. Red Hat Enterprise Virtualization Installation Workflow
- 2. System Requirements
- II. Installing Red Hat Enterprise Virtualization Manager
- 3. Manager Installation
- 3.1. Workflow Progress — Installing Red Hat Enterprise Virtualization Manager
- 3.2. Installing the Red Hat Enterprise Virtualization Manager
- 3.3. Subscribing to the Red Hat Enterprise Virtualization Channels
- 3.4. Installing the Red Hat Enterprise Virtualization Manager Packages
- 3.5. Configuring Red Hat Enterprise Virtualization Manager
- 3.6. Passwords in Red Hat Enterprise Virtualization Manager
- 3.7. Preparing a PostgreSQL Database for Use with Red Hat Enterprise Virtualization Manager
- 3.8. Configuring the Manager to Use a Manually Configured Local or Remote PostgreSQL Database
- 3.9. Connecting to the Administration Portal
- 3.10. Removing Red Hat Enterprise Virtualization Manager
- 4. Self-Hosted Engine
- 5. Data Collection Setup and Reports Installation
- 6. Updating the Red Hat Enterprise Virtualization Environment
- 6.1. Upgrades between Minor Releases
- 6.1.1. Checking for Red Hat Enterprise Virtualization Manager Updates
- 6.1.2. Updating Red Hat Enterprise Virtualization Manager
- 6.1.3. Troubleshooting for Upgrading Red Hat Enterprise Virtualization Manager
- 6.1.4. Updating Red Hat Enterprise Virtualization Manager Reports
- 6.1.5. Updating Red Hat Enterprise Virtualization Hypervisors
- 6.1.6. Updating Red Hat Enterprise Linux Virtualization Hosts
- 6.1.7. Updating the Red Hat Enterprise Virtualization Guest Tools
- 6.2. Upgrading to Red Hat Enterprise Virtualization 3.3
- 6.3. Upgrading to Red Hat Enterprise Virtualization Manager 3.2
- 6.4. Upgrading to Red Hat Enterprise Virtualization Manager 3.1
- 6.5. Post-upgrade Tasks
- III. Installing Virtualization Hosts
- 7. Introduction to Virtualization Hosts
- 8. Installing Red Hat Enterprise Virtualization Hypervisor Hosts
- 8.1. Red Hat Enterprise Virtualization Hypervisor Installation Overview
- 8.2. Installing the Red Hat Enterprise Virtualization Hypervisor Packages
- 8.3. Preparing Hypervisor Installation Media
- 8.4. Installing the Hypervisor
- 8.5. Configuring the Hypervisor
- 8.5.1. Logging into the Hypervisor
- 8.5.2. Selecting Hypervisor Keyboard
- 8.5.3. Viewing Hypervisor Status
- 8.5.4. Configuring Hypervisor Network
- 8.5.5. Configuring Hypervisor Security
- 8.5.6. Configuring Hypervisor Simple Network Management Protocol
- 8.5.7. Configuring Hypervisor Common Information Model
- 8.5.8. Configuring Logging
- 8.5.9. Configuring the Hypervisor for Red Hat Network
- 8.5.10. Configuring Hypervisor Kernel Dumps
- 8.5.11. Configuring Hypervisor Remote Storage
- 8.6. Attaching the Hypervisor to the Red Hat Enterprise Virtualization Manager
- 9. Installing Red Hat Enterprise Linux Hosts
- IV. Environment Configuration
- 10. Planning your Data Center
- 11. Network Setup
- 11.1. Workflow Progress — Network Setup
- 11.2. Networking in Red Hat Enterprise Virtualization
- 11.3. Logical Networks
- 11.3.1. Creating a New Logical Network in a Data Center or Cluster
- 11.3.2. Editing Host Network Interfaces and Adding Logical Networks to Hosts
- 11.3.3. Explanation of Settings and Controls in the General Tab of the New Logical Network and Edit Logical Network Windows
- 11.3.4. Editing a Logical Network
- 11.3.5. Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window
- 11.3.6. Explanation of Settings in the Manage Networks Window
- 11.3.7. Adding Multiple VLANs to a Single Network Interface Using Logical Networks
- 11.3.8. Multiple Gateways
- 11.4. Using the Networks Tab
- 11.5. Bonds
- 12. Storage Setup
- A. Log Files
- B. Additional Utilities
- C. Revision History
1. Document Conventions
1.1. Typographic Conventions
Mono-spaced Bold
To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.
Press Enter to execute the command. Press Ctrl+Alt+F2 to switch to a virtual terminal.
mono-spaced bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, select the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand). To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.
Mono-spaced Bold Italic or Proportional Bold Italic
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com. The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home. To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.
Publican is a DocBook publishing system.
1.2. Pull-quote Conventions
mono-spaced roman and presented thus:
books        Desktop   documentation  drafts  mss    photos   stuff  svn
books_tests  Desktop1  downloads      images  notes  scripts  svgs
mono-spaced roman but add syntax highlighting as follows:
static int kvm_vm_ioctl_deassign_device(struct kvm *kvm,
                struct kvm_assigned_pci_dev *assigned_dev)
{
        int r = 0;
        struct kvm_assigned_dev_kernel *match;

        mutex_lock(&kvm->lock);

        match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head,
                                      assigned_dev->assigned_dev_id);
        if (!match) {
                printk(KERN_INFO "%s: device hasn't been assigned before, "
                       "so cannot be deassigned\n", __func__);
                r = -EINVAL;
                goto out;
        }

        kvm_deassign_device(kvm, match);

        kvm_free_assigned_device(kvm, match);
out:
        mutex_unlock(&kvm->lock);
        return r;
}
1.3. Notes and Warnings
Note
Important
Warning
2. Getting Help and Giving Feedback
2.1. Do You Need Help?
- search or browse through a knowledgebase of technical support articles about Red Hat products.
- submit a support case to Red Hat Global Support Services (GSS).
- access other product documentation.
2.2. We Need Feedback!
Part I. Before you Begin
Table of Contents
- 1. Introduction
- 1.1. Red Hat Enterprise Virtualization Architecture
- 1.2. Red Hat Enterprise Virtualization System Components
- 1.3. Red Hat Enterprise Virtualization Resources
- 1.4. Red Hat Enterprise Virtualization API Support Statement
- 1.5. Introduction to Virtual Machines
- 1.6. Supported Virtual Machine Operating Systems
- 1.7. Red Hat Enterprise Virtualization Installation Workflow
- 2. System Requirements
Chapter 1. Introduction
- 1.1. Red Hat Enterprise Virtualization Architecture
- 1.2. Red Hat Enterprise Virtualization System Components
- 1.3. Red Hat Enterprise Virtualization Resources
- 1.4. Red Hat Enterprise Virtualization API Support Statement
- 1.5. Introduction to Virtual Machines
- 1.6. Supported Virtual Machine Operating Systems
- 1.7. Red Hat Enterprise Virtualization Installation Workflow
1.1. Red Hat Enterprise Virtualization Architecture
- Virtual machine hosts using the Kernel-based Virtual Machine (KVM).
- Agents and tools running on hosts including VDSM, QEMU, and libvirt. These tools provide local management for virtual machines, networks and storage.
- The Red Hat Enterprise Virtualization Manager; a centralized management platform for the Red Hat Enterprise Virtualization environment. It provides a graphical interface where you can view, provision and manage resources.
- Storage domains to hold virtual resources like virtual machines, templates, ISOs.
- A database to track the state of and changes to the environment.
- Access to an external Directory Server to provide users and authentication.
- Networking to link the environment together. This includes physical network links, and logical networks.
1.2. Red Hat Enterprise Virtualization System Components
1.3. Red Hat Enterprise Virtualization Resources
- Data Center - A data center is the highest level container for all physical and logical resources within a managed virtual environment. It is a collection of clusters, virtual machines, storage, and networks.
- Clusters - A cluster is a set of physical hosts that are treated as a resource pool for virtual machines. Hosts in a cluster share the same network infrastructure and storage. They form a migration domain within which virtual machines can be moved from host to host.
- Logical Networks - A logical network is a logical representation of a physical network. Logical networks group network traffic and communication between the Manager, hosts, storage, and virtual machines.
- Hosts - A host is a physical server that runs one or more virtual machines. Hosts are grouped into clusters. Virtual machines can be migrated from one host to another within a cluster.
- Storage Pool - The storage pool is a logical entity that contains a standalone image repository of a certain type, either iSCSI, Fibre Channel, NFS, or POSIX. Each storage pool can contain several domains, for storing virtual machine disk images, ISO images, and for the import and export of virtual machine images.
- Virtual Machines - A virtual machine is a virtual desktop or virtual server containing an operating system and a set of applications. Multiple identical virtual machines can be created in a Pool. Virtual machines are created, managed, or deleted by power users and accessed by users.
- Template - A template is a model virtual machine with predefined settings. A virtual machine that is based on a particular template acquires the settings of the template. Using templates is the quickest way of creating a large number of virtual machines in a single step.
- Virtual Machine Pool - A virtual machine pool is a group of identical virtual machines that are available on demand by each group member. Virtual machine pools can be set up for different purposes. For example, one pool can be for the Marketing department, another for Research and Development, and so on.
- Snapshot - A snapshot is a view of a virtual machine's operating system and all its applications at a point in time. It can be used to save the settings of a virtual machine before an upgrade or installing new applications. In case of problems, a snapshot can be used to restore the virtual machine to its original state.
- User Types - Red Hat Enterprise Virtualization supports multiple levels of administrators and users with distinct levels of permissions. System administrators can manage objects of the physical infrastructure, such as data centers, hosts, and storage. Users access virtual machines available from a virtual machine pool or standalone virtual machines made accessible by an administrator.
- Events and Monitors - Alerts, warnings, and other notices about activities help the administrator to monitor the performance and status of resources.
- Reports - A range of reports either from the reports module based on JasperReports, or from the data warehouse. Preconfigured or ad hoc reports can be generated from the reports module. Users can also generate reports using any query tool that supports SQL from a data warehouse that collects monitoring data for hosts, virtual machines, and storage.
1.4. Red Hat Enterprise Virtualization API Support Statement
Supported Interfaces for Read and Write Access
- Representational State Transfer (REST) API
- The REST API exposed by the Red Hat Enterprise Virtualization Manager is a fully supported interface for interacting with Red Hat Enterprise Virtualization Manager.
- Software Development Kit (SDK)
- The SDK provided by the rhevm-sdk package is a fully supported interface for interacting with Red Hat Enterprise Virtualization Manager.
- Command Line Shell
- The command line shell provided by the rhevm-cli package is a fully supported interface for interacting with the Red Hat Enterprise Virtualization Manager.
- VDSM Hooks
- The creation and use of VDSM hooks to trigger modification of virtual machines based on custom properties specified in the Administration Portal is supported on Red Hat Enterprise Linux virtualization hosts. The use of VDSM Hooks on virtualization hosts running Red Hat Enterprise Virtualization Hypervisor is not currently supported.
Supported Interfaces for Read Access
- Red Hat Enterprise Virtualization Manager History Database
- Read access to the Red Hat Enterprise Virtualization Manager history database using the database views specified in the Administration Guide is supported. Write access is not supported.
- Libvirt on Virtualization Hosts
- Read access to libvirt using the virsh -r command is a supported method of interacting with virtualization hosts. Write access is not supported.
Unsupported Interfaces
- The vdsClient Command
- Use of the vdsClient command to interact with virtualization hosts is not supported unless explicitly requested by Red Hat Support.
- Red Hat Enterprise Virtualization Hypervisor Console
- Console access to Red Hat Enterprise Virtualization Hypervisor outside of the provided text user interface for configuration is not supported unless explicitly requested by Red Hat Support.
- Red Hat Enterprise Virtualization Manager Database
- Direct access to and manipulation of the Red Hat Enterprise Virtualization Manager database is not supported unless explicitly requested by Red Hat Support.
Important
1.5. Introduction to Virtual Machines
1.6. Supported Virtual Machine Operating Systems
Table 1.1. Operating systems that can be used as guest operating systems
| Operating System | Architecture | SPICE support |
|---|---|---|
|
Red Hat Enterprise Linux 3
|
32-bit, 64-bit
|
Yes
|
|
Red Hat Enterprise Linux 4
|
32-bit, 64-bit
|
Yes
|
|
Red Hat Enterprise Linux 5
|
32-bit, 64-bit
|
Yes
|
|
Red Hat Enterprise Linux 6
|
32-bit, 64-bit
|
Yes
|
|
SUSE Linux Enterprise Server 10 (select for the guest type in the user interface)
|
32-bit, 64-bit
|
No
|
|
SUSE Linux Enterprise Server 11 (SPICE drivers (QXL) are not supplied by Red Hat; however, the distribution's vendor may provide SPICE drivers as part of their distribution.)
|
32-bit, 64-bit
|
No
|
|
Ubuntu 12.04 (Precise Pangolin LTS)
|
32-bit, 64-bit
|
Yes
|
|
Ubuntu 12.10 (Quantal Quetzal)
|
32-bit, 64-bit
|
Yes
|
|
Ubuntu 13.04 (Raring Ringtail)
|
32-bit, 64-bit
|
No
|
|
Ubuntu 13.10 (Saucy Salamander)
|
32-bit, 64-bit
|
Yes
|
|
Windows XP Service Pack 3 and newer
|
32-bit
|
Yes
|
|
Windows 7
|
32-bit, 64-bit
|
Yes
|
|
Windows 8
|
32-bit, 64-bit
|
No
|
|
Windows Server 2003 Service Pack 2 and newer
| |
Yes
|
|
Windows Server 2003 R2
| |
Yes
|
|
Windows Server 2008
|
32-bit, 64-bit
|
Yes
|
|
Windows Server 2008 R2
|
64-bit
|
Yes
|
|
Windows Server 2012
|
64-bit
|
No
|
|
Windows Server 2012 R2
|
64-bit
|
No
|
Table 1.2. Guest operating systems that are supported by Global Support Services
| Operating System | Architecture |
|---|---|
|
Red Hat Enterprise Linux 3
|
32-bit, 64-bit
|
|
Red Hat Enterprise Linux 4
|
32-bit, 64-bit
|
|
Red Hat Enterprise Linux 5
|
32-bit, 64-bit
|
|
Red Hat Enterprise Linux 6
|
32-bit, 64-bit
|
|
SUSE Linux Enterprise Server 10 (select for the guest type in the user interface)
|
32-bit, 64-bit
|
|
SUSE Linux Enterprise Server 11 (SPICE drivers (QXL) are not supplied by Red Hat; however, the distribution's vendor may provide SPICE drivers as part of their distribution.)
|
32-bit, 64-bit
|
|
Windows XP Service Pack 3 and newer
|
32-bit
|
|
Windows 7
|
32-bit, 64-bit
|
|
Windows 8
|
32-bit, 64-bit
|
|
Windows Server 2003 Service Pack 2 and newer
| |
|
Windows Server 2003 R2
| |
|
Windows Server 2008
|
32-bit, 64-bit
|
|
Windows Server 2008 R2
|
64-bit
|
|
Windows Server 2012
|
64-bit
|
|
Windows Server 2012 R2
|
64-bit
|
Note
Note
1.7. Red Hat Enterprise Virtualization Installation Workflow
Chapter 2. System Requirements
2.2. Hardware Requirements
2.2.1. Red Hat Enterprise Virtualization Hardware Requirements Overview
- One machine to act as the management server.
- One or more machines to act as virtualization hosts; at least two are required to support migration and power management.
- One or more machines to use as clients for accessing the Administration Portal.
- Storage infrastructure provided by NFS, POSIX, iSCSI, SAN, or local storage.
See Also:
2.2.2. Red Hat Enterprise Virtualization Manager Hardware Requirements
Minimum
- A dual core CPU.
- 4 GB of available system RAM that is not being consumed by existing processes.
- 25 GB of locally accessible, writable disk space.
- 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps.
Recommended
- A quad core CPU or multiple dual core CPUs.
- 16 GB of system RAM.
- 50 GB of locally accessible, writable disk space.
- 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps.
2.2.3. Virtualization Host Hardware Requirements
2.2.3.1. Virtualization Host Hardware Requirements Overview
2.2.3.2. Virtualization Host CPU Requirements
- AMD Opteron G1
- AMD Opteron G2
- AMD Opteron G3
- AMD Opteron G4
- AMD Opteron G5
- Intel Conroe
- Intel Penryn
- Intel Nehalem
- Intel Westmere
- Intel Sandybridge
- Intel Haswell
The No eXecute (NX) flag is also required. To check that your processor supports the required flags, and that they are enabled:
- At the Red Hat Enterprise Linux or Red Hat Enterprise Virtualization Hypervisor boot screen, press any key and select the Boot or Boot with serial console entry from the list. Press Tab to edit the kernel parameters for the selected option. Ensure there is a space after the last kernel parameter listed, and append the rescue parameter.
- Press Enter to boot into rescue mode.
- At the prompt which appears, determine that your processor has the required extensions, and that they are enabled, by running this command:
# grep -E 'svm|vmx' /proc/cpuinfo | grep nx
If any output is shown, the processor is hardware virtualization capable. If no output is shown, it is still possible that your processor supports hardware virtualization; in some circumstances manufacturers disable the virtualization extensions in the BIOS. Where you believe this to be the case, consult the system's BIOS and the motherboard manual provided by the manufacturer.
- As an additional check, verify that the kvm modules are loaded in the kernel:
# lsmod | grep kvm
If the output includes kvm_intel or kvm_amd, then the kvm hardware virtualization modules are loaded and your system meets the requirements.
2.2.3.3. Virtualization Host RAM Requirements
- guest operating system requirements,
- guest application requirements, and
- memory activity and usage of guests.
2.2.3.4. Virtualization Host Storage Requirements
- The root partition requires at least 512 MB of storage.
- The configuration partition requires at least 8 MB of storage.
- The recommended minimum size of the logging partition is 2048 MB.
- The data partition requires at least 256 MB of storage. Use of a smaller data partition may prevent future upgrades of the Hypervisor from the Red Hat Enterprise Virtualization Manager. By default all disk space remaining after allocation of swap space will be allocated to the data partition.
- The swap partition requires at least 8 MB of storage. The recommended size of the swap partition varies depending on both the system the Hypervisor is being installed upon and the anticipated level of overcommit for the environment. Overcommit allows the Red Hat Enterprise Virtualization environment to present more RAM to guests than is actually physically present. The default overcommit ratio is 0.5.
The recommended size of the swap partition can be determined by multiplying the amount of system RAM by the expected overcommit ratio, and adding:
- 2 GB of swap space for systems with 4 GB of RAM or less, or
- 4 GB of swap space for systems with between 4 GB and 16 GB of RAM, or
- 8 GB of swap space for systems with between 16 GB and 64 GB of RAM, or
- 16 GB of swap space for systems with between 64 GB and 256 GB of RAM.
Example 2.1. Calculating Swap Partition Size
For a system with 8 GB of RAM, this means the formula for determining the amount of swap space to allocate is:
(8 GB x 0.5) + 4 GB = 8 GB
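The sizing rules above can be sketched as a small shell calculation. This is an illustrative script, not part of the product; the variable names are invented for the example, and it assumes the default overcommit ratio of 0.5 used throughout this guide (written as an integer division by 2 for POSIX shell arithmetic):

```shell
#!/bin/sh
# Recommended swap (GB) = RAM x overcommit ratio + a fixed amount per RAM tier.
# Assumes the default overcommit ratio of 0.5.
ram_gb=8

# Pick the fixed addition from the RAM tiers listed above.
if [ "$ram_gb" -le 4 ]; then
    extra=2
elif [ "$ram_gb" -le 16 ]; then
    extra=4
elif [ "$ram_gb" -le 64 ]; then
    extra=8
else
    extra=16
fi

swap_gb=$(( ram_gb / 2 + extra ))
echo "Recommended swap for ${ram_gb} GB RAM: ${swap_gb} GB"
```

For an 8 GB system this reproduces the 8 GB result of Example 2.1.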
Important
An overcommit ratio of 0.5 is used for this calculation. For some systems, the result of this calculation may be a swap partition that requires more free disk space than is available at installation. Where this is the case, Hypervisor installation will fail.
The size of the swap partition can be set manually using the storage_vol boot parameter.
Example 2.2. Manually Setting Swap Partition Size
In this example, the storage_vol boot parameter is used to set a swap partition size of 4096 MB. Note that no sizes are specified for the other partitions, allowing the Hypervisor to use the default sizes.
storage_vol=:4096::::
Important
Installation on fakeraid devices is not supported. Where a fakeraid device is present, it must be reconfigured such that it no longer runs in RAID mode.
- Access the RAID controller's BIOS and remove all logical drives from it.
- Change controller mode to be non-RAID. This may be referred to as compatibility or JBOD mode.
2.2.3.5. Virtualization Host PCI Device Requirements
2.3. Software Requirements
2.3.1. Red Hat Enterprise Virtualization Operating System Requirements
- Red Hat Enterprise Virtualization Manager requires Red Hat Enterprise Linux 6.5 Server. Complete successful installation of the operating system prior to commencing installation of the Red Hat Enterprise Virtualization Manager.
Important
See the Red Hat Enterprise Linux 6 Security Guide for security hardening information for your Red Hat Enterprise Linux servers.
- Virtualization hosts must run either:
- Red Hat Enterprise Virtualization Hypervisor 6.5
- Red Hat Enterprise Linux 6.5
Important
2.3.2. Red Hat Enterprise Virtualization Manager Client Requirements
- Mozilla Firefox 17 or later on Red Hat Enterprise Linux is required to access both portals.
- Internet Explorer 8 or later on Microsoft Windows is required to access the User Portal. Use the desktop version, not the touchscreen version, of Internet Explorer 10.
- Internet Explorer 9 or later on Microsoft Windows is required to access the Administration Portal. Use the desktop version, not the touchscreen version, of Internet Explorer 10.
- Red Hat Enterprise Linux 5.8+ (i386, AMD64 and Intel 64)
- Red Hat Enterprise Linux 6.2+ (i386, AMD64 and Intel 64)
- Red Hat Enterprise Linux 6.5+ (i386, AMD64 and Intel 64)
- Windows XP
- Windows XP Embedded (XPe)
- Windows 7 (x86, AMD64 and Intel 64)
- Windows 8 (x86, AMD64 and Intel 64)
- Windows Embedded Standard 7
- Windows 2008/R2 (x86, AMD64 and Intel 64)
- Windows Embedded Standard 2009
- Red Hat Enterprise Virtualization Certified Linux-based thin clients
Note
yum.
2.3.3. Red Hat Enterprise Virtualization Manager Software Channels
Note
Certificate-based Red Hat Network
- The Red Hat Enterprise Linux Server entitlement provides Red Hat Enterprise Linux.
- The Red Hat Enterprise Virtualization entitlement provides Red Hat Enterprise Virtualization Manager.
- The Red Hat JBoss Enterprise Application Platform entitlement provides the supported release of the application platform on which the Manager runs.
Red Hat Network Classic
- The Red Hat Enterprise Linux Server (v. 6 for 64-bit x86_64) channel, also referred to as rhel-x86_64-server-6, provides Red Hat Enterprise Linux 6 Server. The Channel Entitlement Name for this channel is Red Hat Enterprise Linux Server (v. 6).
- The RHEL Server Supplementary (v. 6 64-bit x86_64) channel, also referred to as rhel-x86_64-server-supplementary-6, provides the virtio-win package. The virtio-win package provides the Windows VirtIO drivers for use in virtual machines. The Channel Entitlement Name for the supplementary channel is Red Hat Enterprise Linux Server Supplementary (v. 6).
- The Red Hat Enterprise Virtualization Manager (v3.3 x86_64) channel, also referred to as rhel-x86_64-server-6-rhevm-3.3, provides Red Hat Enterprise Virtualization Manager. The Channel Entitlement Name for this channel is Red Hat Enterprise Virtualization Manager (v3).
- The Red Hat JBoss EAP (v 6) for 6Server x86_64 channel, also referred to as jbappplatform-6-x86_64-server-6-rpm, provides the supported release of the application platform on which the Manager runs. The Channel Entitlement Name for this channel is Red Hat JBoss Enterprise Application Platform (v 4, zip format).
2.3.4. Directory Services
2.3.4.1. About Directory Services
2.3.4.2. Directory Services Support in Red Hat Enterprise Virtualization
A default administrative user, admin, is created during installation. This account is intended for use when initially configuring the environment, and for troubleshooting. To add other users to Red Hat Enterprise Virtualization you will need to attach a directory server to the Manager using the Domain Management Tool, engine-manage-domains.
Users from an attached directory server log in using the format user@domain. Attachment of more than one directory server to the Manager is also supported.
- Active Directory
- Identity Management (IdM)
- Red Hat Directory Server 9 (RHDS 9)
- OpenLDAP
- A valid pointer record (PTR) for the directory server's reverse look-up address.
- A valid service record (SRV) for LDAP over TCP port 389.
- A valid service record (SRV) for Kerberos over TCP port 88.
- A valid service record (SRV) for Kerberos over UDP port 88.
These records must exist before the directory server can be attached to the Manager using engine-manage-domains.
- Active Directory - http://technet.microsoft.com/en-us/windowsserver/dd448614.
- Identity Management (IdM) - http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/index.html
- Red Hat Directory Server (RHDS) - http://docs.redhat.com/docs/en-US/Red_Hat_Directory_Server/index.html
- OpenLDAP - http://www.openldap.org/doc/
Important
Important
Important
If sysprep is to be used in the creation of Templates and Virtual Machines, then the Red Hat Enterprise Virtualization administrative user must be delegated control over the Domain to:
- Join a computer to the domain
- Modify the membership of a group
Note
- Configure the memberOf plug-in for RHDS to allow group membership. In particular, ensure that the value of the memberofgroupattr attribute of the memberOf plug-in is set to uniqueMember. In OpenLDAP, the memberOf functionality is not called a "plugin"; it is called an "overlay" and requires no configuration after installation. Consult the Red Hat Directory Server 9.0 Plug-in Guide for more information on configuring the memberOf plug-in.
- Define the directory server as a service of the form ldap/hostname@REALMNAME in the Kerberos realm. Replace hostname with the fully qualified domain name associated with the directory server, and REALMNAME with the fully qualified Kerberos realm name. The Kerberos realm name must be specified in capital letters.
- Generate a keytab file for the directory server in the Kerberos realm. The keytab file contains pairs of Kerberos principals and their associated encrypted keys. These keys allow the directory server to authenticate itself with the Kerberos realm. Consult the documentation for your Kerberos implementation for more information on generating a keytab file.
- Install the keytab file on the directory server. Then configure RHDS to recognize the keytab file and accept Kerberos authentication using GSSAPI. Consult the Red Hat Directory Server 9.0 Administration Guide for more information on configuring RHDS to use an external keytab file.
- Test the configuration on the directory server by using the kinit command to authenticate as a user defined in the Kerberos realm. Once authenticated, run the ldapsearch command against the directory server. Use the -Y GSSAPI parameter to ensure the use of Kerberos for authentication.
See Also:
2.3.5. Firewall Configuration
2.3.5.1. Red Hat Enterprise Virtualization Manager Firewall Requirements
The engine-setup script is able to configure the firewall automatically, but this will overwrite any pre-existing firewall configuration.
The engine-setup command saves a list of the required iptables rules in the /usr/share/ovirt-engine/conf/iptables.example file.
Clients connecting to the Administration and User Portals require access to the Manager on the HTTP and HTTPS ports (80 and 443) listed here.
Table 2.1. Red Hat Enterprise Virtualization Manager Firewall Requirements
| Port(s) | Protocol | Source | Destination | Purpose |
|---|---|---|---|---|
| - | ICMP |
|
| When registering to the Red Hat Enterprise Virtualization Manager, virtualization hosts send an ICMP ping request to the Manager to confirm that it is online. |
| 22 | TCP |
|
| SSH (optional) |
| 80, 443 | TCP |
|
|
Provides HTTP and HTTPS access to the Manager.
|
Important
NFSv4
- TCP port 2049 for NFS.
NFSv3
- TCP and UDP port 2049 for NFS.
- TCP and UDP port 111 (rpcbind/sunrpc).
- TCP and UDP port specified with MOUNTD_PORT="port".
- TCP and UDP port specified with STATD_PORT="port".
- TCP port specified with LOCKD_TCPPORT="port".
- UDP port specified with LOCKD_UDPPORT="port".
The MOUNTD_PORT, STATD_PORT, LOCKD_TCPPORT, and LOCKD_UDPPORT ports are configured in the /etc/sysconfig/nfs file.
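For example, the NFSv3 helper daemons can be pinned to fixed ports so that firewall rules can reference them. The port numbers below are illustrative only, not mandated by the product; choose unused ports on your storage server:

```shell
# /etc/sysconfig/nfs -- pin NFSv3 helper daemons to fixed ports
# (port numbers are examples; pick any unused ports on your server)
MOUNTD_PORT="892"
STATD_PORT="662"
LOCKD_TCPPORT="32803"
LOCKD_UDPPORT="32769"
```

After editing this file, restart the NFS services and open the chosen ports in the firewall.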
2.3.5.2. Virtualization Host Firewall Requirements
Table 2.2. Virtualization Host Firewall Requirements
| Port(s) | Protocol | Source | Destination | Purpose |
|---|---|---|---|---|
| 22 | TCP |
|
| Secure Shell (SSH) access. |
| 5900 - 6411 | TCP |
|
|
Remote guest console access via VNC and SPICE. These ports must be open to facilitate client access to virtual machines.
|
| 5989 | TCP, UDP |
|
|
Used by Common Information Model Object Managers (CIMOM) to monitor virtual machines running on the virtualization host. If you wish to use a CIMOM to monitor the virtual machines in your virtualization environment then you must ensure that this port is open.
|
| 16514 | TCP |
|
|
Virtual machine migration using
libvirt.
|
| 49152 - 49216 | TCP |
|
|
Virtual machine migration and fencing using VDSM. These ports must be open to facilitate both automated and manually initiated migration of virtual machines.
|
| 54321 | TCP |
|
|
VDSM communications with the Manager and other virtualization hosts.
|
Example 2.3. Option Name: IPTablesConfig
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
# vdsm
-A INPUT -p tcp --dport 54321 -j ACCEPT
# libvirt tls
-A INPUT -p tcp --dport 16514 -j ACCEPT
# SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
# guest consoles
-A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
# migration
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
# snmp
-A INPUT -p udp --dport 161 -j ACCEPT
# Reject any other input traffic
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited
COMMIT
2.3.5.3. Directory Server Firewall Requirements
Table 2.3. Host Firewall Requirements
| Port(s) | Protocol | Source | Destination | Purpose |
|---|---|---|---|---|
| 88, 464 | TCP, UDP |
|
| Kerberos authentication. |
| 389, 636 | TCP |
|
| Lightweight Directory Access Protocol (LDAP) and LDAP over SSL. |
2.3.5.4. Database Server Firewall Requirements
Table 2.4. Host Firewall Requirements
| Port(s) | Protocol | Source | Destination | Purpose |
|---|---|---|---|---|
| 5432 | TCP, UDP |
|
| Default port for PostgreSQL database connections. |
2.3.6. Required User Accounts and Groups
2.3.6.1. Red Hat Enterprise Virtualization Manager User Accounts
- The vdsm user (UID 36). Required for support tools that mount and access NFS storage domains.
- The ovirt user (UID 108). Owner of the ovirt-engine Red Hat JBoss Enterprise Application Platform instance.
2.3.6.2. Red Hat Enterprise Virtualization Manager Groups
- The kvm group (GID 36). Group members include:
  - The vdsm user.
- The ovirt group (GID 108). Group members include:
  - The ovirt user.
2.3.6.3. Virtualization Host User Accounts
- The vdsm user (UID 36).
- The qemu user (UID 107).
- The sanlock user (UID 179).
- The admin user (UID 500). This admin user is not created on Red Hat Enterprise Linux virtualization hosts. The admin user is created with the required permissions to run commands as the root user using the sudo command. The vdsm user, which is present on both types of virtualization host, is also given access to the sudo command.
Important
The vdsm user is fixed to a UID of 36, and the kvm group is fixed to a GID of 36. If UID 36 or GID 36 is already used by another account on the system, a conflict will arise during installation of the vdsm and qemu-kvm-rhev packages.
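Before installing the vdsm and qemu-kvm-rhev packages, you can check for such a conflict with getent. The following is a minimal pre-flight sketch; the check_id helper is not part of the product and is shown for illustration only.

```shell
# Report whether UID 36 and GID 36 are free or already taken.
# If either is taken by an unrelated account, package installation
# will fail with a conflict.
check_id() {  # usage: check_id passwd|group <numeric id>
    owner=$(getent "$1" "$2" | cut -d: -f1)
    if [ -n "$owner" ]; then
        echo "$1 id $2 in use by: $owner"
    else
        echo "$1 id $2 free"
    fi
}
check_id passwd 36
check_id group 36
```

On a host where the packages are already installed, the IDs are expected to be reported as in use by vdsm and kvm respectively.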
2.3.6.4. Virtualization Host Groups
- The kvm group (GID 36). Group members include:
  - The qemu user.
  - The sanlock user.
- The qemu group (GID 107). Group members include:
  - The vdsm user.
  - The sanlock user.
Important
The vdsm user is fixed to a UID of 36, and the kvm group is fixed to a GID of 36. If UID 36 or GID 36 is already used by another account on the system, a conflict will arise during installation of the vdsm and qemu-kvm-rhev packages.
Part II. Installing Red Hat Enterprise Virtualization Manager
Table of Contents
- 3. Manager Installation
- 3.1. Workflow Progress — Installing Red Hat Enterprise Virtualization Manager
- 3.2. Installing the Red Hat Enterprise Virtualization Manager
- 3.3. Subscribing to the Red Hat Enterprise Virtualization Channels
- 3.4. Installing the Red Hat Enterprise Virtualization Manager Packages
- 3.5. Configuring Red Hat Enterprise Virtualization Manager
- 3.6. Passwords in Red Hat Enterprise Virtualization Manager
- 3.7. Preparing a PostgreSQL Database for Use with Red Hat Enterprise Virtualization Manager
- 3.8. Configuring the Manager to Use a Manually Configured Local or Remote PostgreSQL Database
- 3.9. Connecting to the Administration Portal
- 3.10. Removing Red Hat Enterprise Virtualization Manager
- 4. Self-Hosted Engine
- 5. Data Collection Setup and Reports Installation
- 6. Updating the Red Hat Enterprise Virtualization Environment
- 6.1. Upgrades between Minor Releases
- 6.1.1. Checking for Red Hat Enterprise Virtualization Manager Updates
- 6.1.2. Updating Red Hat Enterprise Virtualization Manager
- 6.1.3. Troubleshooting for Upgrading Red Hat Enterprise Virtualization Manager
- 6.1.4. Updating Red Hat Enterprise Virtualization Manager Reports
- 6.1.5. Updating Red Hat Enterprise Virtualization Hypervisors
- 6.1.6. Updating Red Hat Enterprise Linux Virtualization Hosts
- 6.1.7. Updating the Red Hat Enterprise Virtualization Guest Tools
- 6.2. Upgrading to Red Hat Enterprise Virtualization 3.3
- 6.3. Upgrading to Red Hat Enterprise Virtualization Manager 3.2
- 6.4. Upgrading to Red Hat Enterprise Virtualization Manager 3.1
- 6.5. Post-upgrade Tasks
Chapter 3. Manager Installation
3.2. Installing the Red Hat Enterprise Virtualization Manager
Important
Before running the installation, make sure you have the following information ready:
- The ports to be used for HTTP and HTTPS communication. The default ports are 80 and 443 respectively.
- The fully qualified domain name (FQDN) of the system on which the Manager is to be installed.
- The password you will use to secure the Red Hat Enterprise Virtualization administration account.
- The location of the database server to be used. You can use the setup script to install and configure a local database server, or use an existing remote database server. To use a remote database server, you will need to know:
  - The host name of the system on which the remote database server exists.
  - The port on which the remote database server is listening.
  - That the uuid-ossp extension has been loaded by the remote database server.
  You must also know the user name and password of a user that is known to the remote database server. The user must have permission to create databases in PostgreSQL.
- The organization name to use when creating the Manager's security certificates.
- The storage type to be used for the initial data center attached to the Manager. The default is NFS.
- The path to use for the ISO share, if the Manager is being configured to provide one. The display name, which will be used to label the domain in the Red Hat Enterprise Virtualization Manager, also needs to be provided.
- The firewall rules, if any, present on the system that need to be integrated with the rules required for the Manager to function.
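Because setup depends on working forward and reverse DNS for the Manager's fully qualified domain name, a quick pre-installation check can be sketched as follows. The FQDN shown is a placeholder; substitute your own.

```shell
# Verify that the Manager's FQDN has forward and reverse lookup records.
FQDN=manager.example.com   # placeholder: substitute your Manager's FQDN
ip=$(getent hosts "$FQDN" | awk '{print $1; exit}')
if [ -z "$ip" ]; then
    echo "no forward lookup for $FQDN"
else
    name=$(getent hosts "$ip" | awk '{print $2; exit}')
    echo "$FQDN -> $ip -> ${name:-no reverse record}"
fi
```

If either lookup fails, correct your DNS records before running engine-setup.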
Example 3.1. Completed Installation
--== CONFIGURATION PREVIEW ==--
Database name : engine
Database secured connection : False
Database host : localhost
Database user name : engine
Database host name validation : False
Database port : 5432
NFS setup : True
PKI organization : demo.redhat.com
Application mode : both
Firewall manager : iptables
Update Firewall : True
Configure WebSocket Proxy : True
Host FQDN : rhevm33.demo.redhat.com
NFS mount point : /var/lib/exports/iso
Datacenter storage type : nfs
Configure local database : True
Set application as default page : True
Configure Apache SSL : True
Please confirm installation settings (OK, Cancel) [OK]:
Note
You can run engine-setup with an answer file. An answer file contains answers to the questions asked by the setup command.
- To create an answer file, use the --generate-answer parameter to specify a path and file name with which to create the answer file. When this option is specified, the engine-setup command records your answers to the questions in the setup process to the answer file.
  # engine-setup --generate-answer=ANSWER_FILE
- To use an answer file for a new installation, use the --config-append parameter to specify the path and file name of the answer file to be used. The engine-setup command will use the answers stored in the file to complete the installation.
  # engine-setup --config-append=ANSWER_FILE
Run engine-setup --help for a full list of parameters.
3.3. Subscribing to the Red Hat Enterprise Virtualization Channels
3.3.1. Subscribing to the Red Hat Enterprise Virtualization Manager Channels using Subscription Manager
Procedure 3.1. Subscribing to the Red Hat Enterprise Virtualization Manager Channels using Subscription Manager
Register the System with Subscription Manager
Run the subscription-manager register command to register the system with Red Hat Network. To complete registration successfully, you will need to supply your Red Hat Network user name and password when prompted.
# subscription-manager register
Identify Available Entitlement Pools
To subscribe the system to Red Hat Enterprise Virtualization, you must locate the identifiers for the relevant entitlement pools. Use the list action of the subscription-manager command to find these.
To identify available subscription pools for Red Hat Enterprise Linux Server, use the command:
# subscription-manager list --available | grep -A8 "Red Hat Enterprise Linux Server"
To identify available subscription pools for Red Hat Enterprise Virtualization, use the command:
# subscription-manager list --available | grep -A8 "Red Hat Enterprise Virtualization"
Attach Entitlement Pools to the System
Using the pool identifiers located in the previous step, attach the Red Hat Enterprise Linux Server and Red Hat Enterprise Virtualization entitlements to the system. To do so, use the attach parameter of the subscription-manager command, replacing [POOLID] with each of the pool identifiers:
# subscription-manager attach --pool=[POOLID]
Enable the Red Hat Enterprise Virtualization Manager 3.3 Repository
Attaching a Red Hat Enterprise Virtualization entitlement pool also subscribes the system to the Red Hat Enterprise Virtualization Manager 3.3 software repository. By default, this software repository is available but disabled. It must be enabled using the yum-config-manager command:
# yum-config-manager --enable rhel-6-server-rhevm-3.3-rpms
Enable the Supplementary Repository
Attaching a Red Hat Enterprise Linux Server entitlement pool also subscribes the system to the supplementary software repository. By default, this software repository is available but disabled. It must be enabled using the yum-config-manager command:
# yum-config-manager --enable rhel-6-server-supplementary-rpms
Enable the Red Hat JBoss Enterprise Application Platform Repository
The JBoss Enterprise Application Platform channels required for Red Hat Enterprise Virtualization are included in the Red Hat Enterprise Virtualization subscription. However, the repository that contains these channels is disabled by default, and must be enabled using the yum-config-manager command:
# yum-config-manager --enable jb-eap-6-for-rhel-6-server-rpms
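The Pool ID values required by subscription-manager attach can be extracted from the list output with awk. The following sketch runs against illustrative sample output; the Pool ID shown is made up, and on a registered system you would pipe the real subscription-manager output instead.

```shell
# Extract the Pool ID field from subscription-manager list output.
# The sample text below is illustrative, not real entitlement data.
sample='Subscription Name: Red Hat Enterprise Virtualization
Pool ID:           8a85f9823e3d5e43013e3ddd4e2a0977
Available:         2'
pool=$(printf '%s\n' "$sample" | awk -F': *' '/^Pool ID/ {print $2}')
echo "$pool"
# On a registered system you would then run:
#   subscription-manager attach --pool="$pool"
```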
3.3.2. Subscribing to the Red Hat Enterprise Virtualization Manager Channels Using RHN Classic
Note
Procedure 3.2. Subscribing to the Red Hat Enterprise Virtualization Manager Channels using RHN Classic
- Run the rhn_register command to register the system with Red Hat Network. To complete registration successfully, you will need to supply your Red Hat Network user name and password. Follow the onscreen prompts to complete registration of the system.
  # rhn_register
Subscribe to Required Channels
You must subscribe the system to the required channels using either the web interface to Red Hat Network or the rhn-channel command on the command line.
Using the rhn-channel Command
Run the rhn-channel command to subscribe the system to each of the required channels. The commands which need to be run are:
# rhn-channel --add --channel=rhel-x86_64-server-6
# rhn-channel --add --channel=rhel-x86_64-server-supplementary-6
# rhn-channel --add --channel=rhel-x86_64-server-6-rhevm-3.3
# rhn-channel --add --channel=jbappplatform-6-x86_64-server-6-rpm
Important
If you are not the administrator for the machine as defined in Red Hat Network, or the machine is not registered to Red Hat Network, use of the rhn-channel command will result in an error:
Error communicating with server. The message was:
Error Class Code: 37
Error Class Info: You are not allowed to perform administrative tasks on this system.
Explanation: An error has occurred while processing your request. If this problem persists please enter a bug report at bugzilla.redhat.com. If you choose to submit the bug report, please be sure to include details of what you were trying to do when this error occurred and details on how to reproduce this problem.
If you encounter this error when using rhn-channel, you must use the web user interface to add the channel to the system.
Using the Web Interface to Red Hat Network
To add a channel subscription to a system from the web interface:
- Log on to Red Hat Network (http://rhn.redhat.com).
- Move the mouse cursor over the Subscriptions link at the top of the screen, and then click the Registered Systems link in the menu that appears.
- Select the system to which you are adding channels from the list presented on the screen, by clicking the name of the system.
- Click Alter Channel Subscriptions in the Subscribed Channels section of the screen.
- Select the channels to be added from the list presented on the screen. Red Hat Enterprise Virtualization Manager requires:
- The Red Hat Enterprise Linux Server (v. 6 for 64-bit x86_64) channel. This channel is located under the Release Channels for Red Hat Enterprise Linux 6 for x86_64 expandable menu.
- The RHEL Server Supplementary (v. 6 64-bit x86_64) channel. This channel is located under the Release Channels for Red Hat Enterprise Linux 6 for x86_64 expandable menu.
- The Red Hat Enterprise Virtualization Manager (v.3.3 x86_64) channel. This channel is located under the Additional Services Channels for Red Hat Enterprise Linux 6 for x86_64 expandable menu.
- The Red Hat JBoss EAP (v 6) for 6Server x86_64 channel. This channel is located under the Additional Services Channels for Red Hat Enterprise Linux 6 for x86_64 expandable menu.
- Click the Change Subscription button to finalize the change.
3.4. Installing the Red Hat Enterprise Virtualization Manager Packages
Procedure 3.3. Installing the Red Hat Enterprise Virtualization Manager Packages
- Use yum to ensure that the most up-to-date versions of all installed packages are in use.
  # yum upgrade
- Use yum to initiate installation of the rhevm package and all dependencies. You must run this command as the root user.
  # yum install rhevm
  Note
Installing the rhevm package also installs all packages which it depends on. This includes the java-1.7.0-openjdk package. The java-1.7.0-openjdk package provides the OpenJDK Java Virtual Machine (JVM) required to run Red Hat Enterprise Virtualization Manager. - The rhevm package includes the rhevm-doc package as a dependency. The rhevm-doc package provides a local copy of the Red Hat Enterprise Virtualization documentation suite. This documentation is also used to provide context sensitive help links from the Administration and User Portals.As localized versions of this package become available they will be released to Red Hat Network. Follow these steps to find and install any available localized Red Hat Enterprise Virtualization documentation packages that you require:
- Use the yum command to search for translated Red Hat Enterprise Virtualization Manager documentation packages:
  # yum search rhevm-doc
- While logged in as the root user, use the yum command to install translated Red Hat Enterprise Virtualization Manager documentation packages. Here, the Japanese (ja-JP) version of the package is installed:
  # yum install rhevm-doc-ja-JP
See Also:
3.5. Configuring Red Hat Enterprise Virtualization Manager
The engine-setup script is provided to assist with this task. This script asks you a series of questions, and configures your environment based on your answers. After the required values have been provided, the updated configuration is applied and the Red Hat Enterprise Virtualization Manager services are started.
The engine-setup script guides you through several distinct configuration stages, each comprising several steps that require user input. At each step, suggested configuration defaults are provided in square brackets. When these default values are acceptable for a given step, press the Enter key to accept them and proceed to the next step or stage.
Procedure 3.4. Manager Configuration Overview
Packages Check
The engine-setup script checks to see if it is performing an upgrade or an installation, and whether any updates are available for the packages linked to the Manager. No user input is required at this stage.
[ INFO ] Checking for product updates...
[ INFO ] No product updates found
Network Configuration
A reverse lookup is performed on your host name, which is automatically detected. You can correct the auto-detected host name if it is incorrect, or if you are using virtual hosts. Your fully qualified domain name should have both forward and reverse lookup records in DNS, especially if you will also install the reports server.
Host fully qualified DNS name of this server [autodetected host name]:
The engine-setup script checks your firewall configuration, and offers to modify it for you to open the ports used by the Manager for external communications (for example, TCP ports 80 and 443). If you do not allow the engine-setup script to modify your iptables configuration, you must manually open the ports used by the Red Hat Enterprise Virtualization Manager (see Red Hat Enterprise Virtualization Manager Firewall Requirements).
iptables was detected on your computer. Do you wish Setup to configure it? (yes, no) [yes]:
Database Configuration
You can use either a local or remote PostgreSQL database. The engine-setup script can configure your database completely automatically, including adding a user and a database, or use values that you supply.
Where is the database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
OVirt Engine Configuration
Set a password for the automatically created administrative user of the Red Hat Enterprise Virtualization Manager: admin@internal.
Engine admin password:
Confirm engine admin password:
Select Gluster, Virtualization, or Both. Both gives the greatest flexibility.
Data center (Both, Virt, Gluster) [Both]:
Choose the initial data center storage type. You can have many data centers in your environment, each with a different type of storage. Here, you are choosing the storage type of your first data center.
Default storage type: (NFS, FC, ISCSI, POSIXFS) [NFS]:
PKI Configuration
The Manager uses certificates to communicate securely with its hosts. You provide the organization name for the certificate. This certificate can also optionally be used to secure HTTPS communications with the Manager.
Organization name for certificate [autodetected domain-based name]:
Apache Configuration
The Red Hat Enterprise Virtualization Manager uses the Apache web server to present a landing page to users. The engine-setup script can make the landing page of the Manager the default page presented by Apache.
Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
Do you wish to set the application as the default web page of the server? (Yes, No) [Yes]:
By default, external SSL (HTTPS) communications with the Manager are secured with the self-signed certificate created in the PKI configuration stage. Another certificate may be chosen for external HTTPS connections, without affecting how the Manager communicates with hosts.
Setup can configure apache to use SSL using a certificate issued from the internal CA
Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
System Configuration
The engine-setup script can create an NFS export on the Manager to use as an ISO storage domain. Hosting the ISO domain locally on the Manager simplifies keeping some elements of your environment up to date.
Configure an NFS share on this server to be used as an ISO Domain? (Yes, No) [Yes]:
Local ISO domain path [/var/lib/exports/iso]:
Local ISO domain name [ISO_DOMAIN]:
Websocket Proxy Server Configuration
The engine-setup script can optionally configure a websocket proxy server to allow users to connect to virtual machines through the noVNC or HTML5 consoles.
Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:
End of Configuration
The engine-setup script validates all of your answers, and warns you of any possible problems with them. User input is only required if some of the answers you provided may adversely impact your environment.
--== END OF CONFIGURATION ==--
Would you like transactions from the Red Hat Access Plugin sent from the RHEV Manager to be brokered through a proxy server? (Yes, No) [No]:
[ INFO ] Stage: Setup validation
Preview and Summary
During the preview phase, the engine-setup script shows you the configuration values you have entered, and gives you the opportunity to change your mind. If you choose to proceed, engine-setup configures your Red Hat Enterprise Virtualization Manager installation based on the answers you provided in the configuration stages.
--== CONFIGURATION PREVIEW ==--
Database name : engine
Database secured connection : False
Database host : localhost
Database user name : engine
Database host name validation : False
Database port : 5432
NFS setup : True
PKI organization : Your Org
NFS mount point : /var/lib/exports/iso
Application mode : both
Firewall manager : iptables
Configure WebSocket Proxy : True
Host FQDN : Your Manager's FQDN
Datacenter storage type : nfs
Configure local database : True
Set application as default page : True
Configure Apache SSL : True
Please confirm installation settings (OK, Cancel) [OK]:
When your environment is configured, the engine-setup script provides some details about accessing your environment and its security details.
A default ISO NFS share has been created on this host.
If IP based access restrictions are required, edit:
entry /var/lib/exports/iso in /etc/exports
SSH fingerprint: 87:af:b5:fe:7a:e5:1b:64:83:57:02:07:62:eb:8c:18
Internal CA SHA1 Fingerprint=7B:DF:2A:EE:18:C8:B1:CC:F7:6B:59:42:A3:96:BC:44:32:98:FF:A6
Web access is enabled at:
http://manager.fqdn:80/ovirt-engine
https://manager.fqdn:443/ovirt-engine
Please use the user "admin" and password specified in order to login into oVirt Engine
Clean up and Termination
The engine-setup script cleans up unnecessary files created during the configuration process, and outputs the location of the log file for the Red Hat Enterprise Virtualization Manager configuration process.
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-installation-date.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of setup completed successfully
When the engine-setup script completes successfully, the Red Hat Enterprise Virtualization Manager is configured and running on your server. You can log in as the admin@internal user to continue configuring the Manager by adding clusters, hosts, and more.
Additional directory service domains can be attached to the Manager using the engine-manage-domains command.
The engine-setup script also saves the answers you gave during configuration to a file, to help with disaster recovery.
See Also:
3.6. Passwords in Red Hat Enterprise Virtualization Manager
During configuration, you set the password for the admin@internal account. To change the admin@internal password, run engine-config on the Red Hat Enterprise Virtualization Manager.
Automated installations can be performed using engine-setup --answer-file=/[path_to_answer_file], with the temporary password specified in the answer file. Answer files are generated with the engine-setup --generate-answer=file command. The format of the answer file is as follows:
#action=setup
[environment:default]
OVESETUP_CORE/engineStop=none:None
OVESETUP_DIALOG/confirmSettings=bool:True
OVESETUP_DB/database=str:engine
OVESETUP_DB/fixDbViolations=none:None
OVESETUP_DB/secured=bool:False
OVESETUP_DB/host=str:localhost
OVESETUP_DB/user=str:engine
OVESETUP_DB/securedHostValidation=bool:False
OVESETUP_DB/password=str:0056jKkY
OVESETUP_DB/port=int:5432
OVESETUP_SYSTEM/nfsConfigEnabled=bool:True
...
OVESETUP_APACHE/configureSsl=bool:True
OSETUP_RPMDISTRO/requireRollback=none:None
OSETUP_RPMDISTRO/enableUpgrade=none:None
OVESETUP_AIO/configure=none:None
OVESETUP_AIO/storageDomainDir=none:None
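Because values such as OVESETUP_DB/password are stored in plain text, you may want to rotate them in a saved answer file before reusing it. The following sed sketch works on a copy of the file; the file contents, paths, and the new password are placeholders for illustration.

```shell
# Replace the stored OVESETUP_DB/password value in a copy of an
# answer file. The file contents and password here are placeholders.
src=$(mktemp)
cat > "$src" <<'EOF'
[environment:default]
OVESETUP_DB/password=str:0056jKkY
OVESETUP_DB/port=int:5432
EOF
dst=$(mktemp)
sed 's|^OVESETUP_DB/password=str:.*|OVESETUP_DB/password=str:NewPassw0rd|' "$src" > "$dst"
grep '^OVESETUP_DB/password' "$dst"
```

The edited copy can then be supplied to engine-setup in place of the original answer file.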
3.7. Preparing a PostgreSQL Database for Use with Red Hat Enterprise Virtualization Manager
This procedure is only required if you intend to use a manually configured database instead of one configured automatically by the engine-setup command.
Procedure 3.5. Preparing a PostgreSQL Database for use with Red Hat Enterprise Virtualization Manager
- Run the following commands to initialize the postgresql database, start the postgresql service, and ensure this service starts on boot:
  # service postgresql initdb
  # service postgresql start
  # chkconfig postgresql on
- Create a user for the Red Hat Enterprise Virtualization Manager to use when it writes to and reads from the database, and a database in which to store data about your environment. This step is required for both local and remote databases. Use the psql terminal as the postgres user.
  # su - postgres
  $ psql
  postgres=# create role [user name] with login encrypted password '[password]';
  postgres=# create database [database name] owner [user name] template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';
- Run the following commands to connect to the new database and add the plpgsql language:
  postgres=# \c [database name]
  CREATE LANGUAGE plpgsql;
- Ensure the database can be accessed remotely by enabling client authentication. Edit the /var/lib/pgsql/data/pg_hba.conf file, and add the following in accordance with the location of the database:
  - For local databases, add the following two lines immediately underneath the line starting with local at the bottom of the file:
    host [database name] [user name] 0.0.0.0/0 md5
    host [database name] [user name] ::0/0 md5
  - For remote databases, add the following line immediately underneath the line starting with local at the bottom of the file, replacing X.X.X.X with the IP address of the Manager:
    host [database name] [user name] X.X.X.X/32 md5
- Allow TCP/IP connections to the database. This step is required for remote databases. Edit the /var/lib/pgsql/data/postgresql.conf file, and add the following line:
  listen_addresses='*'
  This example configures the postgresql service to listen for connections on all interfaces. You can specify an interface by giving its IP address.
- Restart the postgresql service. This step is required on both local and remote manually configured database servers.
  # service postgresql restart
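As a worked example of the template above, with a database and user both named engine and a Manager at the placeholder address 192.0.2.10, the line added to pg_hba.conf for a remote setup would read:

```
host    engine    engine    192.0.2.10/32    md5
```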
3.8. Configuring the Manager to Use a Manually Configured Local or Remote PostgreSQL Database
Prerequisites:
When running the engine-setup script, you can choose to use a manually configured database. You can select either a locally or remotely installed PostgreSQL database.
Procedure 3.6. Configuring the Manager to use a Manually Configured Local or Remote PostgreSQL Database
- During configuration of the Red Hat Enterprise Virtualization Manager using the engine-setup script, you are prompted to decide where your database is located:
  Where is the database located? (Local, Remote) [Local]:
  The steps involved in manually configuring the Red Hat Enterprise Virtualization Manager to use local or remotely hosted databases are the same, except that to use a remotely hosted database, you must provide the host name of the remote database server and the port on which it is listening.
- When prompted, enter Manual to manually configure the database:
  Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]: Manual
- If you are using a remotely hosted database, supply the engine-setup script with the host name of your database server and the port on which it is listening:
  Database host [localhost]:
  Database port [5432]:
- For both local and remotely hosted databases, you are then prompted to confirm whether your database uses a secured connection, and for the name of the database you configured, the user the Manager will use to access the database, and the password of that user.
Database secured connection (Yes, No) [No]: Database name [engine]: Database user [engine]: Database password:
Note
Using a secured connection to your database requires you to have also manually configured secured database connections.
The engine-setup script then continues with the rest of your environment configuration.
3.9. Connecting to the Administration Portal
Procedure 3.7. Connecting to the Administration Portal
- Open a supported web browser on your client system.
- Navigate to
https://your-manager-fqdn/ovirt-engine, replacing your-manager-fqdn with the fully qualified domain name that you provided during installation. The first time that you connect, you are prompted to trust the certificate being used to secure communications between your browser and the web server.
- The login screen is displayed. Enter your User Name and Password in the fields provided. If you are logging in for the first time, use the user name admin in conjunction with the administrator password that you specified during installation.
- Select the directory services domain to authenticate against from the Domain list provided. If you are logging in using the internal admin user name, then select the internal domain.
- The Administration Portal is available in multiple languages. The default selection will be chosen based on the locale settings of your web browser. If you would like to view the Administration Portal in a language other than that selected by default, select your preferred language from the list.
- Click Login to log in.
3.10. Removing Red Hat Enterprise Virtualization Manager
The engine-cleanup script allows quick and easy removal of the files associated with the Red Hat Enterprise Virtualization Manager environment.
Procedure 3.8. Removing Red Hat Enterprise Virtualization Manager
- Run the
engine-cleanupcommand on the system on which Red Hat Enterprise Virtualization Manager is installed.# engine-cleanup
- You are prompted to confirm removal of all Red Hat Enterprise Virtualization Manager components. These components include PKI keys, the locally hosted ISO domain file system layout, PKI configuration, the local NFS exports configuration, and the Engine database content.
Do you want to remove all components? (Yes, No) [Yes]:
Note
A backup of the Engine database and a compressed archive of the PKI keys and configuration are always automatically created. These are saved under /var/lib/ovirt-engine/backups/, and include the date and engine- and engine-pki- in their file names respectively.
- You are given another opportunity to change your mind and cancel the removal of the Red Hat Enterprise Virtualization Manager. If you choose to proceed, the ovirt-engine service is stopped, and your environment's configuration is removed in accordance with the options you selected.
During execution engine service will be stopped (OK, Cancel) [OK]:
ovirt-engine is about to be removed, data will be lost (OK, Cancel) [Cancel]: OK
A summary is displayed upon successful completion of engine-cleanup.
--== SUMMARY ==--
A backup of the database is available at /var/lib/ovirt-engine/backups/engine-date-and-extra-characters.sql
Engine setup successfully cleaned up
A backup of PKI configuration and keys is available at /var/lib/ovirt-engine/backups/engine-pki-date-and-extra-characters.tar.gz
--== END OF SUMMARY ==--
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20130827181911-cleanup.conf'
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-remove-date.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of cleanup completed successfully
Optionally, remove the Red Hat Enterprise Virtualization Manager packages using the yum command.
# yum remove rhevm* vdsm-bootstrap
Chapter 4. Self-Hosted Engine
4.1. About the Self-Hosted Engine
4.2. Limitations of the Self-Hosted Engine
- An NFS storage domain is required for the configuration.
- The host and hosted engine must use Red Hat Enterprise Linux 6.5 or above. Red Hat Enterprise Virtualization Hypervisors are not supported.
4.3. Installing the Self-Hosted Engine
All commands in this procedure must be run as the root user.
Important
Ensure that the host is subscribed to the Red Hat Enterprise Virtualization management agent channel: rhel-6-server-rhev-mgmt-agent-rpms in Subscription Manager, and rhel-x86_64-rhev-mgmt-agent-6 in RHN Classic.
Procedure 4.1. Installing the Self-Hosted Engine
- Use yum to ensure that the most up-to-date versions of all installed packages are in use.
  # yum upgrade
- Use yum to initiate installation of the ovirt-hosted-engine-setup package and all dependencies.
  # yum install ovirt-hosted-engine-setup
4.4. Configuring the Self-Hosted Engine
The hosted-engine deployment script is provided to assist with this task. The script asks you a series of questions, and configures your environment based on your answers. When the required values have been provided, the updated configuration is applied and the Red Hat Enterprise Virtualization Manager services are started.
The hosted-engine deployment script guides you through several distinct configuration stages. The script suggests possible configuration defaults in square brackets. Where these default values are acceptable, no additional input is required.
The host on which the deployment script is run is referred to as Host-HE1, with the FQDN Host-HE1.example.com, in this procedure.
You will be required by the hosted-engine deployment script to access this virtual machine multiple times to install an operating system and to configure the engine.
All commands must be run as the root user on the specified machine.
Procedure 4.2. Configuring the Self-Hosted Engine
Initiating Hosted Engine Deployment
Begin configuration of the self-hosted environment by running the hosted-engine deployment script on Host-HE1. To escape the script at any time, use the CTRL+D keyboard combination to abort deployment.
# hosted-engine --deploy
Configuring Storage
Select the version of NFS and specify the full address, using either the FQDN or IP address, and path name of the shared storage domain. Choose the storage domain and storage data center names to be used in the environment.
During customization use CTRL-D to abort.
Please specify the storage you would like to use (nfs3, nfs4)[nfs3]:
Please specify the full shared storage connection path to use (example: host:/path): storage.example.com:/hosted_engine/nfs
[ INFO ] Installing on first host
Please provide storage domain name. [hosted_storage]:
Local storage datacenter name is an internal name and currently will not be shown in engine's admin UI.
Please enter local datacenter name [hosted_datacenter]:
Configuring the Network
The script detects possible network interface controllers (NICs) to use as a management bridge for the environment. It then checks your firewall configuration and offers to modify it for console (SPICE or VNC) access to HostedEngine-VM. Provide a pingable gateway IP address, to be used by the ovirt-ha-agent to help determine a host's suitability for running HostedEngine-VM.
Please indicate a nic to set rhevm bridge on: (eth1, eth0) [eth1]:
iptables was detected on your computer, do you wish setup to configure it? (Yes, No)[Yes]:
Please indicate a pingable gateway IP address [X.X.X.X]:
Configuring the Virtual Machine
The script creates a virtual machine to be configured as the Red Hat Enterprise Virtualization Manager, the hosted engine referred to in this procedure as HostedEngine-VM. Specify the boot device and, if applicable, the path name of the installation media, the CPU type, the number of virtual CPUs, and the disk size. Specify a MAC address for HostedEngine-VM, or accept a randomly generated one. The MAC address can be used to update your DHCP server prior to installing the operating system on the virtual machine. Specify memory size and console connection type for the creation of HostedEngine-VM.
Please specify the device to boot the VM from (cdrom, disk, pxe) [cdrom]:
The following CPU types are supported by this host:
 - model_Penryn: Intel Penryn Family
 - model_Conroe: Intel Conroe Family
Please specify the CPU type to be used by the VM [model_Penryn]:
Please specify the number of virtual CPUs for the VM [Defaults to minimum requirement: 2]:
Please specify the disk size of the VM in GB [Defaults to minimum requirement: 25]:
You may specify a MAC address for the VM or accept a randomly generated default [00:16:3e:77:b2:a4]:
Please specify the memory size of the VM in MB [Defaults to minimum requirement: 4096]:
Please specify the console type you would like to use to connect to the VM (vnc, spice) [vnc]:
Configuring the Hosted Engine
Specify the name for Host-HE1 to be identified in the Red Hat Enterprise Virtualization environment, and the password for the admin@internal user to access the Administrator Portal. Provide the FQDN for HostedEngine-VM; this procedure uses the FQDN HostedEngine-VM.example.com. Finally, provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications.
Enter the name which will be used to identify this host inside the Administrator Portal [hosted_engine_1]: Host-HE1
Enter 'admin@internal' user password that will be used for accessing the Administrator Portal:
Confirm 'admin@internal' user password:
Please provide the FQDN for the engine you would like to use. This needs to match the FQDN that you will use for the engine installation within the VM: HostedEngine-VM.example.com
Please provide the name of the SMTP server through which we will send notifications [localhost]:
Please provide the TCP port number of the SMTP server [25]:
Please provide the email address from which notifications will be sent [root@localhost]:
Please provide a comma-separated list of email addresses which will get notifications [root@localhost]:
Configuration Preview
Before proceeding, the hosted-engine script displays the configuration values you have entered, and prompts for confirmation to proceed with these values.
Bridge interface : eth1
Engine FQDN : HostedEngine-VM.example.com
Bridge name : rhevm
SSH daemon port : 22
Firewall manager : iptables
Gateway address : X.X.X.X
Host name for web application : Host-HE1
Host ID : 1
Image size GB : 25
Storage connection : storage.example.com:/hosted_engine/nfs
Console type : vnc
Memory size MB : 4096
MAC address : 00:16:3e:77:b2:a4
Boot type : pxe
Number of CPUs : 2
CPU Type : model_Penryn
Please confirm installation settings (Yes, No)[No]:
Creating HostedEngine-VM
The script creates the virtual machine that will be configured to be HostedEngine-VM and provides connection details. You must install an operating system on HostedEngine-VM before the hosted-engine script can proceed on Host-HE1.
[ INFO ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Configuring libvirt
[ INFO ] Generating VDSM certificates
[ INFO ] Configuring VDSM
[ INFO ] Starting vdsmd
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ INFO ] Initializing sanlock lockspace
[ INFO ] Initializing sanlock metadata
[ INFO ] Creating VM Image
[ INFO ] Disconnecting Storage Pool
[ INFO ] Start monitoring domain
[ INFO ] Configuring VM
[ INFO ] Updating hosted-engine configuration
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
[ INFO ] Creating VM
You can now connect to the VM with the following command:
/usr/bin/remote-viewer vnc://localhost:5900
Use temporary password "3042QHpX" to connect to vnc console.
Please note that in order to use remote-viewer you need to be able to run graphical applications. This means that if you are using ssh you have to supply the -Y flag (enables trusted X11 forwarding). Otherwise you can run the command from a terminal in your preferred desktop environment. If you cannot run graphical applications you can connect to the graphic console from another host or connect to the console using the following command:
virsh -c qemu+tls://Test/system console HostedEngine
If you need to reboot the VM you will need to start it manually using the command:
hosted-engine --vm-start
You can then set a temporary password using the command:
hosted-engine --add-console-password
The VM has been started.
Install the OS and shut down or reboot it. To continue please make a selection:
(1) Continue setup - VM installation is complete
(2) Reboot the VM and restart installation
(3) Abort setup
(1, 2, 3)[1]:
Using the naming convention of this procedure, you would connect to the virtual machine using VNC with the following command:
/usr/bin/remote-viewer vnc://Host-HE1.example.com:5900
Installing the Virtual Machine Operating System
Connect to HostedEngine-VM, the virtual machine created by the hosted-engine script, and install a Red Hat Enterprise Linux 6.5 operating system. Ensure the machine is rebooted once installation has completed.
Synchronizing the Host and the Virtual Machine
Return to Host-HE1 and continue the hosted-engine deployment script by selecting option 1:
(1) Continue setup - VM installation is complete
Waiting for VM to shut down...
[ INFO ] Creating VM
You can now connect to the VM with the following command:
/usr/bin/remote-viewer vnc://localhost:5900
Use temporary password "3042QHpX" to connect to vnc console.
Please note that in order to use remote-viewer you need to be able to run graphical applications. This means that if you are using ssh you have to supply the -Y flag (enables trusted X11 forwarding). Otherwise you can run the command from a terminal in your preferred desktop environment. If you cannot run graphical applications you can connect to the graphic console from another host or connect to the console using the following command:
virsh -c qemu+tls://Test/system console HostedEngine
If you need to reboot the VM you will need to start it manually using the command:
hosted-engine --vm-start
You can then set a temporary password using the command:
hosted-engine --add-console-password
Please install and setup the engine in the VM.
You may also be interested in subscribing to "agent" RHN/Satellite channel and installing rhevm-guest-agent-common package in the VM.
To continue make a selection from the options below:
(1) Continue setup - engine installation is complete
(2) Power off and restart the VM
(3) Abort setup
Installing the Manager
Connect to HostedEngine-VM, subscribe to the appropriate Red Hat Enterprise Virtualization Manager channels, ensure that the most up-to-date versions of all installed packages are in use, and install the rhevm packages.
# yum upgrade
# yum install rhevm
Configuring the Manager
Configure the engine on HostedEngine-VM:
# engine-setup
Synchronizing the Host and the Manager
Return to Host-HE1 and continue the hosted-engine deployment script by selecting option 1:
(1) Continue setup - engine installation is complete
[ INFO ] Engine replied: DB Up!Welcome to Health Status!
[ INFO ] Waiting for the host to become operational in the engine. This may take several minutes...
[ INFO ] Still waiting for VDSM host to become operational...
[ INFO ] The VDSM Host is now operational
Please shutdown the VM allowing the system to launch it as a monitored service. The system will wait until the VM is down.
Shutting Down HostedEngine-VM
Shut down HostedEngine-VM.
# shutdown -h now
Setup Confirmation
Return to Host-HE1 to confirm it has detected that HostedEngine-VM is down.
[ INFO ] Enabling and starting HA services
Hosted Engine successfully set up
[ INFO ] Stage: Clean up
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
When the hosted-engine deployment script completes successfully, the Red Hat Enterprise Virtualization Manager is configured and running on your host. In contrast to a bare-metal Manager installation, the hosted engine Manager has already configured the data center, cluster, host (Host-HE1), storage domain, and virtual machine of the hosted engine (HostedEngine-VM). You can log in as the admin@internal user to continue configuring the Manager and add further resources.
You can attach additional directory services domains for authentication using the engine-manage-domains command.
The ovirt-hosted-engine-setup script also saves the answers you gave during configuration to a file, to help with disaster recovery. If a destination is not specified using the --generate-answer=<file> argument, the answer file is generated at /etc/ovirt-hosted-engine/answers.conf.
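As an illustration of the answer-file behavior described above, the file location can be controlled at deployment time (the custom path below is an assumption chosen for the example):

```shell
# Save the deployment answers to a custom location for safekeeping:
# hosted-engine --deploy --generate-answer=/root/hosted-engine-answers.conf

# The default location, if --generate-answer is not given:
# cat /etc/ovirt-hosted-engine/answers.conf
```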
4.5. Migrating to a Self-Hosted Environment
The hosted-engine deployment script is provided to assist with this task. The script asks you a series of questions and configures your environment based on your answers. When the required values have been provided, the updated configuration is applied and the Red Hat Enterprise Virtualization Manager services are started.
The hosted-engine deployment script guides you through several distinct configuration stages. The script suggests possible configuration defaults in square brackets. Where these default values are acceptable, no additional input is required.
The host on which the deployment script is run is referred to as Host-HE1, with the FQDN Host-HE1.example.com, in this procedure.
You will be required by the hosted-engine deployment script to access this virtual machine multiple times to install an operating system and to configure the engine.
All commands must be run as the root user on the specified machine.
Important
Before proceeding, ensure you can create a backup of the original Manager using the engine-backup command.
Procedure 4.3. Migrating to a Self-Hosted Environment
Initiating Hosted Engine Deployment
Begin configuration of the self-hosted environment by running the hosted-engine deployment script on Host-HE1. To escape the script at any time, use the CTRL+D keyboard combination to abort deployment.
# hosted-engine --deploy
Configuring Storage
Select the version of NFS and specify the full address, using either the FQDN or IP address, and path name of the shared storage domain. Choose the storage domain and storage data center names to be used in the environment.
During customization use CTRL-D to abort.
Please specify the storage you would like to use (nfs3, nfs4)[nfs3]:
Please specify the full shared storage connection path to use (example: host:/path): storage.example.com:/hosted_engine/nfs
[ INFO ] Installing on first host
Please provide storage domain name. [hosted_storage]:
Local storage datacenter name is an internal name and currently will not be shown in engine's admin UI.
Please enter local datacenter name [hosted_datacenter]:
Configuring the Network
The script detects possible network interface controllers (NICs) to use as a management bridge for the environment. It then checks your firewall configuration and offers to modify it for console (SPICE or VNC) access to HostedEngine-VM. Provide a pingable gateway IP address, to be used by the ovirt-ha-agent to help determine a host's suitability for running HostedEngine-VM.
Please indicate a nic to set rhevm bridge on: (eth1, eth0) [eth1]:
iptables was detected on your computer, do you wish setup to configure it? (Yes, No)[Yes]:
Please indicate a pingable gateway IP address [X.X.X.X]:
Configuring the Virtual Machine
The script creates a virtual machine to be configured as the Red Hat Enterprise Virtualization Manager, the hosted engine referred to in this procedure as HostedEngine-VM. Specify the boot device and, if applicable, the path name of the installation media, the CPU type, the number of virtual CPUs, and the disk size. Specify a MAC address for HostedEngine-VM, or accept a randomly generated one. The MAC address can be used to update your DHCP server prior to installing the operating system on the virtual machine. Specify memory size and console connection type for the creation of HostedEngine-VM.
Please specify the device to boot the VM from (cdrom, disk, pxe) [cdrom]:
The following CPU types are supported by this host:
 - model_Penryn: Intel Penryn Family
 - model_Conroe: Intel Conroe Family
Please specify the CPU type to be used by the VM [model_Penryn]:
Please specify the number of virtual CPUs for the VM [Defaults to minimum requirement: 2]:
Please specify the disk size of the VM in GB [Defaults to minimum requirement: 25]:
You may specify a MAC address for the VM or accept a randomly generated default [00:16:3e:77:b2:a4]:
Please specify the memory size of the VM in MB [Defaults to minimum requirement: 4096]:
Please specify the console type you would like to use to connect to the VM (vnc, spice) [vnc]:
Configuring the Hosted Engine
Specify the name for Host-HE1 to be identified in the Red Hat Enterprise Virtualization environment, and the password for the admin@internal user to access the Administrator Portal. Provide the FQDN for HostedEngine-VM; this procedure uses the FQDN HostedEngine-VM.example.com. Finally, provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications.
Important
The FQDN provided for the engine (HostedEngine-VM.example.com) must be the same FQDN provided when BareMetal-Manager was initially set up.
Enter the name which will be used to identify this host inside the Administrator Portal [hosted_engine_1]: Host-HE1
Enter 'admin@internal' user password that will be used for accessing the Administrator Portal:
Confirm 'admin@internal' user password:
Please provide the FQDN for the engine you would like to use. This needs to match the FQDN that you will use for the engine installation within the VM: BareMetal-Manager.example.com
Please provide the name of the SMTP server through which we will send notifications [localhost]:
Please provide the TCP port number of the SMTP server [25]:
Please provide the email address from which notifications will be sent [root@localhost]:
Please provide a comma-separated list of email addresses which will get notifications [root@localhost]:
Configuration Preview
Before proceeding, the hosted-engine script displays the configuration values you have entered, and prompts for confirmation to proceed with these values.
Bridge interface : eth1
Engine FQDN : BareMetal-Manager.example.com
Bridge name : rhevm
SSH daemon port : 22
Firewall manager : iptables
Gateway address : X.X.X.X
Host name for web application : Host-HE1
Host ID : 1
Image size GB : 25
Storage connection : storage.example.com:/hosted_engine/nfs
Console type : vnc
Memory size MB : 4096
MAC address : 00:16:3e:77:b2:a4
Boot type : pxe
Number of CPUs : 2
CPU Type : model_Penryn
Please confirm installation settings (Yes, No)[No]:
Creating HostedEngine-VM
The script creates the virtual machine that will be configured to be HostedEngine-VM and provides connection details. You must install an operating system on HostedEngine-VM before the hosted-engine script can proceed on Host-HE1.
[ INFO ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Configuring libvirt
[ INFO ] Generating VDSM certificates
[ INFO ] Configuring VDSM
[ INFO ] Starting vdsmd
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ INFO ] Initializing sanlock lockspace
[ INFO ] Initializing sanlock metadata
[ INFO ] Creating VM Image
[ INFO ] Disconnecting Storage Pool
[ INFO ] Start monitoring domain
[ INFO ] Configuring VM
[ INFO ] Updating hosted-engine configuration
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
[ INFO ] Creating VM
You can now connect to the VM with the following command:
/usr/bin/remote-viewer vnc://localhost:5900
Use temporary password "5379skAb" to connect to vnc console.
Please note that in order to use remote-viewer you need to be able to run graphical applications. This means that if you are using ssh you have to supply the -Y flag (enables trusted X11 forwarding). Otherwise you can run the command from a terminal in your preferred desktop environment. If you cannot run graphical applications you can connect to the graphic console from another host or connect to the console using the following command:
virsh -c qemu+tls://Test/system console HostedEngine
If you need to reboot the VM you will need to start it manually using the command:
hosted-engine --vm-start
You can then set a temporary password using the command:
hosted-engine --add-console-password
The VM has been started.
Install the OS and shut down or reboot it. To continue please make a selection:
(1) Continue setup - VM installation is complete
(2) Reboot the VM and restart installation
(3) Abort setup
(1, 2, 3)[1]:
Using the naming convention of this procedure, you would connect to the virtual machine using VNC with the following command:
/usr/bin/remote-viewer vnc://Host-HE1.example.com:5900
Installing the Virtual Machine Operating System
Connect to HostedEngine-VM, the virtual machine created by the hosted-engine script, and install a Red Hat Enterprise Linux 6.5 operating system.
Synchronizing the Host and the Virtual Machine
Return to Host-HE1 and continue the hosted-engine deployment script by selecting option 1:
(1) Continue setup - VM installation is complete
Waiting for VM to shut down...
[ INFO ] Creating VM
You can now connect to the VM with the following command:
/usr/bin/remote-viewer vnc://localhost:5900
Use temporary password "5379skAb" to connect to vnc console.
Please note that in order to use remote-viewer you need to be able to run graphical applications. This means that if you are using ssh you have to supply the -Y flag (enables trusted X11 forwarding). Otherwise you can run the command from a terminal in your preferred desktop environment. If you cannot run graphical applications you can connect to the graphic console from another host or connect to the console using the following command:
virsh -c qemu+tls://Test/system console HostedEngine
If you need to reboot the VM you will need to start it manually using the command:
hosted-engine --vm-start
You can then set a temporary password using the command:
hosted-engine --add-console-password
Please install and setup the engine in the VM.
You may also be interested in subscribing to "agent" RHN/Satellite channel and installing rhevm-guest-agent-common package in the VM.
To continue make a selection from the options below:
(1) Continue setup - engine installation is complete
(2) Power off and restart the VM
(3) Abort setup
Installing the Manager
Connect to HostedEngine-VM, subscribe to the appropriate Red Hat Enterprise Virtualization Manager channels, ensure that the most up-to-date versions of all installed packages are in use, and install the rhevm packages.
# yum upgrade
# yum install rhevm
Disabling BareMetal-Manager
Connect to BareMetal-Manager, the Manager of your established Red Hat Enterprise Virtualization environment, stop the engine service, and prevent it from starting on boot.
# service ovirt-engine stop
# chkconfig ovirt-engine off
Note
Though stopping BareMetal-Manager from running is not obligatory, it is recommended as it ensures no changes will be made to the environment after the backup has been created. Additionally, it prevents BareMetal-Manager and HostedEngine-VM from simultaneously managing existing resources.
Updating DNS
Update your DNS so that the FQDN of the Red Hat Enterprise Virtualization environment correlates to the IP address of HostedEngine-VM and the FQDN previously provided when configuring the hosted-engine deployment script on Host-HE1. In this procedure that FQDN was set as BareMetal-Manager.example.com because in a migrated hosted-engine setup, the FQDN provided for the engine must be identical to that given in the engine setup of the original engine.
Creating a Backup of BareMetal-Manager
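After updating DNS, it is worth confirming that the Manager FQDN now resolves to the address of HostedEngine-VM before continuing. A quick check, assuming the dig utility (bind-utils package) is available:

```shell
# Query the record from the host; the answer should be HostedEngine-VM's IP address.
# dig +short BareMetal-Manager.example.com
```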
Connect to BareMetal-Manager and run the engine-backup command with the --mode=backup, --file=FILE, and --log=LogFILE parameters to specify the backup mode, the name of the backup file created and used for the backup, and the name of the log file to be created to store the backup log.
# engine-backup --mode=backup --file=FILE --log=LogFILE
Copying the Backup File to HostedEngine-VM
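A concrete invocation matching the copy step that follows, where backup1 is the backup file name used in this procedure and the log file name is an illustrative choice:

```shell
# Create the backup and a log of the backup operation:
# engine-backup --mode=backup --file=backup1 --log=backup1.log
```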
Still on BareMetal-Manager, secure copy the backup file to HostedEngine-VM. In the following example, HostedEngine-VM.example.com is the FQDN for HostedEngine-VM, and /backup/ is any designated folder or path. If the designated folder or path does not exist, you must connect to HostedEngine-VM and create it before secure copying the backup from BareMetal-Manager.
# scp -p backup1 HostedEngine-VM.example.com:/backup/
Restoring the Backup File on HostedEngine-VM
The engine-backup --mode=restore command does not create a database; you must create one on HostedEngine-VM before restoring the backup you created on BareMetal-Manager. Connect to HostedEngine-VM and create the database, as detailed in Section 3.7, “Preparing a PostgreSQL Database for Use with Red Hat Enterprise Virtualization Manager”.
Note
The procedure in Section 3.7, “Preparing a PostgreSQL Database for Use with Red Hat Enterprise Virtualization Manager” creates a database that is not empty, which will result in the following error when you attempt to restore the backup:
FATAL: Database is not empty
Create an empty database using the following command in psql:
postgres=# create database [database name] owner [user name];
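A fuller sketch of the database preparation, using the engine user, database name, and password that appear in the restore command below (all three are illustrative values you should replace with your own):

```shell
# On HostedEngine-VM, open a psql session as the postgres system user:
# su - postgres -c psql

postgres=# create user engine password 'password';
postgres=# create database engine owner engine;
postgres=# \q
```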
After the empty database has been created, restore the BareMetal-Manager backup using the engine-backup command with the --mode=restore, --file=FILE, and --log=Restore.log parameters to specify the restore mode, the name of the file to be used to restore the database, and the name of the log file to store the restore log. This restores the files and the database but does not start the service.
To specify a different database configuration, use the --change-db-credentials parameter to activate alternate credentials. Use the engine-backup --help command on the Manager for a list of credential parameters.
# engine-backup --mode=restore --file=FILE --log=Restore.log --change-db-credentials --db-host=X.X.X.X --db-user=engine --db-password=password --db-name=engine
Configuring HostedEngine-VM
Configure the engine on HostedEngine-VM. This identifies the existing files and database.
# engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging.conf', '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20140304075238.log
Version: otopi-1.1.2 (otopi-1.1.2-1.el6ev)
[ INFO ] Stage: Environment packages setup
[ INFO ] Yum Downloading: rhel-65-zstream/primary_db 2.8 M(70%)
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization

--== PACKAGES ==--

[ INFO ] Checking for product updates...
[ INFO ] No product updates found

--== NETWORK CONFIGURATION ==--

Setup can automatically configure the firewall on this system.
Note: automatic configuration of the firewall may overwrite current settings.
Do you want Setup to configure the firewall? (Yes, No) [Yes]:
[ INFO ] iptables will be configured as firewall manager.

--== DATABASE CONFIGURATION ==--

--== OVIRT ENGINE CONFIGURATION ==--

Skipping storing options as database already prepared

--== PKI CONFIGURATION ==--

PKI is already configured

--== APACHE CONFIGURATION ==--

--== SYSTEM CONFIGURATION ==--

--== END OF CONFIGURATION ==--

[ INFO ] Stage: Setup validation
[WARNING] Less than 16384MB of memory is available
[ INFO ] Cleaning stale zombie tasks

--== CONFIGURATION PREVIEW ==--

Database name : engine
Database secured connection : False
Database host : X.X.X.X
Database user name : engine
Database host name validation : False
Database port : 5432
NFS setup : True
Firewall manager : iptables
Update Firewall : True
Configure WebSocket Proxy : True
Host FQDN : HostedEngine-VM.example.com
NFS mount point : /var/lib/exports/iso
Set application as default page : True
Configure Apache SSL : True

Please confirm installation settings (OK, Cancel) [OK]:
Confirm the settings.
Upon completion, the setup provides an SSH fingerprint and an internal Certificate Authority hash.
Synchronizing the Host and the Manager
Return to Host-HE1 and continue the hosted-engine deployment script by selecting option 1:
(1) Continue setup - engine installation is complete
[ INFO ] Engine replied: DB Up!Welcome to Health Status!
[ INFO ] Waiting for the host to become operational in the engine. This may take several minutes...
[ INFO ] Still waiting for VDSM host to become operational...
[ INFO ] The VDSM Host is now operational
Please shutdown the VM allowing the system to launch it as a monitored service. The system will wait until the VM is down.
Shutting Down HostedEngine-VM
Shut down HostedEngine-VM.
# shutdown -h now
Setup Confirmation
Return to Host-HE1 to confirm it has detected that HostedEngine-VM is down.
[ INFO ] Enabling and starting HA services
Hosted Engine successfully set up
[ INFO ] Stage: Clean up
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
4.6. Installing Additional Hosts to a Self-Hosted Environment
All commands must be run as the root user.
Procedure 4.4. Adding the host
- Install the ovirt-hosted-engine-setup package.
  # yum install ovirt-hosted-engine-setup
- Configure the host with the deployment command.
  # hosted-engine --deploy
Configuring Storage
Specify the storage type and the full address, using either the Fully Qualified Domain Name (FQDN) or IP address, and path name of the shared storage domain used in the self-hosted environment.
Please specify the storage you would like to use (nfs3, nfs4)[nfs3]:
Please specify the full shared storage connection path to use (example: host:/path): storage.example.com:/hosted_engine/nfs
Detecting the Self-Hosted Engine
The hosted-engine script detects that the shared storage is being used and asks if this is an additional host setup. You are then prompted for the host ID, which must be an integer not already assigned to another host in the environment.
The specified storage location already contains a data domain. Is this an additional host setup (Yes, No)[Yes]?
[ INFO ] Installing on additional host
Please specify the Host ID [Must be integer, default: 2]:
Configuring the System
The hosted-engine script uses the answer file generated by the original hosted-engine setup. To achieve this, the script requires the FQDN or IP address and the root password of that host, so it can access and secure-copy the answer file to the additional host.
[WARNING] A configuration file must be supplied to deploy Hosted Engine on an additional host.
The answer file may be fetched from the first host using scp.
If you do not want to download it automatically you can abort the setup answering no to the following question.
Do you want to scp the answer file from the first host? (Yes, No)[Yes]:
Please provide the FQDN or IP of the first host:
Enter 'root' user password for host Host-HE1.example.com:
[ INFO ] Answer file successfully downloaded
Configuring the Hosted Engine
Specify the name for the additional host to be identified in the Red Hat Enterprise Virtualization environment, and the password for the admin@internal user.
Enter the name which will be used to identify this host inside the Administrator Portal [hosted_engine_2]:
Enter 'admin@internal' user password that will be used for accessing the Administrator Portal:
Confirm 'admin@internal' user password:
Configuration Preview
Before proceeding, the hosted-engine script displays the configuration values you have entered, and prompts for confirmation to proceed with these values.
Bridge interface : eth1
Engine FQDN : HostedEngine-VM.example.com
Bridge name : rhevm
SSH daemon port : 22
Firewall manager : iptables
Gateway address : X.X.X.X
Host name for web application : hosted_engine_2
Host ID : 2
Image size GB : 25
Storage connection : storage.example.com:/hosted_engine/nfs
Console type : vnc
Memory size MB : 4096
MAC address : 00:16:3e:05:95:50
Boot type : disk
Number of CPUs : 2
CPU Type : model_Penryn
Please confirm installation settings (Yes, No)[No]:
4.7. Maintaining the Self-Hosted Engine
To place the self-hosted engine environment in global maintenance mode, preventing the high-availability agents from managing the engine virtual machine, run:
# hosted-engine --set-maintenance --mode=global
To return the environment to normal operation, run:
# hosted-engine --set-maintenance --mode=none
All commands must be run as the root user.
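A typical maintenance session can be sketched as follows; the status check between the two mode changes uses the hosted-engine --vm-status option:

```shell
# Enable global maintenance before servicing the engine virtual machine:
# hosted-engine --set-maintenance --mode=global

# Check the state of the hosted engine and the HA agents on each host:
# hosted-engine --vm-status

# Return the environment to normal HA operation when maintenance is complete:
# hosted-engine --set-maintenance --mode=none
```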
Chapter 5. Data Collection Setup and Reports Installation
5.2. Data Collection Setup and Reports Installation Overview
Each view is a virtual table defined by a SELECT statement. The result set of the SELECT statement populates the virtual table returned by the view. If the optional comprehensive management history database has been enabled, the history tables and their associated views are stored in the ovirt_engine_history database.
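To illustrate how a view wraps a SELECT statement, the following psql session sketches a simplified example; the table, view, and column names here are hypothetical, chosen for illustration, and are not actual history-database objects:

```shell
# psql ovirt_engine_history

ovirt_engine_history=# CREATE VIEW host_cpu_samples_view AS
ovirt_engine_history-#     SELECT host_id, cpu_usage_percent, history_datetime
ovirt_engine_history-#     FROM host_samples_history;   -- illustrative table name
ovirt_engine_history=# SELECT * FROM host_cpu_samples_view LIMIT 5;
```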
5.3. Installing and Configuring the History Database
Prerequisites:
Procedure 5.1. Installing and Configuring the History Database
- Install the rhevm-dwh package. This package must be installed on the system on which the Red Hat Enterprise Virtualization Manager is installed.
# yum install rhevm-dwh
- Once the required packages have been downloaded, they are listed for review. You will be prompted to confirm continuing with the installation. Upon confirmation, the packages are installed. However, some further configuration is required before the reports functionality can be used.
- Configure the history database. Use the rhevm-dwh-setup command to configure the Extract, Transform, Load (ETL) process and database scripts used to create and maintain a working history database.
- Run the rhevm-dwh-setup command on the system hosting the Red Hat Enterprise Virtualization Manager:
  # rhevm-dwh-setup
- For the history database installation to take effect, the ovirt-engine service must be restarted. The rhevm-dwh-setup command prompts you:
In order to proceed the installer must stop the ovirt-engine service
Would you like to stop the ovirt-engine service? (yes|no):
Type yes and then press Enter to proceed.
- The rhevm-dwh-setup utility can optionally create a read-only user to allow remote access to the history database.
This utility can configure a read only user for DB access. Would you like to do so? (yes|no):
Provide a username for read-only user:
Provide a password for read-only user:
If you choose to create a read-only user, the rhevm-dwh-setup utility automatically opens the required firewall ports and configures the database to listen on externally facing network interface devices.
Note
The rhevm-dwh-setup utility can configure read-only access to local databases only. If you chose to use a remote database during engine-setup, you must manually configure read-only access to the history database. See Connecting to the History Database in the Red Hat Enterprise Virtualization Administration Guide.
- The rhevm-dwh-setup utility can optionally configure the history database to use secure connections:
Should postgresql be setup with secure connection? (yes|no):
The command then creates and configures the ovirt_engine_history database and starts the ovirt-engine service.
The ovirt_engine_history database has been created. Red Hat Enterprise Virtualization Manager is configured to log information to this database for reporting purposes.
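Where rhevm-dwh-setup cannot configure the read-only user for you (for example, with a remote database), the required grants can be sketched in SQL along the following lines. The user name and password are placeholders, and the authoritative statements are in the Administration Guide section referenced above; treat this as an illustration only.

```shell
# Hypothetical sketch: SQL for manually granting read-only access to the
# history database. The user name and password below are placeholders.
RO_SQL="CREATE USER history_reader WITH PASSWORD 'changeme';
GRANT CONNECT ON DATABASE ovirt_engine_history TO history_reader;"
# SELECT must additionally be granted on the history tables and views, and
# postgresql.conf/pg_hba.conf must be edited to accept remote connections.
# The statements could be applied as the postgres user, for example:
#   su - postgres -c "psql -d ovirt_engine_history"   (then paste the SQL)
echo "$RO_SQL"
```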
5.4. Installing and Configuring Red Hat Enterprise Virtualization Manager Reports
Procedure 5.2. Installing and Configuring Red Hat Enterprise Virtualization Manager Reports
- Install the rhevm-reports package. This package must be installed on the system on which the Red Hat Enterprise Virtualization Manager is installed.
# yum install rhevm-reports
- Run the rhevm-reports-setup command on the system hosting the Red Hat Enterprise Virtualization Manager:
# rhevm-reports-setup
- For the Red Hat Enterprise Virtualization Manager Reports installation to take effect, the ovirt-engine service must be restarted. The rhevm-reports-setup command prompts you:
In order to proceed the installer must stop the ovirt-engine service
Would you like to stop the ovirt-engine service? (yes|no):
Type yes and then press Enter to proceed. The command then performs a number of actions before prompting you to set the password for the Red Hat Enterprise Virtualization Manager Reports administrative users (rhevm-admin and superuser). Note that the reports system maintains its own set of credentials, which are separate from those used for Red Hat Enterprise Virtualization Manager.
Please choose a password for the reports admin user(s) (rhevm-admin and superuser):
You will be prompted to enter the password a second time to confirm it.
Red Hat Enterprise Virtualization Manager Reports is available at http://[demo.redhat.com]/rhevm-reports, replacing [demo.redhat.com] with the fully qualified domain name of the Red Hat Enterprise Virtualization Manager. If you selected a non-default HTTP port during Red Hat Enterprise Virtualization Manager installation, append :[port] to the URL, replacing [port] with the port that you chose.
Use the user name rhevm-admin and the password you set during reports installation to log in for the first time. Note that the first time you log in to Red Hat Enterprise Virtualization Manager Reports, a number of web pages are generated; as a result, your initial attempt to log in may take some time to complete.
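The URL rule above (base address plus an optional :[port] suffix) can be sketched as a small helper; the host name and port used here are placeholders, not values from your installation.

```shell
# Sketch: assembling the reports portal URL described above.
# The FQDN and port values passed in are placeholders.
reports_url() {
    fqdn="$1"
    port="${2:-}"
    if [ -n "$port" ] && [ "$port" != "80" ]; then
        echo "http://${fqdn}:${port}/rhevm-reports"
    else
        echo "http://${fqdn}/rhevm-reports"
    fi
}

reports_url manager.example.com        # default HTTP port
reports_url manager.example.com 8080   # non-default port appended
```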
Chapter 6. Updating the Red Hat Enterprise Virtualization Environment
- 6.1. Upgrades between Minor Releases
- 6.1.1. Checking for Red Hat Enterprise Virtualization Manager Updates
- 6.1.2. Updating Red Hat Enterprise Virtualization Manager
- 6.1.3. Troubleshooting for Upgrading Red Hat Enterprise Virtualization Manager
- 6.1.4. Updating Red Hat Enterprise Virtualization Manager Reports
- 6.1.5. Updating Red Hat Enterprise Virtualization Hypervisors
- 6.1.6. Updating Red Hat Enterprise Linux Virtualization Hosts
- 6.1.7. Updating the Red Hat Enterprise Virtualization Guest Tools
- 6.2. Upgrading to Red Hat Enterprise Virtualization 3.3
- 6.3. Upgrading to Red Hat Enterprise Virtualization Manager 3.2
- 6.4. Upgrading to Red Hat Enterprise Virtualization Manager 3.1
- 6.5. Post-upgrade Tasks
6.1. Upgrades between Minor Releases
6.1.1. Checking for Red Hat Enterprise Virtualization Manager Updates
Use the engine-upgrade-check command, included in Red Hat Enterprise Virtualization Manager, to check for updates.
Procedure 6.1. Checking for Red Hat Enterprise Virtualization Manager Updates
- Run engine-upgrade-check as a user with administrative privileges, such as the root user:
# engine-upgrade-check
- Where no updates are available, the command outputs the text No upgrade:
# engine-upgrade-check
VERB: queue package rhevm-setup for update
VERB: package rhevm-setup queued
VERB: Building transaction
VERB: Empty transaction
VERB: Transaction Summary:
No upgrade
- Where updates are available, the command lists the packages to be updated:
# engine-upgrade-check
VERB: queue package rhevm-setup for update
VERB: package rhevm-setup queued
VERB: Building transaction
VERB: Transaction built
VERB: Transaction Summary:
VERB: updated - rhevm-lib-3.3.0-0.46.el6ev.noarch
VERB: update  - rhevm-lib-3.3.1-0.48.el6ev.noarch
VERB: updated - rhevm-setup-3.3.0-0.46.el6ev.noarch
VERB: update  - rhevm-setup-3.3.1-0.48.el6ev.noarch
Upgrade available
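The two output forms above can be told apart in a script by matching the final summary line. The wrapper function below is an illustrative sketch, not part of the product; it simply inspects captured command output.

```shell
# Sketch: acting on the summary line printed by engine-upgrade-check.
# The function takes the captured command output as its single argument.
upgrade_status() {
    if printf '%s\n' "$1" | grep -q 'Upgrade available'; then
        echo "upgrade-available"
    else
        echo "no-upgrade"
    fi
}

# Illustrative use on a live system:
#   status="$(upgrade_status "$(engine-upgrade-check 2>&1)")"
upgrade_status "VERB: Transaction Summary: No upgrade"
```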
6.1.2. Updating Red Hat Enterprise Virtualization Manager
- Stopping the ovirt-engine service.
- Downloading and installing the updated packages.
- Backing up and updating the database.
- Performing post-installation configuration.
- Restarting the ovirt-engine service.
All commands in this procedure must be run as the root user.
Procedure 6.2. Updating Red Hat Enterprise Virtualization Manager
- Run the yum command to update the rhevm-setup package:
# yum update rhevm-setup
- Run the engine-setup command to update the Red Hat Enterprise Virtualization Manager:
# engine-setup
Note
From Version 3.3, installation of Red Hat Enterprise Virtualization Manager supports otopi, a standalone, plug-in-based installation framework for setting up system components. Under this framework, the rhevm-upgrade command used during the installation process has been replaced by engine-setup and is now obsolete.
Note
The upgrade process may take some time; allow it to complete and do not stop the process once initiated. Once the upgrade has been completed, you will also be instructed to separately upgrade the data warehouse and reports functionality. These additional steps are only required if these optional packages are installed.
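The two commands in the procedure above can be chained in a small wrapper that aborts if either step fails. This is an illustrative sketch, not a supported tool; the YUM and ENGINE_SETUP overrides exist only so the sketch can be exercised without a live Manager.

```shell
# Sketch: chaining the documented minor-update commands, stopping on failure.
# YUM/ENGINE_SETUP default to the real commands; they are overridable purely
# so this sketch can be tested without touching a real system.
update_manager() {
    ${YUM:-yum} update rhevm-setup || return 1
    ${ENGINE_SETUP:-engine-setup} || return 1
    echo "manager update completed"
}

# On a live Manager, run as root:
#   update_manager
```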
6.1.3. Troubleshooting for Upgrading Red Hat Enterprise Virtualization Manager
Red Hat Enterprise Virtualization Troubleshooting Cases
- SAM Channel Causes Conflicts with rhevm upgrade
- Running Red Hat Enterprise Virtualization Manager on a machine that has Subscription Asset Manager (SAM) enabled is not supported. The yum update command fails to update rhevm due to a "file conflicts" error if the sam-rhel-x86_64-server-6 channel is enabled. If your Red Hat Enterprise Virtualization environment does not require Subscription Asset Manager (SAM) features, you can disable the following channels in the customer portal:
sam-rhel-x86_64-server-6
sam-rhel-x86_64-server-6-debuginfo
Then, remove the package causing the conflict by issuing this command:
# yum remove apache-commons-codec
Alternatively, remove the channels from the command line:
# rhn-channel -r -c sam-rhel-x86_64-server-6
# rhn-channel -r -c sam-rhel-x86_64-server-6-debuginfo
Then, remove the package causing the conflict by issuing this command:
# yum remove apache-commons-codec
6.1.4. Updating Red Hat Enterprise Virtualization Manager Reports
All commands in this procedure must be run as the root user.
Procedure 6.3. Updating Red Hat Enterprise Virtualization Manager Reports
- Use the yum command to update the rhevm-reports and rhevm-dwh packages:
# yum update -y rhevm-reports rhevm-dwh
- Run the rhevm-dwh-setup command to update the ovirt_engine_history database:
# rhevm-dwh-setup
- Run the rhevm-reports-setup command to update the reporting engine:
# rhevm-reports-setup
6.1.5. Updating Red Hat Enterprise Virtualization Hypervisors
Warning
Important
Procedure 6.4. Updating Red Hat Enterprise Virtualization Hypervisors
- Log in to the system hosting Red Hat Enterprise Virtualization Manager as the root user.
- Ensure that:
- the system is subscribed to the Red Hat Enterprise Virtualization entitlement — if using certificate-based Red Hat Network; or
- the system is subscribed to the Red Hat Enterprise Virtualization Hypervisor (v.6 x86-64) channel (labeled rhel-x86_64-server-6-rhevh) — if using classic Red Hat Network.
- Run the yum command with the update rhev-hypervisor6 parameters to ensure that you have the most recent version of the rhev-hypervisor6 package installed:
# yum update rhev-hypervisor6
- Use your web browser to log in to the Administration Portal as a Red Hat Enterprise Virtualization administrative user.
- Click the Hosts tab, and then select the host that you intend to upgrade. If the host is not displayed, or the list of hosts is too long to filter visually, perform a search to locate the host.
- With the host selected, click the General tab on the Details pane.
- If the host requires updating, an alert message indicates that a new version of the Red Hat Enterprise Virtualization Hypervisor is available.
- If the host does not require updating, no alert message is displayed and no further action is required.
- Ensure the host remains selected and click the Maintenance button, if the host is not already in maintenance mode. This causes any virtual machines running on the host to be migrated to other hosts. If the host is the Storage Pool Manager (SPM), the SPM role is moved to another host. The status of the host changes as it enters maintenance mode. When the host status is Maintenance, the message in the General tab changes, providing a link which, when clicked, re-installs or upgrades the host.
- Ensure that the host remains selected, and that you are on the General tab of the Details pane. Click the Upgrade link. The Install Host dialog box displays.
- Select rhev-hypervisor.iso, which is symbolically linked to the most recent hypervisor image.
- Click OK to update and re-install the host. The dialog closes, the details of the host are updated in the Hosts tab, and the status changes. The host status transitions through the stages Installing, Reboot, Non Responsive, and Up. These are all expected, and each stage takes some time.
- Once successfully updated, the host displays a status of Up. Any virtual machines that were migrated off the host can, at this point, be migrated back to it.
Important
After a Red Hat Enterprise Virtualization Hypervisor is successfully registered to the Red Hat Enterprise Virtualization Manager and then upgraded, it may erroneously appear in the Administration Portal with the status of Install Failed. Click on the button, and the hypervisor changes to an Up status and is ready for use.
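Host activation can also be driven through the REST API in RHEV 3.x. The sketch below only assembles the curl invocation; the Manager FQDN, host UUID, and credentials are placeholders, and you should verify the endpoint against the REST API guide for your release before using it.

```shell
# Sketch: building (not executing) a REST API call that activates a host.
# The FQDN, UUID, and credentials below are placeholders for illustration.
activate_host_cmd() {
    manager="$1"
    host_id="$2"
    echo "curl -X POST -H 'Content-Type: application/xml' -d '<action/>'" \
         "-u admin@internal:password" \
         "https://${manager}/api/hosts/${host_id}/activate"
}

activate_host_cmd manager.example.com 11111111-2222-3333-4444-555555555555
```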
6.1.6. Updating Red Hat Enterprise Linux Virtualization Hosts
Red Hat Enterprise Linux virtualization hosts are updated using yum. It is highly recommended that you use yum to update your systems regularly, to ensure the timely application of security and bug fixes. All steps in this task must be run while logged in to the Red Hat Enterprise Linux virtualization host as the root user.
Procedure 6.5. Updating Red Hat Enterprise Linux Virtualization Hosts
- In the Administration Portal, navigate to the Hosts tab and select the host to be updated. Click to place it into maintenance mode.
- On the Red Hat Enterprise Linux virtualization host, run the yum command with the update parameter to update all installed packages:
# yum update
- If a package such as the kernel was updated, you must reboot the host for the update to take effect. If a package such as VDSM or libvirt was updated, you must restart that service for the update to take effect. Moreover, if the libvirt package is updated, you must also restart the VDSM service.
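The package-to-action rule above can be sketched as a small lookup. The service names used in the output (libvirtd, vdsmd) are the RHEL 6 service names and are an assumption of this sketch, not taken from the procedure text.

```shell
# Sketch: mapping an updated package name to the follow-up action described
# above. The libvirtd/vdsmd service names are assumed RHEL 6 names.
post_update_action() {
    case "$1" in
        kernel*)  echo "reboot" ;;
        libvirt*) echo "restart libvirtd and vdsmd" ;;
        vdsm*)    echo "restart vdsmd" ;;
        *)        echo "none" ;;
    esac
}

post_update_action kernel-2.6.32
post_update_action libvirt
```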
6.1.7. Updating the Red Hat Enterprise Virtualization Guest Tools
Procedure 6.6. Updating the Red Hat Enterprise Virtualization Guest Tools
- On the Manager, as the root user, use the yum command to update the rhev-guest-tools-iso package:
# yum update -y rhev-guest-tools-iso*
- When the rhev-guest-tools-iso package has been successfully updated, use the engine-iso-uploader command to upload it to your ISO storage domain. Replace [ISODomain] with the name of your ISO storage domain:
# engine-iso-uploader --iso-domain=[ISODomain] upload /usr/share/rhev-guest-tools-iso/rhev-tools-setup.iso
The rhev-tools-setup.iso file is actually a link to the most recently updated ISO file. The link is automatically changed to point to the newest ISO file every time you update the rhev-guest-tools-iso package.
- Using the web portal or REST API, attach the rhev-tools-setup.iso file to each of your guests, and from within each guest, upgrade the installed tools using the installer on the ISO.
You have now updated the rhev-tools-setup.iso file, uploaded the updated ISO to your ISO storage domain, and attached it to your virtual machines.
6.2. Upgrading to Red Hat Enterprise Virtualization 3.3
6.2.1. Red Hat Enterprise Virtualization Manager 3.3 Upgrade Overview
- Configuring channels and entitlements.
- Updating the required packages.
- Performing the upgrade.
The upgrade is performed using engine-setup, which provides an interactive interface. While the upgrade is in process, virtualization hosts and the virtual machines running on those virtualization hosts continue to operate independently. When the upgrade is complete, you can then upgrade your hosts to the latest versions of Red Hat Enterprise Linux or Red Hat Enterprise Virtualization Hypervisor.
6.2.2. Red Hat Enterprise Virtualization 3.3 Upgrade Considerations
Important
- Upgrading to version 3.3 can only be performed from version 3.2
- Users of Red Hat Enterprise Virtualization 3.1 must migrate to Red Hat Enterprise Virtualization 3.2 before attempting to upgrade to Red Hat Enterprise Virtualization 3.3.
- Red Hat Enterprise Virtualization Manager cannot be installed on the same machine as IPA
- An error message displays if the ipa-server package is installed. Red Hat Enterprise Virtualization Manager 3.3 does not support installation on the same machine as Identity Management (IdM). To resolve this issue, you must migrate the IdM configuration to another system before re-attempting the upgrade. For further information, see https://access.redhat.com/knowledge/articles/233143.
Error: IPA was found to be installed on this machine. Red Hat Enterprise Virtualization Manager 3.3 does not support installing IPA on the same machine. Please remove ipa packages before you continue.
- Upgrading to JBoss Enterprise Application Platform 6.1.0 is recommended
- Although Red Hat Enterprise Virtualization Manager 3.3 supports Enterprise Application Platform 6.0.1, upgrading to the latest supported version of JBoss is recommended. For more information on upgrading to JBoss Enterprise Application Platform 6.1.0, see Upgrade the JBoss EAP 6 RPM Installation.
- The rhevm-upgrade command has been replaced by engine-setup
- From Version 3.3, installation of Red Hat Enterprise Virtualization Manager supports otopi, a standalone, plug-in-based installation framework for setting up system components. Under this framework, the rhevm-upgrade command used during the installation process has been replaced by engine-setup and is now obsolete.
6.2.3. Upgrading to Red Hat Enterprise Virtualization Manager 3.3
If the upgrade fails, the engine-setup command attempts to roll your Red Hat Enterprise Virtualization Manager installation back to its previous state. For this reason, the channels required by Red Hat Enterprise Virtualization 3.2 must not be removed until after the upgrade is complete, as outlined below. If the upgrade fails, detailed instructions display that explain how to restore your installation.
Procedure 6.7. Upgrading to Red Hat Enterprise Virtualization Manager 3.3
- Subscribe the system to the required channels and entitlements for receiving Red Hat Enterprise Virtualization Manager 3.3 packages.
Subscription Manager
Red Hat Enterprise Virtualization 3.3 packages are provided by the rhel-6-server-rhevm-3.3-rpms repository associated with the Red Hat Enterprise Virtualization entitlement. Use the yum-config-manager command to enable the repository in your yum configuration:
# yum-config-manager --enable rhel-6-server-rhevm-3.3-rpms
Red Hat Network Classic
The Red Hat Enterprise Virtualization 3.3 packages are provided by the Red Hat Enterprise Virtualization Manager (v.3.3 x86_64) channel, also referred to as rhel-x86_64-server-6-rhevm-3.3 in Red Hat Network Classic. Use the rhn-channel command or the Red Hat Network web interface to subscribe to the Red Hat Enterprise Virtualization Manager (v.3.3 x86_64) channel:
# rhn-channel --add --channel=rhel-x86_64-server-6-rhevm-3.3
- Update the rhevm-setup package to ensure you have the most recent version of engine-setup:
# yum update rhevm-setup
- Run the engine-setup command and follow the prompts to upgrade Red Hat Enterprise Virtualization Manager:
# engine-setup
[ INFO  ] Stage: Initializing
          Welcome to the RHEV 3.3.0 upgrade.
          Please read the following knowledge article for known issues and
          updated instructions before proceeding with the upgrade.
          RHEV 3.3.0 Upgrade Guide: Tips, Considerations and Roll-back Issues
          https://access.redhat.com/site/articles/408623
          Would you like to continue with the upgrade? (Yes, No) [Yes]:
- Remove Red Hat Enterprise Virtualization Manager 3.2 channels and entitlements to ensure the system does not use any Red Hat Enterprise Virtualization Manager 3.2 packages.
Subscription Manager
Use the yum-config-manager command to disable the Red Hat Enterprise Virtualization 3.2 repository in your yum configuration:
# yum-config-manager --disable rhel-6-server-rhevm-3.2-rpms
Red Hat Network Classic
Use the rhn-channel command or the Red Hat Network web interface to remove the Red Hat Enterprise Virtualization Manager (v.3.2 x86_64) channels:
# rhn-channel --remove --channel=rhel-x86_64-server-6-rhevm-3.2
- Run the following command to ensure all packages related to Red Hat Enterprise Virtualization are up to date:
# yum update
In particular, if you are using the JBoss Application Server from JBoss Enterprise Application Platform 6.0.1, you must run the above command to upgrade to Enterprise Application Platform 6.1.
- Ensure all of your virtualization hosts are up to date and running the most recent Red Hat Enterprise Linux packages or Hypervisor images.
- Change all of your clusters to use compatibility version 3.3.
- Change all of your data centers to use compatibility version 3.3.
6.3. Upgrading to Red Hat Enterprise Virtualization Manager 3.2
6.3.1. Upgrading to Red Hat Enterprise Virtualization Manager 3.2
The upgrade to Red Hat Enterprise Virtualization Manager 3.2 is performed using the rhevm-upgrade command. Virtualization hosts, and the virtual machines running on them, continue to operate independently while the Manager is being upgraded. Once the Manager upgrade is complete, you can upgrade your hosts, if you have not already, to the latest versions of Red Hat Enterprise Linux and Red Hat Enterprise Virtualization Hypervisor.
Important
Note
If the upgrade fails, the rhevm-upgrade command attempts to roll your Red Hat Enterprise Virtualization Manager installation back to its previous state. Where this also fails, detailed instructions for manually restoring the installation are displayed.
Procedure 6.8. Upgrading to Red Hat Enterprise Virtualization Manager 3.2
Add Red Hat Enterprise Virtualization 3.2 Subscription
Ensure that the system is subscribed to the required channels and entitlements to receive Red Hat Enterprise Virtualization Manager 3.2 packages. This procedure assumes that the system is already subscribed to the required channels and entitlements to receive Red Hat Enterprise Virtualization 3.1 packages. These must also be available to complete the upgrade process.
Certificate-based Red Hat Network
The Red Hat Enterprise Virtualization 3.2 packages are provided by the rhel-6-server-rhevm-3.2-rpms repository associated with the Red Hat Enterprise Virtualization entitlement. Use the yum-config-manager command to enable the repository in your yum configuration. The yum-config-manager command must be run while logged in as the root user.
# yum-config-manager --enable rhel-6-server-rhevm-3.2-rpms
Red Hat Network Classic
The Red Hat Enterprise Virtualization 3.2 packages are provided by the Red Hat Enterprise Virtualization Manager (v.3.2 x86_64) channel, also referred to as rhel-x86_64-server-6-rhevm-3.2 in Red Hat Network Classic. Use the rhn-channel command, or the Red Hat Network Web Interface, to subscribe to the Red Hat Enterprise Virtualization Manager (v.3.2 x86_64) channel:
# rhn-channel --add --channel=rhel-x86_64-server-6-rhevm-3.2
Remove Red Hat Enterprise Virtualization 3.1 Subscription
Ensure that the system does not use any Red Hat Enterprise Virtualization Manager 3.1 packages by removing the Red Hat Enterprise Virtualization Manager 3.1 channels and entitlements.
Certificate-based Red Hat Network
Use the yum-config-manager command to disable the Red Hat Enterprise Virtualization 3.1 repository in your yum configuration. The yum-config-manager command must be run while logged in as the root user.
# yum-config-manager --disable rhel-6-server-rhevm-3.1-rpms
Red Hat Network Classic
Use the rhn-channel command, or the Red Hat Network Web Interface, to remove the Red Hat Enterprise Virtualization Manager (v.3.1 x86_64) channels:
# rhn-channel --remove --channel=rhel-x86_64-server-6-rhevm-3.1
Update the rhevm-setup Package
To ensure that you have the most recent version of the rhevm-upgrade command installed, you must update the rhevm-setup package. Log in as the root user and use yum to update the rhevm-setup package:
# yum update rhevm-setup
Run the rhevm-upgrade Command
To upgrade Red Hat Enterprise Virtualization Manager, run the rhevm-upgrade command. You must be logged in as the root user to run this command:
# rhevm-upgrade
Loaded plugins: product-id, rhnplugin
Info: RHEV Manager 3.1 to 3.2 upgrade detected
Checking pre-upgrade conditions...(This may take several minutes)
- If the ipa-server package is installed, an error message is displayed. Red Hat Enterprise Virtualization Manager 3.2 does not support installation on the same machine as Identity Management (IdM).
Error: IPA was found to be installed on this machine. Red Hat Enterprise Virtualization Manager 3.2 does not support installing IPA on the same machine. Please remove ipa packages before you continue.
To resolve this issue, you must migrate the IdM configuration to another system before re-attempting the upgrade. For further information, see https://access.redhat.com/knowledge/articles/233143.
- Ensure that all of your virtualization hosts are up to date and running the most recent Red Hat Enterprise Linux packages or Hypervisor images.
- Change all of your clusters to use compatibility version 3.2.
- Change all of your data centers to use compatibility version 3.2.
6.4. Upgrading to Red Hat Enterprise Virtualization Manager 3.1
6.4.1. Upgrading to Red Hat Enterprise Virtualization Manager 3.1
The upgrade to Red Hat Enterprise Virtualization Manager 3.1 is performed using the rhevm-upgrade command. Virtualization hosts, and the virtual machines running on them, continue to operate independently while the Manager is being upgraded. Once the Manager upgrade is complete, you can upgrade your hosts, if you have not already, to the latest versions of Red Hat Enterprise Linux and Red Hat Enterprise Virtualization Hypervisor.
Important
Important
Note
If the upgrade fails, the rhevm-upgrade command attempts to roll your Red Hat Enterprise Virtualization Manager installation back to its previous state. Where this also fails, detailed instructions for manually restoring the installation are displayed.
Procedure 6.9. Upgrading to Red Hat Enterprise Virtualization Manager 3.1
Red Hat JBoss Enterprise Application Platform 6 Subscription
Ensure that the system is subscribed to the required channels and entitlements to receive Red Hat JBoss Enterprise Application Platform 6 packages. Red Hat JBoss Enterprise Application Platform 6 is a required dependency of Red Hat Enterprise Virtualization Manager 3.1.
Certificate-based Red Hat Network
The Red Hat JBoss Enterprise Application Platform 6 packages are provided by the Red Hat JBoss Enterprise Application Platform entitlement in certificate-based Red Hat Network. Use the subscription-manager command to ensure that the system is subscribed to the Red Hat JBoss Enterprise Application Platform entitlement:
# subscription-manager list
Red Hat Network Classic
The Red Hat JBoss Enterprise Application Platform 6 packages are provided by the Red Hat JBoss Application Platform (v 6) for 6Server x86_64 channel, also referred to as jbappplatform-6-x86_64-server-6-rpm, in Red Hat Network Classic. The Channel Entitlement Name for this channel is Red Hat JBoss Enterprise Application Platform (v 4, zip format). Use the rhn-channel command, or the Red Hat Network Web Interface, to subscribe to the Red Hat JBoss Application Platform (v 6) for 6Server x86_64 channel.
Add Red Hat Enterprise Virtualization 3.1 Subscription
Ensure that the system is subscribed to the required channels and entitlements to receive Red Hat Enterprise Virtualization Manager 3.1 packages.
Certificate-based Red Hat Network
The Red Hat Enterprise Virtualization 3.1 packages are provided by the rhel-6-server-rhevm-3.1-rpms repository associated with the Red Hat Enterprise Virtualization entitlement. Use the yum-config-manager command to enable the repository in your yum configuration. The yum-config-manager command must be run while logged in as the root user.
# yum-config-manager --enable rhel-6-server-rhevm-3.1-rpms
Red Hat Network Classic
The Red Hat Enterprise Virtualization 3.1 packages are provided by the Red Hat Enterprise Virtualization Manager (v.3.1 x86_64) channel, also referred to as rhel-x86_64-server-6-rhevm-3.1 in Red Hat Network Classic. Use the rhn-channel command, or the Red Hat Network Web Interface, to subscribe to the Red Hat Enterprise Virtualization Manager (v.3.1 x86_64) channel.
Remove Red Hat Enterprise Virtualization 3.0 Subscription
Ensure that the system does not use any Red Hat Enterprise Virtualization Manager 3.0 packages by removing the Red Hat Enterprise Virtualization Manager 3.0 channels and entitlements.
Certificate-based Red Hat Network
Use the yum-config-manager command to disable the Red Hat Enterprise Virtualization 3.0 repositories in your yum configuration. The yum-config-manager command must be run while logged in as the root user.
# yum-config-manager --disable rhel-6-server-rhevm-3-rpms
# yum-config-manager --disable jb-eap-5-for-rhel-6-server-rpms
Red Hat Network Classic
Use the rhn-channel command, or the Red Hat Network Web Interface, to remove the Red Hat Enterprise Virtualization Manager (v.3.0 x86_64) channels:
# rhn-channel --remove --channel=rhel-6-server-rhevm-3
# rhn-channel --remove --channel=jbappplatform-5-x86_64-server-6-rpm
Update the rhevm-setup Package
To ensure that you have the most recent version of the rhevm-upgrade command installed, you must update the rhevm-setup package. Log in as the root user and use yum to update the rhevm-setup package:
# yum update rhevm-setup
Run the rhevm-upgrade Command
To upgrade Red Hat Enterprise Virtualization Manager, run the rhevm-upgrade command. You must be logged in as the root user to run this command:
# rhevm-upgrade
Loaded plugins: product-id, rhnplugin
Info: RHEV Manager 3.0 to 3.1 upgrade detected
Checking pre-upgrade conditions...(This may take several minutes)
- If the ipa-server package is installed, an error message is displayed. Red Hat Enterprise Virtualization Manager 3.1 does not support installation on the same machine as Identity Management (IdM).
Error: IPA was found to be installed on this machine. Red Hat Enterprise Virtualization Manager 3.1 does not support installing IPA on the same machine. Please remove ipa packages before you continue.
To resolve this issue, you must migrate the IdM configuration to another system before re-attempting the upgrade. For further information, see https://access.redhat.com/knowledge/articles/233143.
- A list of packages that depend on Red Hat JBoss Enterprise Application Platform 5 is displayed. These packages must be removed to install Red Hat JBoss Enterprise Application Platform 6, which is required by Red Hat Enterprise Virtualization Manager 3.1.
Warning: the following packages will be removed if you proceed with the upgrade:
* objectweb-asm
Would you like to proceed? (yes|no):
You must enter yes to proceed with the upgrade, removing the listed packages.
- Ensure that all of your virtualization hosts are up to date and running the most recent Red Hat Enterprise Linux packages or Hypervisor images.
- Change all of your clusters to use compatibility version 3.1.
- Change all of your data centers to use compatibility version 3.1.
6.5. Post-upgrade Tasks
6.5.1. Features Requiring a Compatibility Upgrade to Red Hat Enterprise Virtualization 3.3
Table 6.1. Features Requiring a Compatibility Upgrade to Red Hat Enterprise Virtualization 3.3
| Feature | Description |
|---|---|
| Libvirt-to-libvirt virtual machine migration | Perform virtual machine migration using libvirt-to-libvirt communication. This is safer and more secure, and has fewer host configuration requirements, than native KVM migration, but has a higher overhead on the host CPU. |
| Isolated network to carry virtual machine migration traffic | Separates virtual machine migration traffic from other traffic types, like management and display traffic. Reduces the chance of migrations causing a network flood that disrupts other important traffic types. |
| Define a gateway per logical network | Each logical network can have a gateway defined separately from the management network gateway. This allows more customizable network topologies. |
| Snapshots including RAM | Snapshots now include the state of a virtual machine's memory as well as its disks. |
| Optimized iSCSI device driver for virtual machines | Virtual machines can now consume iSCSI storage as virtual hard disks using an optimized device driver. |
| Host support for MOM management of memory overcommitment | MOM is a policy-driven tool that can be used to manage overcommitment on hosts. Currently MOM supports control of memory ballooning and KSM. |
| GlusterFS data domains | Native support for the GlusterFS protocol was added as a way to create storage domains, allowing Gluster data centers to be created. |
| Custom device property support | In addition to defining custom properties of virtual machines, you can also define custom properties of virtual machine devices. |
| Multiple monitors using a single virtual PCI device | Drive multiple monitors using a single virtual PCI device, rather than one PCI device per monitor. |
| Updatable storage server connections | It is now possible to edit the storage server connection details of a storage domain. |
| Check virtual hard disk alignment | Check whether a virtual disk, the filesystem installed on it, and its underlying storage are aligned. If they are not aligned, there may be a performance penalty. |
| Extendable virtual machine disk images | You can now grow your virtual machine disk image when it fills up. |
| OpenStack Image Service integration | Red Hat Enterprise Virtualization supports the OpenStack Image Service. You can import images from and export images to an Image Service repository. |
| Gluster hook support | You can manage Gluster hooks, which extend volume life cycle events, from Red Hat Enterprise Virtualization Manager. |
| Gluster host UUID support | This feature allows a Gluster host to be identified by the Gluster server UUID generated by Gluster, in addition to identifying a Gluster host by IP address. |
| Network quality of service (QoS) support | Limit the inbound and outbound network traffic at the virtual NIC level. |
| Cloud-Init support | Cloud-Init allows you to automate early configuration tasks in your virtual machines, including setting host names, authorized keys, and more. |
6.5.2. Changing the Cluster Compatibility Version
Prerequisites:
Note
Procedure 6.10. Changing the Cluster Compatibility Version
- Log in to the Administration Portal as the administrative user. By default this is the admin user.
- Click the Clusters tab.
- Select the cluster that you wish to change from the list displayed. If the list of clusters is too long to filter visually, perform a search to locate the desired cluster.
- Click the button.
- Change the Compatibility Version to the desired value.
- Click .
6.5.3. Changing the Data Center Compatibility Version
Prerequisites:
Procedure 6.11. Changing the Data Center Compatibility Version
- Log in to the Administration Portal as the administrative user. By default this is the admin user.
- Click the Data Centers tab.
- Select the data center that you wish to change from the list displayed. If the list of data centers is too long to filter visually, perform a search to locate the desired data center.
- Click the button.
- Change the Compatibility Version to the desired value.
- Click .
Part III. Installing Virtualization Hosts
Table of Contents
- 7. Introduction to Virtualization Hosts
- 8. Installing Red Hat Enterprise Virtualization Hypervisor Hosts
- 8.1. Red Hat Enterprise Virtualization Hypervisor Installation Overview
- 8.2. Installing the Red Hat Enterprise Virtualization Hypervisor Packages
- 8.3. Preparing Hypervisor Installation Media
- 8.4. Installing the Hypervisor
- 8.5. Configuring the Hypervisor
- 8.5.1. Logging into the Hypervisor
- 8.5.2. Selecting Hypervisor Keyboard
- 8.5.3. Viewing Hypervisor Status
- 8.5.4. Configuring Hypervisor Network
- 8.5.5. Configuring Hypervisor Security
- 8.5.6. Configuring Hypervisor Simple Network Management Protocol
- 8.5.7. Configuring Hypervisor Common Information Model
- 8.5.8. Configuring Logging
- 8.5.9. Configuring the Hypervisor for Red Hat Network
- 8.5.10. Configuring Hypervisor Kernel Dumps
- 8.5.11. Configuring Hypervisor Remote Storage
- 8.6. Attaching the Hypervisor to the Red Hat Enterprise Virtualization Manager
- 9. Installing Red Hat Enterprise Linux Hosts
Chapter 7. Introduction to Virtualization Hosts
7.2. Introduction to Virtualization Hosts
Before installing virtualization hosts, ensure that:
- all virtualization hosts meet the hardware requirements, and
- you have successfully completed installation of the Red Hat Enterprise Virtualization Manager.
Important
Important
Chapter 8. Installing Red Hat Enterprise Virtualization Hypervisor Hosts
- 8.1. Red Hat Enterprise Virtualization Hypervisor Installation Overview
- 8.2. Installing the Red Hat Enterprise Virtualization Hypervisor Packages
- 8.3. Preparing Hypervisor Installation Media
- 8.4. Installing the Hypervisor
- 8.5. Configuring the Hypervisor
- 8.5.1. Logging into the Hypervisor
- 8.5.2. Selecting Hypervisor Keyboard
- 8.5.3. Viewing Hypervisor Status
- 8.5.4. Configuring Hypervisor Network
- 8.5.5. Configuring Hypervisor Security
- 8.5.6. Configuring Hypervisor Simple Network Management Protocol
- 8.5.7. Configuring Hypervisor Common Information Model
- 8.5.8. Configuring Logging
- 8.5.9. Configuring the Hypervisor for Red Hat Network
- 8.5.10. Configuring Hypervisor Kernel Dumps
- 8.5.11. Configuring Hypervisor Remote Storage
- 8.6. Attaching the Hypervisor to the Red Hat Enterprise Virtualization Manager
8.1. Red Hat Enterprise Virtualization Hypervisor Installation Overview
- The Red Hat Enterprise Virtualization Hypervisor must be installed on a physical server. It must not be installed in a Virtual Machine.
- The installation process will reconfigure the selected storage device and destroy all data. Therefore, ensure that any data to be retained is successfully backed up before proceeding.
- All Hypervisors in an environment must have unique hostnames and IP addresses, in order to avoid network conflicts.
- Instructions for using Network (PXE) Boot to install the Hypervisor are contained in the Red Hat Enterprise Linux - Installation Guide, available at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux.
- Red Hat Enterprise Virtualization Hypervisors can use Storage Attached Networks (SANs) and other network storage for storing virtualized guest images. However, a local storage device is required for installing and booting the Hypervisor.
Note
8.2. Installing the Red Hat Enterprise Virtualization Hypervisor Packages
The Red Hat Enterprise Virtualization Hypervisor (v.6 x86_64) Red Hat Network channel contains the Hypervisor packages. The Hypervisor itself is contained in the rhev-hypervisor6 package. Additional tools supporting USB and PXE installations are also installed as dependencies. You must install the Hypervisor packages on the system that you intend to use to create Hypervisor boot media.
Procedure 8.1. Installing the Red Hat Enterprise Virtualization Hypervisor Packages
Subscribing to Download the Hypervisor using Certificate-Based RHN
Identify Available Entitlement Pools
To subscribe the system to Red Hat Enterprise Virtualization, you must locate the identifier for the relevant entitlement pool. Use the list action of the subscription-manager command to find these. To identify available subscription pools for Red Hat Enterprise Virtualization, use the command:
# subscription-manager list --available | grep -A8 "Red Hat Enterprise Virtualization"
Attach Entitlement Pools to the System
Using the pool identifiers located in the previous step, attach the Red Hat Enterprise Linux Server and Red Hat Enterprise Virtualization entitlements to the system. Use the attach parameter of the subscription-manager command, replacing [POOLID] with each of the pool identifiers:
# subscription-manager attach --pool=[POOLID]
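The list-then-attach flow above can be sketched with standard text tools. This is an illustration only: the sample output fragment and the pool ID in it are invented placeholders, not real subscription data.

```shell
# Illustrative only: a fabricated fragment of `subscription-manager list
# --available` output. The pool ID below is a made-up placeholder.
sample_output='Subscription Name: Red Hat Enterprise Virtualization
Pool ID:           0123456789abcdef0123456789abcdef
Available:         1'

# Extract the identifier the same way you would from real output.
pool_id=$(printf '%s\n' "$sample_output" | awk '/Pool ID:/ {print $3}')
echo "$pool_id"

# The real attach step would then be:
#   subscription-manager attach --pool="$pool_id"
```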
Subscribing to Download the Hypervisor using RHN Classic
- Log on to Red Hat Network (http://rhn.redhat.com).
- Move the mouse cursor over the Subscriptions link at the top of the page, and then click Registered Systems in the menu that appears.
- Select the system to which you are adding channels from the list presented on the screen, by clicking the name of the system.
- Click Alter Channel Subscriptions in the Subscribed Channels section of the screen.
- Select the Red Hat Enterprise Virtualization Hypervisor (v.6 x86_64) channel from the list presented on the screen, then click the Change Subscription button to finalize the change.
- Log in to the system on which the Red Hat Enterprise Virtualization Manager is installed. You must log in as the root user.
- Use yum to install rhev-hypervisor6:
# yum install rhev-hypervisor6
- Use yum to install livecd-tools:
# yum install livecd-tools
The Hypervisor ISO image is installed to the /usr/share/rhev-hypervisor/ directory. The livecd-iso-to-disk and livecd-iso-to-pxeboot scripts are installed to the /usr/bin directory.
Note
/usr/share/rhev-hypervisor/rhev-hypervisor.iso is now a symbolic link to a uniquely-named version of the Hypervisor ISO image, such as /usr/share/rhev-hypervisor/rhev-hypervisor-6.4-20130321.0.el6ev.iso. Different versions of the image can now be installed alongside each other, allowing administrators to run and maintain a cluster on a previous version of the Hypervisor while upgrading another cluster for testing.
A second symbolic link, /usr/share/rhev-hypervisor/rhevh-latest-6.iso, is also created. This link also targets the most recently installed version of the Red Hat Enterprise Virtualization Hypervisor ISO image.
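The symbolic-link arrangement described above can be inspected with readlink. The sketch below recreates the layout with throwaway files in a temporary directory; the version number in the file name is simply the example used in the text.

```shell
# Sketch: recreate the link layout in a temporary directory and resolve it.
tmpdir=$(mktemp -d)
touch "$tmpdir/rhev-hypervisor-6.4-20130321.0.el6ev.iso"
ln -s "$tmpdir/rhev-hypervisor-6.4-20130321.0.el6ev.iso" "$tmpdir/rhev-hypervisor.iso"

# readlink -f reveals which uniquely-named image the generic link targets.
target=$(basename "$(readlink -f "$tmpdir/rhev-hypervisor.iso")")
echo "$target"
rm -rf "$tmpdir"
```

On a real system the same command against /usr/share/rhev-hypervisor/rhev-hypervisor.iso reports which Hypervisor version was installed most recently.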
8.3. Preparing Hypervisor Installation Media
8.3.1. Preparing USB Hypervisor Installation Media
8.3.1.1. Preparing a Hypervisor USB Storage Device
Note
See Also:
8.3.1.2. Preparing USB Installation Media Using livecd-iso-to-disk
The livecd-iso-to-disk utility included in the livecd-tools package can be used to write a Hypervisor or other disk image to a USB storage device. Once a Hypervisor disk image has been written to a USB storage device with this utility, systems that support booting via USB can boot the Hypervisor using the USB storage device.
The basic syntax of the livecd-iso-to-disk utility is as follows:
# livecd-iso-to-disk [image] [device]
The default Hypervisor image location is /usr/share/rhev-hypervisor/rhev-hypervisor.iso. The livecd-iso-to-disk utility requires devices to be formatted with the FAT or EXT3 file system.
Note
The livecd-iso-to-disk utility uses a FAT or EXT3 formatted partition or block device.
Note
When a USB storage device is not formatted with a partition table, use the name of the device itself, such as /dev/sdb. When a USB storage device is formatted with a partition table, use the name of the partition, such as /dev/sdb1.
Procedure 8.2. Preparing USB Installation Media Using livecd-iso-to-disk
- Install the rhev-hypervisor package to download the latest version of the Hypervisor.
- Use the livecd-iso-to-disk utility to copy the disk image, located in the /usr/share/rhev-hypervisor/ directory, to the USB storage device. The --format parameter formats the USB device. The --reset-mbr parameter initializes the Master Boot Record (MBR).
Example 8.1. Use of livecd-iso-to-disk
This example demonstrates the use of livecd-iso-to-disk to write to a USB storage device named /dev/sdc and make the USB storage device bootable.
# livecd-iso-to-disk --format --reset-mbr /usr/share/rhev-hypervisor/rhev-hypervisor.iso /dev/sdc
Verifying image...
/usr/share/rhev-hypervisor/rhev-hypervisor.iso: eccc12a0530b9f22e5ba62b848922309
Fragment sums: 8688f5473e9c176a73f7a37499358557e6c397c9ce2dafb5eca5498fb586
Fragment count: 20
Press [Esc] to abort check.
Checking: 100.0%
The media check is complete, the result is: PASS.
It is OK to use this media.
WARNING: THIS WILL DESTROY ANY DATA ON /dev/sdc!!!
Press Enter to continue or ctrl-c to abort
/dev/sdc: 2 bytes were erased at offset 0x000001fe (dos): 55 aa
Waiting for devices to settle...
mke2fs 1.42.7 (21-Jan-2013)
Filesystem label=LIVE
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
488640 inodes, 1953280 blocks
97664 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2000683008
60 block groups
32768 blocks per group, 32768 fragments per group
8144 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Copying live image to target device.
squashfs.img
163360768 100%  184.33MB/s    0:00:00 (xfer#1, to-check=0/1)
sent 163380785 bytes  received 31 bytes  108920544.00 bytes/sec
total size is 163360768  speedup is 1.00
osmin.img
4096 100%    0.00kB/s    0:00:00 (xfer#1, to-check=0/1)
sent 4169 bytes  received 31 bytes  8400.00 bytes/sec
total size is 4096  speedup is 0.98
Updating boot config file
Installing boot loader
/media/tgttmp.q6aZdS/syslinux is device /dev/sdc
Target device is now set up with a Live image!
8.3.1.3. Preparing USB Installation Media Using dd
The dd command can also be used to install a Hypervisor onto a USB storage device. Media created with the command can boot the Hypervisor on systems which support booting via USB. Red Hat Enterprise Linux provides dd as part of the coreutils package. Versions of dd are also available on a wide variety of Linux and Unix operating systems.
Windows users can obtain the dd command through installation of Red Hat Cygwin, a free Linux-like environment for Windows.
Usage of the dd command follows this structure:
# dd if=image of=device
The device parameter is the device name of the USB storage device to install to. The image parameter is an ISO image of the Hypervisor. The default Hypervisor image location is /usr/share/rhev-hypervisor/rhev-hypervisor.iso. The dd command does not make assumptions as to the format of the device, as it performs a low-level copy of the raw data in the selected image.
See Also:
8.3.1.4. Preparing USB Installation Media Using dd on Linux Systems
The dd command available on most Linux systems is suitable for creating USB installation media to boot and install the Hypervisor.
Procedure 8.3. Preparing USB Installation Media using dd on Linux Systems
- Install the rhev-hypervisor package.
# yum install rhev-hypervisor
- Use the dd command to copy the image file to the disk.
Example 8.2. Use of dd
This example uses a USB storage device named /dev/sdc.
# dd if=/usr/share/rhev-hypervisor/rhev-hypervisor.iso of=/dev/sdc
243712+0 records in
243712+0 records out
124780544 bytes (125 MB) copied, 56.3009 s, 2.2 MB/s
Warning
The dd command will overwrite all data on the device specified for the of parameter. Any existing data on the device will be destroyed. Ensure that the correct device is specified and that it contains no valuable data before invoking the dd command.
8.3.1.5. Preparing USB Installation Media Using dd on Windows Systems
The dd command, available on Windows systems with Red Hat Cygwin installed, is suitable for creating USB installation media to boot and install the Hypervisor.
Procedure 8.4. Preparing USB Installation Media using dd on Windows Systems
- Access http://www.redhat.com/services/custom/cygwin/ and click the Red Hat Cygwin official installation utility link. The rhsetup.exe executable will download.
- As the Administrator user, run the downloaded rhsetup.exe executable. The Red Hat Cygwin installer will display.
- Follow the prompts to complete a standard installation of Red Hat Cygwin. The Coreutils package within the Base package group provides the dd utility. This is automatically selected for installation.
- Copy the rhev-hypervisor.iso file downloaded from Red Hat Network to C:\rhev-hypervisor.iso.
- As the Administrator user, run Red Hat Cygwin from the desktop. A terminal window will appear.
Important
On the Windows 7 and Windows Server 2008 platforms it is necessary to right-click the Red Hat Cygwin icon and select the Run as Administrator... option to ensure the application runs with the correct permissions.
- In the terminal, run cat /proc/partitions to see the drives and partitions currently visible to the system.
Example 8.3. View of Disk Partitions Attached to System
Administrator@test / $ cat /proc/partitions
major minor  #blocks  name
   8     0  15728640  sda
   8     1    102400  sda1
   8     2  15624192  sda2
cat /proc/partitionscommand and compare the output to that of the previous run. A new entry will appear which designates the USB storage device.Example 8.4. View of Disk Partitions Attached to System
Administrator@test / $ cat /proc/partitions major minor #blocks name 8 0 15728640 sda 8 1 102400 sda1 8 2 15624192 sda2 8 16 524288 sdb - Use the
ddcommand to copy therhev-hypervisor.isofile to the disk. The example uses a USB storage device named/dev/sdb. Replace sdb with the correct device name for the USB storage device to be used.Example 8.5. Use of
ddCommand Under Red Hat CygwinAdministrator@test / $ dd if=/cygdrive/c/rhev-hypervisor.iso of=/dev/sdb& pid=$!
The provided command starts the transfer in the background and saves the process identifier so that it can be used to monitor the progress of the transfer. Refer to the next step for the command used to check the progress of the transfer.Warning
Theddcommand will overwrite all data on the device specified for theofparameter. Any existing data on the device will be destroyed. Ensure that the correct device is specified and that it contains no valuable data before invocation of theddcommand. - Transfer of the ISO file to the USB storage device with the version of
ddincluded with Red Hat Cygwin can take significantly longer than the equivalent on other platforms.To check the progress of the transfer in the same terminal window that the process was started in send it theUSR1signal. This can be achieved by issuing thekillcommand in the terminal window as follows:kill -USR1 $pid
- When the transfer operation completes the final record counts will be displayed.
Example 8.6. Result of dd Initiated Copy
210944+0 records in
210944+0 records out
108003328 bytes (108 MB) copied, 2035.82 s, 53.1 kB/s
[1]+  Done  dd if=/cygdrive/c/rhev-hypervisor.iso of=/dev/sdb
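The record counts that dd reports multiply out to the byte total it prints: with dd's default block size of 512 bytes, the figures in Example 8.6 can be checked with simple shell arithmetic.

```shell
# dd copies in 512-byte blocks unless bs= is given; "210944+0 records"
# means 210944 whole blocks and 0 partial ones.
records=210944
block_size=512
bytes=$((records * block_size))
echo "$bytes bytes"
```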
8.3.2. Preparing Optical Hypervisor Installation Media
Optical Hypervisor installation media is created using the wodim command, which is part of the wodim package.
Procedure 8.5. Preparing Optical Hypervisor Installation Media
- Verify that the wodim package is installed on the system. If the package version is in the output, the package is available. If nothing is listed, install wodim:
# yum install wodim
- Insert a blank CD-ROM or DVD into your CD or DVD writer.
- Record the ISO file to the disc. The wodim command uses the following syntax:
wodim dev=device image
This example uses the first CD-RW (/dev/cdrw) device available and the default Hypervisor image location, /usr/share/rhev-hypervisor/rhev-hypervisor.iso.
Example 8.8. Use of wodim Command
wodimCommand# wodim dev=/dev/cdrw /usr/share/rhev-hypervisor/rhev-hypervisor.iso
The Hypervisor boot media include a checksum (isomd5sum) that is used to verify the integrity of the installation media every time the Hypervisor is booted. If media errors are reported in the boot sequence, you have a bad CD-ROM. Follow the procedure above to create a new CD-ROM or DVD.
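The integrity check described above compares a checksum recorded on the media against the media contents at boot time. The same idea can be sketched with a plain md5sum over a throwaway file; this is a stand-in for illustration, not the real isomd5sum tooling.

```shell
# Stand-in for the media: a temporary file with known contents.
tmpfile=$(mktemp)
printf 'hypervisor image bytes' > "$tmpfile"

# Checksum recorded at burn time versus checksum computed at boot time;
# intact media yields identical values.
recorded=$(md5sum "$tmpfile" | awk '{print $1}')
observed=$(md5sum "$tmpfile" | awk '{print $1}')
[ "$recorded" = "$observed" ] && result=PASS || result=FAIL
echo "$result"
rm -f "$tmpfile"
```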
8.3.3. Booting from Hypervisor Installation Media
8.3.3.1. Booting the Hypervisor from USB Installation Media
Procedure 8.6. Booting the Hypervisor from USB Installation Media
- Enter the system's BIOS menu to enable USB storage device booting if not already enabled.
- Enable USB booting if this feature is disabled.
- Set USB storage devices to be the first boot device.
- Shut down the system.
- Insert the USB storage device that contains the Hypervisor boot image.
- Restart the system.
See Also:
8.3.3.2. Booting the Hypervisor from Optical Installation Media
Procedure 8.7. Booting the Hypervisor from Optical Installation Media
- Ensure that the system's BIOS is configured to boot from the CD-ROM or DVD-ROM drive first. For many systems this is the default.
Note
Refer to your manufacturer's manuals for further information on modifying the system's BIOS boot configuration. - Insert the Hypervisor CD-ROM in the CD-ROM or DVD-ROM drive.
- Reboot the system.
See Also:
8.3.3.3. Troubleshooting BIOS Settings and Boot Process
- 3.5 inch diskette
- CD-ROM or DVD device
- Local hard disk
Warning
Procedure 8.8. Troubleshooting BIOS Settings and Boot Process
- Boot the Hypervisor from removable media. For example, a USB stick or CD-ROM.
- When the message Automatic boot in 30 seconds... is displayed and begins counting down from thirty, press any key to skip the automatic boot process.
- Add the rescue parameter to the list of boot parameters shown on the screen, then press Enter. This action will boot the Hypervisor in rescue mode.
# grep -E "svm|vmx" /proc/cpuinfo
Output displays if the processor has the hardware virtualization extensions. - Verify that the KVM modules load by default:
# lsmod | grep kvm
If the output includes kvm_intel or kvm_amd, then the kvm hardware virtualization modules are loaded and the system meets the requirements. If the output does not include the required modules, you must check that your hardware supports the virtualization extensions and that they are enabled in the system's BIOS.
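The CPU-flag check above can be seen in isolation by running the same grep against a canned flags line; the sample below is illustrative, since real hardware is read from /proc/cpuinfo itself.

```shell
# Canned /proc/cpuinfo flags line (illustrative; vmx = Intel VT,
# svm = AMD-V). On real hardware: grep -E "svm|vmx" /proc/cpuinfo
sample_flags='flags : fpu vme de pse tsc msr pae vmx est tm2 ssse3'

if printf '%s\n' "$sample_flags" | grep -Eq 'svm|vmx'; then
    virt_ext=present
else
    virt_ext=absent
fi
echo "virtualization extensions: $virt_ext"
```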
8.3.3.4. Choosing Hypervisor Boot Options
Procedure 8.9. Choosing Hypervisor Boot Options
- Insert the Red Hat Enterprise Virtualization Hypervisor installation media.
- Power on the system and ensure the system boots from the installation media.
- The boot splash screen appears. If no input is provided, the Hypervisor installation will commence in 30 seconds, using default kernel parameters.
- To modify the boot options, press any key. The boot menu will display.The following boot options are available:
- Install or Upgrade
- Boot the Hypervisor installer.
- Install (Basic Video)
- Install or Upgrade the Hypervisor, using basic video mode.
- Install or Upgrade with Serial Console
- Install or Upgrade the Hypervisor, with the console redirected to a serial device attached to
/dev/ttyS0. - Reinstall
- Reinstall the Hypervisor.
- Reinstall (Basic Video)
- Reinstall the Hypervisor, using basic video mode.
- Reinstall with Serial Console
- Reinstall the Hypervisor, with the console redirected to a serial device attached to
/dev/ttyS0. - Boot from Local Drive
- Boot the operating system installed on the first local drive.
Select the appropriate boot option from the boot menu. - Press the Enter key to boot the Hypervisor with the default kernel parameters for the option selected; or
- press the Tab key to edit the kernel parameters. In edit mode you are able to add or remove kernel parameters. Kernel parameters must be separated from each other by a space. Once the desired kernel parameters have been set press Enter to boot the system. Alternatively pressing Esc reverts any changes that you have made to the kernel parameters.
8.4. Installing the Hypervisor
8.4.1. Hypervisor Menu Actions
- The directional keys (Up, Down, Left, Right) are used to select different controls on the screen. Alternatively the Tab key cycles through the controls on the screen which are enabled.
- Text fields are represented by a series of underscores (_). To enter data in a text field select it and begin entering data.
- Buttons are represented by labels which are enclosed within a pair of angle brackets (< and >). To activate a button ensure it is selected and press Enter or Space.
- Boolean options are represented by an asterisk (*) or a space character enclosed within a pair of square brackets ([ and ]). When the value contained within the brackets is an asterisk then the option is set, otherwise it is not. To toggle a Boolean option on or off press Space while it is selected.
8.4.2. Installing the Hypervisor
- Interactive installation.
- Unattended installation.
Procedure 8.10. Installing the Hypervisor Interactively
- Use the prepared boot media to boot the machine on which the Hypervisor is to be installed.
- Select Install Hypervisor and press Enter to begin the installation process.
- The first screen that appears allows you to configure the appropriate keyboard layout for your locale. Use the arrow keys to highlight the appropriate option and press Enter to save your selection.
Example 8.9. Keyboard Layout Configuration
Keyboard Layout Selection
Available Keyboard Layouts
Swiss German (latin1)
Turkish
U.S. English
U.S. International
...
(Hit enter to select a layout)
<Quit> <Back> <Continue>
Boot Disk
The first disk selection screen is used to select the disk from which the Hypervisor will boot. The Hypervisor's boot loader will be installed to the Master Boot Record (MBR) of the disk that is selected on this screen. The Hypervisor attempts to automatically detect the disks attached to the system and presents the list from which to choose the boot device. Alternatively, you can manually select a device by specifying a block device name using the Other Device option.Important
The selected disk must be identified as a boot device and appear in the boot order either in the system's BIOS or in a pre-existing boot loader.
Automatically Detected Device Selection
- Select the entry for the disk the Hypervisor is to boot from in the list and press Enter.
- Select the disk and press Enter. This action saves the boot device selection and starts the next step of installation.
Manual Device Selection
- Select Other device and press Enter.
- When prompted to Please select the disk to use for booting RHEV-H, enter the name of the block device from which the Hypervisor should boot.
- Press Enter. This action saves the boot device selection and starts the next step of installation.
- The disk or disks selected for installation will be those to which the Hypervisor itself is installed. The Hypervisor attempts to automatically detect the disks attached to the system and presents the list from which installation devices are chosen.
Warning
All data on the selected storage devices will be destroyed.
- Select each disk on which the Hypervisor is to be installed and press Space to toggle it to enabled. Where other devices are to be used for installation, either solely or in addition to those which are listed automatically, use Other Device.
- Select the button and press Enter to continue.
- Where the Other Device option was specified, a further prompt will appear. Enter the name of each additional block device to use for Hypervisor installation, separated by a comma. Once all required disks have been selected, select the button and press Enter.
Example 8.11. Other Device Selection
Please enter one or more disks to use for installing RHEV-H. Multiple devices can be separated by comma. Device path: /dev/mmcblk0,/dev/mmcblk1______________
Once the installation disks have been selected, the next stage of the installation starts.
- The next screen allows you to configure storage for the Hypervisor.
- Select or clear the Fill disk with Data partition check box. Clearing this check box displays a field showing the remaining space on the drive and allows you to specify the amount of space to be allocated to data storage.
- Enter the preferred values for Swap, Config, and Logging.
- If you selected the Fill disk with Data partition check box, the Data field is automatically set to 0. If the check box was cleared, you can enter a whole number up to the value of the Remaining Space field. Entering a value of -1 fills all remaining space.
- The Hypervisor requires that a password be set to protect local console access by the admin user. The installation script prompts you to enter the preferred password in both the Password and Confirm Password fields. Use a strong password. Strong passwords comprise a mix of uppercase, lowercase, numeric, and punctuation characters. They are six or more characters long and do not contain dictionary words. Once a strong password has been entered, select and press Enter to install the Hypervisor on the selected disks.
Once installation is complete, the message RHEV Hypervisor Installation Finished Successfully will be displayed. Select the button and press Enter to reboot the system.
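The Remaining Space arithmetic in the storage configuration step above is a simple subtraction of the Swap, Config, and Logging allocations from the disk size, with -1 for Data claiming whatever is left. The sizes below are invented round numbers for illustration, not recommendations.

```shell
# Illustrative sizes in MB; the installer performs the same subtraction.
disk_mb=20480
swap_mb=4096
config_mb=8
logging_mb=2048

remaining_mb=$((disk_mb - swap_mb - config_mb - logging_mb))

# Entering -1 for Data allocates everything that is left.
data_mb=-1
[ "$data_mb" -eq -1 ] && data_mb=$remaining_mb
echo "Data partition: ${data_mb} MB"
```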
Note
Multipath is enabled by default for all disks where scsi_id functions with multipath. Devices where this is not the case include USB storage and some older ATA disks.
8.5. Configuring the Hypervisor
8.5.1. Logging into the Hypervisor
Procedure 8.11. Logging into the Hypervisor
- Boot the Hypervisor. A login prompt appears:
Please login as 'admin' to configure the node
localhost login:
- Enter the user name admin and press Enter.
- Enter the password you set during Hypervisor installation and press Enter.
You are now logged in to the Hypervisor as the admin user.
8.5.2. Selecting Hypervisor Keyboard
Procedure 8.12. Configuring the Hypervisor Keyboard Layout
- Select a keyboard layout from the list provided.
Keyboard Layout Selection
Choose the Keyboard Layout you would like to apply to this system.
Current Active Keyboard Layout: U.S. English
Available Keyboard Layouts
Swiss German (latin1)
Turkish
U.S. English
U.S. International
Ukrainian
...
<Save>
- Select and press Enter to save the selection.
8.5.3. Viewing Hypervisor Status
- The current status of the Hypervisor.
- The current status of networking.
- The destinations of logs and reports.
- The number of active virtual machines.
- : Displays the RSA host key fingerprint and host key of the Hypervisor.
- : Displays details on the CPU used by the Hypervisor such as the CPU name and type.
- : Locks the Hypervisor. The user name and password must be entered to unlock the Hypervisor.
- : Logs off the current user.
- : Restarts the Hypervisor.
- : Turns the Hypervisor off.
8.5.4. Configuring Hypervisor Network
8.5.4.1. Hypervisor Network Screen
- The host name of the Hypervisor.
- The DNS servers to use.
- The NTP servers to use.
- The network interface to use.
- : Allows you to ping a given IP address by specifying the address to ping and number of times to ping that address.
- : Allows you to create bonds between network interfaces.
See Also:
8.5.4.2. Configuring Hypervisor Host Name
Procedure 8.13. Configuring Hypervisor Host Name
- Select the Hostname field on the Network screen and enter the new host name.
- Select and press Enter to save changes to the host name.
8.5.4.3. Configuring Hypervisor Domain Name Servers
Procedure 8.14. Configuring Hypervisor Domain Name Servers
- To set or change the primary DNS server, select the DNS Server 1 field and enter the IP address of the new primary DNS server to use.
- To set or change the secondary DNS server, select the DNS Server 2 field and enter the IP address of the new secondary DNS server to use.
- Select and press Enter to save changes to the DNS configuration.
8.5.4.4. Configuring Hypervisor Network Time Protocol
Procedure 8.15. Configuring Hypervisor Network Time Protocol
- To set or change the primary NTP server, select the NTP Server 1 field and enter the IP address or host name of the new primary NTP server to use.
- To set or change the secondary NTP server, select the NTP Server 2 field and enter the IP address or host name of the new secondary NTP server to use.
- Select and press Enter to save changes to the NTP configuration.
8.5.4.5. Configuring Hypervisor Network Interfaces
- Device
- Status
- Model
- MAC Address
Procedure 8.16. Configuring Hypervisor Network Interfaces
Device Identification
Select the network interface to be configured from the list and press Enter. When it is unclear which physical device an entry in the list refers to, the Hypervisor can blink the network traffic lights on the physical device to assist with identification. To use this facility, select the entry from the list, select the button, and press Enter. Take note of which physical device's lights start blinking. The configuration screen for the selected device will be displayed.
IPv4 Settings
The Hypervisor supports both dynamic (DHCP) and static IPv4 network configuration.
Dynamic (DHCP) Network Configuration
Dynamic network configuration allows the Hypervisor to be dynamically assigned an IP address via DHCP. To enable dynamic IPv4 network configuration, select the DHCP option under IPv4 Settings and press Space to toggle it to enabled.
Static Network Configuration
Static network configuration allows the Hypervisor to be manually assigned an IP address. To enable static IPv4 network configuration, select the Static option under IPv4 Settings and press Space to toggle it to enabled. Selection of the Static option enables the IP Address, Netmask, and Gateway fields. The IP Address, Netmask, and Gateway fields must be populated to complete static network configuration. In particular it is necessary that:
- the IP Address is not already in use on the network,
- the Netmask matches that used by other machines on the network, and
- the Gateway matches that used by other machines on the network.
Where it is not clear what value should be used for the IP Address, Netmask, or Gateway field, consult the network's administrator or consider a dynamic configuration.
Example 8.12. Static IPv4 Networking Configuration
IPv4 Settings
( ) Disabled  ( ) DHCP  (*) Static
IP Address: 192.168.122.100_
Netmask:    255.255.255.0___
Gateway:    192.168.1.1_____
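The requirement above that the Gateway sit on the same network as the IP Address can be sketched with per-octet arithmetic. This is an illustrative check only; the gateway value here is chosen to match the example IP address's /24 subnet rather than taken from the screen above.

```shell
# Illustrative static settings; the gateway is picked to share the
# example IP address's /24 network.
ip="192.168.122.100"; gw="192.168.122.1"; mask="255.255.255.0"

same=yes
for i in 1 2 3 4; do
    ip_o=$(echo "$ip"  | cut -d. -f"$i")
    gw_o=$(echo "$gw"  | cut -d. -f"$i")
    m_o=$(echo "$mask" | cut -d. -f"$i")
    # Bitwise-AND each octet with the netmask to get the network part.
    [ $((ip_o & m_o)) -eq $((gw_o & m_o)) ] || same=no
done
echo "gateway on same network: $same"
```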
IPv6 Settings
The Red Hat Enterprise Virtualization Manager does not currently support IPv6 networking. IPv6 networking must remain set to Disabled.
VLAN Configuration
If VLAN support is required, populate the VLAN ID field with the VLAN identifier for the selected device.
Save Network Configuration
Once all networking options for the selected device have been set, the configuration must be saved.
- Select the button and press Enter to save the network configuration.
- A screen showing the progress of configuration displays. Once configuration is complete, press the Enter key to close the window.
8.5.5. Configuring Hypervisor Security
This screen sets the admin password for both local and remote access. SSH password authentication is also enabled or disabled via this screen.
Procedure 8.17. Configuring Hypervisor Security
Enable SSH Password Authentication
To enable SSH password authentication for remote access, select the Enable ssh password authentication option and press Space to toggle it to enabled.

Change admin Password
- Enter the desired admin password in the Password field. You should use a strong password. Strong passwords contain a mix of uppercase, lowercase, numeric, and punctuation characters. They are six or more characters long and do not contain dictionary words.
- Enter the desired admin password in the Confirm Password field. Ensure the value entered in the Confirm Password field matches the value entered in the Password field exactly. Where this is not the case, an error message will be displayed to indicate that the two values are different.
- Select <Save> and press Enter to save the security configuration.
8.5.6. Configuring Hypervisor Simple Network Management Protocol
Enable SNMP [ ] SNMP Password Password: _______________ Confirm Password: _______________ <Save> <Reset>
Procedure 8.18. Configuring Hypervisor Simple Network Management Protocol
- Select the Enable SNMP field.
- Press Space to toggle between enabling SNMP and disabling SNMP. By default, SNMP is disabled.
- Enter the preferred SNMP Password for the Hypervisor.
- Enter the preferred SNMP password again in the Confirm Password field.
- Select <Save> and press Enter to save your changes.
8.5.7. Configuring Hypervisor Common Information Model
Procedure 8.19. Configuring Hypervisor Common Information Model
- Select the Enable CIM field.
Enable CIM [ ]
- Enter a password in the Password field. This is the password that you will use to access the Hypervisor using CIM.
- Enter the password again in the Confirm Password field.
- Select <Save> and press Enter to save your changes.
8.5.8. Configuring Logging
Procedure 8.20. Configuring Hypervisor Logging
logrotate Configuration
The logrotate utility simplifies the administration of log files. The Hypervisor uses logrotate to rotate logs when they reach a certain file size. Log rotation involves renaming the current logs and starting new ones in their place. The Logrotate Max Log Size value set on the Logging screen is used to determine when a log will be rotated. Enter the Logrotate Max Log Size in kilobytes. The default maximum log size is 1024 kilobytes.

rsyslog Configuration
The rsyslog utility is a multithreaded syslog daemon. The Hypervisor is able to use rsyslog to transmit log files over the network to a remote syslog daemon. For information on setting up the remote syslog daemon, see the Red Hat Enterprise Linux Deployment Guide.
- Enter the remote rsyslog server address in the Server Address field.
- Enter the remote rsyslog server port in the Server Port field. The default port is 514.
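A remote destination of this kind corresponds to a single forwarding rule in rsyslog's legacy configuration syntax. The following line is an illustration only; the host name is a placeholder, and a single @ selects UDP transport:

```
# Forward all messages to a remote syslog daemon over UDP (port 514).
# "logserver.example.com" is a placeholder host name.
*.* @logserver.example.com:514
```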
netconsole Configuration
The netconsole module allows kernel messages to be sent to a remote machine. The Hypervisor uses netconsole to transmit kernel messages over the network.
- Enter the Server Address.
- Enter the Server Port. The default port is 6666.
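The values entered on this screen correspond to the target half of the netconsole kernel module's parameter string. As an illustration only, with placeholder addresses, device name, and MAC address:

```
# netconsole module parameter format:
#   netconsole=[src-port]@[src-ip]/[dev],[tgt-port]@<tgt-ip>/[tgt-macaddr]
# Example: send kernel messages from eth0 to 192.168.0.1 on port 6666.
netconsole=6666@192.168.0.10/eth0,6666@192.168.0.1/00:11:22:33:44:55
```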
Save Configuration
To save the logging configuration, select <Save> and press Enter.
8.5.9. Configuring the Hypervisor for Red Hat Network
Procedure 8.21. Configuring Hypervisor for Red Hat Network
Authentication
Enter your Red Hat Network user name in the Login field. Enter your Red Hat Network password in the Password field.

Profile Name
Enter the profile name to be used for the system in the Profile Name field. This is the name that the system will appear under when viewed in Red Hat Network.

Update Source
The Hypervisor can register directly to Red Hat Network or, if available, to a Satellite installation or a Subscription Asset Manager.

To Connect Directly to RHN
Select the RHN option and press Space to toggle it to enabled. The RHN URL and CA URL values do not need to be provided.

Example 8.13. Red Hat Network Configuration
(X) RHN ( ) Satellite ( ) SAM RHN URL: _______________________________________________________________ CA URL: _______________________________________________________________
To Connect via Satellite
- Select the Satellite option and press Space to toggle it to enabled.
- Enter the URL of the Satellite server in the RHN URL field.
- Enter the URL of the certificate authority for the Satellite server in the CA URL field.
Example 8.14. Satellite Configuration
( ) RHN (X) Satellite ( ) SAM RHN URL: https://your-satellite.example.com_____________________________ CA URL: https://your-satellite.example.com/pub/RHN-ORG-TRUSTED-SSL-CERT
To Connect via Subscription Asset Manager
- Select the Subscription Asset Manager option and press Space to toggle it to enabled.
- Enter the URL of the Subscription Asset Manager server in the RHN URL field.
- Enter the URL of the certificate authority for the Subscription Asset Manager server in the CA URL field.
Example 8.15. Subscription Asset Manager Configuration
( ) RHN ( ) Satellite (X) SAM RHN URL: https://subscription-asset-manager.example.com_____________________________ CA URL: https://subscription-asset-manager.example.com/pub/RHN-ORG-TRUSTED-SSL-CERT
HTTP Proxy
Where an HTTP proxy is in use, the details required to connect to it must be provided. In environments where an HTTP proxy is not in use, you can ignore this step. To connect to Red Hat Network or a Satellite server via a proxy you must enter:
- The network address of the proxy Server.
- The Port to connect to the proxy on.
- Optionally, the Username and Password to use to connect to the proxy.
Example 8.16. HTTP Proxy Configuration
HTTP Proxy Configuration Server: proxy.example.com__ Port: 8080_______________ Username: puser______________ Password: ******_____________
Save Configuration
To save the configuration, select <Save> and press Enter.
8.5.10. Configuring Hypervisor Kernel Dumps
kdump files can be delivered using NFS or SSH so that they can be analyzed at a later date. The Kdump screen allows you to configure this facility.
Procedure 8.22. Configuring Hypervisor Kernel Dumps
- Crash dumps generated by kdump are exported over NFS or SSH. Select the preferred transfer method and press Space to enable it. For the selected export method, a location to which the kdump files are to be exported must also be specified.
NFS Location
Set the NFS location to which crash logs are to be exported in the NFS Location field. The NFS Location must be the full NFS path, which includes the fully qualified domain name and directory path.

SSH Location
Set the SSH location to which crash logs are to be exported in the SSH Location field. The SSH Location must be the full SSH login which includes the fully qualified domain name and user name.
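As an illustration of the two formats described above (the host names, export path, and user name are placeholders, not values from your environment):

```
NFS Location: nfs.example.com:/export/kdump
SSH Location: root@crashlogs.example.com
```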
Save Configuration
To save the configuration, select <Save> and press Enter.
8.5.11. Configuring Hypervisor Remote Storage
Procedure 8.23. Configuring Hypervisor Remote Storage
iSCSI Initiator Name
Enter the initiator name in the iSCSI Initiator Name field.

Save Configuration
To save the configuration, select <Save> and press Enter.
8.6. Attaching the Hypervisor to the Red Hat Enterprise Virtualization Manager
8.6.1. Configuring Hypervisor Management Server
Important
Setting a password on this screen sets the root password on the Hypervisor and enables SSH password authentication. Once the Hypervisor has successfully been added to the Manager, disabling SSH password authentication is recommended.
Procedure 8.24. Configuring a Hypervisor Management Server
Configuration Using a Management Server Address
- Enter the IP address or fully qualified domain name of the Manager in the Management Server field.
- Enter the management server port in the Management Server Port field. The default value is 443. If a different port was selected during Red Hat Enterprise Virtualization Manager installation, specify it here, replacing the default value.
- Select the Retrieve Certificate option to verify that the fingerprint of the certificate retrieved from the specified management server is correct. The value that the certificate fingerprint is compared against is returned at the end of Red Hat Enterprise Virtualization Manager installation.
- Leave the Password and Confirm Password fields blank. These fields are not required if the address of the management server is known.
Configuration Using a Password
- Enter a password in the Password field. It is recommended that you use a strong password. Strong passwords contain a mix of uppercase, lowercase, numeric and punctuation characters. They are six or more characters long and do not contain dictionary words.
- Re-enter the password in the Confirm Password field.
- Leave the Management Server and Management Server Port fields blank. These fields are not required as long as a password is set; the password allows the Hypervisor to be added to the Manager later.
Save Configuration
To save the configuration, select <Save> and press Enter.
See Also:
8.6.2. Using the Hypervisor
8.6.3. Approving a Hypervisor
Procedure 8.25. Approving a Hypervisor
- Log in to the Red Hat Enterprise Virtualization Manager Administration Portal.
- From the Hosts tab, click on the host to be approved. The host should currently be listed with the status of Pending Approval.
- Click the Approve button. The Edit and Approve Hosts dialog displays. You can use the dialog to set a name for the host, fetch its SSH fingerprint before approving it, and configure power management, where the host has a supported power management card. For information on power management configuration, see the Power Management chapter of the Red Hat Enterprise Virtualization Administration Guide.
- Click OK. If you have not configured power management, you will be prompted to confirm that you wish to proceed without doing so; click OK.
Chapter 9. Installing Red Hat Enterprise Linux Hosts
9.1. Red Hat Enterprise Linux Hosts
See Also:
9.2. Host Compatibility Matrix
| Red Hat Enterprise Linux Version | Red Hat Enterprise Virtualization 3.3 clusters in 3.0 compatibility mode | Red Hat Enterprise Virtualization 3.3 clusters in 3.1 compatibility mode | Red Hat Enterprise Virtualization 3.3 clusters in 3.2 compatibility mode | Red Hat Enterprise Virtualization 3.3 clusters |
|---|---|---|---|---|
| 6.2 | Supported | Unsupported | Unsupported | Unsupported |
| 6.3 | Supported | Supported | Unsupported | Unsupported |
| 6.4 | Supported | Supported | Supported | Unsupported |
| 6.5 | Supported | Supported | Supported | Supported |
9.3. Preparing a Red Hat Enterprise Linux Host
9.3.1. Installing Red Hat Enterprise Linux
Procedure 9.1. Installing Red Hat Enterprise Linux
Download and Install Red Hat Enterprise Linux 6.5 Server
Download and Install Red Hat Enterprise Linux 6.5 Server on the target virtualization host, referring to the Red Hat Enterprise Linux 6 Installation Guide for detailed instructions. Only the Base package group is required to use the virtualization host in a Red Hat Enterprise Virtualization environment.

Important
If you intend to use directory services for authentication on the Red Hat Enterprise Linux host, then you must ensure that the authentication files required by the useradd command are locally accessible. The vdsm package, which provides software that is required for successful connection to Red Hat Enterprise Virtualization Manager, will not install correctly if these files are not locally accessible.

Ensure Network Connectivity
Following successful installation of Red Hat Enterprise Linux 6.5 Server, ensure that there is network connectivity between your new Red Hat Enterprise Linux host and the system on which your Red Hat Enterprise Virtualization Manager is installed.
- Attempt to ping the Manager:
# ping address of manager
ping manager.example.redhat.com
PING manager.example.redhat.com (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=0.415 ms
64 bytes from 192.168.0.1: icmp_seq=2 ttl=64 time=0.419 ms
64 bytes from 192.168.0.1: icmp_seq=3 ttl=64 time=1.41 ms
64 bytes from 192.168.0.1: icmp_seq=4 ttl=64 time=0.487 ms
64 bytes from 192.168.0.1: icmp_seq=5 ttl=64 time=0.409 ms
64 bytes from 192.168.0.1: icmp_seq=6 ttl=64 time=0.372 ms
64 bytes from 192.168.0.1: icmp_seq=7 ttl=64 time=0.464 ms
--- manager.example.redhat.com ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 6267ms
- If the Manager cannot be contacted, this displays:
ping: unknown host manager.example.redhat.com
You must configure the network so that the host can contact the Manager. First, disable NetworkManager. Then configure the networking scripts so that the host will acquire an IP address on boot.
- Disable NetworkManager:
# service NetworkManager stop
# chkconfig NetworkManager off
- Edit /etc/sysconfig/network-scripts/ifcfg-eth0. Find this line:
ONBOOT=no
Change that line to this:
ONBOOT=yes
- Reboot the host machine.
- Ping the Manager again:
# ping address of manager
If the host still cannot contact the Manager, it is possible that your host machine is not acquiring an IP address from DHCP. Confirm that DHCP is properly configured and that your host machine is properly configured to acquire an IP address from DHCP. If the Manager can successfully be contacted, the successful ping output shown in the previous step is displayed.
9.3.2. Subscribing to Required Channels Using Subscription Manager
Previous Step in Preparing a Red Hat Enterprise Linux Host
- Registered the virtualization host to Red Hat Network using Subscription Manager.
- Attached the Red Hat Enterprise Linux Server entitlement to the virtualization host.
- Attached the Red Hat Enterprise Virtualization entitlement to the virtualization host.
Procedure 9.2. Subscribing to Required Channels using Subscription Manager
Register
Run the subscription-manager command with the register parameter to register the system with Red Hat Network. To complete registration successfully, you will need to supply your Red Hat Network Username and Password when prompted.
# subscription-manager register
Identify Available Entitlement Pools
To attach the correct entitlements to the system, you must first locate the identifiers for the required entitlement pools. Use the list action of the subscription-manager command to find these.
To identify available subscription pools for Red Hat Enterprise Linux Server, use the command:
# subscription-manager list --available | grep -A8 "Red Hat Enterprise Linux Server"
To identify available subscription pools for Red Hat Enterprise Virtualization, use the command:
# subscription-manager list --available | grep -A8 "Red Hat Enterprise Virtualization"
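The pool identifier can also be pulled out of the list output with a short filter. The sketch below is hypothetical and runs against sample text, since real output layout can vary between subscription-manager versions; the pool ID shown is illustrative, not real entitlement data:

```shell
#!/bin/bash
# Hypothetical sketch: extract the Pool ID field from
# "subscription-manager list --available" style output.

extract_pool_ids() {
  # Split each line on "colon plus spaces" and print the value of
  # any "Pool ID" field.
  awk -F': *' '$1 ~ /Pool ID/ { print $2 }' "$1"
}

# Illustrative sample of the output format; not real data.
cat > /tmp/sm-list.sample <<'EOF'
Subscription Name: Red Hat Enterprise Linux Server
Pool ID:           8a85f9843abcdef0013abcdef1234567
Available:         10
EOF

extract_pool_ids /tmp/sm-list.sample
```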
Attach Entitlements to the System
Using the pool identifiers you located in the previous step, attach the Red Hat Enterprise Linux Server and Red Hat Enterprise Virtualization entitlements to the system. Use the attach parameter of the subscription-manager command, replacing [POOLID] with each of the pool identifiers:
# subscription-manager attach --pool=[POOLID]

Enable the Red Hat Enterprise Virtualization Management Agents Repository
Run the following command to enable the Red Hat Enterprise Virtualization Management Agents (RPMs) repository:
# subscription-manager repos --enable=rhel-6-server-rhev-mgmt-agent-rpms
9.3.3. Subscribing to Required Channels Using RHN Classic
Previous Step in Preparing a Red Hat Enterprise Linux Host
- Registered the virtualization host to Red Hat Network using RHN Classic.
- Subscribed the virtualization host to the Red Hat Enterprise Linux Server (v. 6 for 64-bit AMD64 / Intel64) channel.
- Subscribed the virtualization host to the Red Hat Enterprise Virt Management Agent (v 6 x86_64) channel.
Procedure 9.3. Subscribing to Required Channels using RHN Classic
Register
If the machine has not already been registered with Red Hat Network, run the rhn_register command as root to register it. To complete registration successfully you will need to supply your Red Hat Network Username and Password. Follow the prompts displayed by rhn_register to complete registration of the system.
# rhn_register
Subscribe to channels
You must subscribe the system to the required channels using either the web interface to Red Hat Network or the command-line rhn-channel command.

Using the Web Interface to Red Hat Network
To add a channel subscription to a system from the web interface:
- Log on to Red Hat Network (http://rhn.redhat.com).
- Move the mouse cursor over the Subscriptions link at the top of the screen, and then click the Registered Systems link in the menu that appears.
- Select the system to which you are adding channels from the list presented on the screen, by clicking the name of the system.
- Click Alter Channel Subscriptions in the Subscribed Channels section of the screen.
- Select the channels to be added from the list presented on the screen. To use the virtualization host in a Red Hat Enterprise Virtualization environment you must select:
- Red Hat Enterprise Linux Server (v. 6 for 64-bit x86_64); and
- Red Hat Enterprise Virt Management Agent (v 6 x86_64).
- Click the Change Subscription button to finalize the change.
Using the rhn-channel command
Run the rhn-channel command to subscribe the virtualization host to each of the required channels. The commands that need to be run are:
# rhn-channel --add --channel=rhel-x86_64-server-6
# rhn-channel --add --channel=rhel-x86_64-rhev-mgmt-agent-6
Important
If you are not the administrator for the machine as defined in Red Hat Network, or the machine is not registered to Red Hat Network, then use of the rhn-channel command will result in an error:

Error communicating with server. The message was:
Error Class Code: 37
Error Class Info: You are not allowed to perform administrative tasks on this system.
Explanation: An error has occurred while processing your request. If this problem persists please enter a bug report at bugzilla.redhat.com. If you choose to submit the bug report, please be sure to include details of what you were trying to do when this error occurred and details on how to reproduce this problem.

If you encounter this error when using rhn-channel, you must use the web user interface to add the channel to the system instead.
9.3.4. Configuring Virtualization Host Firewall
Previous Step in Preparing a Red Hat Enterprise Linux Host
Procedure 9.4. Configuring Virtualization Host Firewall
You must configure the virtualization host's firewall, iptables, to allow traffic on the required network ports. These steps replace any existing firewall configuration on your host with one containing only the rules required by Red Hat Enterprise Virtualization. If you have existing firewall rules with which this configuration must be merged, then you must do so by manually editing the rules defined in the iptables configuration file, /etc/sysconfig/iptables.
You must be logged in to the virtualization host as the root user to perform this procedure.
Remove existing firewall rules from configuration
Remove any existing firewall rules using the --flush parameter to the iptables command:
# iptables --flush
Add new firewall rules to configuration
Add the new firewall rules, required by Red Hat Enterprise Virtualization, using the --append parameter to the iptables command. The prompt character (#) has been intentionally omitted from this list of commands to allow easy copying of the content to a script file or command prompt.
iptables --append INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables --append INPUT -p icmp -j ACCEPT
iptables --append INPUT -i lo -j ACCEPT
iptables --append INPUT -p tcp --dport 22 -j ACCEPT
iptables --append INPUT -p tcp --dport 16514 -j ACCEPT
iptables --append INPUT -p tcp --dport 54321 -j ACCEPT
iptables --append INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
iptables --append INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
iptables --append INPUT -j REJECT --reject-with icmp-host-prohibited
iptables --append FORWARD -m physdev ! --physdev-is-bridged -j REJECT \
  --reject-with icmp-host-prohibited
Note
The provided iptables commands add firewall rules to accept network traffic on a number of ports. These include:
- port 22 for SSH,
- ports 5634 to 6166 for guest console connections,
- port 16514 for libvirt virtual machine migration traffic,
- ports 49152 to 49216 for VDSM virtual machine migration traffic, and
- port 54321 for the Red Hat Enterprise Virtualization Manager.
Save the updated firewall configuration
Save the updated firewall configuration using the save parameter to the iptables initialization script:
# service iptables save
Enable iptables service
Ensure that the iptables service is configured to start on boot and has been restarted, or started for the first time if it was not already running:
# chkconfig iptables on
# service iptables restart
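After saving, you can sanity-check that the persisted rules still cover the required ports. The following is a hypothetical helper, not part of Red Hat Enterprise Virtualization; for illustration it reads a sample rules file, but on a host you would pass /etc/sysconfig/iptables instead:

```shell
#!/bin/bash
# Hypothetical check: verify that an iptables rules file accepts the
# ports required by Red Hat Enterprise Virtualization.

check_ports() {
  local file=$1 port missing=0
  for port in 22 16514 54321 5634:6166 49152:49216; do
    # Match both the "--dport N" and "--dports N:M" forms.
    if ! grep -Eq -- "--dports? ${port}( |\$)" "$file"; then
      echo "missing rule for port(s) ${port}"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "all required ports present"
}

# Build a sample rules file from the commands in this procedure.
cat > /tmp/sample-iptables <<'EOF'
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 16514 -j ACCEPT
-A INPUT -p tcp --dport 54321 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
EOF

check_ports /tmp/sample-iptables
```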
9.3.5. Configuring Virtualization Host sudo
Previous Step in Preparing a Red Hat Enterprise Linux Host
The Red Hat Enterprise Virtualization Manager makes use of sudo to perform operations as root on the host. The default Red Hat Enterprise Linux configuration, stored in /etc/sudoers, contains values that allow this. If this file has been modified since Red Hat Enterprise Linux installation these values may have been removed. This procedure provides steps to verify that the required entry still exists in the configuration, and add the required entry if it is not present.
Procedure 9.5. Configuring Virtualization Host sudo
Log in
Log in to the virtualization host as the root user.

Run visudo
Run the visudo command to open the /etc/sudoers file:
# visudo
Edit sudoers file
Read the configuration file, and verify that it contains these lines:
# Allow root to run any commands anywhere
root ALL=(ALL) ALL
If the file does not contain these lines, add them and save the file using the VIM :w command.

Exit editor
Exit visudo using the VIM :q command.
root user.
9.3.6. Configuring Virtualization Host SSH
Previous Step in Preparing a Red Hat Enterprise Linux Host
The Red Hat Enterprise Virtualization Manager accesses the virtualization host as the root user using an encrypted key for authentication. You must follow this procedure to ensure that SSH is configured to allow this.
Warning
/root/.ssh/authorized_keys file.
Procedure 9.6. Configuring virtualization host SSH
You must be logged in to the virtualization host as the root user to perform this procedure.
Install the SSH server (openssh-server)
Install the openssh-server package using yum:
# yum install openssh-server
Edit SSH server configuration
Open the SSH server configuration, /etc/ssh/sshd_config, in a text editor. Search for the PermitRootLogin directive.
- If PermitRootLogin is set to yes, or is not set at all, no further action is required.
- If PermitRootLogin is set to no, then you must change it to yes.
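The decision above can also be made non-interactively. Below is a hypothetical sketch, not a supported tool, that reports the effective PermitRootLogin value in an sshd_config-style file, treating an absent directive as yes, as the procedure notes:

```shell
#!/bin/bash
# Hypothetical sketch: report the effective PermitRootLogin setting in
# an sshd_config-style file. An unset directive is treated as "yes",
# matching the behaviour described in this procedure.

permit_root_login() {
  local value
  # Ignore commented-out lines; take the last occurrence if repeated.
  value=$(awk '$1 == "PermitRootLogin" { v = $2 } END { print v }' "$1")
  echo "${value:-yes}"
}

# Demonstrate against a sample configuration file.
printf 'Port 22\nPermitRootLogin no\n' > /tmp/sshd_config.sample
permit_root_login /tmp/sshd_config.sample
```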
Save any changes that you have made to the file, and exit the text editor.

Enable the SSH server
Configure the SSH server to start at system boot using the chkconfig command:
# chkconfig --level 345 sshd on
Start the SSH server
Start the SSH server, or restart it if it is already running, using the service command:
# service sshd restart
SSH is now configured on the virtualization host, allowing root access over SSH.
9.4. Adding a Red Hat Enterprise Linux Host
Procedure 9.7. Adding a Red Hat Enterprise Linux Host
- Click the Hosts resource tab to list the hosts in the results list.
- Click to open the New Host window.
- Use the drop-down menus to select the Data Center and Host Cluster for the new host.
- Enter the Name, Address, and SSH Port of the new host.
- Select an authentication method to use with the host.
- Enter the root user's password to use password authentication.
- Copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication.
- You have now completed the mandatory steps to add a Red Hat Enterprise Linux host. Click the button to expand the advanced host settings.
- Optionally disable automatic firewall configuration.
- Optionally add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically.
- You can configure the Power Management and SPM using the applicable tabs now; however, as these are not fundamental to adding a Red Hat Enterprise Linux host, they are not covered in this procedure.
- Click to add the host and close the window.
The new host displays in the list of hosts with a status of Installing. Once installation is complete, the status will update to Reboot. The host must be activated for the status to change to Up.
Note
9.5. Explanation of Settings and Controls in the New Host and Edit Host Windows
9.5.1. Host General Settings Explained
Table 9.1. General settings
|
Field Name
|
Description
|
|---|---|
|
Data Center
|
The data center to which the host belongs. Red Hat Enterprise Virtualization Hypervisor hosts cannot be added to Gluster-enabled clusters.
|
|
Host Cluster
|
The cluster to which the host belongs.
|
|
Use External Providers
|
Select or clear this check box to view or hide options for adding hosts provided by external providers. Upon selection, a drop-down list of external providers that have been added to the Manager displays. The following options are also available:
|
|
Name
|
The name of the host. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
|
|
Comment
|
A field for adding plain text, human-readable comments regarding the host.
|
|
Address
|
The IP address, or resolvable hostname of the host.
|
|
Root password
|
The password of the host's root user. This can only be given when you add the host; it cannot be edited afterwards.
|
|
SSH PublicKey
|
Copy the contents in the text box to the /root/.ssh/authorized_keys file on the host if you'd like to use the Manager's SSH key instead of using a password to authenticate with the host.
|
|
Automatically configure host firewall
|
When adding a new host, the Manager can open the required ports on the host's firewall. This is enabled by default. This is an Advanced Parameter.
|
|
SSH Fingerprint
|
You can fetch the host's SSH fingerprint, and compare it with the fingerprint you expect the host to return, ensuring that they match. This is an Advanced Parameter.
|
9.5.2. Host Power Management Settings Explained
Table 9.2. Power Management Settings
|
Field Name
|
Description
|
|---|---|
|
Primary/Secondary
|
Prior to Red Hat Enterprise Virtualization 3.2, a host with power management configured only recognized one fencing agent. Fencing agents configured on version 3.1 and earlier, and single agents, are treated as primary agents. The secondary option is valid when a second agent is defined.
|
|
Concurrent
|
Valid when there are two fencing agents, for example for dual power hosts in which each power switch has two agents connected to the same power switch.
|
|
Address
|
The address to access your host's power management device. Either a resolvable hostname or an IP address.
|
|
User Name
|
User account to access the power management device with. You may have to set up a user on the device, or use the default user.
|
|
Password
|
Password for the user accessing the power management device.
|
|
Type
|
The type of power management device in your host.
Choose one of the following:
|
|
Port
|
The port number used by the power management device to communicate with the host.
|
|
Options
|
Power management device specific options. Give these as 'key=value' or 'key'; refer to the documentation of your host's power management device for the options available.
|
|
Secure
|
Tick this check box to allow the power management device to connect securely to the host. This can be done via ssh, ssl, or other authentication protocols, depending on what the power management agent supports.
|
|
Source
|
Specifies whether the host will search within its cluster or data center for a fencing proxy. Use the and buttons to change the sequence in which the resources are used.
|
9.5.3. SPM Priority Settings Explained
Table 9.3. SPM settings
|
Field Name
|
Description
|
|---|---|
|
SPM Priority
|
Defines the likelihood that the host will be given the role of Storage Pool Manager (SPM). The options are Low, Normal, and High priority, where Low priority means a reduced likelihood of the host being assigned the role of SPM, and High priority increases the likelihood. The default setting is Normal.
|
9.5.4. Host Console Settings Explained
Table 9.4. Console settings
|
Field Name
|
Description
|
|---|---|
|
Override display address
|
Select this check box to enable overriding the display addresses of the host. This feature is useful in a case where the hosts are defined by internal IP and are behind a NAT firewall. When a user connects to a virtual machine from outside of the internal network, instead of returning the private address of the host on which the virtual machine is running, a public IP or FQDN (which is resolved in the external network to the public IP) is returned.
|
|
Display address
|
The display address specified here will be used for all virtual machines running on this host. The address must be in the format of a fully qualified domain name or IP.
|
Part IV. Environment Configuration
Table of Contents
- 10. Planning your Data Center
- 11. Network Setup
- 11.1. Workflow Progress — Network Setup
- 11.2. Networking in Red Hat Enterprise Virtualization
- 11.3. Logical Networks
- 11.3.1. Creating a New Logical Network in a Data Center or Cluster
- 11.3.2. Editing Host Network Interfaces and Adding Logical Networks to Hosts
- 11.3.3. Explanation of Settings and Controls in the General Tab of the New Logical Network and Edit Logical Network Windows
- 11.3.4. Editing a Logical Network
- 11.3.5. Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window
- 11.3.6. Explanation of Settings in the Manage Networks Window
- 11.3.7. Adding Multiple VLANs to a Single Network Interface Using Logical Networks
- 11.3.8. Multiple Gateways
- 11.4. Using the Networks Tab
- 11.5. Bonds
- 12. Storage Setup
Chapter 10. Planning your Data Center
10.2. Planning Your Data Center
Note
10.3. Data Centers
10.3.1. Data Centers in Red Hat Enterprise Virtualization
Red Hat Enterprise Virtualization creates a Default data center at installation. You can create new data centers that will also be managed from the single Administration Portal. For example, you may choose to have different data centers for different physical locations, business units, or for reasons of security. It is recommended that you do not remove the Default data center; instead, set up new, appropriately named data centers.
See Also:
10.3.2. Creating a New Data Center
Note
Procedure 10.1. Creating a New Data Center
- Select the Data Centers resource tab to list all data centers in the results list.
- Click to open the New Data Center window.
- Enter the Name and Description of the data center.
- Select the storage Type, Compatibility Version, and Quota Mode of the data center from the drop-down menus.
- Click to create the data center and open the New Data Center - Guide Me window.
- The Guide Me window lists the entities that need to be configured for the data center. Configure these entities or postpone configuration by clicking the button; configuration can be resumed by selecting the data center and clicking the button.
10.4. Clusters
10.4.1. Clusters in Red Hat Enterprise Virtualization
Red Hat Enterprise Virtualization creates a Default cluster in the Default data center at installation time.
Note
Note
See Also:
10.4.2. Creating a New Cluster
Procedure 10.2. Creating a New Cluster
- Select the Clusters resource tab.
- Click to open the New Cluster window.
- Select the Data Center the cluster will belong to from the drop-down list.
- Enter the Name and Description of the cluster.
- Select the CPU Name and Compatibility Version from the drop-down lists. It is important to match the CPU processor family with the minimum CPU processor type of the hosts you intend to attach to the cluster, otherwise the host will be non-operational.
- Select either the Enable Virt Service or Enable Gluster Service radio box depending on whether the cluster should be populated with virtual machine hosts or Gluster-enabled nodes. Note that you cannot add Red Hat Enterprise Virtualization Hypervisor hosts to a Gluster-enabled cluster.
- Click the Optimization tab to select the memory page sharing threshold for the cluster, and optionally enable CPU thread handling and memory ballooning on the hosts in the cluster.
- Click the Cluster Policy tab to optionally configure a power policy, scheduler optimization settings, and enable trusted service for hosts in the cluster.
- Click the Resilience Policy tab to select the virtual machine migration policy.
- Click OK to create the cluster and open the New Cluster - Guide Me window.
- The Guide Me window lists the entities that need to be configured for the cluster. Configure these entities, or postpone configuration by clicking the Configure Later button; configuration can be resumed by selecting the cluster and clicking the Guide Me button.
10.4.3. Enabling Gluster Processes on Red Hat Storage Nodes
- In the Navigation Pane, select the Clusters tab.
- Select the cluster, and click Edit to open the Edit Cluster window.
- Select the "Enable Gluster Service" radio button. Provide the address, SSH fingerprint, and password as necessary. The address and password fields can be filled in only when the Import existing Gluster configuration check box is selected.
- Click OK.
Chapter 11. Network Setup
- 11.1. Workflow Progress — Network Setup
- 11.2. Networking in Red Hat Enterprise Virtualization
- 11.3. Logical Networks
- 11.3.1. Creating a New Logical Network in a Data Center or Cluster
- 11.3.2. Editing Host Network Interfaces and Adding Logical Networks to Hosts
- 11.3.3. Explanation of Settings and Controls in the General Tab of the New Logical Network and Edit Logical Network Windows
- 11.3.4. Editing a Logical Network
- 11.3.5. Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window
- 11.3.6. Explanation of Settings in the Manage Networks Window
- 11.3.7. Adding Multiple VLANs to a Single Network Interface Using Logical Networks
- 11.3.8. Multiple Gateways
- 11.4. Using the Networks Tab
- 11.5. Bonds
11.2. Networking in Red Hat Enterprise Virtualization
The rhevm logical network is created by default and labeled as the management network. The rhevm logical network is intended for management traffic between the Red Hat Enterprise Virtualization Manager and virtualization hosts. You can define additional logical networks to segregate:
- Display related network traffic.
- General virtual machine network traffic.
- Storage related network traffic.
- The number of logical networks attached to a host is limited to the number of available network devices combined with the maximum number of Virtual LANs (VLANs), which is 4096.
- The number of logical networks in a cluster is limited to the number of logical networks that can be attached to a host as networking must be the same for all hosts in a cluster.
- The number of logical networks in a data center is limited only by the number of clusters it contains in combination with the number of logical networks permitted per cluster.
Important
Take care when modifying the properties of the rhevm network. Incorrect changes to the properties of the rhevm network may cause hosts to become temporarily unreachable.
Important
The following services must remain available to the hosts over the network for the environment to function correctly:
- Directory Services
- DNS
- Storage
11.3. Logical Networks
11.3.1. Creating a New Logical Network in a Data Center or Cluster
Procedure 11.1. Creating a New Logical Network in a Data Center or Cluster
- Use the Data Centers or Clusters resource tabs, tree mode, or the search function to find and select a data center or cluster in the results list.
- Click the Logical Networks tab of the details pane to list the existing logical networks.
- From the Data Centers details pane, click New to open the New Logical Network window. From the Clusters details pane, click Add Network to open the New Logical Network window.
- Enter a Name, Description and Comment for the logical network.
- In the Export section, select the Create on external provider check box to create the logical network on an external provider. Select the external provider from the External Provider drop-down list and enter a Network Label for the logical network.
- Select the Enable VLAN tagging, VM network, and Override MTU check boxes to enable any of these options.
- From the Cluster tab, select the clusters to which the network will be assigned. You can also specify whether the logical network will be a required network.
- From the Profiles tab, add vNIC profiles to the logical network as required.
- Click OK.
11.3.2. Editing Host Network Interfaces and Adding Logical Networks to Hosts
Moving the rhevm management logical network between interfaces, and adding a newly created logical network to a network interface, are common reasons to edit host networking.
Procedure 11.2. Editing Host Network Interfaces and Adding Logical Networks to Hosts
- Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results.
- Click the Network Interfaces tab in the details pane to list the network interfaces attached to the host and their configurations.
- Click the Setup Host Networks button to open the Setup Host Networks window.
- Attach a logical network to a network interface by selecting and dragging a logical network into the Assigned Logical Networks area next to the network interface. Alternatively, right-click the logical network and select a network interface from the drop-down menu.
- Edit the logical networks by hovering your cursor over an assigned logical network and clicking the pencil icon to open the Edit Management Network window. If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box. Select a Boot Protocol from:
- None,
- DHCP, or
- Static. If you have chosen Static, provide the IP, Subnet Mask, and the Gateway.
- Click OK.
- Select the Verify connectivity between Host and Engine check box to run a network check; this will only work if the host is in maintenance mode.
- Select the Save network configuration check box if you want these network changes to be made persistent when the environment is rebooted.
- Click OK to implement the changes and close the window.
11.3.3. Explanation of Settings and Controls in the General Tab of the New Logical Network and Edit Logical Network Windows
Table 11.1. New Logical Network and Edit Logical Network Settings
|
Field Name
|
Description
|
|---|---|
|
Name
|
The name of the logical network. This text field has a 15-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
|
|
Description
|
The description of the logical network. This field is recommended but not mandatory.
|
|
Comment
|
A field for adding plain text, human-readable comments regarding the logical network.
|
|
Export
|
Allows you to export the logical network to an OpenStack Network Service that has been added to the Manager as an external provider.
External Provider - Allows you to select the external provider on which the logical network will be created.
Network Label - Allows you to specify the label of the logical network, such as
eth0.
|
|
Enable VLAN tagging
|
VLAN tagging is a security feature that gives all network traffic carried on the logical network a special characteristic. VLAN-tagged traffic cannot be read by interfaces that do not also have that characteristic. Use of VLANs on logical networks also allows a single network interface to be associated with multiple, differently VLAN-tagged logical networks. Enter a numeric value in the text entry field if VLAN tagging is enabled.
|
|
VM Network
|
Select this option if only virtual machines use this network. If the network is used for traffic that does not involve virtual machines, such as storage communications, do not select this check box.
|
|
Override MTU
|
Set a custom maximum transmission unit for the logical network. You can use this to match the MTU supported by your new logical network to the MTU supported by the hardware it interfaces with. Enter a numeric value in the text entry field if MTU override is enabled.
|
11.3.4. Editing a Logical Network
Procedure 11.3. Editing a Logical Network
- Use the Data Centers resource tab, tree mode, or the search function to find and select the data center of the logical network in the results list.
- Click the Logical Networks tab in the details pane to list the logical networks in the data center.
- Select a logical network and click Edit to open the Edit Logical Network window.
- Edit the necessary settings.
- Click OK to save the changes.
11.3.5. Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window
Procedure 11.4. Assigning or Unassigning a Logical Network to a Cluster
- Use the Clusters resource tab, tree mode, or the search function to find and select the cluster in the results list.
- Select the Logical Networks tab in the details pane to list the logical networks assigned to the cluster.
- Click Manage Networks to open the Manage Networks window.
- Select the appropriate check boxes.
- Click OK to save the changes and close the window.
11.3.6. Explanation of Settings in the Manage Networks Window
Table 11.2. Manage Networks Settings
|
Field
|
Description/Action
|
|---|---|
|
Assign
|
Assigns the logical network to all hosts in the cluster.
|
|
Required
|
A logical network becomes operational when it is attached to an active NIC on all hosts in the cluster.
|
|
VM Network
| The logical network carries the virtual machine network traffic. |
|
Display Network
| The logical network carries the virtual machine SPICE and VNC console traffic. |
|
Migration Network
| The logical network carries virtual machine and storage migration traffic. |
11.3.7. Adding Multiple VLANs to a Single Network Interface Using Logical Networks
Procedure 11.5. Adding Multiple VLANs to a Network Interface using Logical Networks
- Use the Hosts resource tab, tree mode, or the search function to find and select in the results list a host associated with the cluster to which your VLAN-tagged logical networks are assigned.
- Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.
- Click Setup Host Networks to open the Setup Host Networks window.
- Drag your VLAN-tagged logical networks into the Assigned Logical Networks area next to the physical network interface. The physical network interface can have multiple logical networks assigned due to the VLAN tagging.
- Edit the logical networks by hovering your cursor over an assigned logical network and clicking the pencil icon to open the Edit Network window. If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box. Select a Boot Protocol from:
- None,
- DHCP, or
- Static. If you have chosen Static, provide the IP and Subnet Mask.
- Click OK.
- Select the Verify connectivity between Host and Engine check box to run a network check; this will only work if the host is in maintenance mode.
- Select the Save network configuration check box to make the network changes persistent when the environment is rebooted.
- Click OK.
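On the host itself, each VLAN-tagged logical network assigned in the procedure above corresponds to a VLAN device stacked on the physical NIC. The helper below is a sketch that only prints the equivalent iproute2 command (it does not configure anything); the function name, interface name, and VLAN IDs are illustrative.

```shell
# vlan_dev_cmd NIC VLAN_ID — print the iproute2 command that creates the
# VLAN device a VLAN-tagged logical network maps to on the host.
vlan_dev_cmd() {
    echo "ip link add link $1 name $1.$2 type vlan id $2"
}

# Two tagged logical networks on one NIC become two VLAN devices:
vlan_dev_cmd eth0 100
vlan_dev_cmd eth0 200
```

Because the tag travels with each frame, both devices can share the single physical interface without their traffic mixing.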
11.3.8. Multiple Gateways
Procedure 11.6. Viewing or Editing the Gateway for a Logical Network
- Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
- Click the Network Interfaces tab in the details pane to list the network interfaces attached to the host and their configurations.
- Click the button to open the Setup Host Networks window.
- Hover your cursor over an assigned logical network and click the pencil icon to open the Edit Management Network window.
11.4. Using the Networks Tab
- Attaching networks to, or detaching them from, clusters and hosts
- Removing network interfaces from virtual machines and templates
- Adding and removing permissions for users to access and manage networks
11.4.1. Importing Networks from External Providers
Procedure 11.7. Importing a Network
- Click on the Networks tab.
- Click the Import button. The Import Networks window appears.
- From the Network Provider drop-down list, select a provider. The networks offered by that provider are automatically discovered and displayed in the Provider Networks list.
- Select the network to import in the Provider Networks list and click the down arrow to move the network into the Networks to Import list.
- Click the Import button.
11.4.2. Limitations to Importing Networks from External Providers
- Networks offered by external providers must be used as virtual machine networks.
- Networks offered by external providers cannot be used as display networks.
- The same network can be imported more than once, but only to different data centers.
- Networks offered by external providers cannot be edited in the Manager. This is because the management of such networks is the responsibility of the external providers.
- Port mirroring is not available for virtual NICs connected to networks offered by external providers.
- If a virtual machine uses a network offered by an external provider, that provider cannot be deleted from the Manager while the network is still in use by the virtual machine.
- Networks offered by external providers are non-required. As such, scheduling for clusters in which such networks have been imported will not take those networks into account during host selection. Moreover, it is the responsibility of the user to ensure the availability of the network on hosts in clusters in which such networks have been imported.
11.5. Bonds
11.5.1. Bonding Logic in Red Hat Enterprise Virtualization
- Are either of the devices already carrying logical networks?
- Are the devices carrying compatible logical networks? A single device cannot carry both VLAN tagged and non-VLAN tagged logical networks.
Table 11.3. Bonding Scenarios and Their Results
| Bonding Scenario | Result |
|---|---|
|
NIC + NIC
|
The Create New Bond window is displayed, and you can configure a new bond device.
If the network interfaces carry incompatible logical networks, the bonding operation fails until you detach incompatible logical networks from the devices forming your new bond.
|
|
NIC + Bond
|
The NIC is added to the bond device. Logical networks carried by the NIC and the bond are all added to the resultant bond device if they are compatible.
If the bond devices carry incompatible logical networks, the bonding operation fails until you detach incompatible logical networks from the devices forming your new bond.
|
|
Bond + Bond
|
If the bond devices are not attached to logical networks, or are attached to compatible logical networks, a new bond device is created. It contains all of the network interfaces, and carries all logical networks, of the component bond devices. The Create New Bond window is displayed, allowing you to configure your new bond.
If the bond devices carry incompatible logical networks, the bonding operation fails until you detach incompatible logical networks from the devices forming your new bond.
|
11.5.2. Bonding Modes
- Mode 1 (active-backup policy) sets all interfaces to the backup state while one remains active. Upon failure on the active interface, a backup interface replaces it as the only active interface in the bond. The MAC address of the bond in mode 1 is visible on only one port (the network adapter), to prevent confusion for the switch. Mode 1 provides fault tolerance and is supported in Red Hat Enterprise Virtualization.
- Mode 2 (XOR policy) selects the interface to transmit packets based on the result of an XOR operation on the source and destination MAC addresses, modulo the NIC slave count. This calculation ensures that the same interface is selected for each destination MAC address used. Mode 2 provides fault tolerance and load balancing and is supported in Red Hat Enterprise Virtualization.
- Mode 4 (IEEE 802.3ad policy) creates aggregation groups for which included interfaces share the speed and duplex settings. Mode 4 uses all interfaces in the active aggregation group in accordance with the IEEE 802.3ad specification and is supported in Red Hat Enterprise Virtualization.
- Mode 5 (adaptive transmit load balancing policy) ensures the outgoing traffic distribution is according to the load on each interface and that the current interface receives all incoming traffic. If the interface assigned to receive traffic fails, another interface is assigned the receiving role instead. Mode 5 is supported in Red Hat Enterprise Virtualization.
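The mode 2 selection rule can be illustrated with a short calculation. This sketch assumes the bonding driver's default layer2 hash, which XORs the final octet of the source and destination MAC addresses and takes the result modulo the slave count; the function name is invented for illustration.

```shell
# xor_slave SRC_MAC DST_MAC SLAVE_COUNT
# Illustrates mode 2 (XOR policy) slave selection using the default
# layer2 hash: (last octet of src MAC ^ last octet of dst MAC) % slaves.
xor_slave() {
    src_last=$(( 0x${1##*:} ))   # final octet of the source MAC
    dst_last=$(( 0x${2##*:} ))   # final octet of the destination MAC
    echo $(( (src_last ^ dst_last) % $3 ))
}

# The same destination always hashes to the same slave:
xor_slave 00:11:22:33:44:55 aa:bb:cc:dd:ee:01 2   # selects slave 0
xor_slave 00:11:22:33:44:55 aa:bb:cc:dd:ee:02 2   # selects slave 1
```

This is why mode 2 balances load across destinations while keeping any one conversation on a single interface.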
11.5.3. Creating a Bond Device Using the Administration Portal
Procedure 11.8. Creating a Bond Device using the Administration Portal
- Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
- Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.
- Click to open the Setup Host Networks window.
- Select and drag one of the devices over the top of another device and drop it to open the Create New Bond window. Alternatively, right-click the device and select another device from the drop-down menu. If the devices are incompatible, for example if one is VLAN tagged and the other is not, the bond operation fails with a suggestion on how to correct the compatibility issue.
- Select the Bond Name and Bonding Mode from the drop-down menus. Bonding modes 1, 2, 4, and 5 can be selected. Any other mode can be configured using the Custom option.
- Click OK to create the bond and close the Create New Bond window.
- Assign a logical network to the newly created bond device.
- Optionally choose to Verify connectivity between Host and Engine and Save network configuration.
- Click OK to accept the changes and close the Setup Host Networks window.
11.5.4. Example Uses of Custom Bonding Options with Host Interfaces
Example 11.1. xmit_hash_policy
Set the transmit hash policy for a mode 4 bond by selecting a Custom bonding mode, and entering the following into the text field:
mode=4, xmit_hash_policy=layer2+3
Example 11.2. ARP Monitoring
Set arp_interval on the bond device of the host by selecting a Custom bonding mode, and entering the following into the text field:
mode=1, arp_interval=1, arp_ip_target=192.168.0.2
Example 11.3. Primary
Designate the NIC to use as the primary interface in an active-backup bond by selecting a Custom bonding mode, and entering the following into the text field:
mode=1, primary=eth0
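The Custom text field in the examples above holds a comma-separated list of bonding module options. As a sketch (the helper name is invented), one option's value can be pulled out of such a string like this:

```shell
# bond_opt "OPTION_STRING" NAME — print the value of option NAME from a
# comma-separated bonding option string such as those entered in the
# Custom bonding mode text field.
bond_opt() {
    # Strip spaces, split on commas, then print what follows "NAME=".
    echo "$1" | tr -d ' ' | tr ',' '\n' | sed -n "s/^$2=//p"
}

bond_opt "mode=1, arp_interval=1, arp_ip_target=192.168.0.2" arp_ip_target
```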
Chapter 12. Storage Setup
12.2. Introduction to Storage in Red Hat Enterprise Virtualization
- Network File System (NFS)
- GlusterFS exports
- Other POSIX compliant file systems
- Internet Small Computer System Interface (iSCSI)
- Local storage attached directly to the virtualization hosts
- Fibre Channel Protocol (FCP)
- Parallel NFS (pNFS)
- Data Domain: A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center. In addition, snapshots of the virtual machines are also stored in the data domain. The data domain cannot be shared across data centers, and the data domain must be of the same type as the data center. For example, a data center of iSCSI type must have an iSCSI data domain. You must attach a data domain to a data center before you can attach domains of other types to it.
- ISO Domain: ISO domains store ISO files (or logical CDs) used to install and boot operating systems and applications for the virtual machines. An ISO domain removes the data center's need for physical media. An ISO domain can be shared across different data centers.
- Export Domain: Export domains are temporary storage repositories that are used to copy and move images between data centers and Red Hat Enterprise Virtualization environments. Export domains can be used to backup virtual machines. An export domain can be moved between data centers, however, it can only be active in one data center at a time.
Important
Support for export storage domains backed by storage on anything other than NFS is being deprecated. While existing export storage domains imported from Red Hat Enterprise Virtualization 2.2 environments remain supported, new export storage domains must be created on NFS storage.
12.3. Adding Storage to the Environment
12.3.1. Adding NFS Storage
12.3.1.1. Preparing NFS Storage
Procedure 12.1. Preparing NFS Storage
Install nfs-utils
NFS functionality is provided by the nfs-utils package. Before file shares can be created, check that the package is installed by querying the RPM database:
$ rpm -qi nfs-utils
If the nfs-utils package is installed, the package information is displayed. If no output is displayed, the package is not currently installed. Install it using yum while logged in as the root user:
# yum install nfs-utils
Configure Boot Scripts
To ensure that NFS shares are always available when the system is operational, both the nfs and rpcbind services must start at boot time. Use the chkconfig command while logged in as root to modify the boot scripts:
# chkconfig --add rpcbind
# chkconfig --add nfs
# chkconfig rpcbind on
# chkconfig nfs on
Once the boot script configuration has been done, start the services for the first time:
# service rpcbind start
# service nfs start
Create Directory
Create the directory you wish to share using NFS:
# mkdir /exports/iso
Replace /exports/iso with the name and path of the directory you wish to use.
Export Directory
To be accessible over the network using NFS, the directory must be exported. NFS exports are controlled using the /etc/exports configuration file. Each export path appears on a separate line, followed by a tab character and any additional NFS options. Exports to be attached to the Red Hat Enterprise Virtualization Manager must have the read and write options set. For example, to grant read and write access to /exports/iso using NFS, add the following line to the /etc/exports file:
/exports/iso *(rw)
Again, replace /exports/iso with the name and path of the directory you wish to use.
Reload NFS Configuration
For the changes to the /etc/exports file to take effect, the service must be told to reload the configuration. Run the following command as root:
# service nfs reload
Set Permissions
The NFS export directory must be configured for read and write access and must be owned by vdsm:kvm. If these users do not exist on your external NFS server, use the following command, assuming that /exports/iso is the directory to be used as an NFS share:
# chown -R 36:36 /exports/iso
The permissions on the directory must allow the owner read, write, and execute access, and the group and others read and execute access. Set these permissions using the chmod command. The following command sets the required permissions on the /exports/iso directory:
# chmod 0755 /exports/iso
12.3.1.2. Attaching NFS Storage
Procedure 12.2. Attaching NFS Storage
- Click the Storage resource tab to list the existing storage domains.
- Click New Domain to open the New Domain window.
- Enter the Name of the storage domain.
- Select the Data Center, Domain Function / Storage Type, and Use Host from the drop-down menus. If applicable, select the Format from the drop-down menu.
- Enter the Export Path to be used for the storage domain. The export path should be in the format of 192.168.0.10:/data or domain.example.com:/data.
- Click Advanced Parameters to enable further configurable settings. It is recommended that the values of these parameters not be modified.
Important
All communication to the storage domain is from the selected host and not directly from the Red Hat Enterprise Virtualization Manager. At least one active host must be attached to the chosen Data Center before the storage is configured. - Click OK to create the storage domain and close the window.
The new storage domain has a status of Locked while the disk is prepared. It is automatically attached to the data center upon completion.
12.3.2. Adding pNFS Storage
12.3.2.1. Preparing pNFS Storage
pNFS exports must be mounted with NFS version 4.1. Use either of the following mount options:
-o minorversion=1
-o v4.1
As with other NFS storage, the pNFS resource must be owned by vdsm:kvm:
# chown 36:36 [path to pNFS resource]
To verify that the pNFS client module is loaded on the host, run:
$ lsmod | grep nfs_layout_nfsv41_files
12.3.2.2. Attaching pNFS Storage
Procedure 12.3. Attaching pNFS Storage
- Click the Storage resource tab to list the existing storage domains.
- Click New Domain to open the New Domain window.
- Enter the Name of the storage domain.
- Select the Data Center, Domain Function / Storage Type, and Use Host from the drop-down menus. If applicable, select the Format from the drop-down menu.
- Enter the Export Path to be used for the storage domain. The export path should be in the format of 192.168.0.10:/data or domain.example.com:/data.
- In the VFS Type field, enter nfs4.
- In the Mount Options field, enter minorversion=1.
Important
All communication to the storage domain comes from the selected host and not from the Red Hat Enterprise Virtualization Manager. At least one active host must be attached to the chosen Data Center before the storage is configured. - Click OK to create the storage domain and close the window.
The new storage domain has a status of Locked while the disk is prepared. It is automatically attached to the data center upon completion.
12.3.3. Adding iSCSI Storage
Procedure 12.4. Adding iSCSI Storage
- Click the Storage resource tab to list the existing storage domains in the results list.
- Click New Domain to open the New Domain window.
- Enter the Name of the new storage domain.
- Use the Data Center drop-down menu to select an iSCSI data center. If you do not yet have an appropriate iSCSI data center, select (none).
- Select an active host in the Use Host field. If this is not the first data domain in a data center, you must select the data center's SPM host.
Important
All communication to the storage domain is via the selected host and not directly from the Red Hat Enterprise Virtualization Manager. At least one active host must exist in the system, and be attached to the chosen data center, before the storage is configured. - The Red Hat Enterprise Virtualization Manager is able to map either iSCSI targets to LUNs, or LUNs to iSCSI targets. The New Domain window automatically displays known targets with unused LUNs when iSCSI is selected as the storage type. If the target that you are adding storage from is not listed then you can use target discovery to find it, otherwise proceed to the next step.
iSCSI Target Discovery
- Click Discover Targets to enable target discovery options. When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment.
Note
LUNs used externally to the environment are also displayed. You can use the Discover Targets options to add LUNs on many targets, or multiple paths to the same LUNs.
- Enter the fully qualified domain name or IP address of the iSCSI host in the Address field.
- Enter the port to connect to the host on when browsing for targets in the Port field. The default is
3260. - If the Challenge Handshake Authentication Protocol (CHAP) is being used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password.
- Click Discover.
- Select the target to use from the discovery results and click the Login button. Alternatively, click Login All to log in to all of the discovered targets.
- Click the + button next to the desired target. This will expand the entry and display all unused LUNs attached to the target.
- Select the check box for each LUN that you are using to create the storage domain.
- Click OK to create the storage domain and close the window.
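The Discover Targets and Login actions in the procedure above can also be performed from a host's command line with iscsiadm. The helpers below are a sketch that only prints the equivalent commands rather than running them; the function names, and the address and IQN in the example, are placeholders.

```shell
# Print the iscsiadm sendtargets discovery command for a portal;
# 3260 is the default iSCSI port, as in the Port field above.
iscsi_discover_cmd() {
    addr="$1"; port="${2:-3260}"
    echo "iscsiadm -m discovery -t sendtargets -p $addr:$port"
}

# Print the iscsiadm login command for a discovered target.
iscsi_login_cmd() {
    target="$1"; addr="$2"; port="${3:-3260}"
    echo "iscsiadm -m node -T $target -p $addr:$port -l"
}

iscsi_discover_cmd 192.168.0.10
iscsi_login_cmd iqn.2013-01.com.example:storage 192.168.0.10
```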
12.3.4. Adding FCP Storage
Procedure 12.5. Adding FCP Storage
- Click the Storage resource tab to list all storage domains in the virtualized environment.
- Click New Domain to open the New Domain window.
- Enter the Name of the storage domain.
- Use the Data Center drop-down menu to select an FCP data center. If you do not yet have an appropriate FCP data center, select (none).
- Select an active host in the Use Host field. If this is not the first data domain in a data center, you must select the data center's SPM host.
Important
All communication to the storage domain is via the selected host and not directly from the Red Hat Enterprise Virtualization Manager. At least one active host must exist in the system, and be attached to the chosen data center, before the storage is configured. - The New Domain window automatically displays known targets with unused LUNs when Data / Fibre Channel is selected as the storage type. Select the LUN ID check box to select all of the available LUNs.
- Click OK to create the storage domain and close the window.
The new storage domain has a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center.
12.3.5. Adding Local Storage
12.3.5.1. Preparing Local Storage
Important
The recommended path for local storage is /data/images. This directory already exists with the correct permissions on Hypervisor installations. The steps in this procedure are only required when preparing local storage on Red Hat Enterprise Linux virtualization hosts.
Procedure 12.6. Preparing Local Storage
- On the virtualization host, create the directory to be used for the local storage.
# mkdir -p /data/images
- Ensure that the directory has permissions allowing read/write access to the vdsm user (UID 36) and kvm group (GID 36):
# chown 36:36 /data /data/images
# chmod 0755 /data /data/images
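The permissions set above can be sanity-checked with a small helper. This is a sketch: the function name is invented, and it checks only the 0755 mode so it can be tried unprivileged against a scratch directory (ownership must additionally be vdsm:kvm, 36:36, on the real host).

```shell
# check_local_storage DIR — verify that a local storage directory exists
# with the 0755 mode required above. (Ownership must also be vdsm:kvm,
# UID/GID 36, but this sketch checks the mode only so it can run
# without root privileges.)
check_local_storage() {
    dir="$1"
    [ -d "$dir" ] || { echo "missing: $dir"; return 1; }
    mode=$(stat -c %a "$dir")
    [ "$mode" = "755" ] || { echo "bad mode $mode: $dir"; return 1; }
    echo "ok: $dir"
}
```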
12.3.5.2. Adding Local Storage
Procedure 12.7. Adding Local Storage
- Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
- Click Maintenance to place the host into maintenance mode.
- Click Configure Local Storage to open the Configure Local Storage window.
- Click the buttons next to the Data Center, Cluster, and Storage fields to configure and name the local storage domain.
- Set the path to your local storage in the text entry field.
- If applicable, select the Memory Optimization tab to configure the memory optimization policy for the new local storage cluster.
- Click OK to save the settings and close the window.
12.3.6. Adding POSIX Compliant File System Storage
12.3.6.1. POSIX Compliant File System Storage in Red Hat Enterprise Virtualization
12.3.6.2. Attaching POSIX Compliant File System Storage
Procedure 12.8. Attaching POSIX Compliant File System Storage
- Click the Storage resource tab to list the existing storage domains in the results list.
- Click New Domain to open the New Domain window.
- Enter the Name for the storage domain.
- Select the Data Center to be associated with the storage domain. The Data Center selected must be of type POSIX (POSIX compliant FS). Alternatively, select (none).
- Select Data / POSIX compliant FS from the Domain Function / Storage Type drop-down menu. If applicable, select the Format from the drop-down menu.
- Select a host from the Use Host drop-down menu. Only hosts within the selected data center will be listed. The host that you select will be used to connect the storage domain.
- Enter the Path to the POSIX file system, as you would normally provide it to the mount command.
- Enter the VFS Type, as you would normally provide it to the mount command using the -t argument. See man mount for a list of valid VFS types.
- Enter additional Mount Options, as you would normally provide them to the mount command using the -o argument. The mount options should be provided in a comma-separated list. See man mount for a list of valid mount options.
- Click OK to attach the new Storage Domain and close the window.
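The Path, VFS Type, and Mount Options fields map directly onto a mount invocation. The sketch below just assembles that command for inspection; the helper name and the /mnt/posix mount point are illustrative, not what the Manager uses internally.

```shell
# posix_mount_cmd PATH VFS_TYPE [OPTS] — print the mount command implied
# by the New Domain fields for a POSIX compliant FS storage domain.
posix_mount_cmd() {
    path="$1"; vfs_type="$2"; opts="$3"
    if [ -n "$opts" ]; then
        echo "mount -t $vfs_type -o $opts $path /mnt/posix"
    else
        echo "mount -t $vfs_type $path /mnt/posix"
    fi
}

posix_mount_cmd 192.168.0.10:/data nfs4 minorversion=1
```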
12.4. Populating the ISO Domain
12.4.1. Populating the ISO Storage Domain
Procedure 12.9. Populating the ISO Storage Domain
- Copy the required ISO image to a temporary directory on the system running Red Hat Enterprise Virtualization Manager.
- Log in to the system running Red Hat Enterprise Virtualization Manager as the root user.
- Use the engine-iso-uploader command to upload the ISO image. This action will take some time; the amount of time varies depending on the size of the image being uploaded and the available network bandwidth.
Example 12.1. ISO Uploader Usage
In this example, the ISO image RHEL6.iso is uploaded to the ISO domain called ISODomain using NFS. The command will prompt for an administrative user name and password. The user name must be provided in the form user name@domain.
# engine-iso-uploader --iso-domain=ISODomain upload RHEL6.iso
12.4.2. VirtIO and Guest Tool Image Files
- /usr/share/virtio-win/virtio-win.iso
- /usr/share/virtio-win/virtio-win_x86.vfd
- /usr/share/virtio-win/virtio-win_amd64.vfd
- /usr/share/rhev-guest-tools-iso/rhev-tools-setup.iso

Use the engine-iso-uploader command to upload these images to your ISO storage domain. Once uploaded, the image files can be attached to and used by virtual machines.
12.4.3. Uploading the VirtIO and Guest Tool Image Files to an ISO Storage Domain
The following example uploads the virtio-win.iso, virtio-win_x86.vfd, virtio-win_amd64.vfd, and rhev-tools-setup.iso image files to the ISODomain.
Example 12.2. Uploading the VirtIO and Guest Tool Image Files
# engine-iso-uploader --iso-domain=[ISODomain] upload /usr/share/virtio-win/virtio-win.iso /usr/share/virtio-win/virtio-win_x86.vfd /usr/share/virtio-win/virtio-win_amd64.vfd /usr/share/rhev-guest-tools-iso/rhev-tools-setup.iso

A.1. Red Hat Enterprise Virtualization Manager Installation Log Files
Table A.1. Installation
| Log File | Description |
|---|---|
/var/log/ovirt-engine/engine-cleanup_yyyy_mm_dd_hh_mm_ss.log | Log from the engine-cleanup command. This is the command used to reset a Red Hat Enterprise Virtualization Manager installation. A log is generated each time the command is run. The date and time of the run is used in the filename to allow multiple logs to exist. |
/var/log/ovirt-engine/engine-db-install-yyyy_mm_dd_hh_mm_ss.log | Log from the engine-setup command detailing the creation and configuration of the rhevm database. |
/var/log/ovirt-engine/rhevm-dwh-setup-yyyy_mm_dd_hh_mm_ss.log | Log from the rhevm-dwh-setup command. This is the command used to create the ovirt_engine_history database for reporting. A log is generated each time the command is run. The date and time of the run is used in the filename to allow multiple logs to exist concurrently. |
/var/log/ovirt-engine/ovirt-engine-reports-setup-yyyy_mm_dd_hh_mm_ss.log | Log from the rhevm-reports-setup command. This is the command used to install the Red Hat Enterprise Virtualization Manager Reports modules. A log is generated each time the command is run. The date and time of the run is used in the filename to allow multiple logs to exist concurrently. |
/var/log/ovirt-engine/setup/ovirt-engine-setup-yyyymmddhhmmss.log | Log from the engine-setup command. A log is generated each time the command is run. The date and time of the run is used in the filename to allow multiple logs to exist concurrently. |
A.2. Red Hat Enterprise Virtualization Manager Log Files
Table A.2. Service Activity
| Log File | Description |
|---|---|
/var/log/ovirt-engine/engine.log | Reflects all Red Hat Enterprise Virtualization Manager GUI crashes, Active Directory look-ups, Database issues, and other events. |
/var/log/ovirt-engine/host-deploy | Log files from hosts deployed from the Red Hat Enterprise Virtualization Manager. |
/var/lib/ovirt-engine/setup-history.txt | Tracks the installation and upgrade of packages associated with the Red Hat Enterprise Virtualization Manager. |
A.3. Red Hat Enterprise Virtualization Host Log Files
Table A.3.
| Log File | Description |
|---|---|
/var/log/vdsm/libvirt.log | Log file for libvirt. |
/var/log/vdsm/spm-lock.log | Log file detailing the host's ability to obtain a lease on the Storage Pool Manager role. The log details when the host has acquired, released, renewed, or failed to renew the lease. |
/var/log/vdsm/vdsm.log | Log file for VDSM, the Manager's agent on the virtualization host(s). |
/tmp/ovirt-host-deploy-@DATE@.log | Host deployment log, copied to engine as /var/log/ovirt-engine/host-deploy/ovirt-@DATE@-@HOST@-@CORRELATION_ID@.log after the host has been successfully deployed. |
A.4. Remotely Logging Host Activities
A.4.1. Setting Up a Virtualization Host Logging Server
Procedure A.1. Setting up a Virtualization Host Logging Server
- Configure SELinux to allow rsyslog traffic.
# semanage port -a -t syslogd_port_t -p udp 514
- Edit /etc/rsyslog.conf and add the following lines:

  $template TmplAuth, "/var/log/%fromhost%/secure"
  $template TmplMsg, "/var/log/%fromhost%/messages"

  $RuleSet remote
  authpriv.* ?TmplAuth
  *.info,mail.none;authpriv.none,cron.none ?TmplMsg
  $RuleSet RSYSLOG_DefaultRuleset
  $InputUDPServerBindRuleset remote

  Uncomment the following lines by removing the leading # characters:

  #$ModLoad imudp
  #$UDPServerRun 514
- Restart the rsyslog service:
# service rsyslog restart
The logging server is now configured to receive and store the messages and secure logs from your virtualization hosts.
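Before restarting rsyslog it can help to sanity-check the edit. A minimal sketch, assuming a scratch file (/tmp/rsyslog-remote-check.conf is an illustrative path, not part of the procedure):

```shell
# Write the remote-logging rules to a scratch file and confirm both
# $template lines are present before editing /etc/rsyslog.conf itself.
cat > /tmp/rsyslog-remote-check.conf <<'EOF'
$template TmplAuth, "/var/log/%fromhost%/secure"
$template TmplMsg, "/var/log/%fromhost%/messages"
$RuleSet remote
authpriv.* ?TmplAuth
*.info,mail.none;authpriv.none,cron.none ?TmplMsg
$RuleSet RSYSLOG_DefaultRuleset
$InputUDPServerBindRuleset remote
EOF
grep -c '^\$template' /tmp/rsyslog-remote-check.conf
```

On the real logging server, rsyslogd -N1 performs a full syntax check of /etc/rsyslog.conf before the service is restarted.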
A.4.2. Configuring Logging
Procedure A.2. Configuring Hypervisor Logging
logrotate Configuration
The logrotate utility simplifies the administration of log files. The Hypervisor uses logrotate to rotate logs when they reach a certain file size. Log rotation involves renaming the current logs and starting new ones in their place. The Logrotate Max Log Size value set on the Logging screen determines when a log is rotated. Enter the Logrotate Max Log Size in kilobytes. The default maximum log size is 1024 kilobytes.

rsyslog Configuration
The rsyslog utility is a multithreaded syslog daemon. The Hypervisor can use rsyslog to transmit log files over the network to a remote syslog daemon. For information on setting up the remote syslog daemon, see the Red Hat Enterprise Linux Deployment Guide.
- Enter the remote rsyslog server address in the Server Address field.
- Enter the remote rsyslog server port in the Server Port field. The default port is 514.

netconsole Configuration
The netconsole module allows kernel messages to be sent to a remote machine. The Hypervisor uses netconsole to transmit kernel messages over the network.
- Enter the Server Address.
- Enter the Server Port. The default port is 6666.

Save Configuration
To save the logging configuration, select <Save> and press Enter.
B.1. Domain Management Tool
B.1.1. What is the Domain Management Tool?
Use the admin user to add the directory service that the users must be authenticated against. You add and remove directory services domains using the included domain management tool, engine-manage-domains.
The engine-manage-domains command is only accessible on the machine on which Red Hat Enterprise Virtualization Manager is installed, and must be run as the root user.
B.1.2. Syntax for the Domain Management Tool
engine-manage-domains -action=ACTION [options]

The following actions are available:
- add - Add a domain to Red Hat Enterprise Virtualization Manager's directory services configuration.
- edit - Edit a domain in Red Hat Enterprise Virtualization Manager's directory services configuration.
- delete - Delete a domain from Red Hat Enterprise Virtualization Manager's directory services configuration.
- validate - Validate Red Hat Enterprise Virtualization Manager's directory services configuration. This command attempts to authenticate each domain in the configuration using the configured user name and password.
- list - List Red Hat Enterprise Virtualization Manager's current directory services configuration.

The following options are available:
- -domain=DOMAIN - Specifies the domain on which the action will be performed. The -domain parameter is mandatory for add, edit, and delete.
- -provider=PROVIDER - Specifies the LDAP provider type of the directory server for the domain. Valid values are:
  - ActiveDirectory - Active Directory.
  - IPA - Identity Management (IdM).
  - RHDS - Red Hat Directory Server. Red Hat Directory Server does not come with Kerberos, and Red Hat Enterprise Virtualization requires Kerberos authentication, so RHDS must be made a service within a Kerberos domain to provide directory services to the Manager.
    Note: If you want to use RHDS as your directory server, you must have the memberof plugin installed in RHDS. To use the memberof plugin, your users must be inetusers. For more information about using the memberof plugin, see the Red Hat Directory Server Plug-in Guide.
- -user=USER - Specifies the domain user to use. The -user parameter is mandatory for add, and optional for edit.
- -passwordFile=FILE - Specifies that the domain user's password is on the first line of the provided file. This option, or the -interactive option, must be used to provide the password for use with the add action.
- -addPermissions - Specifies that the domain user will be given the SuperUser role in Red Hat Enterprise Virtualization Manager. By default, if the -addPermissions parameter is not specified, the SuperUser role is not assigned to the domain user. The -addPermissions option is optional, and is only valid when used in combination with the add and edit actions.
- -interactive - Specifies that the domain user's password is to be provided interactively. This option, or the -passwordFile option, must be used to provide the password for use with the add action.
- -configFile=FILE - Specifies an alternate configuration file that the command must load. The -configFile parameter is always optional.
- -report - In conjunction with the validate action, results in the output of a report of all encountered validation errors.
For full usage information, see the engine-manage-domains command's help output:
# engine-manage-domains --help
B.1.3. Adding Domains to Configuration
In this example, the engine-manage-domains command is used to add the IdM domain directory.demo.redhat.com to the Red Hat Enterprise Virtualization Manager configuration. The configuration is set to use the admin user when querying the domain; the password is provided interactively.
Example B.1. engine-manage-domains Add Action
# engine-manage-domains -action=add -domain=directory.demo.redhat.com -provider=IPA -user=admin -interactive
loaded template kr5.conf file
setting default_tkt_enctypes
setting realms
setting domain realm
success
User guid is: 80b71bae-98a1-11e0-8f20-525400866c73
Successfully added domain directory.demo.redhat.com. oVirt Engine restart is required in order for the changes to take place (service ovirt-engine restart).
B.1.4. Editing a Domain in the Configuration
In this example, the engine-manage-domains command is used to edit the directory.demo.redhat.com domain in the Red Hat Enterprise Virtualization Manager configuration. The configuration is updated to use the admin user when querying this domain; the password is provided interactively.
Example B.2. engine-manage-domains Edit Action
# engine-manage-domains -action=edit -domain=directory.demo.redhat.com -user=admin -interactive
loaded template kr5.conf file
setting default_tkt_enctypes
setting realms
setting domain realm
success
User guid is: 80b71bae-98a1-11e0-8f20-525400866c73
Successfully edited domain directory.demo.redhat.com. oVirt Engine restart is required in order for the changes to take place (service ovirt-engine restart).
B.1.5. Deleting a Domain from the Configuration
In this example, the engine-manage-domains command is used to remove the directory.demo.redhat.com domain from the Red Hat Enterprise Virtualization Manager configuration. Users defined in the removed domain will no longer be able to authenticate with the Red Hat Enterprise Virtualization Manager. The entries for the affected users will remain defined in the Red Hat Enterprise Virtualization Manager until they are explicitly removed.
If the domain being removed is the last domain in the configuration, only the admin user from the internal domain will be able to log in until another domain is added.
Example B.3. engine-manage-domains Delete Action
# engine-manage-domains -action=delete -domain='directory.demo.redhat.com'
WARNING: Domain directory.demo.redhat.com is the last domain in the configuration. After deleting it you will have to either add another domain, or to use the internal admin user in order to login.
Successfully deleted domain directory.demo.redhat.com.
Please remove all users and groups of this domain using the Administration portal or the API.
B.1.6. Validating Domain Configuration
In this example, the engine-manage-domains command is used to validate the Red Hat Enterprise Virtualization Manager configuration. The command attempts to log in to each listed domain with the credentials provided in the configuration. The domain is reported as valid if the attempt is successful.
Example B.4. engine-manage-domains Validate Action
# engine-manage-domains -action=validate
User guid is: 80b71bae-98a1-11e0-8f20-525400866c73
Domain directory.demo.redhat.com is valid.
B.1.7. Listing Domains in Configuration
Run with the list action, the engine-manage-domains command lists the directory services domains defined in the Red Hat Enterprise Virtualization Manager configuration. This command prints the domain, the user name in User Principal Name (UPN) format, and whether the domain is local or remote for each configuration entry.
Example B.5. engine-manage-domains List Action
# engine-manage-domains -action=list
Domain: directory.demo.redhat.com
User name: admin@DIRECTORY.DEMO.REDHAT.COM
This domain is a remote domain.
B.2. Configuration Tool
B.2.1. Configuration Tool
The Red Hat Enterprise Virtualization Manager provides a configuration tool, engine-config. You can use this tool to:
- list all available configuration keys,
- list all available configuration values,
- retrieve the value of a specific configuration key, and
- set the value of a specific configuration key.
Use the --cver parameter to specify the configuration version to be used when retrieving or setting a value for a configuration key. The default configuration version is general.
B.2.2. Syntax for engine-config Command
For full usage information, see the help output of the engine-config command:

# engine-config --help

Common tasks
- List available configuration keys
  Use the --list parameter to list available configuration keys.
  # engine-config --list
  Each available configuration key is listed by name and description.
- List available configuration values
  Use the --all parameter to list available configuration values.
  # engine-config --all
  Each available configuration key is listed by name, the current value of the key, and the configuration version.
- Retrieve value of configuration key
  Use the --get parameter to retrieve the value of a specific key.
  # engine-config --get KEY_NAME
  Replace KEY_NAME with the name of the specific key to retrieve the key name, value, and the configuration version. Use the --cver parameter to specify the configuration version of the value to be retrieved.
- Set value of configuration key
  Use the --set parameter to set the value of a specific key. You must also set the configuration version to which the change is to apply using the --cver parameter.
  # engine-config --set KEY_NAME=KEY_VALUE --cver=VERSION
  Replace KEY_NAME with the name of the specific key to set; replace KEY_VALUE with the value to be set. Environments with more than one configuration version require the VERSION to be specified.
B.3. Image Uploader
B.3.1. Virtual Machine Image Uploader
Using the engine-image-uploader command, you can list export storage domains and upload virtual machines in OVF format to an export storage domain and have them automatically recognized in the Red Hat Enterprise Virtualization Manager. The tool only supports gzip-compressed OVF files created by Red Hat Enterprise Virtualization. The OVF archive has the following internal layout:

|-- images
|   |-- [Image Group UUID]
|        |--- [Image UUID (this is the disk image)]
|        |--- [Image UUID (this is the disk image)].meta
|-- master
|   |--- vms
|        |--- [UUID]
|             |--- [UUID].ovf
B.3.2. Syntax for the engine-image-uploader Command
engine-image-uploader [options] list
engine-image-uploader [options] upload [file].[file]...[file]

The engine-image-uploader command supports two actions, list and upload.
- The list parameter lists the valid export storage domains available for image uploads.
- The upload parameter uploads selected image file(s) to the specified image storage domain.
The command requires that either the list or upload parameter be included for basic usage. The upload parameter requires a minimum of one local file name to upload.
The following options are valid for use with the engine-image-uploader command. You can set defaults for any of these in the /etc/ovirt-engine/imageuploader.conf file.
General Options
- -h, --help - Displays command usage information and returns to prompt.
- --conf-file=PATH - Sets PATH as the configuration file the tool is to use. The default is /etc/ovirt-engine/imageuploader.conf.
- --log-file=PATH - Sets PATH as the specific file name the command should use for the log output.
- --quiet - Sets quiet mode, reducing console output to a minimum. Quiet mode is off by default.
- -v, --verbose - Sets verbose mode, providing more console output. Verbose mode is off by default.
- -f, --force - Force mode is necessary when the source file being uploaded has an identical file name to an existing file at the destination; it forces the existing file to be overwritten. Force mode is off by default.
Red Hat Enterprise Virtualization Manager Options
- -u USER, --user=USER - Sets the user associated with the file to be uploaded. The USER is specified in the format user@domain, where user is the user name and domain is the directory services domain in use. The user must exist in directory services and be known to the Red Hat Enterprise Virtualization Manager.
- -r FQDN, --rhevm=FQDN - Sets the fully qualified domain name of the Red Hat Enterprise Virtualization Manager server to which to upload images, where FQDN is replaced by the fully qualified domain name of the Manager. It is assumed that the image uploader is being run on the same client machine as the Red Hat Enterprise Virtualization Manager; the default value is localhost:443.
Export Storage Domain Options
- -e, --export-domain=EXPORT_DOMAIN - Sets the storage domain EXPORT_DOMAIN as the destination for uploads.
- -n, --nfs-server=NFSSERVER - Sets the NFS path NFSSERVER as the destination for uploads.
- -i, --ovf-id - Use this option if you do not want to update the UUID of the image. By default, the tool generates a new UUID for the image. This ensures that there is no conflict between the ID of the incoming image and those already in the environment.
- -d, --disk-instance-id - Use this option if you do not want to rename the instance ID for each disk (that is, InstanceId) in the image. By default, this tool generates new UUIDs for disks within the image to be imported. This ensures that there are no conflicts between the disks on the imported image and those within the environment.
- -m, --mac-address - Use this option if you do not want to remove the network components from the image that will be imported. By default, this tool removes any network interface cards from the image to prevent conflicts with network cards on other virtual machines within the environment. Once the image has been imported, use the Administration Portal to add network interface cards back, and the Manager will ensure that there are no MAC address conflicts.
- -N NEW_IMAGE_NAME, --name=NEW_IMAGE_NAME - Supply this option if you want to rename the image.
B.3.3. Creating an OVF Archive That is Compatible with the Image Uploader
Use the following procedure to create an OVF archive that is compatible with the engine-image-uploader tool.
Procedure B.1. Creating an OVF Archive That is Compatible with the Image Uploader
- Use the Manager to create an empty export domain. An empty export domain makes it easy to see which directory contains your virtual machine.
- Export your virtual machine to the empty export domain you just created.
- Log in to the storage server that serves as the export domain, find the root of the NFS share, and change to the subdirectory under that mount point. Because you started with a new export domain, there is only one directory under the exported directory. It contains the images/ and master/ directories.
- Run the tar -zcvf my.ovf images/ master/ command to create the tar/gzip OVF archive.
- Anyone you give the resulting OVF file to (in this example, called my.ovf) can use the engine-image-uploader command to upload the image into their Red Hat Enterprise Virtualization environment.
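The layout and tar step can be rehearsed locally before touching a real export domain. A minimal sketch, assuming a throwaway directory (the placeholder UUIDs are illustrative, not real image IDs):

```shell
# Build a skeleton of the export-domain layout in a scratch directory.
cd "$(mktemp -d)"
mkdir -p images/11111111-1111-1111-1111-111111111111
mkdir -p master/vms/22222222-2222-2222-2222-222222222222
touch master/vms/22222222-2222-2222-2222-222222222222/22222222-2222-2222-2222-222222222222.ovf

# Same invocation as the procedure: a gzip-compressed tar named my.ovf.
tar -zcvf my.ovf images/ master/

# List the archive contents to confirm both top-level directories packed.
tar -tzf my.ovf
```

The same two commands (tar -zcvf to pack, tar -tzf to verify) apply unchanged on the real export domain.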
B.3.4. Basic engine-image-uploader Usage Examples
Use engine-image-uploader to list the export storage domains:
Example B.6. Uploading a file Using the engine-image-uploader Tool
# engine-image-uploader list
Please provide the REST API username for RHEV-M: admin@internal
Please provide the REST API password for the admin@internal RHEV-M user: **********
Export Storage Domain Name | Datacenter | Export Domain Status
myexportdom                | Myowndc    | active
To upload an OVF file, specify an NFS server (-n NFSSERVER) or export domain (-e EXPORT_STORAGE_DOMAIN) and the name of the .ovf file:
# engine-image-uploader -e myexportdom upload myrhel6.ovf
Please provide the REST API username for RHEV-M: admin@internal
Please provide the REST API password for the admin@internal RHEV-M user: **********
B.4. ISO Uploader
B.4.1. ISO Uploader
ISO images are uploaded using the engine-iso-uploader command. You are required to log in as the root user and provide the administration credentials for the Red Hat Enterprise Virtualization environment. The engine-iso-uploader -h command displays usage information, including a list of all valid options for the engine-iso-uploader command.
B.4.2. Syntax for engine-iso-uploader Command
engine-iso-uploader [options] list
engine-iso-uploader [options] upload [file].[file]...[file]

The engine-iso-uploader command supports two actions, list and upload.
- The list parameter lists the valid ISO storage domains available for ISO uploads. The Red Hat Enterprise Virtualization Manager sets this list on the local machine upon installation.
- The upload parameter uploads single or multiple space-separated ISO files to the specified ISO storage domain. NFS is used by default; SSH is available.
The command requires that either the list or upload parameter be included for basic usage. The upload parameter requires a minimum of one local file name to upload.
The following options are valid for use with the engine-iso-uploader command.
General Options
- --version - Displays the version number of the command in use and returns to prompt.
- -h, --help - Displays command usage information and returns to prompt.
- --conf-file=PATH - Sets PATH as the configuration file the tool is to use.
- --log-file=PATH - Sets PATH as the specific file name the command should use for the log output.
- --quiet - Sets quiet mode, reducing console output to a minimum. Quiet mode is off by default.
- -v, --verbose - Sets verbose mode, providing more console output. Verbose mode is off by default.
- -f, --force - Force mode is necessary when the source file being uploaded has an identical file name to an existing file at the destination; it forces the existing file to be overwritten. Force mode is off by default.
Red Hat Enterprise Virtualization Manager Options
- -u USER, --user=USER - Sets the user associated with the file to be uploaded. The USER is specified in the format user@domain, where user is the user name and domain is the directory services domain in use. The user must exist in directory services and be known to the Red Hat Enterprise Virtualization Manager.
- -r FQDN, --rhevm=FQDN - Sets the fully qualified domain name of the Red Hat Enterprise Virtualization Manager server to which to upload ISOs, where FQDN is replaced by the fully qualified domain name of the Manager. It is assumed that the ISO uploader is being run on the same client machine as the Red Hat Enterprise Virtualization Manager; the default value is localhost.
ISO Storage Domain Options
- -i, --iso-domain=ISODOMAIN - Sets the storage domain ISODOMAIN as the destination for uploads.
- -n, --nfs-server=NFSSERVER - Sets the NFS path NFSSERVER as the destination for uploads.
Connection Options
- --ssh-user=USER - Sets USER as the SSH user name to use for the upload.
- --ssh-port=PORT - Sets PORT as the port to use when connecting to SSH.
- -k KEYFILE, --key-file=KEYFILE - Sets KEYFILE as the public key to use for SSH authentication. You will be prompted to enter the password of the user specified with --ssh-user=USER if no key is set.
B.4.3. Usage Examples
B.4.3.1. Specifying an NFS Server
Example B.7. Uploading to an NFS Server
# engine-iso-uploader --nfs-server=storage.demo.redhat.com:/iso/path upload RHEL6.0.iso
B.4.3.2. Basic ISO Uploader Usage
In the following example, the first command lists the available ISO domains; the admin@internal user is used because no user was specified in the command. The second command uploads an ISO file over NFS to the specified ISO domain.
Example B.8. List Domains and Upload Image
# engine-iso-uploader list
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
ISO Storage Domain Name | Datacenter | ISO Domain Status
ISODomain | Default | active

# engine-iso-uploader --iso-domain=[ISODomain] upload [RHEL6.iso]
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
B.5. Log Collector
B.5.1. Log Collector
Logs are collected using the engine-log-collector command. You are required to log in as the root user and provide the administration credentials for the Red Hat Enterprise Virtualization environment. The engine-log-collector -h command displays usage information, including a list of all valid options for the engine-log-collector command.
B.5.2. Syntax for engine-log-collector Command
engine-log-collector [options] list [all, clusters, datacenters]
engine-log-collector [options] collect

The engine-log-collector command supports two actions, list and collect.
- The list parameter lists either the hosts, clusters, or data centers attached to the Red Hat Enterprise Virtualization Manager. You are able to filter the log collection based on the listed objects.
- The collect parameter performs log collection from the Red Hat Enterprise Virtualization Manager. The collected logs are placed in an archive file under the /tmp/logcollector directory. The engine-log-collector command assigns each log a specific file name.
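Once a collect run finishes, the archive can be located under /tmp/logcollector. A small sketch (the archive file name below is fabricated for illustration):

```shell
# The collect action writes its archive under /tmp/logcollector.
# Create the directory and a placeholder archive to show how the
# newest report can be picked out after a run.
mkdir -p /tmp/logcollector
touch /tmp/logcollector/sosreport-example-20110804121320.tar.xz
ls -t /tmp/logcollector/sosreport-*.tar.xz | head -n 1
```

ls -t sorts by modification time, so the first entry is always the most recent collection.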
The following options are valid for use with the engine-log-collector command.
General options
- --version - Displays the version number of the command in use and returns to prompt.
- -h, --help - Displays command usage information and returns to prompt.
- --conf-file=PATH - Sets PATH as the configuration file the tool is to use.
- --local-tmp=PATH - Sets PATH as the directory in which logs are saved. The default directory is /tmp/logcollector.
- --ticket-number=TICKET - Sets TICKET as the ticket, or case number, to associate with the SOS report.
- --upload=FTP_SERVER - Sets FTP_SERVER as the destination for retrieved logs to be sent using FTP. Do not use this option unless advised to by a Red Hat support representative.
- --log-file=PATH - Sets PATH as the specific file name the command should use for the log output.
- --quiet - Sets quiet mode, reducing console output to a minimum. Quiet mode is off by default.
- -v, --verbose - Sets verbose mode, providing more console output. Verbose mode is off by default.
Red Hat Enterprise Virtualization Manager Options
For example, engine-log-collector --user=admin@internal --cluster ClusterA,ClusterB --hosts "SalesHost*" specifies the user as admin@internal and limits the log collection to only the SalesHost hosts in clusters A and B.
- --no-hypervisors - Omits virtualization hosts from the log collection.
- -u USER, --user=USER - Sets the user name for login. The USER is specified in the format user@domain, where user is the user name and domain is the directory services domain in use. The user must exist in directory services and be known to the Red Hat Enterprise Virtualization Manager.
- -r FQDN, --rhevm=FQDN - Sets the fully qualified domain name of the Red Hat Enterprise Virtualization Manager server from which to collect logs, where FQDN is replaced by the fully qualified domain name of the Manager. It is assumed that the log collector is being run on the same local host as the Red Hat Enterprise Virtualization Manager; the default value is localhost.
- -c CLUSTER, --cluster=CLUSTER - Collects logs from the virtualization hosts in the nominated CLUSTER in addition to logs from the Red Hat Enterprise Virtualization Manager. The cluster(s) for inclusion must be specified in a comma-separated list of cluster names or match patterns.
- -d DATACENTER, --data-center=DATACENTER - Collects logs from the virtualization hosts in the nominated DATACENTER in addition to logs from the Red Hat Enterprise Virtualization Manager. The data center(s) for inclusion must be specified in a comma-separated list of data center names or match patterns.
- -H HOSTS_LIST, --hosts=HOSTS_LIST - Collects logs from the virtualization hosts in the nominated HOSTS_LIST in addition to logs from the Red Hat Enterprise Virtualization Manager. The hosts for inclusion must be specified in a comma-separated list of host names, fully qualified domain names, or IP addresses. Match patterns are also valid.
SOS Report Options
- --jboss-home=JBOSS_HOME - JBoss installation directory path. The default is /var/lib/jbossas.
- --java-home=JAVA_HOME - Java installation directory path. The default is /usr/lib/jvm/java.
- --jboss-profile=JBOSS_PROFILE - Displays a quoted and space-separated list of server profiles; limits log collection to specified profiles. The default is 'rhevm-slimmed'.
- --enable-jmx - Enables the collection of run-time metrics from Red Hat Enterprise Virtualization's JBoss JMX interface.
- --jboss-user=JBOSS_USER - User with permissions to invoke JBoss JMX. The default is admin.
- --jboss-logsize=LOG_SIZE - Maximum size in MB for the retrieved log files.
- --jboss-stdjar=STATE - Sets collection of JAR statistics for JBoss standard JARs. Replace STATE with on or off. The default is on.
- --jboss-servjar=STATE - Sets collection of JAR statistics from any server configuration directories. Replace STATE with on or off. The default is on.
- --jboss-twiddle=STATE - Sets collection of twiddle data on or off. Twiddle is the JBoss tool used to collect data from the JMX invoker. Replace STATE with on or off. The default is on.
- --jboss-appxml=XML_LIST - Displays a quoted and space-separated list of applications with XML descriptions to be retrieved. The default is all.
SSH Configuration
- --ssh-port=PORT - Sets PORT as the port to use for SSH connections with virtualization hosts.
- -k KEYFILE, --key-file=KEYFILE - Sets KEYFILE as the public SSH key to be used for accessing the virtualization hosts.
- --max-connections=MAX_CONNECTIONS - Sets MAX_CONNECTIONS as the maximum concurrent SSH connections for logs from virtualization hosts. The default is 10.
PostgreSQL Database Options
The --pg-pass parameter includes the Red Hat Enterprise Virtualization Manager database in the log collection. The database user name and database name must be specified if they have been changed from the default values.
Use the --pg-dbhost parameter if the database is not on the local host. Use the optional --pg-host-key parameter to collect remote logs. The PostgreSQL SOS plugin must be installed on the database server for remote log collection to be successful.
- --no-postgresql - Disables collection of the database. Database collection is performed by default.
- --pg-user=USER - Sets USER as the user name to use for connections with the database server. The default is postgres.
- --pg-dbname=DBNAME - Sets DBNAME as the database name to use for connections with the database server. The default is rhevm.
- --pg-dbhost=DBHOST - Sets DBHOST as the host name for the database server. The default is localhost.
- --pg-host-key=KEYFILE - Sets KEYFILE as the public identity file (private key) for the database server. This value is not set by default; it is required only where the database does not exist on the local host.
B.5.3. Basic Log Collector Usage
When the engine-log-collector command is run without any additional parameters, its default behavior is to collect all logs from the Red Hat Enterprise Virtualization Manager and its attached hosts. It also collects database logs unless the --no-postgresql parameter is added. In the following example, log collector is run to collect all logs from the Red Hat Enterprise Virtualization Manager and three attached hosts.
Example B.9. Log Collector Usage
# engine-log-collector
INFO: Gathering oVirt Engine information...
INFO: Gathering PostgreSQL the oVirt Engine database and log files from localhost...
Please provide REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
About to collect information from 3 hypervisors. Continue? (Y/n):
INFO: Gathering information from selected hypervisors...
INFO: collecting information from 192.168.122.250
INFO: collecting information from 192.168.122.251
INFO: collecting information from 192.168.122.252
INFO: finished collecting information from 192.168.122.250
INFO: finished collecting information from 192.168.122.251
INFO: finished collecting information from 192.168.122.252
Creating compressed archive...
INFO Log files have been collected and placed in /tmp/logcollector/sosreport-rhn-account-20110804121320-ce2a.tar.xz.
The MD5 for this file is 6d741b78925998caff29020df2b2ce2a and its size is 26.7MB.
B.6. SPICE Proxy
B.6.1. SPICE Proxy Overview
Turning a SPICE proxy on consists of using engine-config on the Manager to set the key SpiceProxyDefault to a value consisting of the name and port of the proxy.
Turning a SPICE proxy off consists of using engine-config on the Manager to remove the value to which the key SpiceProxyDefault has been set.
B.6.2. SPICE Proxy Machine Setup
Procedure B.2. Installing Squid on a RHEL Machine
- Install Squid on the proxy machine:
  # yum install squid
- Open /etc/squid/squid.conf. Change:
  http_access deny CONNECT !SSL_ports
  to:
  http_access deny CONNECT !Safe_ports
- Restart the proxy:
  # service squid restart
- Open the default Squid port:
  # iptables -A INPUT -p tcp --dport 3128 -j ACCEPT
- Make this iptables rule persistent:
  # iptables-save
B.6.3. Turning on SPICE Proxy
Procedure B.3. Activating SPICE Proxy
- On the Manager, use the engine-config tool to set a proxy:
  # engine-config -s SpiceProxyDefault=someProxy
- Restart the ovirt-engine service:
  # service ovirt-engine restart
The proxy must have this form: protocol://[host]:[port]
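As a concrete (hypothetical) example, a Squid proxy listening on port 3128 would give the value below. A small shell check of the required form can catch typos before the value is passed to engine-config; the host name and port are illustrative assumptions:

```shell
# Hypothetical proxy value; substitute your own host and port.
proxy="http://proxy.example.com:3128"

# Verify the value matches the required protocol://host:port form
# before applying it with engine-config.
if echo "$proxy" | grep -Eq '^[a-z]+://[^:/]+:[0-9]+$'; then
    echo "well-formed: $proxy"
fi
```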
Note
Only the http protocol is supported by SPICE clients. If https is specified, the client will ignore the proxy setting and attempt a direct connection to the hypervisor.
B.6.4. Turning Off a SPICE Proxy
Procedure B.4. Turning Off a SPICE Proxy
- Log in to the Manager:
  $ ssh root@[IP of Manager]
- Run the following command to clear the SPICE proxy:
  # engine-config -s SpiceProxyDefault=""
- Restart the Manager:
  # service ovirt-engine restart
B.7. Squid Proxy
B.7.1. Installing and Configuring a Squid Proxy
Procedure B.5. Configuring a Squid Proxy
Obtaining a Keypair
Obtain a keypair and certificate for the HTTPS port of the Squid proxy server. You can obtain this keypair the same way that you would obtain a keypair for any other SSL/TLS service. The keypair takes the form of two PEM files containing the private key and the signed certificate. In this document we assume that they are named proxy.key and proxy.cer. The keypair and certificate can also be generated using the certificate authority of the oVirt engine. If you already have the private key and certificate for the proxy and do not want to generate them with the oVirt engine certificate authority, skip to the next step.
Generating a Keypair
Decide on a host name for the proxy. In this procedure, the proxy is called proxy.example.com. Decide on the rest of the distinguished name of the certificate for the proxy. The important part here is the "common name", which contains the host name of the proxy. Users' browsers use the common name to validate the connection. It is good practice to use the same country and same organization name used by the oVirt engine itself. Find this information by logging in to the oVirt engine machine and running the following command:
[root@engine ~]# openssl x509 -in /etc/pki/ovirt-engine/ca.pem -noout -subject
This command will output something like this:
subject= /C=US/O=Example Inc./CN=engine.example.com.81108
The relevant part here is /C=US/O=Example Inc.. Use this to build the complete distinguished name for the certificate for the proxy:
/C=US/O=Example Inc./CN=proxy.example.com
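The construction of the proxy's distinguished name can be sketched in shell, using the example subject shown above; the host names are the same illustrative ones used throughout this procedure:

```shell
# Subject reported by the engine CA certificate (example value from above).
engine_subject="/C=US/O=Example Inc./CN=engine.example.com.81108"

# Keep the country and organization, and replace the CN with the proxy host.
c_and_o="${engine_subject%/CN=*}"
proxy_dn="${c_and_o}/CN=proxy.example.com"

echo "$proxy_dn"
```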
Log in to the proxy machine and generate a certificate signing request:
[root@proxy ~]# openssl req -newkey rsa:2048 -subj '/C=US/O=Example Inc./CN=proxy.example.com' -nodes -keyout proxy.key -out proxy.req
Note
The quotes around the distinguished name for the certificate are very important. Do not leave them out. The command will generate the key pair. It is very important that the private key is not encrypted (that is the effect of the -nodes option), because otherwise you would need to type the password to start the proxy server. The output of the command looks like this:
Generating a 2048 bit RSA private key ......................................................+++ .................................................................................+++ writing new private key to 'proxy.key' -----
The command generates two files: proxy.key and proxy.req. proxy.key is the private key. Keep this file safe. proxy.req is the certificate signing request; it does not require any special protection. To generate the signed certificate, copy the proxy.req file to the oVirt engine machine, using the scp command:
[root@proxy ~]# scp proxy.req engine.example.com:/etc/pki/ovirt-engine/requests/.
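As a sanity check before the engine signs the request, you can confirm that the CSR carries the intended subject. The sketch below regenerates the key and request with the example names used above (assuming the openssl command is available) and prints the embedded subject back out:

```shell
# Generate the key pair and certificate signing request with the example
# distinguished name; -nodes leaves the private key unencrypted.
openssl req -newkey rsa:2048 -subj '/C=US/O=Example Inc./CN=proxy.example.com' \
    -nodes -keyout proxy.key -out proxy.req 2>/dev/null

# Print the subject embedded in the request; it should name the proxy host.
openssl req -in proxy.req -noout -subject
```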
Log in to the oVirt engine machine and run the following command to sign the certificate:
[root@engine ~]# /usr/share/ovirt-engine/bin/pki-enroll-request.sh --name=proxy --days=3650 --subject='/C=US/O=Example Inc./CN=proxy.example.com'
This signs the certificate and makes it valid for 10 years (3650 days). Set the certificate to expire earlier if you prefer. The output of the command looks like this:
Using configuration from openssl.conf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'US'
organizationName :PRINTABLE:'Example Inc.'
commonName :PRINTABLE:'proxy.example.com'
Certificate is to be certified until Jul 10 10:05:24 2023 GMT (3650 days)
Write out database with 1 new entries
Data Base Updated
The generated certificate file is available in the directory /etc/pki/ovirt-engine/certs and is named proxy.cer. Copy this file to the proxy machine:
[root@proxy ~]# scp engine.example.com:/etc/pki/ovirt-engine/certs/proxy.cer .
Make sure that both the proxy.key and proxy.cer files are present on the proxy machine:
[root@proxy ~]# ls -l proxy.key proxy.cer
The output of this command will look like this:
-rw-r--r--. 1 root root 4902 Jul 12 12:11 proxy.cer
-rw-r--r--. 1 root root 1834 Jul 12 11:58 proxy.key
You are now ready to install and configure the proxy server.
Install the Squid proxy server package
Install the package as follows:
[root@proxy ~]# yum -y install squid
Configure the Squid proxy server
Move the private key and signed certificate to a place where the proxy can access them, for example to the /etc/squid directory:
[root@proxy ~]# cp proxy.key proxy.cer /etc/squid/.
Set permissions so that the "squid" user can read these files:
[root@proxy ~]# chgrp squid /etc/squid/proxy.*
[root@proxy ~]# chmod 640 /etc/squid/proxy.*
The Squid proxy will connect to the oVirt engine web server using the SSL protocol, and must verify the certificate used by the engine. Copy the certificate of the CA that signed the certificate of the oVirt engine web server to a place where the proxy can access it, for example /etc/squid. The default CA certificate is located in the /etc/pki/ovirt-engine/ca.pem file on the oVirt engine machine. Copy it with the following command:
[root@proxy ~]# scp engine.example.com:/etc/pki/ovirt-engine/ca.pem /etc/squid/.
Make sure that the "squid" user can read that file:
[root@proxy ~]# chgrp squid /etc/squid/ca.pem
[root@proxy ~]# chmod 640 /etc/squid/ca.pem
If SELinux is in enforcing mode, change the context of port 443 using the semanage tool. This permits Squid to use port 443.
[root@proxy ~]# yum install -y policycoreutils-python
[root@proxy ~]# semanage port -m -p tcp -t http_cache_port_t 443
Replace the existing squid configuration file with the following:
https_port 443 key=/etc/squid/proxy.key cert=/etc/squid/proxy.cer ssl-bump defaultsite=engine.example.com
cache_peer engine.example.com parent 443 0 no-query originserver ssl sslcafile=/etc/squid/ca.pem name=engine
cache_peer_access engine allow all
ssl_bump allow all
http_access allow all
Restart the Squid Proxy Server
Run the following command on the proxy machine:
[root@proxy ~]# service squid restart
Configure the websockets proxy
Note
This step is optional. Do this step only if you want to use the noVNC console or the SPICE HTML 5 console. To use the noVNC or SPICE HTML 5 consoles to connect to the console of virtual machines, the websocket proxy server must be configured on the machine on which the engine is installed. If you chose to configure the websocket proxy server when prompted during installation or upgrade of the engine with the engine-setup command, the websocket proxy server is already configured. If you did not choose to configure the websocket proxy server at that time, you can configure it later by running the engine-setup command with the following option:
engine-setup --otopi-environment="OVESETUP_CONFIG/websocketProxyConfig=bool:True"
You must also make sure that the ovirt-websocket-proxy service is started and will start automatically on boot:
[root@engine ~]# service ovirt-websocket-proxy status
[root@engine ~]# chkconfig ovirt-websocket-proxy on
Both the noVNC and SPICE HTML 5 consoles use the websocket protocol to connect to the virtual machines, but the Squid proxy server does not support the websocket protocol, so this communication cannot be proxied with Squid. Instead, tell clients to connect directly to the websocket proxy running on the machine where the engine is running. To do this, update the WebSocketProxy configuration parameter using the engine-config tool:
[root@engine ~]# engine-config -s WebSocketProxy=engine.example.com:6100
[root@engine ~]# service ovirt-engine restart
Important
If you skip this step, clients will assume that the websocket proxy is running on the proxy machine, and will therefore fail to connect.
Connect to the user portal using the complete URL
Connect to the User Portal using the complete URL, for instance:
https://proxy.example.com/UserPortal/org.ovirt.engine.ui.userportal.UserPortal/UserPortal.html
Note
Shorter URLs, for example https://proxy.example.com/UserPortal, will not work. These shorter URLs are redirected to the long URL by the application server, using the 302 response code and the Location header. The version of Squid in Red Hat Enterprise Linux and Fedora (Squid version 3.1) does not support rewriting these headers.
Revision History

| Revision | Date |
|---|---|
| 3.3-44 | Fri 20 Mar 2015 |
| 3.3-43 | Tue 07 Oct 2014 |
| 3.3-42 | Thu 22 May 2014 |
| 3.3-41 | Wed 5 Mar 2014 |
| 3.3-40 | Tue 4 Mar 2014 |
| 3.3-39 | Mon 3 Mar 2014 |
| 3.3-38 | Mon 17 Feb 2014 |
| 3.3-37 | Fri 14 Feb 2014 |
| 3.3-36 | Fri 07 Feb 2014 |
| 3.3-35 | Thu 23 Jan 2014 |
| 3.3-34 | Thu 09 Jan 2014 |
| 3.3-33 | Tue 07 Jan 2014 |
| 3.3-32 | Mon 23 Dec 2013 |
| 3.3-31 | Wed 18 Dec 2013 |
| 3.3-30 | Tue 17 Dec 2013 |
| 3.3-29 | Tue 17 Dec 2013 |
| 3.3-28 | Mon 16 Dec 2013 |
| 3.3-27 | Fri 13 Dec 2013 |
| 3.3-26 | Thu 12 Dec 2013 |
| 3.3-25 | Wed 11 Dec 2013 |
| 3.3-24 | Fri 06 Dec 2013 |
| 3.3-23 | Mon 25 Nov 2013 |
| 3.3-22 | Mon 25 Nov 2013 |
| 3.3-21 | Fri 22 Nov 2013 |
| 3.3-20 | Mon 18 Nov 2013 |
| 3.3-19 | Wed 13 Nov 2013 |
| 3.3-18 | Wed 13 Nov 2013 |
| 3.3-17 | Thu 17 Oct 2013 |
| 3.3-16 | Tue 15 Oct 2013 |
| 3.3-15 | Fri 11 Oct 2013 |
| 3.3-14 | Wed 09 Oct 2013 |
| 3.3-13 | Fri 04 Oct 2013 |
| 3.3-12 | Wed 02 Oct 2013 |
| 3.3-11 | Mon 30 Sep 2013 |
| 3.3-10 | Thu 26 Sep 2013 |
| 3.3-9 | Thu 29 Aug 2013 |
| 3.3-8 | Fri 23 Aug 2013 |
| 3.3-7 | Thu 22 Aug 2013 |
| 3.3-6 | Fri 16 Aug 2013 |
| 3.3-5 | Tue 13 Aug 2013 |
| 3.3-4 | Mon 12 Aug 2013 |
| 3.3-3 | Fri 09 Aug 2013 |
| 3.3-2 | Thu 01 Aug 2013 |
| 3.3-1 | Thu 18 Jul 2013 |