Red Hat Enterprise Virtualization 3.5

Administration Guide

Administration Tasks in Red Hat Enterprise Virtualization

Red Hat Enterprise Virtualization Documentation Team

Red Hat Customer Content Services

Legal Notice

Copyright © 2015 Red Hat.
This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be removed.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.

Abstract

This book contains information and procedures relevant to Red Hat Enterprise Virtualization administrators.
1. Introduction
1.1. Red Hat Enterprise Virtualization Architecture
1.2. Red Hat Enterprise Virtualization System Components
1.3. Red Hat Enterprise Virtualization Resources
1.4. Red Hat Enterprise Virtualization API Support Statement
1.5. Administering and Maintaining the Red Hat Enterprise Virtualization Environment
2. Using the Administration Portal
2.1. Browser and Client Requirements
2.2. Graphical User Interface Elements
2.3. Tree Mode and Flat Mode
2.4. Using the Guide Me Facility
2.5. Performing Searches in Red Hat Enterprise Virtualization
2.6. Saving a Query String as a Bookmark
2.7. The Configure Window
I. Administering the Resources
3. Quality of Service
3.1. Storage Quality of Service
3.2. Network Quality of Service
3.3. CPU Quality of Service
4. Data Centers
4.1. Introduction to Data Centers
4.2. The Storage Pool Manager
4.3. SPM Priority
4.4. Using the Events Tab to Identify Problem Objects in Data Centers
4.5. Data Center Tasks
4.6. Data Centers and Storage Domains
4.7. Data Centers and Permissions
5. Clusters
5.1. Introduction to Clusters
5.2. Cluster Tasks
5.3. Clusters and Permissions
6. Logical Networks
6.1. Introduction to Logical Networks
6.2. Required Networks, Optional Networks, and Virtual Machine Networks
6.3. Logical Network Tasks
6.4. Virtual Network Interface Cards
6.5. External Provider Networks
6.6. Logical Networks and Permissions
7. Hosts
7.1. Introduction to Red Hat Enterprise Virtualization Hosts
7.2. Red Hat Enterprise Virtualization Hypervisor Hosts
7.3. Foreman Host Provider Hosts
7.4. Red Hat Enterprise Linux Hosts
7.5. Host Tasks
7.6. Hosts and Networking
7.7. Host Resilience
7.8. Hosts and Permissions
8. Storage
8.1. Understanding Storage Domains
8.2. Storage Metadata Versions in Red Hat Enterprise Virtualization
8.3. Preparing and Adding NFS Storage
8.4. Preparing and Adding Local Storage
8.5. Preparing and Adding POSIX Compliant File System Storage
8.6. Preparing and Adding Block Storage
8.7. Importing Existing Storage Domains
8.8. Storage Tasks
8.9. Storage and Permissions
9. Working with Red Hat Gluster Storage
9.1. Red Hat Gluster Storage Nodes
9.2. Using Red Hat Gluster Storage as a Storage Domain
9.3. Clusters and Gluster Hooks
10. Virtual Machines
10.1. Introduction to Virtual Machines
10.2. Supported Virtual Machine Operating Systems
10.3. Virtual Machine Performance Parameters
10.4. Creating Virtual Machines
10.5. Explanation of Settings and Controls in the New Virtual Machine and Edit Virtual Machine Windows
10.6. Configuring Virtual Machines
10.7. Editing Virtual Machines
10.8. Running Virtual Machines
10.9. Removing Virtual Machines
10.10. Cloning Virtual Machines
10.11. Virtual Machines and Permissions
10.12. Snapshots
10.13. Affinity Groups
10.14. Exporting and Importing Virtual Machines and Templates
10.15. Migrating Virtual Machines Between Hosts
10.16. Improving Uptime with Virtual Machine High Availability
10.17. Other Virtual Machine Tasks
11. Templates
11.1. Introduction to Templates
11.2. Sealing Virtual Machines in Preparation for Deployment as Templates
11.3. Template Tasks
11.4. Templates and Permissions
12. Pools
12.1. Introduction to Virtual Machine Pools
12.2. Virtual Machine Pool Tasks
12.3. Pools and Permissions
12.4. Trusted Compute Pools
13. Virtual Machine Disks
13.1. Understanding Virtual Machine Storage
13.2. Understanding Virtual Disks
13.3. Settings to Wipe Virtual Disks After Deletion
13.4. Shareable Disks in Red Hat Enterprise Virtualization
13.5. Read Only Disks in Red Hat Enterprise Virtualization
13.6. Virtual Disk Tasks
13.7. Virtual Disks and Permissions
14. External Providers
14.1. Introduction to External Providers in Red Hat Enterprise Virtualization
14.2. Enabling the Authentication of OpenStack Providers
14.3. Adding External Providers
14.4. Editing External Providers
14.5. Removing External Providers
II. Administering the Environment
15. Updating the Red Hat Enterprise Virtualization Environment
15.1. Updates between Minor Releases
15.2. Upgrading to Red Hat Enterprise Virtualization 3.5
15.3. Upgrading to Red Hat Enterprise Virtualization 3.4
15.4. Upgrading to Red Hat Enterprise Virtualization 3.3
15.5. Upgrading to Red Hat Enterprise Virtualization Manager 3.2
15.6. Upgrading to Red Hat Enterprise Virtualization Manager 3.1
15.7. Post-upgrade Tasks
16. Backups and Migration
16.1. Backing Up and Restoring the Red Hat Enterprise Virtualization Manager
16.2. Backing Up and Restoring Virtual Machines Using the Backup and Restore API
17. Users and Roles
17.1. Introduction to Users
17.2. Directory Users
17.3. User Authorization
17.4. Red Hat Enterprise Virtualization Manager User Tasks
18. Quotas and Service Level Agreement Policy
18.1. Introduction to Quota
18.2. Shared Quota and Individually Defined Quota
18.3. Quota Accounting
18.4. Enabling and Changing a Quota Mode in a Data Center
18.5. Creating a New Quota Policy
18.6. Explanation of Quota Threshold Settings
18.7. Assigning a Quota to an Object
18.8. Using Quota to Limit Resources by User
18.9. Editing Quotas
18.10. Removing Quotas
18.11. Service Level Agreement Policy Enforcement
19. Event Notifications
19.1. Configuring Event Notifications in the Administration Portal
19.2. Canceling Event Notifications in the Administration Portal
19.3. Parameters for Event Notifications in ovirt-engine-notifier.conf
19.4. Configuring the Red Hat Enterprise Virtualization Manager to Send SNMP Traps
20. Utilities
20.1. The oVirt Engine Rename Tool
20.2. The Domain Management Tool
20.3. The Engine Configuration Tool
20.4. The Image Uploader Tool
20.5. The USB Filter Editor
20.6. The Log Collector Tool
20.7. The ISO Uploader Tool
III. Gathering Information About the Environment
21. Log Files
21.1. Red Hat Enterprise Virtualization Manager Installation Log Files
21.2. Red Hat Enterprise Virtualization Manager Log Files
21.3. SPICE Log Files
21.4. Red Hat Enterprise Virtualization Host Log Files
21.5. Remotely Logging Host Activities
22. Proxies
22.1. SPICE Proxy
22.2. Squid Proxy
23. History Database, Reports, and Dashboards
23.1. Introduction
23.2. History Database
23.3. Reports
23.4. Dashboards
A. Firewalls
A.1. Red Hat Enterprise Virtualization Manager Firewall Requirements
A.2. Virtualization Host Firewall Requirements
A.3. Directory Server Firewall Requirements
A.4. Database Server Firewall Requirements
B. VDSM and Hooks
B.1. VDSM
B.2. VDSM Hooks
B.3. Extending VDSM with Hooks
B.4. Supported VDSM Events
B.5. The VDSM Hook Environment
B.6. The VDSM Hook Domain XML Object
B.7. Defining Custom Properties
B.8. Setting Virtual Machine Custom Properties
B.9. Evaluating Virtual Machine Custom Properties in a VDSM Hook
B.10. Using the VDSM Hooking Module
B.11. VDSM Hook Execution
B.12. VDSM Hook Return Codes
B.13. VDSM Hook Examples
C. Explanation of Network Bridge Parameters
C.1. Explanation of bridge_opts Parameters
D. Red Hat Enterprise Virtualization User Interface Plugins
D.1. Red Hat Enterprise Virtualization User Interface Plug-ins
D.2. Red Hat Enterprise Virtualization User Interface Plugin Lifecycle
D.3. User Interface Plugin-related Files and Their Locations
D.4. Example User Interface Plug-in Deployment
D.5. Installing the Red Hat Support Plug-in
D.6. Using Red Hat Support Plug-in
E. Red Hat Enterprise Virtualization and SSL
E.1. Replacing the Red Hat Enterprise Virtualization Manager SSL Certificate
E.2. Setting Up SSL or TLS Connections between the Manager and an LDAP Server
F. Using Search, Bookmarks, and Tags
F.1. Searches
F.2. Bookmarks
F.3. Tags
G. Branding
G.1. Branding
H. Revision History

Chapter 1. Introduction

1.1. Red Hat Enterprise Virtualization Architecture

A Red Hat Enterprise Virtualization environment consists of:
  • Virtual machine hosts using the Kernel-based Virtual Machine (KVM).
  • Agents and tools running on hosts including VDSM, QEMU, and libvirt. These tools provide local management for virtual machines, networks and storage.
  • The Red Hat Enterprise Virtualization Manager: a centralized management platform for the Red Hat Enterprise Virtualization environment. It provides a graphical interface where you can view, provision, and manage resources.
  • Storage domains to hold virtual resources such as virtual machines, templates, and ISO images.
  • A database to track the state of and changes to the environment.
  • Access to an external Directory Server to provide users and authentication.
  • Networking to link the environment together. This includes physical network links, and logical networks.

Figure 1.1. Red Hat Enterprise Virtualization Platform Overview

1.2. Red Hat Enterprise Virtualization System Components

The Red Hat Enterprise Virtualization version 3.5 environment consists of one or more hosts (either Red Hat Enterprise Linux 6.5 or 6.6 hosts, Red Hat Enterprise Linux 7 hosts, Red Hat Enterprise Virtualization Hypervisor 6.5 or 6.6 hosts, or Red Hat Enterprise Virtualization Hypervisor 7 hosts) and at least one Red Hat Enterprise Virtualization Manager.
Hosts run virtual machines using KVM (Kernel-based Virtual Machine) virtualization technology.
The Red Hat Enterprise Virtualization Manager runs on Red Hat Enterprise Linux Server 6.5 or 6.6 and provides interfaces for controlling the Red Hat Enterprise Virtualization environment. It manages virtual machine and storage provisioning, connection protocols, user sessions, virtual machine images, and high availability virtual machines.
The Red Hat Enterprise Virtualization Manager is accessed through the Administration Portal using a web browser.

1.3. Red Hat Enterprise Virtualization Resources

The components of the Red Hat Enterprise Virtualization environment fall into two categories: physical resources, and logical resources. Physical resources are physical objects, such as host and storage servers. Logical resources are nonphysical groupings and processes, such as logical networks and virtual machine templates.
  • Data Center - A data center is the highest level container for all physical and logical resources within a managed virtual environment. It is a collection of clusters, virtual machines, storage, and networks.
  • Clusters - A cluster is a set of physical hosts that are treated as a resource pool for virtual machines. Hosts in a cluster share the same network infrastructure and storage. They form a migration domain within which virtual machines can be moved from host to host.
  • Logical Networks - A logical network is a logical representation of a physical network. Logical networks group network traffic and communication between the Manager, hosts, storage, and virtual machines.
  • Hosts - A host is a physical server that runs one or more virtual machines. Hosts are grouped into clusters. Virtual machines can be migrated from one host to another within a cluster.
  • Storage Pool - The storage pool is a logical entity that contains a standalone image repository of a certain type: iSCSI, Fibre Channel, NFS, or POSIX. Each storage pool can contain several domains: for storing virtual machine disk images, for storing ISO images, and for importing and exporting virtual machine images.
  • Virtual Machines - A virtual machine is a virtual desktop or virtual server containing an operating system and a set of applications. Multiple identical virtual machines can be created in a Pool. Virtual machines are created, managed, or deleted by power users and accessed by users.
  • Template - A template is a model virtual machine with predefined settings. A virtual machine that is based on a particular template acquires the settings of the template. Using templates is the quickest way of creating a large number of virtual machines in a single step.
  • Virtual Machine Pool - A virtual machine pool is a group of identical virtual machines that are available on demand by each group member. Virtual machine pools can be set up for different purposes. For example, one pool can be for the Marketing department, another for Research and Development, and so on.
  • Snapshot - A snapshot is a view of a virtual machine's operating system and all its applications at a point in time. It can be used to save the settings of a virtual machine before an upgrade or installing new applications. In case of problems, a snapshot can be used to restore the virtual machine to its original state.
  • User Types - Red Hat Enterprise Virtualization supports multiple levels of administrators and users with distinct levels of permissions. System administrators can manage objects of the physical infrastructure, such as data centers, hosts, and storage. Users access virtual machines available from a virtual machine pool or standalone virtual machines made accessible by an administrator.
  • Events and Monitors - Alerts, warnings, and other notices about activities help the administrator to monitor the performance and status of resources.
  • Reports - A range of reports is available, either from the reports module based on JasperReports or from the data warehouse. Preconfigured or ad hoc reports can be generated from the reports module. Users can also generate reports using any query tool that supports SQL, drawing on a data warehouse that collects monitoring data for hosts, virtual machines, and storage.

1.4. Red Hat Enterprise Virtualization API Support Statement

Red Hat Enterprise Virtualization exposes a number of interfaces for interacting with the components of the virtualization environment. These interfaces are in addition to the user interfaces provided by the Red Hat Enterprise Virtualization Manager Administration, User, and Reports Portals. Many of these interfaces are fully supported. Some, however, are supported only for read access, or only when your use of them has been explicitly requested by Red Hat Support.

Supported Interfaces for Read and Write Access

Direct interaction with these interfaces is supported and encouraged for both read and write access:
Representational State Transfer (REST) API
The REST API exposed by the Red Hat Enterprise Virtualization Manager is a fully supported interface for interacting with Red Hat Enterprise Virtualization Manager.
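The version 3 REST API represents resources and collections as XML documents. As a minimal sketch of how a client might consume the response to a request such as `GET /api/vms` (the payload below is hand-written for illustration, not captured from a live Manager; the IDs and names are hypothetical):

```python
# Parse the kind of XML collection the RHEV 3.5 REST API returns for a
# virtual machine listing. The payload is an illustrative sample only.
import xml.etree.ElementTree as ET

sample_response = """\
<vms>
    <vm id="aaa-111" href="/api/vms/aaa-111">
        <name>web-01</name>
        <status><state>up</state></status>
    </vm>
    <vm id="bbb-222" href="/api/vms/bbb-222">
        <name>db-01</name>
        <status><state>down</state></status>
    </vm>
</vms>
"""

root = ET.fromstring(sample_response)
for vm in root.findall("vm"):
    # Each <vm> element carries identifying attributes and nested
    # sub-resources such as its status.
    print(vm.findtext("name"), vm.findtext("status/state"))
```

Against a live Manager, the same document would be retrieved with an authenticated HTTPS GET (for example as a user such as admin@internal); see the REST API Guide for authentication details.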
Software Development Kit (SDK)
The Python SDK and Java SDK are fully supported interfaces for interacting with Red Hat Enterprise Virtualization Manager.
Command Line Shell
The command line shell provided by the rhevm-cli package is a fully supported interface for interacting with the Red Hat Enterprise Virtualization Manager.
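An interactive session might look like the following sketch. The connection URL, user, and virtual machine name are placeholders, and the exact connection options are documented with the rhevm-cli package:

```
$ rhevm-shell -c -l "https://manager.example.com/api" -u admin@internal

[RHEVM shell (connected)]# list vms
[RHEVM shell (connected)]# show vm myvm
```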
VDSM Hooks
The creation and use of VDSM hooks to trigger modification of virtual machines based on custom properties specified in the Administration Portal is supported on Red Hat Enterprise Linux virtualization hosts. The use of VDSM Hooks on virtualization hosts running Red Hat Enterprise Virtualization Hypervisor is not currently supported.
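The hook mechanism can be sketched as follows. A hook is an executable script placed under an event directory on the host (for example /usr/libexec/vdsm/hooks/before_vm_start/); VDSM exports any custom properties as environment variables and passes the virtual machine's libvirt domain XML through the vdsm hooking module. The sketch below simulates that exchange with standard-library calls only, so the transformation is visible in isolation; the iothreads custom property and the XML change it makes are hypothetical examples, not a documented property.

```python
# Simulation of the VDSM hook pattern: custom properties arrive as
# environment variables, and the libvirt domain XML is modified in place.
# On a real host the XML is exchanged via the vdsm 'hooking' module
# (hooking.read_domxml() / hooking.write_domxml()); here the exchange
# is stubbed so the script is self-contained.
import os
import xml.dom.minidom


def apply_iothreads(domxml_text, iothreads):
    """Append an <iothreads> element to the domain XML (hypothetical tweak)."""
    dom = xml.dom.minidom.parseString(domxml_text)
    elem = dom.createElement("iothreads")
    elem.appendChild(dom.createTextNode(str(iothreads)))
    dom.documentElement.appendChild(elem)
    return dom.documentElement.toxml()


# Simulate the custom property the Administration Portal would pass.
os.environ["iothreads"] = "2"

domxml = "<domain type='kvm'><name>vm1</name></domain>"
if "iothreads" in os.environ:
    domxml = apply_iothreads(domxml, os.environ["iothreads"])
print(domxml)
```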

Supported Interfaces for Read Access

Direct interaction with these interfaces is supported and encouraged only for read access. Use of these interfaces for write access is not supported unless explicitly requested by Red Hat Support:
Red Hat Enterprise Virtualization Manager History Database
Read access to the Red Hat Enterprise Virtualization Manager history database using the database views specified in the Administration Guide is supported. Write access is not supported.
Libvirt on Virtualization Hosts
Read access to libvirt using the virsh -r command is a supported method of interacting with virtualization hosts. Write access is not supported.
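For example, on the virtualization host itself, running domains can be inspected without risk of modifying them (vm-name is a placeholder):

```
# Read-only connection; state-changing virsh subcommands are rejected.
virsh -r list --all        # list all domains and their states
virsh -r dominfo vm-name   # basic information for one domain
```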

Unsupported Interfaces

Direct interaction with these interfaces is not supported unless your use of them is explicitly requested by Red Hat Support:
The vdsClient Command
Use of the vdsClient command to interact with virtualization hosts is not supported unless explicitly requested by Red Hat Support.
Red Hat Enterprise Virtualization Hypervisor Console
Console access to Red Hat Enterprise Virtualization Hypervisor outside of the provided text user interface for configuration is not supported unless explicitly requested by Red Hat Support.
Red Hat Enterprise Virtualization Manager Database
Direct access to and manipulation of the Red Hat Enterprise Virtualization Manager database is not supported unless explicitly requested by Red Hat Support.

Important

Red Hat Support will not debug user-created scripts or hooks, except where it can be demonstrated that there is an issue with the interface being used rather than with the user-created script itself. For more general information about Red Hat support policies, see https://access.redhat.com/support/offerings/production/soc.html.

1.5. Administering and Maintaining the Red Hat Enterprise Virtualization Environment

The Red Hat Enterprise Virtualization environment requires an administrator to keep it running. As an administrator, your tasks include:
  • Managing physical and virtual resources such as hosts and virtual machines. This includes upgrading and adding hosts, importing domains, converting virtual machines created on foreign hypervisors, and managing virtual machine pools.
  • Monitoring the overall system resources for potential problems, such as extreme load on one of the hosts or insufficient memory or disk space, and taking any necessary actions (such as migrating virtual machines to other hosts to lessen the load, or freeing resources by shutting down machines).
  • Responding to the new requirements of virtual machines (for example, upgrading the operating system or allocating more memory).
  • Managing customized object properties using tags.
  • Managing searches saved as public bookmarks.
  • Managing user setup and setting permission levels.
  • Troubleshooting issues for specific users or virtual machines, or for overall system functionality.
  • Generating general and specific reports.

Chapter 2. Using the Administration Portal

2.1. Browser and Client Requirements

The following browser versions and operating systems have supported SPICE clients and are optimal for displaying the application graphics of the Administration Portal and the User Portal:
Operating System Family    Browser                        Portal Access                          Supported SPICE Client?
Red Hat Enterprise Linux   Mozilla Firefox 38             Administration Portal and User Portal  Yes
Windows                    Internet Explorer 9 or later   Administration Portal                  Yes
Windows                    Internet Explorer 8 or later   User Portal                            Yes

2.2. Graphical User Interface Elements

The Red Hat Enterprise Virtualization Administration Portal consists of contextual panes and menus and can be used in two modes: tree mode and flat mode. Tree mode allows you to browse the object hierarchy of a data center, while flat mode allows you to view all resources across data centers in a single list. The elements of the graphical user interface are shown in the diagram below.

Figure 2.1. Key Graphical User Interface Elements

Key Graphical User Interface Elements

  • Header
    The header bar contains the name of the currently logged-in user, the Sign Out button, the About button, the Configure button, and the Guide button. The About button shows information about the version of Red Hat Enterprise Virtualization, the Configure button allows you to configure user roles, and the Guide button provides a shortcut to the book you are reading now.
  • Search Bar
    The search bar allows you to build queries for finding resources such as hosts and clusters in the Red Hat Enterprise Virtualization environment. Queries can be as simple as a list of all the hosts in the system, or more complex, such as a list of resources that match certain conditions. As you type each part of the search query, you are offered choices to assist you in building the search. The star icon can be used to save the search as a bookmark.
  • Resource Tabs
    All resources can be managed using their associated tab. Moreover, the Events tab allows you to view events for each resource. The Administration Portal provides the following tabs: Data Centers, Clusters, Hosts, Networks, Storage, Disks, Virtual Machines, Pools, Templates, Volumes, Users, and Events, and a Dashboard tab if you have installed the data warehouse and reports.
  • Results List
    You can perform a task on an individual item, multiple items, or all the items in the results list by selecting the items and clicking the relevant action button. Information on a selected item is displayed in the details pane.
  • Details Pane
    The details pane shows detailed information about a selected item in the results list. If no items are selected, this pane is hidden. If multiple items are selected, the details pane displays information on the first selected item only.
  • System/Bookmarks/Tags Pane
    The system pane displays a navigable hierarchy of the resources in the virtualized environment. Bookmarks are used to save frequently used or complicated searches for repeated use. Bookmarks can be added, edited, or removed. Tags are applied to groups of resources and are used to search for all resources associated with that tag. The System/Bookmarks/Tags Pane can be minimized using the arrow in the upper right corner of the panel.
  • Alerts/Events Pane
    The Alerts tab lists all high severity events such as errors or warnings. The Events tab shows a list of events for all resources. The Tasks tab lists the currently running tasks. You can view this panel by clicking the maximize/minimize button.
  • Refresh Rate
    The refresh rate drop-down menu allows you to set the time, in seconds, between Administration Portal refreshes. To avoid the delay between a user performing an action and the result appearing in the portal, the portal refreshes automatically upon an action or event, regardless of the chosen refresh interval. You can set this interval by clicking the refresh symbol in the top right of the portal.

2.3. Tree Mode and Flat Mode

The Administration Portal provides two different modes for managing your resources: tree mode and flat mode. Tree mode displays resources in a hierarchical view per data center, from the highest level of the data center down to the individual virtual machine. Working in tree mode is highly recommended for most operations.

Figure 2.2. Tree Mode

Flat mode allows you to search across data centers or storage domains; it does not limit you to viewing the resources of a single hierarchy. For example, you may need to find all virtual machines that are using more than 80% CPU across clusters and data centers, or locate all hosts that have the highest utilization. Flat mode makes this possible. In addition, certain objects, such as Pools and Users, are not in the data center hierarchy and can be accessed only in flat mode.
To access flat mode, click on the System item in the Tree pane on the left side of the screen. You are in flat mode if the Pools and Users resource tabs appear.

Figure 2.3. Flat Mode

2.4. Using the Guide Me Facility

When setting up resources such as data centers and clusters, a number of tasks must be completed in sequence. The context-sensitive Guide Me window prompts for actions that are appropriate to the resource being configured. The Guide Me window can be accessed at any time by clicking the Guide Me button on the resource toolbar.

Figure 2.4. New Data Center Guide Me Window

2.5. Performing Searches in Red Hat Enterprise Virtualization

The Administration Portal enables the management of thousands of resources, such as virtual machines, hosts, users, and more. To perform a search, enter the search query (free-text or syntax-based) in the search bar. Search queries can be saved as bookmarks for future reuse, so you do not have to reenter a search query each time the specific search results are needed.
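For example, queries follow an object: property = value pattern, as described in the Searches appendix (the cluster name Default below is only an example):

```
Hosts: cluster = Default and status = up
Vms: status = down
Events: severity > normal sortby time desc
```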

Note

Prior to Red Hat Enterprise Virtualization 3.4, searches in the Administration Portal were case sensitive. The search bar now supports case-insensitive searches.

2.6. Saving a Query String as a Bookmark

Summary
A bookmark can be used to remember a search query, and can be shared with other users.

Procedure 2.1. Saving a Query String as a Bookmark

  1. Enter the desired search query in the search bar and perform the search.
  2. Click the star-shaped Bookmark button to the right of the search bar to open the New Bookmark window.

    Figure 2.5. Bookmark Icon

  3. Enter the Name of the bookmark.
  4. Edit the Search string field (if applicable).
  5. Click OK to save the query as a bookmark and close the window.
  6. The search query is saved and displays in the Bookmarks pane.
Result
You have saved a search query as a bookmark for future reuse. Use the Bookmarks pane to find and select the bookmark.

2.7. The Configure Window

Accessed from the main title bar in the Administration Portal, the Configure window allows you to configure a number of global resources for your Red Hat Enterprise Virtualization environment, such as users, roles, permissions, cluster policies, and instance types. This window allows you to customize the way in which users interact with resources in the environment, and provides a central location for configuring options that can be applied to multiple clusters.

Figure 2.6. Accessing the Configure window

2.7.1. Roles

Roles are predefined sets of privileges that can be configured from Red Hat Enterprise Virtualization Manager. Roles provide access and management permissions to different levels of resources in the data center, and to specific physical and virtual resources.
With multilevel administration, any permissions that apply to a container object also apply to all individual objects within that container. For example, when a host administrator role is assigned to a user on a specific host, the user gains permissions to perform any of the available host operations, but only on the assigned host. However, if the host administrator role is assigned to a user on a data center, the user gains permissions to perform host operations on all hosts within all clusters of the data center.

2.7.1.1. Creating a New Role

Summary
If the role you require is not on Red Hat Enterprise Virtualization's default list of roles, you can create a new role and customize it to suit your purposes.

Procedure 2.2. Creating a New Role

  1. On the header bar, click the Configure button to open the Configure window. The window shows a list of default User and Administrator roles, and any custom roles.
  2. Click New. The New Role dialog box displays.

    Figure 2.7. The New Role Dialog

  3. Enter the Name and Description of the new role.
  4. Select either Admin or User as the Account Type.
  5. Use the Expand All or Collapse All buttons to view more or fewer of the permissions for the listed objects in the Check Boxes to Allow Action list. You can also expand or collapse the options for each object.
  6. For each of the objects, select or clear the actions you wish to permit or deny for the role you are setting up.
  7. Click OK to apply the changes you have made. The new role displays on the list of roles.
Result
You have created a new role with permissions to specific resources. You can assign the new role to users.

2.7.1.2. Editing or Copying a Role

Summary
You can change the settings for roles you have created, but you cannot change default roles. To change default roles, clone and modify them to suit your requirements.

Procedure 2.3. Editing or Copying a Role

  1. On the header bar, click the Configure button to open the Configure window. The window shows a list of default User and Administrator roles, and any custom roles.
  2. Select the role you wish to change. Click Edit to open the Edit Role window, or click Copy to open the Copy Role window.
  3. If necessary, edit the Name and Description of the role.
  4. Use the Expand All or Collapse All buttons to view more or fewer of the permissions for the listed objects. You can also expand or collapse the options for each object.
  5. For each of the objects, select or clear the actions you wish to permit or deny for the role you are editing.
  6. Click OK to apply the changes you have made.
Result
You have edited the properties of a role, or cloned a role.

2.7.1.3. User Role and Authorization Examples

The following examples illustrate how to apply authorization controls for various scenarios, using the different features of the authorization system described in this chapter.

Example 2.1. Cluster Permissions

Sarah is the system administrator for the accounts department of a company. All the virtual resources for her department are organized under a Red Hat Enterprise Virtualization cluster called Accounts. She is assigned the ClusterAdmin role on the accounts cluster. This enables her to manage all virtual machines in the cluster, since the virtual machines are child objects of the cluster. Managing the virtual machines includes editing, adding, or removing virtual resources such as disks, and taking snapshots. It does not allow her to manage any resources outside this cluster. Because ClusterAdmin is an administrator role, it allows her to use the Administration Portal to manage these resources, but does not give her any access via the User Portal.

Example 2.2. VM PowerUser Permissions

John is a software developer in the accounts department. He uses virtual machines to build and test his software. Sarah has created a virtual desktop called johndesktop for him. John is assigned the UserVmManager role on the johndesktop virtual machine. This allows him to access this single virtual machine using the User Portal. Because he has UserVmManager permissions, he can modify the virtual machine and add resources to it, such as new virtual disks. Because UserVmManager is a user role, it does not allow him to use the Administration Portal.

Example 2.3. Data Center Power User Role Permissions

Penelope is an office manager. In addition to her own responsibilities, she occasionally helps the HR manager with recruitment tasks, such as scheduling interviews and following up on reference checks. As per corporate policy, Penelope needs to use a particular application for recruitment tasks.
While Penelope has her own machine for office management tasks, she wants to create a separate virtual machine to run the recruitment application. She is assigned PowerUserRole permissions for the data center in which her new virtual machine will reside. This is because to create a new virtual machine, she needs to make changes to several components within the data center, including creating the virtual machine disk image in the storage domain.
Note that this is not the same as assigning DataCenterAdmin privileges to Penelope. As a PowerUser for a data center, Penelope can log in to the User Portal and perform virtual machine-specific actions on virtual machines within the data center. She cannot perform data center-level operations such as attaching hosts or storage to a data center.

Example 2.4. Network Administrator Permissions

Chris works as the network administrator in the IT department. Her day-to-day responsibilities include creating, manipulating, and removing networks in the department's Red Hat Enterprise Virtualization environment. For her role, she requires administrative privileges on the resources and on the networks of each resource. For example, if Chris has NetworkAdmin privileges on the IT department's data center, she can add and remove networks in the data center, and attach and detach networks for all virtual machines belonging to the data center.
In addition to managing the networks of the company's virtualized infrastructure, Chris also has a junior network administrator reporting to her. The junior network administrator, Pat, is managing a smaller virtualized environment for the company's internal training department. Chris has assigned Pat NetworkUser permissions and UserVmManager permissions for the virtual machines used by the internal training department. With these permissions, Pat can perform simple administrative tasks such as adding network interfaces onto virtual machines in the Extended tab of the User Portal. However, he does not have permissions to alter the networks for the hosts on which the virtual machines run, or the networks on the data center to which the virtual machines belong.

Example 2.5. Custom Role Permissions

Rachel works in the IT department, and is responsible for managing user accounts in Red Hat Enterprise Virtualization. She needs permission to add user accounts and assign them the appropriate roles and permissions. She does not use any virtual machines herself, and should not have access to administration of hosts, virtual machines, clusters or data centers. There is no built-in role which provides her with this specific set of permissions. A custom role must be created to define the set of permissions appropriate to Rachel's position.
UserManager Custom Role

Figure 2.8. UserManager Custom Role

The UserManager custom role shown above allows manipulation of users, permissions and roles. These actions are organized under System - the top level object of the hierarchy shown in Figure 2.8, “UserManager Custom Role”. This means they apply to all other objects in the system. The role is set to have an Account Type of Admin. This means that when she is assigned this role, Rachel can only use the Administration Portal, not the User Portal.

2.7.2. System Permissions

Permissions enable users to perform actions on objects, where objects are either individual objects, such as a single virtual machine, or container objects, such as a cluster or data center.
Permissions & Roles

Figure 2.9. Permissions & Roles

Any permissions that apply to a container object also apply to all members of that container. The following diagram depicts the hierarchy of objects in the system.
Red Hat Enterprise Virtualization Object Hierarchy

Figure 2.10. Red Hat Enterprise Virtualization Object Hierarchy
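
The containment rule above - a permission granted on a container object also applies to every object inside it - can be sketched as a minimal Python model. The class and object names here are illustrative, not part of any Red Hat Enterprise Virtualization API:

```python
class RhevObject:
    """Illustrative model of an object in the RHEV object hierarchy."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.permissions = set()   # (user, role) pairs granted directly

    def effective_permissions(self):
        # A permission on any ancestor container also applies here.
        perms = set(self.permissions)
        if self.parent is not None:
            perms |= self.parent.effective_permissions()
        return perms

# The hierarchy from Figure 2.10, simplified: System > Data Center > Cluster > VM
system = RhevObject("System")
dc = RhevObject("Default", parent=system)
cluster = RhevObject("Accounts", parent=dc)
vm = RhevObject("johndesktop", parent=cluster)

cluster.permissions.add(("Sarah", "ClusterAdmin"))
vm.permissions.add(("John", "UserVmManager"))

# Sarah's cluster-level grant cascades down to the VM; John's grant on the VM
# does not cascade up to the cluster.
print(("Sarah", "ClusterAdmin") in vm.effective_permissions())       # True
print(("John", "UserVmManager") in cluster.effective_permissions())  # False
```

This mirrors Example 2.1, where Sarah's ClusterAdmin role on the Accounts cluster gives her control of every virtual machine in it.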

2.7.2.1. User Properties

Roles and permissions are properties of the user. Roles are predefined sets of privileges that permit access to different levels of physical and virtual resources. Multilevel administration provides a finely grained hierarchy of permissions. For example, a data center administrator has permissions to manage all objects in the data center, while a host administrator has system administrator permissions to a single physical host. A user can have permissions to use a single virtual machine but not make any changes to the virtual machine configurations, while another user can be assigned system permissions to a virtual machine.

2.7.2.2. User and Administrator Roles

Red Hat Enterprise Virtualization provides a range of pre-configured roles, from an administrator with system-wide permissions to an end user with access to a single virtual machine. While you cannot change or remove the default roles, you can clone and customize them, or create new roles according to your requirements. There are two types of roles:
  • Administrator Role: Allows access to the Administration Portal for managing physical and virtual resources. An administrator role confers permissions for actions to be performed in the Administration Portal; however, it has no bearing on what a user can see in the User Portal.
  • User Role: Allows access to the User Portal for managing and accessing virtual machines and templates. A user role determines what a user can see in the User Portal. Permissions granted to a user with an administrator role are reflected in the actions available to that user in the User Portal.
For example, if you have an administrator role on a cluster, you can manage all virtual machines in the cluster using the Administration Portal. However, you cannot access any of these virtual machines in the User Portal; this requires a user role.
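
The portal-access rule can be summarized in a short sketch. The role names and the two role types are from the text above; the function itself is illustrative, not a Manager API:

```python
def accessible_portals(assigned_roles):
    """Return the portals a user's roles grant access to.

    assigned_roles: iterable of (role_name, role_type) pairs, where
    role_type is "admin" or "user" - the two role types described above.
    """
    portals = set()
    for _name, role_type in assigned_roles:
        if role_type == "admin":
            portals.add("Administration Portal")
        elif role_type == "user":
            portals.add("User Portal")
    return portals

# An administrator role alone grants no User Portal access:
print(accessible_portals([("ClusterAdmin", "admin")]))
# {'Administration Portal'}

# A user needs a user role to see machines in the User Portal:
print(accessible_portals([("ClusterAdmin", "admin"), ("UserRole", "user")]))
```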

2.7.2.3. User Roles Explained

The table below describes basic user roles which confer permissions to access and configure virtual machines in the User Portal.

Table 2.1. Red Hat Enterprise Virtualization User Roles - Basic

UserRole
  Privileges: Can access and use virtual machines and pools.
  Notes: Can log in to the User Portal, use assigned virtual machines and pools, and view virtual machine state and details.
PowerUserRole
  Privileges: Can create and manage virtual machines and templates.
  Notes: Apply this role to a user for the whole environment with the Configure window, or for specific data centers or clusters. For example, if a PowerUserRole is applied on a data center level, the PowerUser can create virtual machines and templates in the data center.
UserVmManager
  Privileges: System administrator of a virtual machine.
  Notes: Can manage virtual machines and create and use snapshots. A user who creates a virtual machine in the User Portal is automatically assigned the UserVmManager role on the machine.
The table below describes advanced user roles which allow you to fine-tune permissions for resources in the User Portal.

Table 2.2. Red Hat Enterprise Virtualization User Roles - Advanced

UserTemplateBasedVm
  Privileges: Limited privileges to only use templates.
  Notes: Can use templates to create virtual machines.
DiskOperator
  Privileges: Virtual disk user.
  Notes: Can use, view, and edit virtual disks. Inherits permissions to use the virtual machine to which the virtual disk is attached.
VmCreator
  Privileges: Can create virtual machines in the User Portal.
  Notes: This role is not applied to a specific virtual machine; apply it to a user for the whole environment with the Configure window, or for specific data centers or clusters. When applying this role to a cluster, you must also apply the DiskCreator role on an entire data center, or on specific storage domains.
TemplateCreator
  Privileges: Can create, edit, manage, and remove virtual machine templates within assigned resources.
  Notes: This role is not applied to a specific template; apply it to a user for the whole environment with the Configure window, or for specific data centers, clusters, or storage domains.
DiskCreator
  Privileges: Can create, edit, manage, and remove virtual machine disks within assigned clusters or data centers.
  Notes: This role is not applied to a specific virtual disk; apply it to a user for the whole environment with the Configure window, or for specific data centers or storage domains.
TemplateOwner
  Privileges: Can edit and delete the template, and assign and manage user permissions for the template.
  Notes: This role is automatically assigned to the user who creates a template. Other users who do not have TemplateOwner permissions on a template cannot view or use the template.
NetworkUser
  Privileges: Logical network and network interface user for virtual machines and templates.
  Notes: Can attach or detach network interfaces from specific logical networks.

2.7.2.4. Administrator Roles Explained

The table below describes basic administrator roles which confer permissions to access and configure resources in the Administration Portal.

Table 2.3. Red Hat Enterprise Virtualization System Administrator Roles - Basic

SuperUser
  Privileges: System Administrator of the Red Hat Enterprise Virtualization environment.
  Notes: Has full permissions across all objects and levels, and can manage all objects across all data centers.
ClusterAdmin
  Privileges: Cluster Administrator.
  Notes: Possesses administrative permissions for all objects underneath a specific cluster.
DataCenterAdmin
  Privileges: Data Center Administrator.
  Notes: Possesses administrative permissions for all objects underneath a specific data center, except for storage.

Important

Do not use the administrative user for the directory server as the Red Hat Enterprise Virtualization administrative user. Create a user in the directory server specifically for use as the Red Hat Enterprise Virtualization administrative user.
The table below describes advanced administrator roles which allow you to fine-tune permissions for resources in the Administration Portal.

Table 2.4. Red Hat Enterprise Virtualization System Administrator Roles - Advanced

TemplateAdmin
  Privileges: Administrator of a virtual machine template.
  Notes: Can create, delete, and configure the storage domains and network details of templates, and move templates between domains.
StorageAdmin
  Privileges: Storage Administrator.
  Notes: Can create, delete, configure, and manage an assigned storage domain.
HostAdmin
  Privileges: Host Administrator.
  Notes: Can attach, remove, configure, and manage a specific host.
NetworkAdmin
  Privileges: Network Administrator.
  Notes: Can configure and manage the network of a particular data center or cluster. A network administrator of a data center or cluster inherits network permissions for virtual pools within the cluster.
VmPoolAdmin
  Privileges: System Administrator of a virtual pool.
  Notes: Can create, delete, and configure a virtual pool; assign and remove virtual pool users; and perform basic operations on a virtual machine in the pool.
GlusterAdmin
  Privileges: Gluster Storage Administrator.
  Notes: Can create, delete, configure, and manage Gluster storage volumes.

2.7.3. Cluster Policies

A cluster policy is a set of rules that defines the logic by which virtual machines are distributed amongst hosts in the cluster to which that cluster policy is applied. Cluster policies determine this logic via a combination of filters, weightings, and a load balancing policy. While the Red Hat Enterprise Virtualization Manager provides four default cluster policies - Evenly_Distributed, None, Power_Saving, and VM_Evenly_Distributed - you can also define new cluster policies that provide fine-grained control over the distribution of virtual machines.
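
The filter-then-weight logic described above can be sketched as follows. The "lowest combined score wins" convention and the simplified stand-ins for the Memory filter and the OptimalForEvenDistribution weight module are illustrative; this is not the Manager's actual scheduler code:

```python
def schedule(vm, hosts, filters, weights):
    """Pick a host for vm: apply every filter, then rank by weighted score.

    filters: list of predicates f(host, vm) -> bool; a host is kept only
             if every filter returns True (hard constraints).
    weights: list of (factor, scorer) pairs; scorer(host, vm) -> number,
             and the host with the lowest combined score is preferred.
    """
    candidates = [h for h in hosts if all(f(h, vm) for f in filters)]
    if not candidates:
        return None  # no host satisfies the hard constraints
    return min(candidates,
               key=lambda h: sum(factor * scorer(h, vm)
                                 for factor, scorer in weights))

# Simplified stand-ins for real modules:
memory_filter = lambda h, vm: h["free_mem_mb"] >= vm["mem_mb"]
even_distribution_weight = lambda h, vm: h["cpu_usage_pct"]

vm = {"mem_mb": 4096}
hosts = [{"name": "host1", "free_mem_mb": 8192, "cpu_usage_pct": 70},
         {"name": "host2", "free_mem_mb": 8192, "cpu_usage_pct": 20},
         {"name": "host3", "free_mem_mb": 2048, "cpu_usage_pct": 5}]

best = schedule(vm, hosts, [memory_filter], [(1, even_distribution_weight)])
print(best["name"])  # host2: host3 fails the memory filter, host2 has the lowest load
```

Increasing a weight module's factor (the + button in the New Cluster Policy window) makes its score count for more in the combined total.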

2.7.3.1. Creating a Cluster Policy

Summary
You can create new cluster policies to control the logic by which virtual machines are distributed amongst the hosts in a given cluster in your Red Hat Enterprise Virtualization environment.

Procedure 2.4. Creating a Cluster Policy

  1. Click the Configure button in the header bar of the Administration Portal to open the Configure window.
  2. Click Cluster Policies to view the cluster policies tab.
  3. Click New to open the New Cluster Policy window.
    The New Cluster Policy Window

    Figure 2.11. The New Cluster Policy Window

  4. Enter a Name and Description for the cluster policy.
  5. Configure filter modules:
    1. In the Filter Modules section, drag and drop the preferred filter modules to apply to the cluster policy from the Disabled Filters section into the Enabled Filters section.
    2. Specific filter modules can also be set as the First, to be given highest priority, or Last, to be given lowest priority, for basic optimization.
      To set the priority, right-click any filter module, hover the cursor over Position and select First or Last.
  6. Configure weight modules:
    1. In the Weights Modules section, drag and drop the preferred weights modules to apply to the cluster policy from the Disabled Weights section into the Enabled Weights & Factors section.
    2. Use the + and - buttons to the left of the enabled weight modules to increase or decrease the weight of those modules.
  7. Specify a load balancing policy:
    1. From the drop-down menu in the Load Balancer section, select the load balancing policy to apply to the cluster policy.
    2. From the drop-down menu in the Properties section, select a load balancing property to apply to the cluster policy and use the text field to the right of that property to specify a value.
    3. Use the + and - buttons to add or remove additional properties.
  8. Click OK.
Result
You have created a new cluster policy that can be applied to clusters in your Red Hat Enterprise Virtualization environment.

2.7.3.2. Explanation of Settings in the New Cluster Policy and Edit Cluster Policy Window

The following table details the options available in the New Cluster Policy and Edit Cluster Policy windows.

Table 2.5. New Cluster Policy Settings

Field Name
Description
Name
The name of the cluster policy. This is the name used to refer to the cluster policy in the Red Hat Enterprise Virtualization Manager.
Description
A description of the cluster policy. This field is recommended but not mandatory.
Filter Modules
A set of filters for controlling the hosts on which a virtual machine in a cluster can run. Enabling a filter will filter out hosts that do not meet the conditions specified by that filter, as outlined below:
  • PinToHost: Hosts other than the host to which the virtual machine is pinned.
  • CPU-Level: Hosts that do not meet the CPU topology of the virtual machine.
  • CPU: Hosts with fewer CPUs than the number assigned to the virtual machine.
  • Memory: Hosts that do not have sufficient memory to run the virtual machine.
  • VmAffinityGroups: Hosts that do not meet the conditions specified for a virtual machine that is a member of an affinity group. For example, an affinity group can specify that its virtual machines must run on the same host or on separate hosts.
  • HA: Forces the hosted engine virtual machine to only run on hosts with a positive high availability score.
  • Network: Hosts on which networks required by the network interface controller of a virtual machine are not installed, or on which the cluster's display network is not installed.
Weights Modules
A set of weightings for controlling the relative priority of factors considered when determining the hosts in a cluster on which a virtual machine can run.
  • OptimalForHaReservation: Weights hosts in accordance with their high availability score.
  • None: Weights hosts in accordance with the even distribution module.
  • OptimalForEvenGuestDistribution: Weights hosts in accordance with the number of virtual machines running on those hosts.
  • VmAffinityGroups: Weights hosts in accordance with the affinity groups defined for virtual machines. This weight module determines how likely virtual machines in an affinity group are to run on the same host or on separate hosts in accordance with the parameters of that affinity group.
  • OptimalForPowerSaving: Weights hosts in accordance with their CPU usage, giving priority to hosts with higher CPU usage.
  • OptimalForEvenDistribution: Weights hosts in accordance with their CPU usage, giving priority to hosts with lower CPU usage.
  • HA: Weights hosts in accordance with their high availability score.
Load Balancer
This drop-down menu allows you to select a load balancing module to apply. Load balancing modules determine the logic used to migrate virtual machines from hosts experiencing high usage to hosts experiencing lower usage.
Properties
This drop-down menu allows you to add or remove properties for load balancing modules, and is only available when you have selected a load balancing module for the cluster policy. No properties are defined by default, and the properties that are available are specific to the load balancing module that is selected. Use the + and - buttons to add or remove additional properties to or from the load balancing module.

2.7.4. Instance Types

Instance types can be used to define the hardware configuration of a virtual machine. Selecting an instance type when creating or editing a virtual machine will automatically fill in the hardware configuration fields. This allows users to create multiple virtual machines with the same hardware configuration without having to manually fill in every field.
A set of predefined instance types is available by default, as outlined in the following table:

Table 2.6. Predefined Instance Types

Name     Memory   vCPUs
Tiny     512 MB   1
Small    2 GB     1
Medium   4 GB     2
Large    8 GB     2
XLarge   16 GB    4
Administrators can also create, edit, and remove instance types from the Instance Types tab of the Configure window.
The Instance Types Tab

Figure 2.12. The Instance Types Tab

Fields in the New Virtual Machine and Edit Virtual Machine windows that are bound to an instance type are marked with a chain link icon. If the value of one of these fields is changed, the virtual machine is detached from the instance type, the Instance Type field changes to Custom, and the icon appears as a broken chain. If the value is changed back, the chain relinks and the Instance Type field returns to the selected instance type.
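
The attach/detach behaviour described above can be modeled in a few lines. The field names and the function are illustrative only; the Manager tracks this binding internally:

```python
def displayed_instance_type(instance_type, vm_fields):
    """Illustrative model of the chain-link behaviour: if every bound
    field still matches the instance type, the virtual machine stays
    attached; if any field was changed, the selection shows Custom."""
    for field, value in instance_type["fields"].items():
        if vm_fields.get(field) != value:
            return "Custom"
    return instance_type["name"]

small = {"name": "Small", "fields": {"memory_mb": 2048, "vcpus": 1}}

print(displayed_instance_type(small, {"memory_mb": 2048, "vcpus": 1}))  # Small
print(displayed_instance_type(small, {"memory_mb": 4096, "vcpus": 1}))  # Custom
# Changing the value back relinks the chain:
print(displayed_instance_type(small, {"memory_mb": 2048, "vcpus": 1}))  # Small
```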

2.7.4.1. Creating Instance Types

Summary
Administrators can create new instance types, which can then be selected by users when creating or editing virtual machines.

Procedure 2.5. Creating an Instance Type

  1. On the header bar, click the Configure button to open the Configure window.
  2. Click the Instance Types tab.
  3. Click the New button to open the New Instance Type window.
    The New Instance Type Window

    Figure 2.13. The New Instance Type Window

  4. On the General tab, fill in the Name and Description fields. You can accept the default settings for other fields, or change them if required.
  5. Click the System, Console, Host, High Availability, Resource Allocation, Boot Options, and Random Generator tabs in turn to define your instance configuration as required. The settings that appear under these tabs are identical to those in the New Virtual Machine window, but with the relevant fields only.
  6. Click OK to create the instance type and close the window.
Result
The new instance type will appear in the Instance Types tab in the Configure window, and can be selected from the Instance Type drop-down menu when creating or editing a virtual machine.

2.7.4.2. Editing Instance Types

Summary
Administrators can edit existing instance types from the Configure window.

Procedure 2.6. Editing Instance Type Properties

  1. Select the instance type to be edited.
  2. Click the Edit button to open the Edit Instance Type window.
  3. Change the General, System, Console, Host, High Availability, Resource Allocation, Boot Options, and Random Generator fields as required.
  4. Click OK to save your changes.
Result
The configuration of the instance type is updated. Both new virtual machines and existing virtual machines based on the instance type will use the new configuration.

2.7.4.3. Removing Instance Types

Summary
Remove an instance type from the Red Hat Enterprise Virtualization environment.

Procedure 2.7. Removing an Instance Type

  1. Select the instance type to be removed.
  2. Click the Remove button to open the Remove Instance Type window.
  3. If any virtual machines are based on the instance type to be removed, a warning window listing the attached virtual machines will appear. To continue removing the instance type, select the Approve Operation check box. Otherwise, click Cancel.
  4. Click OK.
Result
The instance type is removed from the Instance Types list and can no longer be used when creating a new virtual machine. Any virtual machines that were attached to the removed instance type will now be attached to Custom (no instance type).

Part I. Administering the Resources

Table of Contents

3. Quality of Service
3.1. Storage Quality of Service
3.2. Network Quality of Service
3.3. CPU Quality of Service
4. Data Centers
4.1. Introduction to Data Centers
4.2. The Storage Pool Manager
4.3. SPM Priority
4.4. Using the Events Tab to Identify Problem Objects in Data Centers
4.5. Data Center Tasks
4.6. Data Centers and Storage Domains
4.7. Data Centers and Permissions
5. Clusters
5.1. Introduction to Clusters
5.2. Cluster Tasks
5.3. Clusters and Permissions
6. Logical Networks
6.1. Introduction to Logical Networks
6.2. Required Networks, Optional Networks, and Virtual Machine Networks
6.3. Logical Network Tasks
6.4. Virtual Network Interface Cards
6.5. External Provider Networks
6.6. Logical Networks and Permissions
7. Hosts
7.1. Introduction to Red Hat Enterprise Virtualization Hosts
7.2. Red Hat Enterprise Virtualization Hypervisor Hosts
7.3. Foreman Host Provider Hosts
7.4. Red Hat Enterprise Linux Hosts
7.5. Host Tasks
7.6. Hosts and Networking
7.7. Host Resilience
7.8. Hosts and Permissions
8. Storage
8.1. Understanding Storage Domains
8.2. Storage Metadata Versions in Red Hat Enterprise Virtualization
8.3. Preparing and Adding NFS Storage
8.4. Preparing and Adding Local Storage
8.5. Preparing and Adding POSIX Compliant File System Storage
8.6. Preparing and Adding Block Storage
8.7. Importing Existing Storage Domains
8.8. Storage Tasks
8.9. Storage and Permissions
9. Working with Red Hat Gluster Storage
9.1. Red Hat Gluster Storage Nodes
9.2. Using Red Hat Gluster Storage as a Storage Domain
9.3. Clusters and Gluster Hooks
10. Virtual Machines
10.1. Introduction to Virtual Machines
10.2. Supported Virtual Machine Operating Systems
10.3. Virtual Machine Performance Parameters
10.4. Creating Virtual Machines
10.5. Explanation of Settings and Controls in the New Virtual Machine and Edit Virtual Machine Windows
10.6. Configuring Virtual Machines
10.7. Editing Virtual Machines
10.8. Running Virtual Machines
10.9. Removing Virtual Machines
10.10. Cloning Virtual Machines
10.11. Virtual Machines and Permissions
10.12. Snapshots
10.13. Affinity Groups
10.14. Exporting and Importing Virtual Machines and Templates
10.15. Migrating Virtual Machines Between Hosts
10.16. Improving Uptime with Virtual Machine High Availability
10.17. Other Virtual Machine Tasks
11. Templates
11.1. Introduction to Templates
11.2. Sealing Virtual Machines in Preparation for Deployment as Templates
11.3. Template Tasks
11.4. Templates and Permissions
12. Pools
12.1. Introduction to Virtual Machine Pools
12.2. Virtual Machine Pool Tasks
12.3. Pools and Permissions
12.4. Trusted Compute Pools
13. Virtual Machine Disks
13.1. Understanding Virtual Machine Storage
13.2. Understanding Virtual Disks
13.3. Settings to Wipe Virtual Disks After Deletion
13.4. Shareable Disks in Red Hat Enterprise Virtualization
13.5. Read Only Disks in Red Hat Enterprise Virtualization
13.6. Virtual Disk Tasks
13.7. Virtual Disks and Permissions
14. External Providers
14.1. Introduction to External Providers in Red Hat Enterprise Virtualization
14.2. Enabling the Authentication of OpenStack Providers
14.3. Adding External Providers
14.4. Editing External Providers
14.5. Removing External Providers

Chapter 3. Quality of Service

Red Hat Enterprise Virtualization allows you to define quality of service entries that provide fine-grained control over the level of input and output, processing, and networking capabilities that resources in your environment can access. Quality of service entries are defined at the data center level and are assigned to profiles created under clusters and storage domains. These profiles are then assigned to individual resources in the clusters and storage domains where the profiles were created.

3.1. Storage Quality of Service

Storage quality of service defines the maximum level of throughput and the maximum level of input and output operations for a virtual disk in a storage domain. Assigning storage quality of service to a virtual disk allows you to fine tune the performance of storage domains and prevent the storage operations associated with one virtual disk from affecting the storage capabilities available to other virtual disks hosted in the same storage domain.

3.1.1. Creating a Storage Quality of Service Entry

Create a storage quality of service entry.

Procedure 3.1. Creating a Storage Quality of Service Entry

  1. Click the Data Centers resource tab and select a data center.
  2. Click QoS in the details pane.
  3. Click Storage.
  4. Click New.
  5. Enter a name for the quality of service entry in the QoS Name field.
  6. Enter a description for the quality of service entry in the Description field.
  7. Specify the throughput quality of service:
    1. Select the Throughput check box.
    2. Enter the maximum permitted total throughput in the Total field.
    3. Enter the maximum permitted throughput for read operations in the Read field.
    4. Enter the maximum permitted throughput for write operations in the Write field.
  8. Specify the input and output quality of service:
    1. Select the IOps check box.
    2. Enter the maximum permitted number of input and output operations per second in the Total field.
    3. Enter the maximum permitted number of input operations per second in the Read field.
    4. Enter the maximum permitted number of output operations per second in the Write field.
  9. Click OK.
You have created a storage quality of service entry, and can create disk profiles based on that entry in data storage domains that belong to the data center.
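
The same entry can also be created through the REST API. The sketch below only builds the request body with the Python standard library; the endpoint path and element names (`max_throughput`, `max_iops`, and so on) are taken from the oVirt 3.5 REST API schema as an assumption here - verify them against your Manager's /api?rsdl before relying on them:

```python
import xml.etree.ElementTree as ET

def storage_qos_xml(name, description, total_throughput, total_iops):
    """Build the <qos> request body for a storage QoS entry
    (assumed endpoint: POST /api/datacenters/{datacenter:id}/qoss)."""
    qos = ET.Element("qos")
    ET.SubElement(qos, "name").text = name
    ET.SubElement(qos, "description").text = description
    ET.SubElement(qos, "type").text = "storage"
    ET.SubElement(qos, "max_throughput").text = str(total_throughput)  # MB/s
    ET.SubElement(qos, "max_iops").text = str(total_iops)              # ops/s
    return ET.tostring(qos, encoding="unicode")

body = storage_qos_xml("gold_storage", "QoS for the gold tier", 100, 1000)
print(body)

# The request itself would then be sent with an HTTP client, for example:
#   curl -X POST -H 'Content-Type: application/xml' \
#        -u admin@internal:password -d "$BODY" \
#        https://manager.example.com/api/datacenters/<id>/qoss
```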

3.1.2. Removing a Storage Quality of Service Entry

Remove an existing storage quality of service entry.

Procedure 3.2. Removing a Storage Quality of Service Entry

  1. Click the Data Centers resource tab and select a data center.
  2. Click QoS in the details pane.
  3. Click Storage.
  4. Select the storage quality of service entry to remove.
  5. Click Remove.
  6. Click OK when prompted.
You have removed the storage quality of service entry, and that entry is no longer available. If any disk profiles were based on that entry, the storage quality of service entry for those profiles is automatically set to [unlimited].

3.2. Network Quality of Service

Network quality of service is a feature that allows you to create profiles for limiting both the inbound and outbound traffic of individual virtual network interface controllers. With this feature, you can limit bandwidth in a number of layers, controlling the consumption of network resources.

Important

Network quality of service is only supported on cluster compatibility version 3.3 and higher.

3.2.1. Creating a Network Quality of Service Entry

Create a network quality of service entry to regulate network traffic when applied to a virtual network interface controller (vNIC) profile, also known as a virtual machine network interface profile.

Procedure 3.3. Creating a Network Quality of Service Entry

  1. Click the Data Centers resource tab and select a data center.
  2. Click the QoS tab in the details pane.
  3. Click Network.
  4. Click New.
  5. Enter a name for the network quality of service entry in the Name field.
  6. Enter the limits for the Inbound and Outbound network traffic.
  7. Click OK.
You have created a network quality of service entry that can be used in a virtual network interface controller.

3.2.2. Settings in the New Network QoS and Edit Network QoS Windows Explained

Network QoS settings allow you to configure bandwidth limits for both inbound and outbound traffic on three distinct levels.

Table 3.1. Network QoS Settings

Field Name
Description
Data Center
The data center to which the Network QoS policy is to be added. This field is configured automatically according to the selected data center.
Name
A name to represent the network QoS policy within the Manager.
Inbound
The settings to be applied to inbound traffic. Select or clear the Inbound check box to enable or disable these settings.
  • Average: The average speed of inbound traffic.
  • Peak: The speed of inbound traffic during peak times.
  • Burst: The speed of inbound traffic during bursts.
Outbound
The settings to be applied to outbound traffic. Select or clear the Outbound check box to enable or disable these settings.
  • Average: The average speed of outbound traffic.
  • Peak: The speed of outbound traffic during peak times.
  • Burst: The speed of outbound traffic during bursts.
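
On the hypervisor, vNIC traffic shaping of this kind is ultimately expressed through libvirt's <bandwidth> element on the guest interface. The fragment below is an illustrative example of that element, not output captured from a RHEV host; note that libvirt takes average and peak in KiB/s and burst in KiB, so values must be converted from the units shown in the Manager:

```xml
<interface type='bridge'>
  <source bridge='ovirtmgmt'/>  <!-- bridge name is an example -->
  <bandwidth>
    <!-- average and peak in KiB/s, burst in KiB (libvirt units) -->
    <inbound average='12500' peak='25000' burst='1024'/>
    <outbound average='12500' peak='25000' burst='1024'/>
  </bandwidth>
</interface>
```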

3.2.3. Removing a Network Quality of Service Entry

Remove an existing network quality of service entry.

Procedure 3.4. Removing a Network Quality of Service Entry

  1. Click the Data Centers resource tab and select a data center.
  2. In the details pane, click QoS.
  3. Click Network.
  4. Select the network quality of service entry to remove.
  5. Click Remove.
  6. Click OK when prompted.
You have removed the network quality of service entry, and that entry is no longer available.

3.3. CPU Quality of Service

CPU quality of service defines the maximum amount of processing capability a virtual machine can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. Assigning CPU quality of service to a virtual machine allows you to prevent the workload on one virtual machine in a cluster from affecting the processing resources available to other virtual machines in that cluster.
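
A percentage limit of this kind is typically enforced through the Linux CPU controller's quota/period mechanism, which is what libvirt's cputune settings map to. The arithmetic below illustrates the idea; the 100 ms period is a common default, not a value taken from the Manager:

```python
def cpu_quota(limit_percent, period_us=100_000):
    """Translate a CPU QoS percentage into a scheduler quota.

    With a 100 ms accounting period (period_us=100000, an assumed
    default), a 50% limit allows a vCPU 50 ms of runtime per period.
    """
    if not 0 < limit_percent <= 100:
        raise ValueError("limit must be a percentage between 1 and 100")
    return period_us * limit_percent // 100

print(cpu_quota(50))  # 50000 microseconds of CPU time per 100 ms period
print(cpu_quota(10))  # 10000
```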

3.3.1. Creating a CPU Quality of Service Entry

Create a CPU quality of service entry.

Procedure 3.5. Creating a CPU Quality of Service Entry

  1. Click the Data Centers resource tab and select a data center.
  2. Click QoS in the details pane.
  3. Click CPU.
  4. Click New.
  5. Enter a name for the quality of service entry in the QoS Name field.
  6. Enter a description for the quality of service entry in the Description field.
  7. Enter the maximum processing capability the quality of service entry permits in the Limit field, in percentage. Do not include the % symbol.
  8. Click OK.
You have created a CPU quality of service entry, and can create CPU profiles based on that entry in clusters that belong to the data center.

3.3.2. Removing a CPU Quality of Service Entry

Remove an existing CPU quality of service entry.

Procedure 3.6. Removing a CPU Quality of Service Entry

  1. Click the Data Centers resource tab and select a data center.
  2. Click QoS in the details pane.
  3. Click CPU.
  4. Select the CPU quality of service entry to remove.
  5. Click Remove.
  6. Click OK when prompted.
You have removed the CPU quality of service entry, and that entry is no longer available. If any CPU profiles were based on that entry, the CPU quality of service entry for those profiles is automatically set to [unlimited].

Chapter 4. Data Centers

4.1. Introduction to Data Centers

A data center is a logical entity that defines the set of resources used in a specific environment. A data center is considered a container resource, in that it comprises logical resources, in the form of clusters and hosts; network resources, in the form of logical networks and physical NICs; and storage resources, in the form of storage domains.
A data center can contain multiple clusters, which can contain multiple hosts; it can have multiple storage domains associated to it; and it can support multiple virtual machines on each of its hosts. A Red Hat Enterprise Virtualization environment can contain multiple data centers; the data center infrastructure allows you to keep these centers separate.
All data centers are managed from a single Administration Portal.

Figure 4.1. Data Centers

Red Hat Enterprise Virtualization creates a default data center during installation. You can configure the default data center, or set up new appropriately named data centers.

Figure 4.2. Data Center Objects

4.2. The Storage Pool Manager

The Storage Pool Manager (SPM) is a role given to one of the hosts in the data center enabling it to manage the storage domains of the data center. The SPM entity can be run on any host in the data center; the Red Hat Enterprise Virtualization Manager grants the role to one of the hosts. The SPM does not preclude the host from its standard operation; a host running as SPM can still host virtual resources.
The SPM entity controls access to storage by coordinating the metadata across the storage domains. This includes creating, deleting, and manipulating virtual disks (images), snapshots, and templates, and allocating storage for sparse block devices (on SAN). This is an exclusive responsibility: only one host can be the SPM in the data center at one time to ensure metadata integrity.
The Red Hat Enterprise Virtualization Manager ensures that the SPM is always available, and moves the SPM role to a different host if the SPM host encounters problems accessing the storage. When the SPM starts, it acquires a storage-centric lease to ensure that it is the only host granted the role; this process can take some time.

4.3. SPM Priority

The SPM role uses some of a host's available resources. The SPM priority setting of a host alters the likelihood of the host being assigned the SPM role: a host with high SPM priority will be assigned the SPM role before a host with low SPM priority. Critical virtual machines on hosts with low SPM priority will not have to contend with SPM operations for host resources.
You can change a host's SPM priority by editing the host.

4.4. Using the Events Tab to Identify Problem Objects in Data Centers

The Events tab for a data center displays all events associated with that data center; events include audits, warnings, and errors. The information displayed in the results list will enable you to identify problem objects in your Red Hat Enterprise Virtualization environment.
The Events results list has two views: Basic and Advanced. Basic view displays the event icon, the time of the event, and the description of the event. Advanced view also displays these and, where applicable, includes the event ID; the associated user, host, virtual machine, template, data center, storage, and cluster; the Gluster volume; and the correlation ID.

4.5. Data Center Tasks

4.5.1. Creating a New Data Center

Summary
This procedure creates a data center in your virtualization environment. The data center requires a functioning cluster, host, and storage domain to operate.

Note

The storage Type can be edited until the first storage domain is added to the data center. Once a storage domain has been added, the storage Type cannot be changed.
If you set the Compatibility Version to 3.1, it cannot be changed to 3.0 at a later time; version regression is not allowed.

Procedure 4.1. Creating a New Data Center

  1. Select the Data Centers resource tab to list all data centers in the results list.
  2. Click New to open the New Data Center window.
  3. Enter the Name and Description of the data center.
  4. Select the storage Type, Compatibility Version, and Quota Mode of the data center from the drop-down menus.
  5. Click OK to create the data center and open the New Data Center - Guide Me window.
  6. The Guide Me window lists the entities that need to be configured for the data center. Configure these entities or postpone configuration by clicking the Configure Later button; configuration can be resumed by selecting the data center and clicking the Guide Me button.
Result
The new data center is added to the virtualization environment. It will remain Uninitialized until a cluster, host, and storage domain are configured for it; use Guide Me to configure these entities.
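For scripted deployments, an equivalent data center can be created by POSTing an XML description to the Manager's REST API. The sketch below only builds the request body; the endpoint (/api/datacenters) and the element names (local for the Shared/Local storage Type, version for the Compatibility Version) are assumptions based on the 3.5 REST API and should be checked against the REST API Guide.

```python
import xml.etree.ElementTree as ET

def data_center_payload(name, description="", local=False, major=3, minor=5):
    """Build an XML body for POST /api/datacenters (element names assumed)."""
    dc = ET.Element("data_center")
    ET.SubElement(dc, "name").text = name
    if description:
        ET.SubElement(dc, "description").text = description
    # The Shared/Local storage Type maps to a boolean <local> element.
    ET.SubElement(dc, "local").text = "true" if local else "false"
    # The Compatibility Version of the data center.
    ET.SubElement(dc, "version", major=str(major), minor=str(minor))
    return ET.tostring(dc, encoding="unicode")

print(data_center_payload("NewDatacenter", "Shared 3.5 data center"))
```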

4.5.2. Explanation of Settings in the New Data Center and Edit Data Center Windows

The table below describes the settings of a data center as displayed in the New Data Center and Edit Data Center windows. Invalid entries are outlined in orange when you click OK, preventing the changes from being accepted. In addition, field prompts indicate the expected values or range of values.

Table 4.1. Data Center Properties

Field
Description/Action
Name
The name of the data center. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
Description
The description of the data center. This field is recommended but not mandatory.
Type
The storage type. Choose one of the following:
  • Shared
  • Local
The type of data domain dictates the type of the data center and cannot be changed after creation without significant disruption. Multiple types of storage domains (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center, though local and shared domains cannot be mixed.
Compatibility Version
The version of Red Hat Enterprise Virtualization. Choose one of the following:
  • 3.0
  • 3.1
  • 3.2
  • 3.3
  • 3.4
  • 3.5
After upgrading the Red Hat Enterprise Virtualization Manager, the hosts, clusters, and data centers may still use the earlier version. Ensure that you have upgraded all the hosts, then the clusters, before you upgrade the Compatibility Version of the data center.
Quota Mode
Quota is a resource limitation tool provided with Red Hat Enterprise Virtualization. Choose one of:
  • Disabled: Select if you do not want to implement Quota
  • Audit: Select to track Quota usage without enforcing limits
  • Enforced: Select to implement Quota

4.5.3. Editing a Resource

Summary
Edit the properties of a resource.

Procedure 4.2. Editing a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click Edit to open the Edit window.
  3. Change the necessary properties and click OK.
Result
The new properties are saved to the resource. The Edit window will not close if a property field is invalid.

4.5.4. Creating a New Logical Network in a Data Center or Cluster

Summary
Create a logical network and define its use in a data center, or in clusters in a data center.

Procedure 4.3. Creating a New Logical Network in a Data Center or Cluster

  1. Use the Data Centers or Clusters resource tabs, tree mode, or the search function to find and select a data center or cluster in the results list.
  2. Click the Logical Networks tab of the details pane to list the existing logical networks.
    • From the Data Centers details pane, click New to open the New Logical Network window.
    • From the Clusters details pane, click Add Network to open the New Logical Network window.
  3. Enter a Name, Description, and Comment for the logical network.
  4. Optionally select the Create on external provider check box. Select the External Provider from the drop-down list and provide the IP address of the Physical Network.
    If Create on external provider is selected, the Network Label, VM Network, and MTU options are disabled.
  5. Enter a new label or select an existing label for the logical network in the Network Label text field.
  6. Optionally enable Enable VLAN tagging.
  7. Optionally disable VM Network.
  8. Set the MTU value to Default (1500) or Custom.
  9. From the Cluster tab, select the clusters to which the network will be assigned. You can also specify whether the logical network will be a required network.
  10. If Create on external provider is selected, the Subnet tab will be visible. From the Subnet tab, select the Create subnet check box and enter a Name, CIDR, and Gateway address, and select an IP Version for the subnet that the logical network will provide. You can also add DNS servers as required.
  11. From the vNIC Profiles tab, add vNIC profiles to the logical network as required.
  12. Click OK.
Result
You have defined a logical network as a resource required by a cluster or clusters in the data center. If you entered a label for the logical network, it will be automatically added to all host network interfaces with that label.

Note

When creating a new logical network or making changes to an existing logical network that is used as a display network, any running virtual machines that use that network must be rebooted before the network becomes available or the changes are applied.
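A logical network can also be created through the Manager's REST API by POSTing to the networks collection. The sketch below only builds the XML request body; the endpoint (/api/networks) and the element names (data_center, vlan, usages, mtu) are assumptions based on the 3.5 REST API and should be verified against the REST API Guide.

```python
import xml.etree.ElementTree as ET

def network_payload(name, datacenter_id, vlan=None, vm_network=True, mtu=None):
    """Build an XML body for POST /api/networks (element names assumed)."""
    net = ET.Element("network")
    ET.SubElement(net, "name").text = name
    # The data center the logical network belongs to, referenced by ID.
    ET.SubElement(net, "data_center", id=datacenter_id)
    if vlan is not None:
        # Corresponds to the Enable VLAN tagging option.
        ET.SubElement(net, "vlan", id=str(vlan))
    usages = ET.SubElement(net, "usages")
    if vm_network:
        # Corresponds to the VM Network check box.
        ET.SubElement(usages, "usage").text = "vm"
    if mtu is not None:
        # A Custom MTU value; omit for the Default (1500).
        ET.SubElement(net, "mtu").text = str(mtu)
    return ET.tostring(net, encoding="unicode")

print(network_payload("storage_net", "dc123", vlan=10, vm_network=False, mtu=9000))
```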

4.5.5. Removing a Logical Network

Summary
Remove a logical network from the Manager.

Procedure 4.4. Removing Logical Networks

  1. Use the Data Centers resource tab, tree mode, or the search function to find and select the data center of the logical network in the results list.
  2. Click the Logical Networks tab in the details pane to list the logical networks in the data center.
  3. Select a logical network and click Remove to open the Remove Logical Network(s) window.
  4. Optionally, select the Remove external network(s) from the provider(s) as well check box to remove the logical network both from the Manager and from the external provider if the network is provided by an external provider.
  5. Click OK.
Result
The logical network is removed from the Manager and is no longer available. If the logical network was provided by an external provider and you elected to remove the logical network from that external provider, it is removed from the external provider and is no longer available on that external provider as well.

4.5.6. Re-Initializing a Data Center: Recovery Procedure

Summary
This recovery procedure replaces the master data domain of your data center with a new master data domain; necessary in the event of data corruption of your master data domain. Re-initializing a data center allows you to restore all other resources associated with the data center, including clusters, hosts, and non-problematic storage domains.
You can import any backup or exported virtual machines or templates into your new master data domain.

Procedure 4.5. Re-Initializing a Data Center

  1. Click the Data Centers resource tab and select the data center to re-initialize.
  2. Ensure that any storage domains attached to the data center are in maintenance mode.
  3. Right-click the data center and select Re-Initialize Data Center from the drop-down menu to open the Data Center Re-Initialize window.
  4. The Data Center Re-Initialize window lists all available (detached; in maintenance mode) storage domains. Click the radio button for the storage domain you are adding to the data center.
  5. Select the Approve operation check box.
  6. Click OK to close the window and re-initialize the data center.
Result
The storage domain is attached to the data center as the master data domain and activated. You can now import any backup or exported virtual machines or templates into your new master data domain.

4.5.7. Removing a Data Center

Summary
An active host is required to remove a data center. Removing a data center will not remove the associated resources.

Procedure 4.6. Removing a Data Center

  1. Ensure the storage domains attached to the data center are in maintenance mode.
  2. Click the Data Centers resource tab and select the data center to remove.
  3. Click Remove to open the Remove Data Center(s) confirmation window.
  4. Click OK.
Result
The data center has been removed.

4.5.8. Force Removing a Data Center

Summary
A data center becomes Non Responsive if the attached storage domain is corrupt or if the host becomes Non Responsive. You cannot Remove the data center under either circumstance.
Force Remove does not require an active host. It also permanently removes the attached storage domain.
It may be necessary to Destroy a corrupted storage domain before you can Force Remove the data center.

Procedure 4.7. Force Removing a Data Center

  1. Click the Data Centers resource tab and select the data center to remove.
  2. Click Force Remove to open the Force Remove Data Center confirmation window.
  3. Select the Approve operation check box.
  4. Click OK.
Result
The data center and attached storage domain are permanently removed from the Red Hat Enterprise Virtualization environment.

4.5.9. Changing the Data Center Compatibility Version

Summary
Red Hat Enterprise Virtualization data centers have a compatibility version. The compatibility version indicates the version of Red Hat Enterprise Virtualization that the data center is intended to be compatible with. All clusters in the data center must support the desired compatibility level.

Note

To change the data center compatibility version, you must have first updated all the clusters in your data center to a level that supports your desired compatibility level.

Procedure 4.8. Changing the Data Center Compatibility Version

  1. Log in to the Administration Portal as the administrative user. By default this is the admin user.
  2. Click the Data Centers tab.
  3. Select the data center to change from the list displayed. If the list of data centers is too long to filter visually then perform a search to locate the desired data center.
  4. Click the Edit button.
  5. Change the Compatibility Version to the desired value.
  6. Click OK.
Result
You have updated the compatibility version of the data center.

Warning

Upgrading the compatibility version will also upgrade all of the storage domains belonging to the data center. If you are upgrading the compatibility version from below 3.1 to a higher version, these storage domains will become unusable with versions older than 3.1.
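The compatibility version can also be changed by PUTting an updated description to the data center resource. The sketch below only builds the request body; the endpoint (PUT /api/datacenters/{id}) and the version element are assumptions based on the 3.5 REST API and should be checked against the REST API Guide.

```python
import xml.etree.ElementTree as ET

def compatibility_payload(major, minor):
    """Build an XML body for PUT /api/datacenters/{id} (element names assumed)."""
    dc = ET.Element("data_center")
    # Only the field being changed needs to be included in an update.
    ET.SubElement(dc, "version", major=str(major), minor=str(minor))
    return ET.tostring(dc, encoding="unicode")

print(compatibility_payload(3, 5))
```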

4.6. Data Centers and Storage Domains

4.6.1. Attaching an Existing Data Domain to a Data Center

Summary
Data domains that are Unattached can be attached to a data center. The data domain must be of the same Storage Type as the data center.

Procedure 4.9. Attaching an Existing Data Domain to a Data Center

  1. Click the Data Centers resource tab and select the appropriate data center.
  2. Select the Storage tab in the details pane to list the storage domains already attached to the data center.
  3. Click Attach Data to open the Attach Storage window.
  4. Select the check box for the data domain to attach to the data center. You can select multiple check boxes to attach multiple data domains.
  5. Click OK.
Result
The data domain is attached to the data center and is automatically activated.

Note

In Red Hat Enterprise Virtualization 3.4 and later, shared storage domains of multiple types (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center.
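An existing data domain can also be attached by POSTing to the data center's storage domain sub-collection. The sketch below only builds the request body; the endpoint (/api/datacenters/{id}/storagedomains) and the use of an id attribute to reference the domain are assumptions based on the 3.5 REST API and should be verified against the REST API Guide.

```python
import xml.etree.ElementTree as ET

def attach_domain_payload(domain_id):
    """Build an XML body for POST /api/datacenters/{id}/storagedomains
    (endpoint and schema assumed)."""
    # The unattached data domain is referenced by its ID; the Manager
    # attaches and activates it in the target data center.
    sd = ET.Element("storage_domain", id=domain_id)
    return ET.tostring(sd, encoding="unicode")

print(attach_domain_payload("fabe0451-701f-4235-8f7e-e20e458819ed"))
```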

4.6.2. Attaching an Existing ISO domain to a Data Center

Summary
An ISO domain that is Unattached can be attached to a data center. The ISO domain must be of the same Storage Type as the data center.
Only one ISO domain can be attached to a data center.

Procedure 4.10. Attaching an Existing ISO Domain to a Data Center

  1. Click the Data Centers resource tab and select the appropriate data center.
  2. Select the Storage tab in the details pane to list the storage domains already attached to the data center.
  3. Click Attach ISO to open the Attach ISO Library window.
  4. Click the radio button for the appropriate ISO domain.
  5. Click OK.
Result
The ISO domain is attached to the data center and is automatically activated.

4.6.3. Attaching an Existing Export Domain to a Data Center

Summary
An export domain that is Unattached can be attached to a data center.
Only one export domain can be attached to a data center.

Procedure 4.11. Attaching an Existing Export Domain to a Data Center

  1. Click the Data Centers resource tab and select the appropriate data center.
  2. Select the Storage tab in the details pane to list the storage domains already attached to the data center.
  3. Click Attach Export to open the Attach Export Domain window.
  4. Click the radio button for the appropriate Export domain.
  5. Click OK.
Result
The Export domain is attached to the data center and is automatically activated.

4.6.4. Detaching a Storage Domain from a Data Center

Summary
Detaching a storage domain from a data center will stop the data center from associating with that storage domain. The storage domain is not removed from the Red Hat Enterprise Virtualization environment; it can be attached to another data center.
Data, such as virtual machines and templates, remains attached to the storage domain.

Note

The master storage, if it is the last available storage domain, cannot be removed.

Procedure 4.12. Detaching a Storage Domain from a Data Center

  1. Click the Data Centers resource tab and select the appropriate data center.
  2. Select the Storage tab in the details pane to list the storage domains attached to the data center.
  3. Select the storage domain to detach. If the storage domain is Active, click Maintenance to open the Maintenance Storage Domain(s) confirmation window.
  4. Click OK to initiate maintenance mode.
  5. Click Detach to open the Detach Storage confirmation window.
  6. Click OK.
Result
You have detached the storage domain from the data center. It can take up to several minutes for the storage domain to disappear from the details pane.

4.6.5. Activating a Storage Domain from Maintenance Mode

Summary
Storage domains in maintenance mode must be activated to be used.

Procedure 4.13. Activating a Data Domain from Maintenance Mode

  1. Click the Data Centers resource tab and select the appropriate data center.
  2. Select the Storage tab in the details pane to list the storage domains attached to the data center.
  3. Select the appropriate storage domain and click Activate.
Result
The storage domain is activated and can be used in the data center.

4.7. Data Centers and Permissions

4.7.1. Managing System Permissions for a Data Center

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A data center administrator is a system administration role for a specific data center only. This is useful in virtualization environments with multiple data centers where each data center requires an administrator. The DataCenterAdmin role is a hierarchical model; a user assigned the data center administrator role for a data center can manage all objects in the data center with the exception of storage for that data center. Use the Configure button in the header bar to assign a data center administrator for all data centers in the environment.
The data center administrator role permits the following actions:
  • Create and remove clusters associated with the data center.
  • Add and remove hosts, virtual machines, and pools associated with the data center.
  • Edit user permissions for virtual machines associated with the data center.

Note

You can only assign roles and permissions to existing users.
You can change the system administrator of a data center by removing the existing system administrator and adding the new system administrator.

4.7.2. Data Center Administrator Roles Explained

Data Center Permission Roles
The table below describes the administrator roles and privileges applicable to data center administration.

Table 4.2. Red Hat Enterprise Virtualization System Administrator Roles

Role Privileges Notes
DataCenterAdmin Data Center Administrator Can use, create, delete, and manage all physical and virtual resources within a specific data center except for storage, including clusters, hosts, templates, and virtual machines.
NetworkAdmin Network Administrator Can configure and manage the network of a particular data center. A network administrator of a data center inherits network permissions for virtual machines within the data center as well.

4.7.3. Assigning an Administrator or User Role to a Resource

Summary
Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 4.14. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add to open the Add Permission to User window.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down menu.
  6. Click OK to assign the role and close the window.
Result
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.
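Role assignments can also be made through the resource's permissions sub-collection in the REST API. The sketch below only builds the request body for assigning a role to a user on a data center; the endpoint (/api/datacenters/{id}/permissions) and the permission schema (role by name, user by ID) are assumptions based on the 3.5 REST API and should be checked against the REST API Guide.

```python
import xml.etree.ElementTree as ET

def permission_payload(user_id, role_name):
    """Build an XML body for POST /api/{resource}/{id}/permissions
    (endpoint and schema assumed)."""
    perm = ET.Element("permission")
    # The role to assign, referenced by name (for example, DataCenterAdmin).
    role = ET.SubElement(perm, "role")
    ET.SubElement(role, "name").text = role_name
    # The existing user receiving the role, referenced by ID.
    ET.SubElement(perm, "user", id=user_id)
    return ET.tostring(perm, encoding="unicode")

print(permission_payload("42a9e5cf", "DataCenterAdmin"))
```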

4.7.4. Removing an Administrator or User Role from a Resource

Summary
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 4.15. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK to remove the user role.
Result
You have removed the user's role, and the associated permissions, from the resource.

Chapter 5. Clusters

5.1. Introduction to Clusters

A cluster is a logical grouping of hosts that share the same storage domains and have the same type of CPU (either Intel or AMD). If the hosts have different generations of CPU models, they use only the features present in all models.
Each cluster in the system must belong to a data center, and each host in the system must belong to a cluster. Virtual machines are dynamically allocated to any host in a cluster and can be migrated between them, according to policies defined on the Clusters tab and in the Configuration tool during runtime. The cluster is the highest level at which power and load-sharing policies can be defined.
The number of hosts and number of virtual machines that belong to a cluster are displayed in the results list under Host Count and VM Count, respectively.
Clusters run virtual machines or Red Hat Gluster Storage Servers. These two purposes are mutually exclusive: A single cluster cannot support virtualization and storage hosts together.
Red Hat Enterprise Virtualization creates a default cluster in the default data center during installation.

Figure 5.1. Cluster

5.2. Cluster Tasks

5.2.1. Creating a New Cluster

A data center can contain multiple clusters, and a cluster can contain multiple hosts. All hosts in a cluster must be of the same CPU type (Intel or AMD). It is recommended that you create your hosts before you create your cluster to ensure CPU type optimization. However, you can configure the hosts at a later time using the Guide Me button.

Procedure 5.1. Creating a New Cluster

  1. Select the Clusters resource tab.
  2. Click New to open the New Cluster window.
  3. Select the Data Center the cluster will belong to from the drop-down list.
  4. Enter the Name and Description of the cluster.
  5. Select the CPU Type and Compatibility Version from the drop-down lists. It is important to match the CPU processor family with the minimum CPU processor type of the hosts you intend to attach to the cluster, otherwise the host will be non-operational.

    Note

    For both Intel and AMD CPU types, the listed CPU models are in logical order from the oldest to the newest. If your cluster includes hosts with different CPU models, select the oldest CPU model. For more information on each CPU model, see https://access.redhat.com/solutions/634853.
  6. Select either the Enable Virt Service or Enable Gluster Service radio button to define whether the cluster will be populated with virtual machine hosts or with Gluster-enabled nodes. Note that you cannot add Red Hat Enterprise Virtualization Hypervisor hosts to a Gluster-enabled cluster.
  7. Optionally select the Enable to set VM maintenance reason check box to enable an optional reason field when a virtual machine is shut down from the Manager, allowing the administrator to provide an explanation for the maintenance.
  8. Select either the /dev/random source (Linux-provided device) or /dev/hwrng source (external hardware device) check box to specify the random number generator device that all hosts in the cluster must use.
  9. Click the Optimization tab to select the memory page sharing threshold for the cluster, and optionally enable CPU thread handling and memory ballooning on the hosts in the cluster.
  10. Click the Resilience Policy tab to select the virtual machine migration policy.
  11. Click the Cluster Policy tab to optionally configure a cluster policy, configure scheduler optimization settings, enable trusted service for hosts in the cluster, enable HA Reservation, and add a custom serial number policy.
  12. Click the Console tab to optionally override the global SPICE proxy, if any, and specify the address of a SPICE proxy for hosts in the cluster.
  13. Click the Fencing policy tab to enable or disable fencing in the cluster, and select fencing options.
  14. Click OK to create the cluster and open the New Cluster - Guide Me window.
  15. The Guide Me window lists the entities that need to be configured for the cluster. Configure these entities or postpone configuration by clicking the Configure Later button; configuration can be resumed by selecting the cluster and clicking the Guide Me button.
The new cluster is added to the virtualization environment.
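A cluster can likewise be created through the REST API by POSTing to the clusters collection. The sketch below only builds the request body; the endpoint (/api/clusters), the cpu element's id attribute (matching the CPU Type names listed in the window), and the virt_service/gluster_service elements are assumptions based on the 3.5 REST API and should be verified against the REST API Guide.

```python
import xml.etree.ElementTree as ET

def cluster_payload(name, datacenter_name, cpu_id, major=3, minor=5, virt=True):
    """Build an XML body for POST /api/clusters (element names assumed)."""
    cluster = ET.Element("cluster")
    ET.SubElement(cluster, "name").text = name
    # The data center the cluster will belong to, referenced by name.
    dc = ET.SubElement(cluster, "data_center")
    ET.SubElement(dc, "name").text = datacenter_name
    # The CPU Type, for example "Intel Penryn Family" or "AMD Opteron G3".
    ET.SubElement(cluster, "cpu", id=cpu_id)
    ET.SubElement(cluster, "version", major=str(major), minor=str(minor))
    # Virt and Gluster services are mutually exclusive for a cluster.
    ET.SubElement(cluster, "virt_service").text = "true" if virt else "false"
    ET.SubElement(cluster, "gluster_service").text = "false" if virt else "true"
    return ET.tostring(cluster, encoding="unicode")

print(cluster_payload("production", "Default", "Intel Penryn Family"))
```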

5.2.2. Explanation of Settings and Controls in the New Cluster and Edit Cluster Windows

5.2.2.1. General Cluster Settings Explained


Figure 5.2. New Cluster window

The table below describes the settings for the General tab in the New Cluster and Edit Cluster windows. Invalid entries are outlined in orange when you click OK, preventing the changes from being accepted. In addition, field prompts indicate the expected values or range of values.

Table 5.1. General Cluster Settings

Field
Description/Action
Data Center
The data center that will contain the cluster. The data center must be created before adding a cluster.
Name
The name of the cluster. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
Description / Comment
The description of the cluster or additional notes. These fields are recommended but not mandatory.
CPU Type
The CPU type of the cluster. Choose one of:
  • Intel Conroe Family
  • Intel Penryn Family
  • Intel Nehalem Family
  • Intel Westmere Family
  • Intel SandyBridge Family
  • Intel Haswell
  • AMD Opteron G1
  • AMD Opteron G2
  • AMD Opteron G3
  • AMD Opteron G4
  • AMD Opteron G5
  • IBM POWER8
All hosts in a cluster must run CPUs of the same type (Intel, AMD, or IBM POWER8); this cannot be changed after creation without significant disruption. The CPU type should be set to the oldest CPU model in the cluster, as only features present in all models can be used. For both Intel and AMD CPU types, the listed CPU models are in logical order from the oldest to the newest.
Compatibility Version
The version of Red Hat Enterprise Virtualization. Choose one of:
  • 3.0
  • 3.1
  • 3.2
  • 3.3
  • 3.4
  • 3.5
You will not be able to select a version older than the version specified for the data center.
CPU Architecture
The CPU architecture of the cluster. Choose one of:
  • undefined
  • x86_64
  • ppc64
This field only appears if a CPU Type is not specified.
Enable Virt Service
If this radio button is selected, hosts in this cluster will be used to run virtual machines.
Enable Gluster Service
If this radio button is selected, hosts in this cluster will be used as Red Hat Gluster Storage Server nodes, and not for running virtual machines. You cannot add a Red Hat Enterprise Virtualization Hypervisor host to a cluster with this option enabled.
Import existing gluster configuration
This check box is only available if the Enable Gluster Service radio button is selected. This option allows you to import an existing Gluster-enabled cluster and all its attached hosts to Red Hat Enterprise Virtualization Manager.
The following options are required for each host in the cluster that is being imported:
  • Address: Enter the IP or fully qualified domain name of the Gluster host server.
  • Fingerprint: Red Hat Enterprise Virtualization Manager fetches the host's fingerprint, to ensure you are connecting with the correct host.
  • Root Password: Enter the root password required for communicating with the host.
Enable to set VM maintenance reason
If this check box is selected, an optional reason field will appear when a virtual machine in the cluster is shut down from the Manager. This allows you to provide an explanation for the maintenance, which will appear in the logs and when the virtual machine is powered on again.
Required Random Number Generator sources:
If one of the following check boxes is selected, all hosts in the cluster must have that device available. This enables passthrough of entropy from the random number generator device to virtual machines.
  • /dev/random source - The Linux-provided random number generator.
  • /dev/hwrng source - An external hardware generator.
Note that this feature is only supported on hosts running Red Hat Enterprise Linux 6.6 and later or Red Hat Enterprise Linux 7.0 and later.

5.2.2.2. Optimization Settings Explained

Memory page sharing allows virtual machines to use up to 200% of their allocated memory by utilizing unused memory in other virtual machines. This process is based on the assumption that the virtual machines in your Red Hat Enterprise Virtualization environment will not all be running at full capacity at the same time, allowing unused memory to be temporarily allocated to a particular virtual machine.
CPU Thread Handling allows hosts to run virtual machines with a total number of processor cores greater than the number of cores in the host. This is useful for non-CPU-intensive workloads, where allowing a greater number of virtual machines to run can reduce hardware requirements. It also allows virtual machines to run with CPU topologies that would otherwise not be possible, specifically when the number of guest cores is between the number of host cores and the number of host threads.
The table below describes the settings for the Optimization tab in the New Cluster and Edit Cluster windows.

Table 5.2. Optimization Settings

Field
Description/Action
Memory Optimization
  • None - Disable memory overcommit: Disables memory page sharing.
  • For Server Load - Allow scheduling of 150% of physical memory: Sets the memory page sharing threshold to 150% of the system memory on each host.
  • For Desktop Load - Allow scheduling of 200% of physical memory: Sets the memory page sharing threshold to 200% of the system memory on each host.
CPU Threads
Selecting the Count Threads As Cores check box allows hosts to run virtual machines with a total number of processor cores greater than the number of cores in the host.
Exposed host threads are treated as cores that can be utilized by virtual machines. For example, a 24-core system with 2 threads per core (48 threads total) can run virtual machines with up to 48 cores each, and the algorithms used to calculate host CPU load compare load against twice as many potential utilized cores.
Memory Balloon
Selecting the Enable Memory Balloon Optimization check box enables memory overcommitment on virtual machines running on the hosts in this cluster. When this option is set, the Memory Overcommit Manager (MoM) will start ballooning where and when possible, with a limitation of the guaranteed memory size of every virtual machine.
To have a balloon running, the virtual machine needs to have a balloon device with relevant drivers. Each virtual machine in cluster level 3.2 and higher includes a balloon device, unless specifically removed. Each host in this cluster receives a balloon policy update when its status changes to Up. If necessary, you can manually update the balloon policy on a host without having to change the status. See Section 5.2.5, “Updating the MoM Policy on Hosts in a Cluster”.
It is important to understand that in some scenarios ballooning may collide with KSM. In such cases MoM will try to adjust the balloon size to minimize collisions. Additionally, in some scenarios ballooning may cause sub-optimal performance for a virtual machine. Administrators are advised to use ballooning optimization with caution.
KSM control
Selecting the Enable KSM check box enables MoM to run Kernel Same-page Merging (KSM) when necessary and when it can yield a memory saving benefit that outweighs its CPU cost.

5.2.2.3. Resilience Policy Settings Explained

The resilience policy sets the virtual machine migration policy in the event of a host becoming non-operational. Virtual machines running on a host that becomes non-operational are live migrated to other hosts in the cluster; this migration is dependent upon your cluster resilience policy. If a host is non-responsive and gets rebooted, virtual machines with high availability are restarted on another host in the cluster. The resilience policy only applies to hosts in a non-operational state.

Table 5.3. Host Failure State Explained

State
Description
Non Operational
The Manager can communicate with non-operational hosts, but these hosts have an incorrect configuration, for example a missing logical network. If a host becomes non-operational, the migration of virtual machines depends on the cluster resilience policy.
Non Responsive
The Manager cannot communicate with non-responsive hosts. If a host becomes non-responsive, all virtual machines with high availability are restarted on a different host in the cluster.
Virtual machine migration is a network-intensive operation. For instance, on a setup where a host is running ten or more virtual machines, migrating and restarting all of them can be a long and resource-consuming process. Therefore, select the policy action that best suits your setup. If you prefer a conservative approach, disable all migration of virtual machines. Alternatively, if you have many virtual machines, but only a few that are running critical workloads, select the option to migrate only highly available virtual machines.
The table below describes the settings for the Resilience Policy tab in the New Cluster and Edit Cluster windows. See Section 5.2.1, “Creating a New Cluster” for more information on how to set the resilience policy when creating a new cluster.

Table 5.4. Resilience Policy Settings

Field
Description/Action
Migrate Virtual Machines
Migrates all virtual machines in order of their defined priority.
Migrate only Highly Available Virtual Machines
Migrates only highly available virtual machines to prevent overloading other hosts.
Do Not Migrate Virtual Machines
Prevents virtual machines from being migrated.

5.2.2.4. Cluster Policy Settings Explained

Cluster policies allow you to specify the usage and distribution of virtual machines between available hosts. Define the cluster policy to enable automatic load balancing across the hosts in a cluster.
Cluster Policy Settings: vm_evenly_distributed

Figure 5.3. Cluster Policy Settings: vm_evenly_distributed

The table below describes the settings for the Cluster Policy tab.

Table 5.5. Cluster Policy Tab Properties

Field
Description/Action
Select Policy
Select a policy from the drop-down list.
  • none: Set the policy value to none to have no load or power sharing between hosts. This is the default mode.
  • evenly_distributed: Distributes the CPU processing load evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined Maximum Service Level.
  • power_saving: Distributes the CPU processing load across a subset of available hosts to reduce power consumption on underutilized hosts. Hosts with a CPU load below the low utilization value for longer than the defined time interval will have their virtual machines migrated to other hosts so that they can be powered down. Additional virtual machines attached to a host will not start if that host has reached the defined high utilization value.
  • vm_evenly_distributed: Distributes virtual machines evenly between hosts based on a count of the virtual machines. The cluster is considered unbalanced if any host is running more virtual machines than the HighVmCount and there is at least one host with a virtual machine count that falls outside of the MigrationThreshold.
Properties
The following properties appear depending on the selected policy, and can be edited if necessary:
  • HighVmCount: Sets the maximum number of virtual machines that can run on each host. Exceeding this limit qualifies the host as overloaded. The default value is 10.
  • MigrationThreshold: Defines a buffer before virtual machines are migrated from the host. It is the maximum inclusive difference in virtual machine count between the most highly-utilized host and the least-utilized host. The cluster is balanced when every host in the cluster has a virtual machine count that falls inside the migration threshold. The default value is 5.
  • SpmVmGrace: Defines the number of slots for virtual machines to be reserved on SPM hosts. The SPM host will have a lower load than other hosts, so this variable defines how many fewer virtual machines than other hosts it can run. The default value is 5.
  • CpuOverCommitDurationMinutes: Sets the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the cluster policy takes action. The defined time interval protects against temporary spikes in CPU load activating cluster policies and instigating unnecessary virtual machine migration. The value is limited to two characters. The default value is 2.
  • HighUtilization: Expressed as a percentage. If the host runs with CPU usage at or above the high utilization value for the defined time interval, the Red Hat Enterprise Virtualization Manager migrates virtual machines to other hosts in the cluster until the host's CPU load is below the maximum service threshold. The default value is 80.
  • LowUtilization: Expressed as a percentage. If the host runs with CPU usage below the low utilization value for the defined time interval, the Red Hat Enterprise Virtualization Manager will migrate virtual machines to other hosts in the cluster. The Manager will power down the original host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. The default value is 20.
  • ScaleDown: Reduces the impact of the HA Reservation weight function, by dividing a host's score by the specified amount. This is an optional property that can be added to any policy, including none.
  • HostsInReserve: Specifies a number of hosts to keep running even though there are no running virtual machines on them. This is an optional property that can be added to the power_saving policy.
  • EnableAutomaticHostPowerManagement: Enables automatic power management for all hosts in the cluster. This is an optional property that can be added to the power_saving policy. The default value is true.
Scheduler Optimization
Optimize scheduling for host weighing/ordering.
  • Optimize for Utilization: Includes weight modules in scheduling to allow best selection.
  • Optimize for Speed: Skips host weighting in cases where there are more than ten pending requests.
Enable Trusted Service
Enable integration with an OpenAttestation server. Before this can be enabled, use the engine-config tool to enter the OpenAttestation server's details.
Enable HA Reservation
Enable the Manager to monitor cluster capacity for highly available virtual machines. The Manager ensures that appropriate capacity exists within a cluster for virtual machines designated as highly available to migrate in the event that their existing host fails unexpectedly.
Provide custom serial number policy
This check box allows you to specify a serial number policy for the virtual machines in the cluster. Select one of the following options:
  • Host ID: Sets the host's UUID as the virtual machine's serial number.
  • Vm ID: Sets the virtual machine's UUID as its serial number.
  • Custom serial number: Allows you to specify a custom serial number.
When a host's free memory drops below 20%, ballooning commands are logged to /var/log/vdsm/mom.log, the Memory Overcommit Manager log file. For example: mom.Controllers.Balloon - INFO Ballooning guest:half1 from 1096400 to 1991580.
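The balance condition for the vm_evenly_distributed policy described in the table above can be sketched as a simple check. The host names and counts below are hypothetical, and this is not the Manager's actual scheduler code:

```python
# Illustrative sketch of the vm_evenly_distributed balance test:
# the cluster is unbalanced if any host exceeds HighVmCount AND the
# spread between the busiest and least busy host exceeds
# MigrationThreshold. Hypothetical values; not Manager code.

def cluster_unbalanced(vm_counts, high_vm_count=10, migration_threshold=5):
    """vm_counts maps host name -> number of running virtual machines."""
    overloaded = any(count > high_vm_count for count in vm_counts.values())
    spread = max(vm_counts.values()) - min(vm_counts.values())
    return overloaded and spread > migration_threshold

hosts = {"host1": 12, "host2": 3, "host3": 4}
print(cluster_unbalanced(hosts))  # True: host1 exceeds 10 and the spread is 9
```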

5.2.2.5. Cluster Console Settings Explained

The table below describes the settings for the Console tab in the New Cluster and Edit Cluster windows.

Table 5.6. Console Settings

Field
Description/Action
Define SPICE Proxy for Cluster
Select this check box to enable overriding the SPICE proxy defined in the global configuration. This feature is useful when the user (who is, for example, connecting via the User Portal) is outside of the network where the hypervisors reside.
Overridden SPICE proxy address
The proxy by which the SPICE client will connect to virtual machines. The address must be in the following format:
protocol://[host]:[port]
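A minimal sketch of validating that format, assuming a hypothetical proxy address (the hostname and port below are examples, not defaults):

```python
# Sketch: check that a SPICE proxy address matches protocol://[host]:[port].
# The example address is hypothetical.
from urllib.parse import urlparse

def valid_spice_proxy(address):
    parsed = urlparse(address)
    # All three components of the documented format must be present.
    return bool(parsed.scheme and parsed.hostname and parsed.port)

print(valid_spice_proxy("http://proxy.example.com:3128"))  # True
print(valid_spice_proxy("proxy.example.com"))              # False: no protocol or port
```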

5.2.2.6. Fencing Policy Settings Explained

The table below describes the settings for the Fencing Policy tab in the New Cluster and Edit Cluster windows.

Table 5.7. Fencing Policy Settings

Field
Description/Action
Enable fencing
Enables fencing on the cluster. Fencing is enabled by default, but can be disabled if required; for example, if temporary network issues are occurring or expected, administrators can disable fencing until diagnostics or maintenance activities are completed. Note that if fencing is disabled, highly available virtual machines running on non-responsive hosts will not be restarted elsewhere.
Skip fencing if host has live lease on storage
If this check box is selected, any hosts in the cluster that are Non Responsive and still connected to storage will not be fenced.
Skip fencing on cluster connectivity issues
If this check box is selected, fencing will be temporarily disabled if the percentage of hosts in the cluster that are experiencing connectivity issues is greater than or equal to the defined Threshold. The Threshold value is selected from the drop-down list; available values are 25, 50, 75, and 100.
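The connectivity-issue threshold described above is a straightforward percentage check; the following is an illustrative sketch with example numbers, not the Manager's implementation:

```python
# Sketch of the "Skip fencing on cluster connectivity issues" check:
# fencing is temporarily disabled when the percentage of hosts with
# connectivity issues reaches the configured Threshold (25/50/75/100).
# Example values; not Manager code.

def skip_fencing(hosts_with_issues, total_hosts, threshold_percent=50):
    percent = 100 * hosts_with_issues / total_hosts
    return percent >= threshold_percent

print(skip_fencing(2, 4, 50))  # True: 50% of hosts affected meets the threshold
print(skip_fencing(1, 4, 50))  # False: only 25% of hosts are affected
```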

5.2.3. Editing a Resource

Summary
Edit the properties of a resource.

Procedure 5.2. Editing a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click Edit to open the Edit window.
  3. Change the necessary properties and click OK.
Result
The new properties are saved to the resource. The Edit window will not close if a property field is invalid.

5.2.4. Setting Load and Power Management Policies for Hosts in a Cluster

The evenly_distributed and power_saving cluster policies allow you to specify acceptable CPU usage values, and the point at which virtual machines must be migrated to or from a host. The vm_evenly_distributed cluster policy distributes virtual machines evenly between hosts based on a count of the virtual machines. Define the cluster policy to enable automatic load balancing across the hosts in a cluster. For a detailed explanation of each cluster policy, see Section 5.2.2.4, “Cluster Policy Settings Explained”.

Procedure 5.3. Setting Load and Power Management Policies for Hosts

  1. Use the resource tabs, tree mode, or the search function to find and select the cluster in the results list.
  2. Click Edit to open the Edit Cluster window.
    Edit Cluster Policy

    Figure 5.4. Edit Cluster Policy

  3. Select one of the following policies:
    • none
    • vm_evenly_distributed
      1. Set the maximum number of virtual machines that can run on each host in the HighVmCount field.
      2. Define the maximum acceptable difference between the number of virtual machines on the most highly-utilized host and the number of virtual machines on the least-utilized host in the MigrationThreshold field.
      3. Define the number of slots for virtual machines to be reserved on SPM hosts in the SpmVmGrace field.
    • evenly_distributed
      1. Set the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the cluster policy takes action in the CpuOverCommitDurationMinutes field.
      2. Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization field.
    • power_saving
      1. Set the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the cluster policy takes action in the CpuOverCommitDurationMinutes field.
      2. Enter the CPU utilization percentage below which the host will be considered under-utilized in the LowUtilization field.
      3. Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization field.
  4. Choose one of the following as the Scheduler Optimization for the cluster:
    • Select Optimize for Utilization to include weight modules in scheduling to allow best selection.
    • Select Optimize for Speed to skip host weighting in cases where there are more than ten pending requests.
  5. If you are using an OpenAttestation server to verify your hosts, and have set up the server's details using the engine-config tool, select the Enable Trusted Service check box.
  6. Optionally select the Enable HA Reservation check box to enable the Manager to monitor cluster capacity for highly available virtual machines.
  7. Optionally select the Provide custom serial number policy check box to specify a serial number policy for the virtual machines in the cluster, and then select one of the following options:
    • Select Host ID to set the host's UUID as the virtual machine's serial number.
    • Select Vm ID to set the virtual machine's UUID as its serial number.
    • Select Custom serial number, and then specify a custom serial number in the text field.
  8. Click OK.
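The interaction of the power_saving thresholds set in the procedure above can be sketched as follows. This is a hedged illustration with hypothetical load samples, not the Manager's scheduler:

```python
# Sketch of how the power_saving thresholds interact: a host whose CPU
# load stays outside the LowUtilization..HighUtilization band for longer
# than CpuOverCommitDurationMinutes triggers an action. Hypothetical
# values; not Manager code.

def power_saving_action(cpu_samples_percent, minutes_outside,
                        low=20, high=80, duration_minutes=2):
    """Return the action for a host given its recent average CPU load."""
    load = sum(cpu_samples_percent) / len(cpu_samples_percent)
    if minutes_outside < duration_minutes:
        return "none"          # protects against temporary spikes
    if load >= high:
        return "migrate-away"  # shed virtual machines to other hosts
    if load < low:
        return "evacuate-and-power-down"
    return "none"

print(power_saving_action([5, 10, 8], minutes_outside=3))    # under-utilized host
print(power_saving_action([90, 85, 95], minutes_outside=3))  # over-utilized host
```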

5.2.5. Updating the MoM Policy on Hosts in a Cluster

The Memory Overcommit Manager handles memory balloon and KSM functions on a host. Changes to these functions at the cluster level are only passed to hosts the next time a host moves to a status of Up after being rebooted or taken out of maintenance mode. However, if necessary you can apply important changes to a host immediately by synchronizing the MoM policy while the host is Up. The following procedure must be performed on each host individually.

Procedure 5.4. Synchronizing MoM Policy on a Host

  1. Click the Clusters tab and select the cluster to which the host belongs.
  2. Click the Hosts tab in the details pane and select the host that requires an updated MoM policy.
  3. Click Sync MoM Policy.
The MoM policy on the host is updated without having to move the host to maintenance mode and back Up.

5.2.6. CPU Profiles

CPU profiles define the maximum amount of processing capability a virtual machine in a cluster can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. CPU profiles are created based on CPU profiles defined under data centers, and are not automatically applied to all virtual machines in a cluster; they must be manually assigned to individual virtual machines for the profile to take effect.
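Since a CPU profile limit is a percentage of the host's total processing capability, the resulting cap is simple to compute; the host size below is a hypothetical example:

```python
# Illustrative sketch: a CPU profile limits a VM to a percentage of the
# host's total processing capability. Figures are hypothetical examples.

def effective_cpu_capacity_mhz(host_total_mhz, profile_limit_percent):
    """Maximum processing capability a VM under this profile can use."""
    return host_total_mhz * profile_limit_percent // 100

# A 24-core 2.0 GHz host (48000 MHz total) with a 10% CPU profile:
print(effective_cpu_capacity_mhz(48000, 10))  # 4800 MHz
```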

5.2.6.1. Creating a CPU Profile

Create a CPU profile. This procedure assumes you have already defined one or more CPU quality of service entries under the data center to which the cluster belongs.

Procedure 5.5. Creating a CPU Profile

  1. Click the Clusters resource tab and select a cluster.
  2. Click the CPU Profiles sub tab in the details pane.
  3. Click New.
  4. Enter a name for the CPU profile in the Name field.
  5. Enter a description for the CPU profile in the Description field.
  6. Select the quality of service to apply to the CPU profile from the QoS list.
  7. Click OK.
You have created a CPU profile, and that CPU profile can be applied to virtual machines in the cluster.

5.2.6.2. Removing a CPU Profile

Remove an existing CPU profile from your Red Hat Enterprise Virtualization environment.

Procedure 5.6. Removing a CPU Profile

  1. Click the Clusters resource tab and select a cluster.
  2. Click the CPU Profiles sub tab in the details pane.
  3. Select the CPU profile to remove.
  4. Click Remove.
  5. Click OK.
You have removed a CPU profile, and that CPU profile is no longer available. If the CPU profile was assigned to any virtual machines, those virtual machines are automatically assigned the default CPU profile.

5.2.7. Importing an Existing Red Hat Gluster Storage Cluster

You can import a Red Hat Gluster Storage cluster and all hosts belonging to the cluster into Red Hat Enterprise Virtualization Manager.
When you provide the IP address or host name and the password of any host in the cluster, the gluster peer status command is executed on that host through SSH, and a list of hosts that are part of the cluster is displayed. You must manually verify the fingerprint of each host and provide passwords for them. You cannot import the cluster if one of the hosts in the cluster is down or unreachable. Because the newly imported hosts do not have VDSM installed, the bootstrap script installs all the necessary VDSM packages on the hosts after they have been imported, and reboots them.

Important

Currently, a Red Hat Gluster Storage node can only be added to a cluster which has its compatibility level set to 3.1, 3.2, or 3.3.

Procedure 5.7. Importing an Existing Red Hat Gluster Storage Cluster to Red Hat Enterprise Virtualization Manager

  1. Select the Clusters resource tab to list all clusters in the results list.
  2. Click New to open the New Cluster window.
  3. Select the Data Center the cluster will belong to from the drop-down menu.
  4. Enter the Name and Description of the cluster.
  5. Select the Enable Gluster Service radio button and the Import existing gluster configuration check box.
    The Import existing gluster configuration field is displayed only if you select the Enable Gluster Service radio button.
  6. In the Address field, enter the hostname or IP address of any server in the cluster.
    The host Fingerprint is displayed to ensure you are connecting with the correct host. If a host is unreachable or if there is a network error, the message Error in fetching fingerprint is displayed in the Fingerprint field.
  7. Enter the Root Password for the server, and click OK.
  8. The Add Hosts window opens, and a list of hosts that are part of the cluster is displayed.
  9. For each host, enter the Name and the Root Password.
  10. If you wish to use the same password for all hosts, select the Use a Common Password check box to enter the password in the provided text field.
    Click Apply to set the entered password for all hosts.
    Make sure the fingerprints are valid and submit your changes by clicking OK.
The bootstrap script installs all the necessary VDSM packages on the hosts after they have been imported, and reboots them. You have now successfully imported an existing Red Hat Gluster Storage cluster into Red Hat Enterprise Virtualization Manager.

5.2.8. Explanation of Settings in the Add Hosts Window

The Add Hosts window allows you to specify the details of the hosts imported as part of a Gluster-enabled cluster. This window appears after you have selected the Enable Gluster Service radio button in the New Cluster window and provided the necessary host details.

Table 5.8. Add Gluster Hosts Settings

Field
Description
Use a common password
Tick this check box to use the same password for all hosts belonging to the cluster. Enter the password in the Password field, then click the Apply button to set the password on all hosts.
Name
Enter the name of the host.
Hostname/IP
This field is automatically populated with the fully qualified domain name or IP of the host you provided in the New Cluster window.
Root Password
Enter a password in this field to use a different root password for each host. This field overrides the common password provided for all hosts in the cluster.
Fingerprint
The host fingerprint is displayed to ensure you are connecting with the correct host. This field is automatically populated with the fingerprint of the host you provided in the New Cluster window.

5.2.9. Creating a New Logical Network in a Data Center or Cluster

Summary
Create a logical network and define its use in a data center, or in clusters in a data center.

Procedure 5.8. Creating a New Logical Network in a Data Center or Cluster

  1. Use the Data Centers or Clusters resource tabs, tree mode, or the search function to find and select a data center or cluster in the results list.
  2. Click the Logical Networks tab of the details pane to list the existing logical networks.
    • From the Data Centers details pane, click New to open the New Logical Network window.
    • From the Clusters details pane, click Add Network to open the New Logical Network window.
  3. Enter a Name, Description, and Comment for the logical network.
  4. Optionally select the Create on external provider check box. Select the External Provider from the drop-down list and provide the IP address of the Physical Network.
    If Create on external provider is selected, the Network Label, VM Network, and MTU options are disabled.
  5. Enter a new label or select an existing label for the logical network in the Network Label text field.
  6. Optionally enable Enable VLAN tagging.
  7. Optionally disable VM Network.
  8. Set the MTU value to Default (1500) or Custom.
  9. From the Cluster tab, select the clusters to which the network will be assigned. You can also specify whether the logical network will be a required network.
  10. If Create on external provider is selected, the Subnet tab will be visible. From the Subnet tab, select the Create subnet and enter a Name, CIDR, and Gateway address, and select an IP Version for the subnet that the logical network will provide. You can also add DNS servers as required.
  11. From the vNIC Profiles tab, add vNIC profiles to the logical network as required.
  12. Click OK.
Result
You have defined a logical network as a resource required by a cluster or clusters in the data center. If you entered a label for the logical network, it will be automatically added to all host network interfaces with that label.

Note

When creating a new logical network or making changes to an existing logical network that is used as a display network, any running virtual machines that use that network must be rebooted before the network becomes available or the changes are applied.

5.2.10. Removing a Cluster

Summary
Move all hosts out of a cluster before removing it.

Note

You cannot remove the Default cluster, as it holds the Blank template. You can, however, rename the Default cluster and add it to a new data center.

Procedure 5.9. Removing a Cluster

  1. Use the resource tabs, tree mode, or the search function to find and select the cluster in the results list.
  2. Ensure there are no hosts in the cluster.
  3. Click Remove to open the Remove Cluster(s) confirmation window.
  4. Click OK.
Result
The cluster is removed.

5.2.11. Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window

Summary
Specify the traffic type for the logical network to optimize the network traffic flow.

Procedure 5.10. Specifying Traffic Types for Logical Networks

  1. Use the Clusters resource tab, tree mode, or the search function to find and select the cluster in the results list.
  2. Select the Logical Networks tab in the details pane to list the logical networks assigned to the cluster.
  3. Click Manage Networks to open the Manage Networks window.
    The Manage Networks window

    Figure 5.5. Manage Networks

  4. Select appropriate check boxes.
  5. Click OK to save the changes and close the window.
Result
You have optimized the network traffic flow by assigning a specific type of traffic to be carried on a specific logical network.

Note

Logical networks offered by external providers must be used as virtual machine networks; they cannot be assigned special cluster roles such as display or migration.

5.2.12. Explanation of Settings in the Manage Networks Window

The table below describes the settings for the Manage Networks window.

Table 5.9. Manage Networks Settings

Field
Description/Action
Assign
Assigns the logical network to all hosts in the cluster.
Required
A network marked "required" must remain operational in order for the hosts associated with it to function properly. If a required network ceases to function, any hosts associated with it become non-operational.
VM Network
A logical network marked "VM Network" carries network traffic relevant to the virtual machine network.
Display Network
A logical network marked "Display Network" carries network traffic relevant to SPICE and to the virtual network controller.
Migration Network
A logical network marked "Migration Network" carries virtual machine and storage migration traffic.

5.2.13. Changing the Cluster Compatibility Version

Summary
Red Hat Enterprise Virtualization clusters have a compatibility version. The cluster compatibility version indicates the features of Red Hat Enterprise Virtualization supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster.

Note

To change the cluster compatibility version, you must have first updated all the hosts in your cluster to a level that supports your desired compatibility level.

Procedure 5.11. Changing the Cluster Compatibility Version

  1. Log in to the Administration Portal as the administrative user. By default this is the admin user.
  2. Click the Clusters tab.
  3. Select the cluster to change from the list displayed. If the list of clusters is too long to filter visually then perform a search to locate the desired cluster.
  4. Click the Edit button.
  5. Change the Compatibility Version to the desired value.
  6. Click OK to open the Change Cluster Compatibility Version confirmation window.
  7. Click OK to confirm.
Result
You have updated the compatibility version of the cluster. Once you have updated the compatibility version of all clusters in a data center, then you are also able to change the compatibility version of the data center itself.

Warning

Upgrading the compatibility will also upgrade all of the storage domains belonging to the data center. If you are upgrading the compatibility version from below 3.1 to a higher version, these storage domains will become unusable with versions older than 3.1.

5.3. Clusters and Permissions

5.3.1. Managing System Permissions for a Cluster

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A cluster administrator is a system administration role for a specific cluster only. This is useful in data centers with multiple clusters, where each cluster requires a system administrator. The ClusterAdmin role is a hierarchical model: a user assigned the cluster administrator role for a cluster can manage all objects in the cluster. Use the Configure button in the header bar to assign a cluster administrator for all clusters in the environment.
The cluster administrator role permits the following actions:
  • Create and remove associated clusters.
  • Add and remove hosts, virtual machines, and pools associated with the cluster.
  • Edit user permissions for virtual machines associated with the cluster.

Note

You can only assign roles and permissions to existing users.
You can also change the system administrator of a cluster by removing the existing system administrator and adding the new system administrator.

5.3.2. Cluster Administrator Roles Explained

Cluster Permission Roles
The table below describes the administrator roles and privileges applicable to cluster administration.

Table 5.10. Red Hat Enterprise Virtualization System Administrator Roles

Role
Privileges
Notes
ClusterAdmin
Cluster Administrator
Can use, create, delete, and manage all physical and virtual resources in a specific cluster, including hosts, templates, and virtual machines. Can configure network properties within the cluster, such as designating display networks, or marking a network as required or non-required. However, a ClusterAdmin does not have permissions to attach or detach networks from a cluster; to do so, NetworkAdmin permissions are required.
NetworkAdmin
Network Administrator
Can configure and manage the network of a particular cluster. A network administrator of a cluster inherits network permissions for virtual machines within the cluster as well.

5.3.3. Assigning an Administrator or User Role to a Resource

Summary
Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 5.12. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add to open the Add Permission to User window.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down menu.
  6. Click OK to assign the role and close the window.
Result
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

5.3.4. Removing an Administrator or User Role from a Resource

Summary
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 5.13. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK to remove the user role.
Result
You have removed the user's role, and the associated permissions, from the resource.

Chapter 6. Logical Networks

6.1. Introduction to Logical Networks

A logical network is a named set of global network connectivity properties in your data center. When a logical network is added to a host, it may be further configured with host-specific network parameters. Logical networks optimize network flow by grouping network traffic by usage, type, and requirements.
Logical networks allow both connectivity and segregation. You can create a logical network for storage communication to optimize network traffic between hosts and storage domains, a logical network specifically for all virtual machine traffic, or multiple logical networks to carry the traffic of groups of virtual machines.
The default logical network in all data centers is the management network called rhevm. The rhevm network carries all traffic until another logical network is created, and is intended specifically for management communication between the Red Hat Enterprise Virtualization Manager and hosts.
A logical network is a data center level resource; creating one in a data center makes it available to all clusters in that data center. A logical network that has been designated as Required must be configured on all of a cluster's hosts before it is operational. Optional networks can be used by any host to which they have been added.
Data Center Objects

Figure 6.1. Data Center Objects

Warning

Do not change networking in a data center or a cluster while any hosts are running, as this risks making the hosts unreachable.

Important

If you plan to use Red Hat Enterprise Virtualization nodes to provide any services, remember that the services will stop if the Red Hat Enterprise Virtualization environment stops operating.
This applies to all services, but you should be especially aware of the hazards of running the following on Red Hat Enterprise Virtualization:
  • Directory Services
  • DNS
  • Storage

6.2. Required Networks, Optional Networks, and Virtual Machine Networks

Red Hat Enterprise Virtualization 3.1 and higher distinguishes between required networks and optional networks.
Required networks must be applied to all hosts in a cluster for the cluster and network to be Operational. Logical networks are added to clusters as Required networks by default.
When a host's required network becomes non-operational, virtual machines running on that host are migrated to another host; the extent of this migration is dependent upon the chosen cluster policy. This is beneficial if you have machines running mission critical workloads.
When a non-required network becomes non-operational, the virtual machines running on the network are not migrated to another host. This prevents unnecessary I/O overload caused by mass migrations.
Optional networks are those logical networks that have not been explicitly declared Required networks. Optional networks can be implemented on only the hosts that use them. The presence or absence of these networks does not affect the Operational status of a host.
Use the Manage Networks button to change a network's Required designation.
Virtual machine networks (called a VM network in the user interface) are logical networks designated to carry only virtual machine network traffic. Virtual machine networks can be required or optional.

Note

A virtual machine with a network interface on an optional virtual machine network will not start on a host without the network.

6.3. Logical Network Tasks

6.3.1. Using the Networks Tab

The Networks resource tab provides a central location for users to perform network-related operations and search for networks based on each network's property or association with other resources.
All networks in the Red Hat Enterprise Virtualization environment display in the results list of the Networks tab. The New, Edit, and Remove buttons allow you to create, change the properties of, and delete logical networks within data centers.
Click on each network name and use the Clusters, Hosts, Virtual Machines, Templates, and Permissions tabs in the details pane to perform functions including:
  • Attaching or detaching the networks to clusters and hosts
  • Removing network interfaces from virtual machines and templates
  • Adding and removing permissions for users to access and manage networks
These functions are also accessible through each individual resource tab.

6.3.2. Creating a New Logical Network in a Data Center or Cluster

Summary
Create a logical network and define its use in a data center, or in clusters in a data center.

Procedure 6.1. Creating a New Logical Network in a Data Center or Cluster

  1. Use the Data Centers or Clusters resource tabs, tree mode, or the search function to find and select a data center or cluster in the results list.
  2. Click the Logical Networks tab of the details pane to list the existing logical networks.
    • From the Data Centers details pane, click New to open the New Logical Network window.
    • From the Clusters details pane, click Add Network to open the New Logical Network window.
  3. Enter a Name, Description, and Comment for the logical network.
  4. Optionally select the Create on external provider check box. Select the External Provider from the drop-down list and provide the IP address of the Physical Network.
    If Create on external provider is selected, the Network Label, VM Network, and MTU options are disabled.
  5. Enter a new label or select an existing label for the logical network in the Network Label text field.
  6. Optionally enable Enable VLAN tagging.
  7. Optionally disable VM Network.
  8. Set the MTU value to Default (1500) or Custom.
  9. From the Cluster tab, select the clusters to which the network will be assigned. You can also specify whether the logical network will be a required network.
  10. If Create on external provider is selected, the Subnet tab will be visible. From the Subnet tab, select the Create subnet check box and enter a Name, CIDR, and Gateway address, and select an IP Version for the subnet that the logical network will provide. You can also add DNS servers as required.
  11. From the vNIC Profiles tab, add vNIC profiles to the logical network as required.
  12. Click OK.
Result
You have defined a logical network as a resource required by a cluster or clusters in the data center. If you entered a label for the logical network, it will be automatically added to all host network interfaces with that label.

Note

When creating a new logical network or making changes to an existing logical network that is used as a display network, any running virtual machines that use that network must be rebooted before the network becomes available or the changes are applied.
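The procedure above uses the Administration Portal; the Manager also exposes a REST API for the same operation. The following is an illustrative sketch of building the XML payload for creating a logical network (assuming the `/api/networks` collection of the RHEV 3.5 REST API; the data center ID and other values are placeholders, so verify element names against the REST API Guide for your version):

```python
# Sketch: XML payload for creating a logical network through the Manager's
# REST API (POST /api/networks). All values below are placeholders.
import xml.etree.ElementTree as ET

def build_network_payload(name, data_center_id, description="", vlan_id=None):
    """Return an XML string describing a new logical network."""
    network = ET.Element("network")
    ET.SubElement(network, "name").text = name
    if description:
        ET.SubElement(network, "description").text = description
    # The network must reference the data center it belongs to.
    dc = ET.SubElement(network, "data_center")
    dc.set("id", data_center_id)
    if vlan_id is not None:
        # Corresponds to the Enable VLAN tagging option in the UI.
        vlan = ET.SubElement(network, "vlan")
        vlan.set("id", str(vlan_id))
    return ET.tostring(network, encoding="unicode")

payload = build_network_payload(
    "storage_net", "00000000-0000-0000-0000-000000000000", vlan_id=100)
print(payload)
```

The resulting document would be POSTed to `https://<manager>/api/networks` with a content type of `application/xml` and appropriate authentication.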

6.3.3. Editing a Logical Network

Summary
Edit the settings of a logical network.

Procedure 6.2. Editing a Logical Network

Important

A logical network cannot be edited or moved to another interface if it is not synchronized with the network configuration on the host. See Section 7.6.2, “Editing Host Network Interfaces and Assigning Logical Networks to Hosts” on how to synchronize your networks.
  1. Use the Data Centers resource tab, tree mode, or the search function to find and select the data center of the logical network in the results list.
  2. Click the Logical Networks tab in the details pane to list the logical networks in the data center.
  3. Select a logical network and click Edit to open the Edit Logical Network window.
  4. Edit the necessary settings.
  5. Click OK to save the changes.
Result
You have updated the settings of your logical network.

Note

Multi-host network configuration is available on data centers with a compatibility version of 3.1 or higher, and automatically applies updated network settings to all of the hosts within the data center to which the network is assigned. Changes can only be applied when virtual machines using the network are down. You cannot rename a logical network that is already configured on a host. You cannot disable the VM Network option while virtual machines or templates using that network are running.

6.3.4. Explanation of Settings and Controls in the New Logical Network and Edit Logical Network Windows

6.3.4.1. Logical Network General Settings Explained

The table below describes the settings for the General tab of the New Logical Network and Edit Logical Network windows.

Table 6.1. New Logical Network and Edit Logical Network Settings

Field Name
Description
Name
The name of the logical network. This text field has a 15-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
Description
The description of the logical network. This text field has a 40-character limit.
Comment
A field for adding plain text, human-readable comments regarding the logical network.
Create on external provider
Allows you to create the logical network on an OpenStack Networking instance that has been added to the Manager as an external provider.
External Provider - Allows you to select the external provider on which the logical network will be created.
Enable VLAN tagging
VLAN tagging is a security feature that gives all network traffic carried on the logical network a special characteristic. VLAN-tagged traffic cannot be read by interfaces that do not also have that characteristic. Use of VLANs on logical networks also allows a single network interface to be associated with multiple, differently VLAN-tagged logical networks. Enter a numeric value in the text entry field if VLAN tagging is enabled.
VM Network
Select this option if only virtual machines use this network. If the network is used for traffic that does not involve virtual machines, such as storage communications, do not select this check box.
MTU
Choose either Default, which sets the maximum transmission unit (MTU) to the value given in parentheses (1500), or Custom to set a custom MTU for the logical network. You can use this to match the MTU supported by your new logical network to the MTU supported by the hardware it interfaces with. Enter a numeric value in the text entry field if Custom is selected.
Network Label
Allows you to specify a new label for the network or select from existing labels already attached to host network interfaces. If you select an existing label, the logical network will be automatically assigned to all host network interfaces with that label.
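The Name constraints in Table 6.1 can be checked before a form is submitted or a script calls the API. A minimal sketch using Python's standard `re` module (the example names are hypothetical):

```python
# Sketch: check mirroring the Name constraints described in Table 6.1
# (at most 15 characters; letters, numbers, hyphens, and underscores).
import re

NETWORK_NAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,15}$")

def is_valid_network_name(name):
    """Return True if the name satisfies the documented constraints."""
    return bool(NETWORK_NAME_RE.match(name))

print(is_valid_network_name("storage_net"))  # within limits
print(is_valid_network_name("my storage"))   # spaces are not allowed
print(is_valid_network_name("a" * 16))       # exceeds the 15-character limit
```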

6.3.4.2. Logical Network Cluster Settings Explained

The table below describes the settings for the Cluster tab of the New Logical Network window.

Table 6.2. New Logical Network Settings

Field Name
Description
Attach/Detach Network to/from Cluster(s)
Allows you to attach or detach the logical network from clusters in the data center and specify whether the logical network will be a required network for individual clusters.
Name - The name of the cluster to which the settings will apply. This value cannot be edited.
Attach All - Allows you to attach or detach the logical network to or from all clusters in the data center. Alternatively, select or clear the Attach check box next to the name of each cluster to attach or detach the logical network to or from a given cluster.
Required All - Allows you to specify whether the logical network is a required network on all clusters. Alternatively, select or clear the Required check box next to the name of each cluster to specify whether the logical network is a required network for a given cluster.

6.3.4.3. Logical Network vNIC Profiles Settings Explained

The table below describes the settings for the vNIC Profiles tab of the New Logical Network window.

Table 6.3. New Logical Network Settings

Field Name
Description
vNIC Profiles
Allows you to specify one or more vNIC profiles for the logical network. You can add or remove a vNIC profile to or from the logical network by clicking the plus or minus button next to the vNIC profile. The first field is for entering a name for the vNIC profile.
Public - Allows you to specify whether the profile is available to all users.
QoS - Allows you to apply a network quality of service (QoS) profile to the vNIC profile.

6.3.5. Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window

Summary
Specify the traffic type for the logical network to optimize the network traffic flow.

Procedure 6.3. Specifying Traffic Types for Logical Networks

  1. Use the Clusters resource tab, tree mode, or the search function to find and select the cluster in the results list.
  2. Select the Logical Networks tab in the details pane to list the logical networks assigned to the cluster.
  3. Click Manage Networks to open the Manage Networks window.
    The Manage Networks window

    Figure 6.2. Manage Networks

  4. Select appropriate check boxes.
  5. Click OK to save the changes and close the window.
Result
You have optimized the network traffic flow by assigning a specific type of traffic to be carried on a specific logical network.

Note

Logical networks offered by external providers must be used as virtual machine networks; they cannot be assigned special cluster roles such as display or migration.

6.3.6. Explanation of Settings in the Manage Networks Window

The table below describes the settings for the Manage Networks window.

Table 6.4. Manage Networks Settings

Field
Description/Action
Assign
Assigns the logical network to all hosts in the cluster.
Required
A network marked "required" must remain operational in order for the hosts associated with it to function properly. If a required network ceases to function, any hosts associated with it become non-operational.
VM Network
A logical network marked "VM Network" carries network traffic relevant to the virtual machine network.
Display Network
A logical network marked "Display Network" carries network traffic relevant to SPICE and to the virtual network controller.
Migration Network
A logical network marked "Migration Network" carries virtual machine and storage migration traffic.

6.3.7. Network Labels

6.3.7.1. Network Labels

Network labels can be used to greatly simplify several administrative tasks associated with creating and administering logical networks and associating those logical networks with physical host network interfaces and bonds.
A network label is a plain text, human-readable label that can be attached to a logical network or a physical host network interface. There is no strict limit on the length of a label, but it must use a combination of lowercase and uppercase letters, underscores, and hyphens; no spaces or special characters are allowed.
Attaching a label to a logical network or physical host network interface creates an association with other logical networks or physical host network interfaces to which the same label has been attached, as follows:

Network Label Associations

  • When you attach a label to a logical network, that logical network will be automatically associated with any physical host network interfaces with the given label.
  • When you attach a label to a physical host network interface, any logical networks with the given label will be automatically associated with that physical host network interface.
  • Changing the label attached to a logical network or physical host network interface acts in the same way as removing a label and adding a new label. The association between related logical networks or physical host network interfaces is updated.

Network Labels and Clusters

  • When a labeled logical network is added to a cluster and there is a physical host network interface in that cluster with the same label, the logical network is automatically added to that physical host network interface.
  • When a labeled logical network is detached from a cluster and there is a physical host network interface in that cluster with the same label, the logical network is automatically detached from that physical host network interface.

Network Labels and Logical Networks With Roles

  • When a labeled logical network is assigned to act as a display network or migration network, that logical network is then configured on the physical host network interface using DHCP so that the logical network can be assigned an IP address.
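The association rules above can be modeled in a few lines of code. This sketch (with hypothetical network and label names) shows which logical networks a labeled host interface automatically picks up:

```python
# Sketch: the label-association rule described above, modeled in plain
# Python. Network and label names are hypothetical.
def networks_for_interface(interface_label, labeled_networks):
    """Return the logical networks that share the interface's label."""
    return sorted(net for net, label in labeled_networks.items()
                  if label == interface_label)

labeled_networks = {
    "storage_net": "backend",
    "vm_net": "frontend",
    "backup_net": "backend",
}

# A host NIC labeled "backend" is automatically associated with every
# logical network carrying the same label.
print(networks_for_interface("backend", labeled_networks))
```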

6.4. Virtual Network Interface Cards

6.4.1. vNIC Profile Overview

A Virtual Network Interface Card (vNIC) profile is a collection of settings that can be applied to individual virtual network interface cards in the Manager. A vNIC profile allows you to apply Network QoS profiles to a vNIC, enable or disable port mirroring, and add or remove custom properties. A vNIC profile also offers an added layer of administrative flexibility in that permission to use (consume) these profiles can be granted to specific users. In this way, you can control the quality of service that different users receive from a given network.

Note

Starting with Red Hat Enterprise Virtualization 3.3, virtual machines access logical networks only through vNIC profiles and cannot access a logical network if no vNIC profiles exist for that logical network. When you create a new logical network in the Manager, a vNIC profile of the same name as the logical network is automatically created under that logical network.

6.4.2. Creating or Editing a vNIC Profile

Summary
Create or edit a Virtual Network Interface Controller (vNIC) profile to regulate network bandwidth for users and groups.

Note

If you are enabling or disabling port mirroring, all virtual machines using the associated profile must be in a down state before editing.

Procedure 6.4. Creating or editing a vNIC Profile

  1. Use the Networks resource tab, tree mode, or the search function to select a logical network in the results pane.
  2. Select the vNIC Profiles tab in the details pane. If you selected the logical network in tree mode, you can select the vNIC Profiles tab in the results list.
  3. Click New or Edit to open the VM Interface Profile window.
    The VM Interface Profile window

    Figure 6.3. The VM Interface Profile window

  4. Enter the Name and Description of the profile.
  5. Select the relevant Quality of Service policy from the QoS list.
  6. Use the Port Mirroring and Allow all users to use this Profile check boxes to toggle these options.
  7. Select a custom property from the custom properties list, which displays Please select a key... by default. Use the + and - buttons to add or remove custom properties.
  8. Click OK.
Result
You have created a vNIC profile. Apply this profile to users and groups to regulate their network bandwidth. Note that if you edited a vNIC profile, you must either restart the virtual machine or hot unplug and then hot plug the vNIC.

6.4.3. Explanation of Settings in the VM Interface Profile Window

Table 6.5. VM Interface Profile Window

Field Name
Description
Network
A drop-down menu of the available networks to which the vNIC profile can be applied.
Name
The name of the vNIC profile. This must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores between 1 and 50 characters.
Description
The description of the vNIC profile. This field is recommended but not mandatory.
QoS
A drop-down menu of the available Network Quality of Service policies to apply to the vNIC profile. QoS policies regulate inbound and outbound network traffic of the vNIC.
Port Mirroring
A check box to toggle port mirroring. Port mirroring copies layer 3 network traffic on the logical network to a virtual interface on a virtual machine. It is not selected by default.
Device Custom Properties
A drop-down menu to select available custom properties to apply to the vNIC profile. Use the + and - buttons to add and remove properties respectively.
Allow all users to use this Profile
A check box to toggle the availability of the profile to all users in the environment. It is selected by default.

6.4.4. Port Mirroring

Port mirroring copies layer 3 network traffic on a given logical network and host to a virtual interface on a virtual machine. This virtual machine can be used for network debugging and tuning, intrusion detection, and monitoring the behavior of other virtual machines on the same host and logical network.
The only traffic copied is internal to one logical network on one host. There is no increase in traffic on the network external to the host; however, a virtual machine with port mirroring enabled uses more host CPU and RAM than other virtual machines. Port mirroring is enabled or disabled by editing the network interface on the virtual machine.
Port mirroring has the following requirements and limitations:
  • Port mirroring requires an IPv4 address.
  • Hot plugging profiles with port mirroring is not supported.
Port mirroring is included in vNIC profiles but cannot be altered when the vNIC profile associated with port mirroring is attached to a virtual machine. To use port mirroring, create a dedicated vNIC profile that has port mirroring enabled.

Important

Enabling port mirroring reduces the privacy of other network users.

6.4.5. Removing a vNIC Profile

Summary
Remove a vNIC profile to delete it from your virtualized environment.

Procedure 6.5. Removing a vNIC Profile

  1. Use the Networks resource tab, tree mode, or the search function to select a logical network in the results pane.
  2. Select the vNIC Profiles tab in the details pane to display available vNIC profiles. If you selected the logical network in tree mode, you can select the vNIC Profiles tab in the results list.
  3. Select one or more profiles and click Remove to open the Remove VM Interface Profile(s) window.
  4. Click OK to remove the profile and close the window.
Result
You have removed the vNIC profile.

6.4.6. Assigning Security Groups to vNIC Profiles

Note

This feature is only available for users who are integrating with OpenStack Neutron. Security groups cannot be created with Red Hat Enterprise Virtualization Manager. You must create security groups within OpenStack. For more information, see the Red Hat Enterprise Linux OpenStack Platform Administration Guide, available at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/.
Summary
You can assign security groups to the vNIC profile of networks that have been imported from an OpenStack Networking instance and that use the Open vSwitch plug-in. A security group is a collection of strictly enforced rules that allow you to filter inbound and outbound traffic over a network interface. The following procedure outlines how to attach a security group to a vNIC profile.

Note

A security group is identified using the ID of that security group as registered in the OpenStack Networking instance. You can find the IDs of security groups for a given tenant by running the following command on the system on which OpenStack Networking is installed:
# neutron security-group-list

Procedure 6.6. Assigning Security Groups to vNIC Profiles

  1. Click the Networks tab and select a logical network in the results list.
  2. Click the vNIC Profiles tab in the details pane.
  3. Click New, or select an existing vNIC profile and click Edit, to open the VM Interface Profile window.
  4. From the custom properties drop-down list, select SecurityGroups. Leaving the custom property drop-down blank applies the default security settings, which permit all outbound traffic and intercommunication but deny all inbound traffic from outside of the default security group. Note that removing the SecurityGroups property later will not affect the applied security group.
  5. In the text field, enter the ID of the security group to attach to the vNIC profile.
  6. Click OK.
Result
You have attached a security group to the vNIC profile. All traffic through the logical network to which that profile is attached will be filtered in accordance with the rules defined for that security group.

6.4.7. User Permissions for vNIC Profiles

Summary
Configure user permissions to assign users to certain vNIC profiles. Assign the VnicProfileUser role to a user to enable them to use the profile. Restrict users from certain profiles by removing their permission for that profile.

Procedure 6.7. User Permissions for vNIC Profiles

  1. Use tree mode to select a logical network.
  2. Select the vNIC Profiles resource tab to display the vNIC profiles.
  3. Select the Permissions tab in the details pane to show the current user permissions for the profile.
  4. Use the Add button to open the Add Permission to User window, and the Remove button to open the Remove Permission window, to affect user permissions for the vNIC profile.
Result
You have configured user permissions for a vNIC profile.
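For scripted environments, the same permission can also be added through the Manager's REST API. The following is a hedged sketch of the payload (assuming a `POST /api/vnicprofiles/{id}/permissions` endpoint; the user ID is a placeholder, and both the endpoint and element names should be verified against the REST API Guide for your Manager version):

```python
# Sketch: XML payload for granting the VnicProfileUser role on a vNIC
# profile via the Manager's REST API. The user ID is a placeholder.
def build_permission_payload(role_name, user_id):
    """Return an XML string assigning a role to a user."""
    return ("<permission>"
            "<role><name>{role}</name></role>"
            "<user id=\"{user}\"/>"
            "</permission>").format(role=role_name, user=user_id)

print(build_permission_payload(
    "VnicProfileUser", "123e4567-e89b-12d3-a456-426614174000"))
```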

6.5. External Provider Networks

6.5.1. Importing Networks From External Providers

Summary
If an external provider offering networking services has been registered in the Manager, the networks provided by that provider can be imported into the Manager and used by virtual machines.

Procedure 6.8. Importing a Network From an External Provider

  1. Click the Networks tab.
  2. Click the Import button to open the Import Networks window.
    The Import Networks Window

    Figure 6.4. The Import Networks Window

  3. From the Network Provider drop-down list, select an external provider. The networks offered by that provider are automatically discovered and listed in the Provider Networks list.
  4. Using the check boxes, select the networks to import in the Provider Networks list and click the down arrow to move those networks into the Networks to Import list.
  5. Optionally, customize the name of a network that you are importing by clicking the network's name in the Name column and changing the text.
  6. From the Data Center drop-down list, select the data center into which the networks will be imported.
  7. Optionally, clear the Allow All check box for a network in the Networks to Import list to prevent that network from being available to all users.
  8. Click the Import button.
Result
The selected networks are imported into the target data center and can now be used in the Manager.

Important

External provider discovery and importing are Technology Preview features. Technology Preview features are not fully supported under Red Hat Subscription Service Level Agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.

6.5.2. Limitations to Using External Provider Networks

The following limitations apply to using logical networks imported from an external provider in a Red Hat Enterprise Virtualization environment.
  • Logical networks offered by external providers must be used as virtual machine networks, and cannot be used as display networks.
  • The same logical network can be imported more than once, but only to different data centers.
  • You cannot edit logical networks offered by external providers in the Manager. To edit the details of a logical network offered by an external provider, you must edit the logical network directly from the OpenStack Networking instance that provides that logical network.
  • Port mirroring is not available for virtual network interface cards connected to logical networks offered by external providers.
  • If a virtual machine uses a logical network offered by an external provider, that provider cannot be deleted from the Manager while the logical network is still in use by the virtual machine.
  • Networks offered by external providers are non-required. As such, scheduling for clusters in which such logical networks have been imported will not take those logical networks into account during host selection. Moreover, it is the responsibility of the user to ensure the availability of the logical network on hosts in clusters in which such logical networks have been imported.

Important

Logical networks imported from external providers are only compatible with Red Hat Enterprise Linux hosts and cannot be assigned to virtual machines running on Red Hat Enterprise Virtualization Hypervisor hosts.

Important

External provider discovery and importing are Technology Preview features. Technology Preview features are not fully supported under Red Hat Subscription Service Level Agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.

6.5.3. Configuring Subnets on External Provider Logical Networks

6.5.3.1. Configuring Subnets on External Provider Logical Networks

A logical network provided by an external provider can only assign IP addresses to virtual machines if one or more subnets have been defined on that logical network. If no subnets are defined, virtual machines will not be assigned IP addresses. If there is one subnet, virtual machines will be assigned an IP address from that subnet, and if there are multiple subnets, virtual machines will be assigned an IP address from any of the available subnets. The DHCP service provided by the Neutron instance on which the logical network is hosted is responsible for assigning these IP addresses.
While the Red Hat Enterprise Virtualization Manager automatically discovers predefined subnets on imported logical networks, you can also add or remove subnets to or from logical networks from within the Manager.
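Before defining a subnet, you can sanity-check the CIDR and inspect the address range the Neutron DHCP service would draw from. A sketch using Python's standard `ipaddress` module (the subnet value is an example only):

```python
# Sketch: validating a subnet CIDR and summarizing the host addresses the
# DHCP service could hand out. Uses only the standard library; the subnet
# below is an example, not a value from this guide.
import ipaddress

def subnet_summary(cidr):
    """Return basic facts about a subnet given in CIDR notation."""
    net = ipaddress.ip_network(cidr)  # raises ValueError for a bad CIDR
    hosts = list(net.hosts())         # excludes network/broadcast for IPv4
    return {
        "version": net.version,
        "first_host": str(hosts[0]),
        "last_host": str(hosts[-1]),
        "usable_hosts": len(hosts),
    }

print(subnet_summary("192.168.10.0/24"))
```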

6.5.3.2. Adding Subnets to External Provider Logical Networks

Summary
Create a subnet on a logical network provided by an external provider.

Procedure 6.9. Adding Subnets to External Provider Logical Networks

  1. Click the Networks tab.
  2. Click the logical network provided by an external provider to which the subnet will be added.
  3. Click the Subnets tab in the details pane.
  4. Click the New button to open the New External Subnet window.
    The New External Subnet Window

    Figure 6.5. The New External Subnet Window

  5. Enter a Name and CIDR for the new subnet.
  6. From the IP Version drop-down menu, select either IPv4 or IPv6.
  7. Click OK.
Result
A new subnet is created on the logical network.

6.5.3.3. Removing Subnets from External Provider Logical Networks

Summary
Remove a subnet from a logical network provided by an external provider.

Procedure 6.10. Removing Subnets from External Provider Logical Networks

  1. Click the Networks tab.
  2. Click the logical network provided by an external provider from which the subnet will be removed.
  3. Click the Subnets tab in the details pane.
  4. Click the subnet to remove.
  5. Click the Remove button and click OK when prompted.
Result
The subnet is removed from the logical network.

6.6. Logical Networks and Permissions

6.6.1. Managing System Permissions for a Network

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A network administrator is a system administration role that can be applied for a specific network, or for all networks on a data center, cluster, host, virtual machine, or template. A network user can perform limited administration roles, such as viewing and attaching networks on a specific virtual machine or template. You can use the Configure button in the header bar to assign a network administrator for all networks in the environment.
The network administrator role permits the following actions:
  • Create, edit and remove networks.
  • Edit the configuration of the network, including configuring port mirroring.
  • Attach and detach networks from resources including clusters and virtual machines.
The user who creates a network is automatically assigned NetworkAdmin permissions on the created network. You can also change the administrator of a network by removing the existing administrator and adding the new administrator.

6.6.2. Network Administrator and User Roles Explained

Network Permission Roles
The table below describes the administrator and user roles and privileges applicable to network administration.

Table 6.6. Red Hat Enterprise Virtualization Network Administrator and User Roles

Role Privileges Notes
NetworkAdmin Network Administrator for data center, cluster, host, virtual machine, or template. The user who creates a network is automatically assigned NetworkAdmin permissions on the created network. Can configure and manage the network of a particular data center, cluster, host, virtual machine, or template. A network administrator of a data center or cluster inherits network permissions for virtual pools within the cluster. To configure port mirroring on a virtual machine network, apply the NetworkAdmin role on the network and the UserVmManager role on the virtual machine.
NetworkUser Logical network and network interface user for virtual machine and template. Can attach or detach network interfaces from specific logical networks.

6.6.3. Assigning an Administrator or User Role to a Resource

Summary
Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 6.11. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add to open the Add Permission to User window.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down menu.
  6. Click OK to assign the role and close the window.
Result
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

6.6.4. Removing an Administrator or User Role from a Resource

Summary
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 6.12. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK to remove the user role.
Result
You have removed the user's role, and the associated permissions, from the resource.

Chapter 7. Hosts

7.1. Introduction to Red Hat Enterprise Virtualization Hosts

Hosts, also known as hypervisors, are the physical servers on which virtual machines run. Full virtualization is provided by using a loadable Linux kernel module called Kernel-based Virtual Machine (KVM).
KVM can concurrently host multiple virtual machines running either Windows or Linux operating systems. Virtual machines run as individual Linux processes and threads on the host machine and are managed remotely by the Red Hat Enterprise Virtualization Manager. A Red Hat Enterprise Virtualization environment has one or more hosts attached to it.
Red Hat Enterprise Virtualization supports two methods of installing hosts. You can use the Red Hat Enterprise Virtualization Hypervisor installation media, or install hypervisor packages on a standard Red Hat Enterprise Linux installation.
Red Hat Enterprise Virtualization hosts take advantage of tuned profiles, which provide virtualization optimizations. For more information on tuned, see the Red Hat Enterprise Linux 6.0 Performance Tuning Guide.
The Red Hat Enterprise Virtualization Hypervisor has security features enabled. Security Enhanced Linux (SELinux) and the iptables firewall are fully configured and on by default. The status of SELinux on a selected host is reported under SELinux mode in the General tab of the details pane. The Manager can open required ports on Red Hat Enterprise Linux hosts when it adds them to the environment. For a full list of ports, see Section A.2, “Virtualization Host Firewall Requirements”.
A host is a physical 64-bit server with the Intel VT or AMD-V extensions, running the AMD64/Intel 64 version of Red Hat Enterprise Linux 6.5 or later.
A physical host on the Red Hat Enterprise Virtualization platform:
  • Must belong to only one cluster in the system.
  • Must have CPUs that support the AMD-V or Intel VT hardware virtualization extensions.
  • Must have CPUs that support all functionality exposed by the virtual CPU type selected upon cluster creation.
  • Has a minimum of 2 GB RAM.
  • Can have an assigned system administrator with system permissions.
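The CPU requirements above can be checked from a shell on the host. The following is a minimal sketch, assuming a Linux host where CPU flags are listed in /proc/cpuinfo; it tests a sample flags line rather than reading the live file.

```shell
# Sample /proc/cpuinfo flags line (illustrative; a real line is much longer).
flags_line="flags		: fpu vme de pse tsc msr pae vmx ssse3 sse4_1"

# The vmx flag indicates Intel VT; svm indicates AMD-V.
if echo "$flags_line" | grep -qwE 'vmx|svm'; then
  result="virtualization extensions present"
else
  result="virtualization extensions missing"
fi
echo "$result"
```

On a real host, run `grep -wE 'vmx|svm' /proc/cpuinfo` directly; if nothing matches, the CPU does not report the extensions. Note that the extensions can also be disabled in the BIOS even when the flag is present.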
Administrators can receive the latest security advisories from the Red Hat Enterprise Virtualization watch list. Subscribe to the Red Hat Enterprise Virtualization watch list to receive new security advisories for Red Hat Enterprise Virtualization products by email. Subscribe by completing this form:

7.2. Red Hat Enterprise Virtualization Hypervisor Hosts

Red Hat Enterprise Virtualization Hypervisor hosts are installed using a special build of Red Hat Enterprise Linux with only the packages required to host virtual machines. They run stateless, not writing any changes to disk unless explicitly required to.
Red Hat Enterprise Virtualization Hypervisor hosts can be added directly to, and configured by, the Red Hat Enterprise Virtualization Manager. Alternatively, a host can be configured locally to connect to the Manager; the Manager is then used only to approve the host for use in the environment.
Unlike Red Hat Enterprise Linux hosts, Red Hat Enterprise Virtualization Hypervisor hosts cannot be added to clusters that have been enabled for Gluster service for use as Red Hat Gluster Storage nodes.

Important

The Red Hat Enterprise Virtualization Hypervisor is a closed system. Use a Red Hat Enterprise Linux host if additional RPM packages are required for your environment.

7.3. Foreman Host Provider Hosts

Hosts provided by a Foreman host provider can also be used as virtualization hosts by the Red Hat Enterprise Virtualization Manager. After a Foreman host provider has been added to the Manager as an external provider, any hosts that it provides can be added to and used in Red Hat Enterprise Virtualization in the same way as Red Hat Enterprise Virtualization Hypervisor hosts and Red Hat Enterprise Linux hosts.

Important

Foreman host provider hosts are a Technology Preview feature. Technology Preview features are not fully supported under Red Hat Subscription Service Level Agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.

7.4. Red Hat Enterprise Linux Hosts

You can use a Red Hat Enterprise Linux 6.6 or 7 installation on capable hardware as a host. Red Hat Enterprise Virtualization supports hosts running Red Hat Enterprise Linux 6.6 or 7 Server AMD64/Intel 64 version with Intel VT or AMD-V extensions. To use your Red Hat Enterprise Linux machine as a host, you must also attach the Red Hat Enterprise Linux Server entitlement and the Red Hat Enterprise Virtualization entitlement.
Adding a host can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, creation of bridge, and a reboot of the host. Use the details pane to monitor the process as the host and management system establish a connection.

7.5. Host Tasks

7.5.1. Adding a Red Hat Enterprise Linux Host

Summary
A Red Hat Enterprise Linux host is based on a standard "basic" installation of Red Hat Enterprise Linux, with specific entitlements enabled. The physical host must be set up before you can add it to the Red Hat Enterprise Virtualization environment.

Important

Make sure that virtualization is enabled in your host's BIOS settings. For information on changing your host's BIOS settings, refer to your host's hardware documentation.

Procedure 7.1. Adding a Red Hat Enterprise Linux Host

  1. Click the Hosts resource tab to list the hosts in the results list.
  2. Click New to open the New Host window.
  3. Use the drop-down menus to select the Data Center and Host Cluster for the new host.
  4. Enter the Name, Address, and SSH Port of the new host.
  5. Select an authentication method to use with the host.
    • Enter the root user's password to use password authentication.
    • Copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication.
  6. You have now completed the mandatory steps to add a Red Hat Enterprise Linux host. Click the Advanced Parameters button to expand the advanced host settings.
    1. Optionally disable automatic firewall configuration.
    2. Optionally disable use of JSON protocol.

      Note

      With Red Hat Enterprise Virtualization 3.5, the communication model between the Manager and VDSM now uses JSON protocol, which reduces parsing time. As a result, the communication message format has changed from XML format to JSON format. Web requests have changed from synchronous HTTP requests to asynchronous TCP requests.
    3. Optionally add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically.
  7. You can configure the Power Management and SPM using the applicable tabs now; however, as these are not fundamental to adding a Red Hat Enterprise Linux host, they are not covered in this procedure.
  8. Click OK.
Result
The new host displays in the list of hosts with a status of Installing, and you can view the progress of the installation in the details pane. After installation is complete, the status updates to Reboot. The host must be activated for the status to change to Up.
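The public key option in step 5 amounts to appending the Manager's key to the host's authorized_keys file. The following is a minimal simulation, using a temporary directory in place of the host's real /root/.ssh and a placeholder key string rather than an actual Manager key.

```shell
# Simulate the host's /root/.ssh directory with a temporary directory.
host_ssh=$(mktemp -d)
chmod 700 "$host_ssh"

# Append the Manager's public key (placeholder shown here) to authorized_keys.
echo "ssh-rsa AAAAB3...placeholder ovirt-engine" >> "$host_ssh/authorized_keys"
chmod 600 "$host_ssh/authorized_keys"

# Confirm the key was added.
added=$(grep -c 'ovirt-engine' "$host_ssh/authorized_keys")
echo "keys added: $added"
rm -rf "$host_ssh"
```

On a real host the target file is /root/.ssh/authorized_keys, and the directory and file permissions (700 and 600) matter: sshd refuses keys from world-readable locations.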

7.5.2. Adding a Foreman Host Provider Host

Summary
The process for adding a Foreman host provider host is almost identical to that of adding a Red Hat Enterprise Linux host except for the method by which the host is identified in the Manager. The following procedure outlines how to add a host provided by a Foreman host provider.

Procedure 7.2. Adding a Foreman Host Provider Host

  1. Click the Hosts resource tab to list the hosts in the results list.
  2. Click New to open the New Host window.
  3. Use the drop-down menus to select the Data Center and Host Cluster for the new host.
  4. Select the Use Foreman Hosts Providers check box to display the options for adding a Foreman host provider host and select the provider from which the host is to be added.
  5. Select either Discovered Hosts or Provisioned Hosts.
    • Discovered Hosts (default option): Select the host, host group, and compute resources from the drop-down lists.
    • Provisioned Hosts: Select a host from the Providers Hosts drop-down list.
    Any details regarding the host that can be retrieved from the external provider are automatically set, and can be edited as desired.
  6. Enter the Name, Address, and SSH Port (Provisioned Hosts only) of the new host.
  7. Select an authentication method to use with the host.
    • Enter the root user's password to use password authentication.
    • Copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication (Provisioned Hosts only).
  8. You have now completed the mandatory steps to add a Foreman host provider host. Click the Advanced Parameters drop-down button to show the advanced host settings.
    1. Optionally disable automatic firewall configuration.
    2. Optionally disable the use of JSON protocol.
    3. Optionally add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically.
  9. You can configure the Power Management, SPM, Console, and Network Provider using the applicable tabs now; however, as these are not fundamental to adding a Foreman host provider host, they are not covered in this procedure.
  10. Click OK to add the host and close the window.
Result
The new host displays in the list of hosts with a status of Installing, and you can view the progress of the installation in the details pane. After installation is complete, the status will update to Reboot. The host must be activated for the status to change to Up.

7.5.3. Approving a Registered Hypervisor

Approve a Hypervisor that has been registered using the details of the Manager.

Procedure 7.3. Approving a Registered Hypervisor

  1. From the Administration Portal, click the Hosts tab, and then click the host to be approved. The host is currently listed with the status of Pending Approval.
  2. Click Approve to open the Edit and Approve Hosts window. You can use the window to specify a name for the Hypervisor, fetch its SSH fingerprint before approving it, and configure power management. For information on power management configuration, refer to Section 7.5.4.2, “Host Power Management Settings Explained”.
  3. Click OK. If you have not configured power management, you are prompted to confirm whether to proceed without doing so; click OK.

7.5.4. Explanation of Settings and Controls in the New Host and Edit Host Windows

7.5.4.1. Host General Settings Explained

These settings apply when editing the details of a host or adding new Red Hat Enterprise Linux hosts and Foreman host provider hosts.
The General settings table contains the information required on the General tab of the New Host or Edit Host window.

Table 7.1. General settings

Field Name
Description
Data Center
The data center to which the host belongs. Red Hat Enterprise Virtualization Hypervisor hosts cannot be added to Gluster-enabled clusters.
Host Cluster
The cluster to which the host belongs.
Use Foreman Hosts Providers
Select or clear this check box to view or hide options for adding hosts provided by Foreman hosts providers. The following options are also available:
Discovered Hosts
  • Discovered Hosts - A drop-down list that is populated with the names of Foreman hosts discovered by the engine.
  • Host Groups - A drop-down list of available host groups.
  • Compute Resources - A drop-down list of hypervisors to provide compute resources.
Provisioned Hosts
  • Providers Hosts - A drop-down list that is populated with the names of hosts provided by the selected external provider. The entries in this list are filtered in accordance with any search queries that have been entered in the Provider search filter.
  • Provider search filter - A text field that allows you to search for hosts provided by the selected external provider. This option is provider-specific; see provider documentation for details on forming search queries for specific providers. Leave this field blank to view all available hosts.
Name
The name of the host. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
Comment
A field for adding plain text, human-readable comments regarding the host.
Address
The IP address or resolvable hostname of the host.
Password
The password of the host's root user. This can only be given when you add the host; it cannot be edited afterwards.
SSH PublicKey
Copy the contents of the text box to the /root/.ssh/authorized_keys file on the host to use the Manager's SSH key instead of a password to authenticate with the host.
Automatically configure host firewall
When adding a new host, the Manager can open the required ports on the host's firewall. This is enabled by default. This is an Advanced Parameter.
Use JSON protocol
This is enabled by default. This is an Advanced Parameter.
SSH Fingerprint
You can fetch the host's SSH fingerprint, and compare it with the fingerprint you expect the host to return, ensuring that they match. This is an Advanced Parameter.
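The Name constraints described above can be expressed as a simple check. The function below is an illustrative sketch, not part of the product: at most 40 characters drawn from letters, numbers, hyphens, and underscores.

```shell
# Illustrative validation of the Name field constraints from Table 7.1.
is_valid_host_name() {
  local name=$1
  # Reject names longer than 40 characters or containing other characters.
  [ ${#name} -le 40 ] && [[ "$name" =~ ^[A-Za-z0-9_-]+$ ]]
}

is_valid_host_name "rhel-host_01" && echo "rhel-host_01: valid"
is_valid_host_name "bad name!" || echo "bad name!: invalid"
```

Note that uniqueness within the environment is enforced by the Manager and cannot be checked locally like this.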

7.5.4.2. Host Power Management Settings Explained

The Power Management settings table contains the information required on the Power Management tab of the New Host or Edit Host windows.

Table 7.2. Power Management Settings

Field Name
Description
Kdump integration
Prevents the host from fencing while performing a kernel crash dump, so that the crash dump is not interrupted. Kdump is available by default on new Red Hat Enterprise Linux 6.6 and 7.1 hosts and Hypervisors. If kdump is available on the host, but its configuration is not valid (the kdump service cannot be started), enabling Kdump integration will cause the host (re)installation to fail. If this is the case, see Section 7.7.4, “fence_kdump Advanced Configuration”.
Primary/Secondary
Prior to Red Hat Enterprise Virtualization 3.2, a host with power management configured only recognized one fencing agent. Fencing agents configured on version 3.1 and earlier, and single agents, are treated as primary agents. The secondary option is valid when a second agent is defined.
Concurrent
Valid when there are two fencing agents, for example for a dual-power host in which each power supply is controlled by its own fencing agent.
  • If this check box is selected, both fencing agents are used concurrently when a host is fenced. This means that both fencing agents have to respond to the Stop command for the host to be stopped; if one agent responds to the Start command, the host will go up.
  • If this check box is not selected, the fencing agents are used sequentially. This means that to stop or start a host, the primary agent is used first, and if it fails, the secondary agent is used.
Address
The address to access your host's power management device. Either a resolvable hostname or an IP address.
User Name
User account with which to access the power management device. You can set up a user on the device, or use the default user.
Password
Password for the user accessing the power management device.
Type
The type of power management device in your host.
Choose one of the following:
  • apc - APC MasterSwitch network power switch. Not for use with APC 5.x power switch devices.
  • apc_snmp - Use with APC 5.x power switch devices.
  • bladecenter - IBM Bladecenter Remote Supervisor Adapter.
  • cisco_ucs - Cisco Unified Computing System.
  • drac5 - Dell Remote Access Controller for Dell computers.
  • drac7 - Dell Remote Access Controller for Dell computers.
  • eps - ePowerSwitch 8M+ network power switch.
  • hpblade - HP BladeSystem.
  • ilo, ilo2, ilo3, ilo4 - HP Integrated Lights-Out.
  • ipmilan - Intelligent Platform Management Interface and Sun Integrated Lights Out Management devices.
  • rsa - IBM Remote Supervisor Adapter.
  • rsb - Fujitsu-Siemens RSB management interface.
  • wti - WTI Network Power Switch.
Port
The port number used by the power management device to communicate with the host.
Options
Power management device specific options. Enter these as 'key=value' or 'key'. See the documentation of your host's power management device for the options available.
For Red Hat Enterprise Linux 7 hosts, if you are using cisco_ucs as the power management device, you also need to append ssl_insecure=1 to the Options field.
Secure
Select this check box to allow the power management device to connect securely to the host. This can be done via ssh, ssl, or other authentication protocols, depending on which protocols the power management agent supports.
Source
Specifies whether the host will search within its cluster or data center for a fencing proxy. Use the Up and Down buttons to change the sequence in which the resources are used.
Disable policy control of power management
Power management is controlled by the Cluster Policy of the host's cluster. If power management is enabled and the defined low utilization value is reached, the Manager will power down the host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. Select this check box to disable policy control.
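The sequential fencing behavior described for the Concurrent setting can be sketched as follows. Here, fence_stop is a hypothetical stand-in for a real fencing agent call, with the primary attempt simulated as failing so that the fallback to the secondary agent is visible.

```shell
# Hypothetical fencing agent call: $1 = agent name, $2 = simulated exit
# status (0 = success). A real deployment would invoke the configured
# fence agent here instead.
fence_stop() {
  echo "issuing Stop via $1 agent"
  return "$2"
}

# Sequential mode: try the primary agent first; use the secondary only
# if the primary fails.
if fence_stop primary 1; then
  outcome="host stopped by primary agent"
elif fence_stop secondary 0; then
  outcome="host stopped by secondary agent"
else
  outcome="fencing failed"
fi
echo "$outcome"
```

In concurrent mode, by contrast, both agents are invoked together, and both must acknowledge the Stop command before the host is considered stopped.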

7.5.4.3. SPM Priority Settings Explained

The SPM settings table details the information required on the SPM tab of the New Host or Edit Host window.

Table 7.3. SPM settings

Field Name
Description
SPM Priority
Defines the likelihood that the host will be given the role of Storage Pool Manager (SPM). The options are Low, Normal, and High priority. Low priority means that there is a reduced likelihood of the host being assigned the role of SPM, and High priority means there is an increased likelihood. The default setting is Normal.
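As an illustration of this behavior, the sketch below picks an SPM candidate by priority. The numeric weights (Low=2, Normal=5, High=8) are invented for the example and are not the Manager's internal values.

```shell
# Map an SPM priority name to an illustrative numeric weight.
weight() { case $1 in Low) echo 2;; Normal) echo 5;; High) echo 8;; esac; }

# Hypothetical hosts and their configured SPM priorities.
best=""; best_w=-1
for entry in "hostA Normal" "hostB High" "hostC Low"; do
  set -- $entry
  w=$(weight "$2")
  # Keep the host with the highest weight seen so far.
  if [ "$w" -gt "$best_w" ]; then best=$1; best_w=$w; fi
done
echo "preferred SPM candidate: $best"
```

In practice the Manager also considers host availability, so a High-priority host that is down or in maintenance mode is skipped.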

7.5.4.4. Host Console Settings Explained

The Console settings table details the information required on the Console tab of the New Host or Edit Host window.

Table 7.4. Console settings

Field Name
Description
Override display address
Select this check box to override the display addresses of the host. This feature is useful when the hosts are defined by an internal IP address and are behind a NAT firewall. When a user connects to a virtual machine from outside the internal network, instead of returning the private address of the host on which the virtual machine is running, the host returns a public IP address or FQDN (which is resolved in the external network to the public IP address).
Display address
The display address specified here will be used for all virtual machines running on this host. The address must be in the format of a fully qualified domain name or IP.

7.5.5. Configuring Host Power Management Settings

Summary
Configure your host power management device settings to perform host life-cycle operations (stop, start, restart) from the Administration Portal.
It is necessary to configure host power management in order to utilize host high availability and virtual machine high availability.

Important

Ensure that your host is in maintenance mode before configuring power management settings. Otherwise, all running virtual machines on that host will be stopped ungracefully upon restarting the host, which can cause disruptions in production environments. A warning dialog will appear if you have not correctly set your host to maintenance mode.

Procedure 7.4. Configuring Power Management Settings

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Edit to open the Edit Host window.
  3. Click the Power Management tab to display the Power Management settings.
  4. Select the Enable Power Management check box to enable the fields.
  5. Select the Kdump integration check box to prevent the host from fencing while performing a kernel crash dump.

    Important

    When you enable Kdump integration on an existing host, the host must be reinstalled for kdump to be configured. See Section 7.5.12, “Reinstalling Virtualization Hosts”.
  6. The Primary option is selected by default if you are configuring a new power management device. If you are adding an additional device, set it to Secondary.
  7. Select the Concurrent check box to enable multiple fence agents to be used concurrently.
  8. Enter the Address, User Name, and Password of the power management device into the appropriate fields.
  9. Use the drop-down menu to select the Type of power management device.
  10. Enter the Port number used by the power management device to communicate with the host.
  11. Enter the Options for the power management device. Use a comma-separated list of 'key=value' or 'key'.
  12. Select the Secure check box to enable the power management device to connect securely to the host.
  13. Click Test to ensure the settings are correct.
  14. Click OK to save your settings and close the window.
Result
You have configured the power management settings for the host. The Power Management drop-down menu is now enabled in the Administration Portal.

Note

Power management is controlled by the Cluster Policy of the host's cluster. If power management is enabled and the defined low utilization value is reached, the Manager will power down the host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. Select the Disable policy control of power management check box if you do not want your host to perform these functions automatically.
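The Options format from step 11 of the procedure above splits on commas into 'key=value' or bare 'key' entries. The sketch below breaks down a sample string; the option names shown are examples only, so consult your fence agent's documentation for valid keys.

```shell
# Sample Options string: a comma-separated list of 'key=value' or 'key'
# entries. The keys here are illustrative fence agent options.
opts="ipport=6236,lanplus,privlvl=OPERATOR"

count=0
old_ifs=$IFS; IFS=','
for opt in $opts; do
  echo "option: $opt"
  count=$((count + 1))
done
IFS=$old_ifs
echo "total options: $count"
```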

7.5.6. Configuring Host Storage Pool Manager Settings

Summary
The Storage Pool Manager (SPM) is a management role given to one of the hosts in a data center to maintain access control over the storage domains. The SPM must always be available, and the SPM role will be assigned to another host if the SPM host becomes unavailable. As the SPM role uses some of the host's available resources, it is important to prioritize hosts that can afford the resources.
The Storage Pool Manager (SPM) priority setting of a host alters the likelihood of the host being assigned the SPM role: a host with high SPM priority will be assigned the SPM role before a host with low SPM priority.

Procedure 7.5. Configuring SPM settings

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Edit to open the Edit Host window.
  3. Click the SPM tab to display the SPM Priority settings.
  4. Use the radio buttons to select the appropriate SPM priority for the host.
  5. Click OK to save the settings and close the window.
Result
You have configured the SPM priority of the host.

7.5.7. Editing a Resource

Summary
Edit the properties of a resource.

Procedure 7.6. Editing a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click Edit to open the Edit window.
  3. Change the necessary properties and click OK.
Result
The new properties are saved to the resource. The Edit window will not close if a property field is invalid.

7.5.8. Approving Newly Added Red Hat Enterprise Virtualization Hypervisor Hosts

Summary
You have to install your Red Hat Enterprise Virtualization Hypervisor hosts before you can approve them in the Red Hat Enterprise Virtualization Manager. Read about installing Red Hat Enterprise Virtualization Hypervisors in the Red Hat Enterprise Virtualization Installation Guide.
Once installed, the Red Hat Enterprise Virtualization Hypervisor host is visible in the Administration Portal but not active. Approve it so that it can host virtual machines.

Procedure 7.7. Approving newly added Red Hat Enterprise Virtualization Hypervisor hosts

  1. In the Hosts tab, select the host you recently installed using the Red Hat Enterprise Virtualization Hypervisor host installation media. This host shows a status of Pending Approval.
  2. Click the Approve button.
Result
The host's status changes to Up and it can be used to run virtual machines.

Note

You can also add this host using the procedure in Section 7.5.1, “Adding a Red Hat Enterprise Linux Host”, which utilizes the Red Hat Enterprise Virtualization Hypervisor host's IP address and the password that was set on the RHEV-M screen.

7.5.9. Moving a Host to Maintenance Mode

Summary
Many common maintenance tasks, including network configuration and deployment of software updates, require that hosts be placed into maintenance mode. When a host is placed into maintenance mode the Red Hat Enterprise Virtualization Manager attempts to migrate all running virtual machines to alternative hosts.
The normal prerequisites for live migration apply; in particular, there must be at least one active host in the cluster with the capacity to run the migrated virtual machines.

Procedure 7.8. Moving a Host to Maintenance Mode

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Maintenance to open the Maintenance Host(s) confirmation window.
  3. Click OK to initiate maintenance mode.
Result
All running virtual machines are migrated to alternative hosts. The Status field of the host changes to Preparing for Maintenance, and finally Maintenance when the operation completes successfully.

7.5.10. Activating a Host from Maintenance Mode

Summary
A host that has been placed into maintenance mode, or recently added to the environment, must be activated before it can be used.

Procedure 7.9. Activating a Host from Maintenance Mode

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Activate.
Result
The host status changes to Unassigned, and finally Up when the operation is complete. Virtual machines can now run on the host.

7.5.11. Removing a Host

Remove a host from your virtualized environment.

Procedure 7.10. Removing a host

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Place the host into maintenance mode.
  3. Click Remove to open the Remove Host(s) confirmation window.
  4. Select the Force Remove check box if the host is part of a Red Hat Gluster Storage cluster and has volume bricks on it, or if the host is non-responsive.
  5. Click OK.
Your host has been removed from the environment and is no longer visible in the Hosts tab.

7.5.12. Reinstalling Virtualization Hosts

Reinstall Red Hat Enterprise Virtualization Hypervisors and Red Hat Enterprise Linux hosts from the Administration Portal. Use this procedure to reinstall a Hypervisor from the same version of the Hypervisor ISO image from which it is currently installed; for Red Hat Enterprise Linux hosts, the procedure reinstalls VDSM. Reinstallation includes stopping and restarting the host. If migration is enabled at the cluster level, virtual machines are automatically migrated to another host in the cluster; as a result, it is recommended that you reinstall a host at a time when its usage is relatively low.
The cluster to which the Hypervisor belongs must have sufficient memory reserve in order for its hosts to perform maintenance. Moving a host with live virtual machines to maintenance in a cluster that lacks sufficient memory causes the virtual machine migration operation to hang and then fail. You can reduce the memory usage of this operation by shutting down some or all virtual machines before moving the host to maintenance.

Important

Ensure that the cluster contains more than one host before performing a reinstall. Do not attempt to reinstall all the hosts at the same time, as one host must remain available to perform Storage Pool Manager (SPM) tasks.

Procedure 7.11. Reinstalling Red Hat Enterprise Virtualization Hypervisors and Red Hat Enterprise Linux Hosts

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Maintenance. If migration is enabled at cluster level, any virtual machines running on the host are migrated to other hosts. If the host is the SPM, this function is moved to another host. The status of the host changes as it enters maintenance mode.
  3. Click Reinstall to open the Install Host window.
  4. Click OK to reinstall the host.
Once successfully reinstalled, the host displays a status of Up. Any virtual machines that were migrated off the host can now be migrated back to it.

Important

After a Red Hat Enterprise Virtualization Hypervisor is successfully registered to the Red Hat Enterprise Virtualization Manager and then reinstalled, it may erroneously appear in the Administration Portal with the status of Install Failed. Click Activate, and the Hypervisor will change to an Up status and be ready for use.

7.5.13. Customizing Hosts with Tags

Summary
You can use tags to store information about your hosts. You can then search for hosts based on tags.

Procedure 7.12. Customizing hosts with tags

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Assign Tags to open the Assign Tags window.
    Assign Tags Window

    Figure 7.1. Assign Tags Window

  3. The Assign Tags window lists all available tags. Select the check boxes of applicable tags.
  4. Click OK to assign the tags and close the window.
Result
You have added extra, searchable information about your host as tags.

7.5.14. Changing the IP Address of a Red Hat Enterprise Virtualization Hypervisor (RHEV-H)

Procedure 7.13. Changing the IP Address of a Red Hat Enterprise Virtualization Hypervisor

  1. Place the Hypervisor into maintenance mode so the virtual machines are live migrated to another Hypervisor. See Section 7.5.9, “Moving a Host to Maintenance Mode” for more information. Alternatively, manually shut down or migrate all the virtual machines to another Hypervisor. See Manually Migrating Virtual Machines for more information.
  2. Click Remove, and click OK to remove the host from the Administration Portal.
  3. Log in to your Hypervisor as the admin user.
  4. Press F2, select OK, and press Enter to enter the rescue shell.
  5. Modify the IP address by editing the /etc/sysconfig/network-scripts/ifcfg-rhevm file. For example:
    # vi /etc/sysconfig/network-scripts/ifcfg-rhevm
    ...
    BOOTPROTO=none
    IPADDR=10.x.x.x
    PREFIX=24
    ...
  6. Restart the network service and verify that the IP address has been updated.
    • For Red Hat Enterprise Linux 6:
      # service network restart
      # ifconfig rhevm
    • For Red Hat Enterprise Linux 7:
      # systemctl restart network.service
      # ip addr show rhevm
  7. Type exit to exit the rescue shell and return to the text user interface.
  8. Re-register the host with the Manager. See the Installation Guide, Manually Adding a Hypervisor from the Administration Portal, for more information.
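For reference, a complete static configuration for the management interface might look like the following. This is a hypothetical example, not the contents of any particular host's file: every address is a placeholder, and your actual ifcfg-rhevm file will contain additional, host-specific entries.

```shell
# Hypothetical static ifcfg-rhevm configuration; all addresses below are
# placeholders and must be replaced with values for your environment.
DEVICE=rhevm
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.0.0.20
PREFIX=24
GATEWAY=10.0.0.1
```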

7.5.15. Changing the FQDN of a Host

Use the following procedure to change the fully qualified domain name of hypervisor hosts.

Procedure 7.14. Updating the FQDN of a Hypervisor Host

  1. Place the Hypervisor into maintenance mode so the virtual machines are live migrated to another Hypervisor. See Section 7.5.9, “Moving a Host to Maintenance Mode” for more information. Alternatively, manually shut down or migrate all the virtual machines to another Hypervisor. See Manually Migrating Virtual Machines for more information.
  2. Click Remove, and click OK to remove the host from the Administration Portal.
  3. Update the host name:
    • For RHEL-based hosts:
      • For Red Hat Enterprise Linux 6:
        Edit the /etc/sysconfig/network file, update the host name, and save.
        # vi /etc/sysconfig/network
        HOSTNAME=NEW_FQDN
      • For Red Hat Enterprise Linux 7:
        Use the hostnamectl tool to update the host name. For more options, see Red Hat Enterprise Linux 7 Networking Guide, Configure Host Names.
        # hostnamectl set-hostname NEW_FQDN
    • For Red Hat Enterprise Virtualization Hypervisors (RHEV-H):
      In the text user interface, select the Network screen, press the right arrow key, enter a new host name in the Hostname field, select <Save>, and press Enter.
  4. Reboot the host.
  5. Re-register the host with the Manager. See the Installation Guide, Manually Adding a Hypervisor from the Administration Portal, for more information.

7.6. Hosts and Networking

7.6.1. Refreshing Host Capabilities

Summary
When a network interface card is added to a host, the capabilities of the host must be refreshed to display that network interface card in the Manager.

Procedure 7.15. To Refresh Host Capabilities

  1. Use the resource tabs, tree mode, or the search function to find and select a host in the results list.
  2. Click the Refresh Capabilities button.
Result
The list of network interface cards in the Network Interfaces tab of the details pane for the selected host is updated. Any new network interface cards can now be used in the Manager.

7.6.2. Editing Host Network Interfaces and Assigning Logical Networks to Hosts

Summary
You can change the settings of physical host network interfaces, move the management network from one physical host network interface to another, and assign logical networks to physical host network interfaces.

Important

You cannot assign logical networks offered by external providers to physical host network interfaces; such networks are dynamically assigned to hosts as they are required by virtual machines.

Procedure 7.16. Editing Host Network Interfaces and Assigning Logical Networks to Hosts

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click the Network Interfaces tab in the details pane.
  3. Click the Setup Host Networks button to open the Setup Host Networks window.
    The Setup Host Networks window

    Figure 7.2. The Setup Host Networks window

  4. Attach a logical network to a physical host network interface by selecting and dragging the logical network into the Assigned Logical Networks area next to the physical host network interface.
    Alternatively, right-click the logical network and select a network interface from the drop-down menu.
  5. Configure the logical network:
    1. Hover your cursor over an assigned logical network and click the pencil icon to open the Edit Management Network window.
    2. Select a Boot Protocol from:
      • None,
      • DHCP, or
      • Static.
        If you selected Static, enter the IP, Subnet Mask, and the Gateway.
    3. To configure a network bridge, click the Custom Properties drop-down menu and select bridge_opts. Enter a valid key and value with the following syntax: [key]=[value]. Separate multiple entries with a whitespace character. The following keys are valid, with the values provided as examples. For more information on these parameters, see Section C.1, “Explanation of bridge_opts Parameters”.
      forward_delay=1500 
      gc_timer=3765 
      group_addr=1:80:c2:0:0:0 
      group_fwd_mask=0x0 
      hash_elasticity=4 
      hash_max=512
      hello_time=200 
      hello_timer=70 
      max_age=2000 
      multicast_last_member_count=2 
      multicast_last_member_interval=100 
      multicast_membership_interval=26000 
      multicast_querier=0 
      multicast_querier_interval=25500 
      multicast_query_interval=13000 
      multicast_query_response_interval=1000 
      multicast_query_use_ifaddr=0 
      multicast_router=1 
      multicast_snooping=1 
      multicast_startup_query_count=2 
      multicast_startup_query_interval=3125
    4. If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box. A logical network cannot be edited or moved to another interface until it is synchronized.

      Note

      Networks are not considered synchronized if they have one of the following conditions:
      • The VM Network is different from the physical host network.
      • The VLAN identifier is different from the physical host network.
      • A Custom MTU is set on the logical network, and is different from the physical host network.
  6. Select the Verify connectivity between Host and Engine check box to check network connectivity; this action will only work if the host is in maintenance mode.
  7. Select the Save network configuration check box to make the changes persistent when the environment is rebooted.
  8. Click OK.
Result
You have assigned logical networks to and configured a physical host network interface.

Note

If not all network interface cards for the host are displayed, click the Refresh Capabilities button to update the list of network interface cards available for that host.
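The bridge_opts syntax required by the Custom Properties field in Procedure 7.16 ([key]=[value] entries separated by whitespace) can be sanity-checked before it is entered. The following is a minimal shell sketch, not part of the product; the option string shown is illustrative only:

```shell
# Check that a bridge_opts string is a whitespace-separated list of
# key=value entries, as required by the Custom Properties field.
# The option values below are examples only.
opts="forward_delay=1500 hello_time=200 multicast_snooping=1"
valid=yes
for entry in $opts; do
  case "$entry" in
    ?*=?*) ;;          # key=value: acceptable
    *)     valid=no ;; # anything else fails the syntax
  esac
done
echo "$valid"
```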

7.6.3. Adding Multiple VLANs to a Single Network Interface Using Logical Networks

Summary
Multiple VLANs can be added to a single network interface to separate traffic on a single host.

Important

You must have created more than one logical network, all with the Enable VLAN tagging check box selected in the New Logical Network or Edit Logical Network windows.

Procedure 7.17. Adding Multiple VLANs to a Network Interface using Logical Networks

  1. Use the Hosts resource tab, tree mode, or the search function to find and select a host associated with the cluster to which your VLAN-tagged logical networks are assigned.
  2. Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.
  3. Click Setup Host Networks to open the Setup Host Networks window.
  4. Drag your VLAN-tagged logical networks into the Assigned Logical Networks area next to the physical network interface. The physical network interface can have multiple logical networks assigned due to the VLAN tagging.
    Setup Host Networks

    Figure 7.3. Setup Host Networks

  5. Edit the logical networks by hovering your cursor over an assigned logical network and clicking the pencil icon to open the Edit Network window.
    If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box.
    Select a Boot Protocol from:
    • None,
    • DHCP, or
    • Static.
      If you selected Static, enter the IP and Subnet Mask.
    Click OK.
  6. Select the Verify connectivity between Host and Engine check box to run a network check; this will only work if the host is in maintenance mode.
  7. Select the Save network configuration check box to make the changes persistent.
  8. Click OK.
Add the logical network to each host in the cluster by editing a NIC on each host. Once this is done, the network becomes operational.
Result
You have added multiple VLAN-tagged logical networks to a single interface. This process can be repeated multiple times, selecting and editing the same network interface each time on each host to add logical networks with different VLAN tags to a single network interface.

7.6.4. Adding Network Labels to Host Network Interfaces

Summary
Using network labels allows you to greatly simplify the administrative workload associated with assigning logical networks to host network interfaces.

Procedure 7.18. Adding Network Labels to Host Network Interfaces

  1. Use the Hosts resource tab, tree mode, or the search function to find and select a host associated with the cluster to which your VLAN-tagged logical networks are assigned.
  2. Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.
  3. Click Setup Host Networks to open the Setup Host Networks window.
  4. Edit a physical network interface by hovering your cursor over a physical network interface and clicking the pencil icon to open the Edit Interface window.
    The Edit Interface Window

    Figure 7.4. The Edit Interface Window

  5. Enter a name for the network label in the Label text field and use the + and - buttons to add or remove additional network labels.
  6. Click OK.
Result
You have added a network label to a host network interface. Any newly created logical networks with the same label will be automatically assigned to all host network interfaces with that label. Also, removing a label from a logical network will automatically remove that logical network from all host network interfaces with that label.

7.6.5. Bonds

7.6.5.1. Bonding Logic in Red Hat Enterprise Virtualization

The Red Hat Enterprise Virtualization Manager Administration Portal allows you to create bond devices using a graphical interface. There are several distinct bond creation scenarios, each with its own logic.
Two factors that affect bonding logic are:
  • Are either of the devices already carrying logical networks?
  • Are the devices carrying compatible logical networks? A single device cannot carry both VLAN tagged and non-VLAN tagged logical networks.

Table 7.5. Bonding Scenarios and Their Results

Bonding Scenario Result
NIC + NIC
The Create New Bond window is displayed, and you can configure a new bond device.
If the network interfaces carry incompatible logical networks, the bonding operation fails until you detach incompatible logical networks from the devices forming your new bond.
NIC + Bond
The NIC is added to the bond device. Logical networks carried by the NIC and the bond are all added to the resultant bond device if they are compatible.
If the bond devices carry incompatible logical networks, the bonding operation fails until you detach incompatible logical networks from the devices forming your new bond.
Bond + Bond
If the bond devices are not attached to logical networks, or are attached to compatible logical networks, a new bond device is created. It contains all of the network interfaces, and carries all logical networks, of the component bond devices. The Create New Bond window is displayed, allowing you to configure your new bond.
If the bond devices carry incompatible logical networks, the bonding operation fails until you detach incompatible logical networks from the devices forming your new bond.

7.6.5.2. Bonds

A bond is an aggregation of multiple network interface cards into a single software-defined device. Because bonded network interfaces combine the transmission capability of the network interface cards included in the bond to act as a single network interface, they can provide greater transmission speed than that of a single network interface card. Also, because all network interface cards in the bond must fail for the bond itself to fail, bonding provides increased fault tolerance. However, one limitation is that the network interface cards that form a bonded network interface must be of the same make and model to ensure that all network interface cards in the bond support the same options and modes.
The packet dispersal algorithm for a bond is determined by the bonding mode used.

Important

Modes 1, 2, 3 and 4 support both virtual machine (bridged) and non-virtual machine (bridgeless) network types. Modes 0, 5 and 6 support non-virtual machine (bridgeless) networks only.
Bonding Modes
Red Hat Enterprise Virtualization uses Mode 4 by default, but supports the following common bonding modes:
Mode 0 (round-robin policy)
Transmits packets through network interface cards in sequential order. Packets are transmitted in a loop that begins with the first available network interface card in the bond and ends with the last available network interface card in the bond. All subsequent loops then start with the first available network interface card. Mode 0 offers fault tolerance and balances the load across all network interface cards in the bond. However, Mode 0 cannot be used in conjunction with bridges, and is therefore not compatible with virtual machine logical networks.
Mode 1 (active-backup policy)
Sets all network interface cards to a backup state while one network interface card remains active. In the event of failure in the active network interface card, one of the backup network interface cards replaces that network interface card as the only active network interface card in the bond. The MAC address of the bond in Mode 1 is visible on only one port to prevent any confusion that might otherwise be caused if the MAC address of the bond changed to reflect that of the active network interface card. Mode 1 provides fault tolerance and is supported in Red Hat Enterprise Virtualization.
Mode 2 (XOR policy)
Selects the network interface card through which to transmit packets based on the result of an XOR operation on the source and destination MAC addresses modulo network interface card slave count. This calculation ensures that the same network interface card is selected for each destination MAC address used. Mode 2 provides fault tolerance and load balancing and is supported in Red Hat Enterprise Virtualization.
Mode 3 (broadcast policy)
Transmits all packets to all network interface cards. Mode 3 provides fault tolerance and is supported in Red Hat Enterprise Virtualization.
Mode 4 (IEEE 802.3ad policy)
Creates aggregation groups in which the interfaces share the same speed and duplex settings. Mode 4 uses all network interface cards in the active aggregation group in accordance with the IEEE 802.3ad specification and is supported in Red Hat Enterprise Virtualization.
Mode 5 (adaptive transmit load balancing policy)
Ensures that the distribution of outgoing traffic accounts for the load on each network interface card in the bond, and that the current network interface card receives all incoming traffic. If the network interface card assigned to receive traffic fails, another network interface card is assigned to the role of receiving incoming traffic. Mode 5 cannot be used in conjunction with bridges and is therefore not compatible with virtual machine logical networks.
Mode 6 (adaptive load balancing policy)
Combines Mode 5 (adaptive transmit load balancing policy) with receive load balancing for IPv4 traffic, without any special switch requirements. ARP negotiation is used to balance the receive load. Mode 6 cannot be used in conjunction with bridges and is therefore not compatible with virtual machine logical networks.
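As an illustration of the Mode 2 (XOR policy) selection logic described above, the default layer 2 hash reduces to an XOR of the final octets of the source and destination MAC addresses, taken modulo the number of slaves. The following simplified shell sketch uses example octet values; it is an approximation of the kernel's calculation, not the driver code itself:

```shell
# Simplified Mode 2 slave selection: XOR the last octet of the source
# and destination MAC addresses, modulo the slave count.
# The octets and slave count below are example values.
src_last=0x1e   # last octet of the source MAC address
dst_last=0x7a   # last octet of the destination MAC address
slaves=2        # number of NICs in the bond
echo $(( (src_last ^ dst_last) % slaves ))
```

Because the hash depends only on the MAC addresses, the same slave is always chosen for a given source/destination pair, which is what keeps per-destination packet ordering intact.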

7.6.5.3. Creating a Bond Device Using the Administration Portal

Summary
You can bond compatible network devices together. This type of configuration can increase available bandwidth and reliability. You can bond multiple network interfaces, pre-existing bond devices, and combinations of the two.
A bond cannot carry both VLAN tagged and non-VLAN traffic.

Procedure 7.19. Creating a Bond Device using the Administration Portal

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.
  3. Click Setup Host Networks to open the Setup Host Networks window.
  4. Select and drag one of the devices over the top of another device and drop it to open the Create New Bond window. Alternatively, right-click the device and select another device from the drop-down menu.
    If the devices are incompatible, for example one is VLAN tagged and the other is not, the bond operation fails with a suggestion on how to correct the compatibility issue.
  5. Select the Bond Name and Bonding Mode from the drop-down menus.
    Bonding modes 1, 2, 4, and 5 can be selected. Any other mode can be configured using the Custom option.
  6. Click OK to create the bond and close the Create New Bond window.
  7. Assign a logical network to the newly created bond device.
  8. Optionally choose to Verify connectivity between Host and Engine and Save network configuration.
  9. Click OK to accept the changes and close the Setup Host Networks window.
Result
Your network devices are linked into a bond device and can be edited as a single interface. The bond device is listed in the Network Interfaces tab of the details pane for the selected host.
Bonding must be enabled for the ports of the switch used by the host. The process by which bonding is enabled is slightly different for each switch; consult the manual provided by your switch vendor for detailed information on how to enable bonding.

7.6.5.4. Example Uses of Custom Bonding Options with Host Interfaces

You can create customized bond devices by selecting Custom from the Bonding Mode of the Create New Bond window. The following examples should be adapted for your needs. For a comprehensive list of bonding options and their descriptions, see the Linux Ethernet Bonding Driver HOWTO on Kernel.org.

Example 7.1. xmit_hash_policy

This option defines the transmit load balancing policy for bonding modes 2 and 4. For example, if the majority of your traffic is between many different IP addresses, you may want to set a policy to balance by IP address. You can set this load-balancing policy by selecting a Custom bonding mode, and entering the following into the text field:
mode=4 xmit_hash_policy=layer2+3

Example 7.2. ARP Monitoring

The ARP monitor is useful for systems that cannot, or do not, report link state properly via ethtool. Set an arp_interval on the bond device of the host by selecting a Custom bonding mode, and entering the following into the text field:
mode=1 arp_interval=1 arp_ip_target=192.168.0.2

Example 7.3. Primary

You may want to designate a NIC with higher throughput as the primary interface in a bond device. Designate which NIC is primary by selecting a Custom bonding mode, and entering the following into the text field:
mode=1 primary=eth0
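Custom option strings such as those in the examples above are space-separated key=value pairs. The following shell sketch, which is not a supported tool, simply shows how such a string breaks down; the string is taken from Example 7.3:

```shell
# Split a Custom bonding option string into its key=value pairs.
# The string below is the one used in Example 7.3.
custom="mode=1 primary=eth0"
for pair in $custom; do
  key=${pair%%=*}    # text before the first '='
  value=${pair#*=}   # text after the first '='
  echo "$key -> $value"
done
```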

7.6.6. Saving a Host Network Configuration

Summary
One of the options when configuring a host network is to save the configuration as you apply it, making the changes persistent.
Any changes made to the host network configuration will be temporary if you did not select the Save network configuration check box in the Setup Host Networks window.
Save the host network configuration to make it persistent.

Procedure 7.20. Saving a host network configuration

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click the Network Interfaces tab on the Details pane to list the NICs on the host, their address, and other specifications.
  3. Click the Save Network Configuration button.
  4. The host network configuration is saved and the following message is displayed on the task bar: "Network changes were saved on host [Hostname]."
Result
The host's network configuration is saved persistently and will survive reboots.

Note

Saving the host network configuration also updates the list of available network interfaces for the host. This behavior is similar to that of the Refresh Capabilities button.

7.7. Host Resilience

7.7.1. Host High Availability

The Red Hat Enterprise Virtualization Manager uses fencing to keep the hosts in a cluster responsive. A Non Responsive host is different from a Non Operational host. The Manager can communicate with a Non Operational host, but that host has an incorrect configuration, for example a missing logical network. The Manager cannot communicate with a Non Responsive host at all.
If a host with a power management device loses communication with the Manager, it can be fenced (rebooted) from the Administration Portal. All the virtual machines running on that host are stopped, and highly available virtual machines are started on a different host.
All power management operations are done using a proxy host, as opposed to directly by the Red Hat Enterprise Virtualization Manager. At least two hosts are required for power management operations.
Fencing allows a cluster to react to unexpected host failures as well as enforce power saving, load balancing, and virtual machine availability policies. You should configure the fencing parameters for your host's power management device and test their correctness from time to time.
Hosts can be fenced automatically using the power management parameters, or manually by right-clicking on a host and using the options on the menu. In a fencing operation, an unresponsive host is rebooted, and if the host does not return to an active status within a prescribed time, it remains unresponsive pending manual intervention and troubleshooting.
If the host is required to run virtual machines that are highly available, power management must be enabled and configured.

7.7.2. Power Management by Proxy in Red Hat Enterprise Virtualization

The Red Hat Enterprise Virtualization Manager does not communicate directly with fence agents. Instead, the Manager uses a proxy to send power management commands to a host power management device. The Manager uses VDSM to execute power management device actions, so another host in the environment is used as a fencing proxy.
You can select between:
  • Any host in the same cluster as the host requiring fencing.
  • Any host in the same data center as the host requiring fencing.
A viable fencing proxy host has a status of either UP or Maintenance.

7.7.3. Setting Fencing Parameters on a Host

The parameters for host fencing are set using the Power Management fields on the New Host or Edit Host windows. Power management enables the system to fence a troublesome host using an additional interface such as a Remote Access Card (RAC).
All power management operations are done using a proxy host, as opposed to directly by the Red Hat Enterprise Virtualization Manager. At least two hosts are required for power management operations.

Procedure 7.21. Setting fencing parameters on a host

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Edit to open the Edit Host window.
  3. Click the Power Management tab.
    Power Management Settings

    Figure 7.5. Power Management Settings

  4. Select the Enable Power Management check box to enable the fields.
  5. Select the Kdump integration check box to prevent the host from fencing while performing a kernel crash dump.

    Important

    When you enable Kdump integration on an existing host, the host must be reinstalled for kdump to be configured. See Section 7.5.12, “Reinstalling Virtualization Hosts”.
  6. The Primary option is selected by default when you configure the first power management device for a host. If you are adding an additional device, set it to Secondary.
  7. Select the Concurrent check box to enable multiple fence agents to be used concurrently.
  8. Enter the Address, User Name, and Password of the power management device.
  9. Select the power management device Type from the drop-down menu.

    Note

    With the Red Hat Enterprise Virtualization 3.5 release, you now have the option to use a custom power management device. For more information on how to set up a custom power management device, see https://access.redhat.com/articles/1238743.
  10. Enter the Port number used by the power management device to communicate with the host.
  11. Enter the specific Options of the power management device. Use a comma-separated list of 'key=value' or 'key' entries.
  12. Click the Test button to test the power management device. Test Succeeded, Host Status is: on will display upon successful verification.

    Warning

    Power management parameters (user ID, password, options, and so on) are tested by the Red Hat Enterprise Virtualization Manager only during setup, and manually after that. If you choose to ignore alerts about incorrect parameters, or if the parameters are changed on the power management hardware without the corresponding change in the Red Hat Enterprise Virtualization Manager, fencing is likely to fail when it is most needed.
  13. Click OK to save the changes and close the window.
Result
You are returned to the list of hosts. Note that the exclamation mark next to the host's name has now disappeared, signifying that power management has been successfully configured.
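The Options field in Procedure 7.21 takes a comma-separated list of 'key=value' or bare 'key' entries. The following shell sketch illustrates that syntax; the entry names are examples only, since the keys actually accepted depend on the fence agent in use:

```shell
# Validate a comma-separated Options string of 'key=value' or bare
# 'key' entries. The entries shown are illustrative; consult your
# fence agent's documentation for the keys it accepts.
options="lanplus,power_wait=4"
ok=yes
oldifs=$IFS
IFS=','
for entry in $options; do
  case "$entry" in
    ?*=?*) ;;        # key=value entry
    ?*)    ;;        # bare key entry
    *)     ok=no ;;  # empty entry, e.g. a trailing comma
  esac
done
IFS=$oldifs
echo "$ok"
```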

7.7.4. fence_kdump Advanced Configuration

kdump
The kdump service is available by default on new Red Hat Enterprise Linux 6.6 and 7.1 hosts and Hypervisors. On older hosts, Kdump integration cannot be enabled; these hosts must be upgraded in order to use this feature.
Select a host to view the status of the kdump service in the General tab of the details pane:
  • Enabled: kdump is configured properly and the kdump service is running.
  • Disabled: the kdump service is not running (in this case kdump integration will not work properly).
  • Unknown: displayed only for hosts with an older VDSM version that does not report kdump status.
For more information on installing and using kdump, see the Kernel Crash Dump Guide for Red Hat Enterprise Linux 7, or the kdump Crash Recovery Service section of the Deployment Guide for Red Hat Enterprise Linux 6.
fence_kdump
Enabling Kdump integration in the Power Management tab of the New Host or Edit Host window configures a standard fence_kdump setup. If the environment's network configuration is simple and the Manager's FQDN is resolvable on all hosts, the default fence_kdump settings are sufficient for use.
However, there are some cases where advanced configuration of fence_kdump is necessary. Environments with more complex networking may require manual changes to the configuration of the Manager, fence_kdump listener, or both. For example, if the Manager's FQDN is not resolvable on all hosts with Kdump integration enabled, you can set a proper host name or IP address using engine-config:
engine-config -s FenceKdumpDestinationAddress=A.B.C.D
The following example cases may also require configuration changes:
  • The Manager has two NICs, where one of these is public-facing, and the second is the preferred destination for fence_kdump messages.
  • You need to execute the fence_kdump listener on a different IP or port.
  • You need to set a custom interval for fence_kdump notification messages, to prevent possible packet loss.
Customized fence_kdump detection settings are recommended for advanced users only, as changes to the default configuration are only necessary in more complex networking setups. For configuration options for the fence_kdump listener see Section 7.7.4.1, “fence_kdump listener Configuration”. For configuration of kdump on the Manager see Section 7.7.4.2, “Configuring fence_kdump on the Manager”.

7.7.4.1. fence_kdump listener Configuration

Edit the configuration of the fence_kdump listener. This is only necessary in cases where the default configuration is not sufficient.

Procedure 7.22. Manually Configuring the fence_kdump Listener

  1. Create a new file (for example, my-fence-kdump.conf) in /etc/ovirt-engine/ovirt-fence-kdump-listener.conf.d/.
  2. Enter your customization with the syntax OPTION=value and save the file.

    Important

    The edited values must also be changed in engine-config as outlined in the fence_kdump Listener Configuration Options table in Section 7.7.4.2, “Configuring fence_kdump on the Manager”.
  3. Restart the fence_kdump listener:
    # service ovirt-fence-kdump-listener restart
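Putting steps 1 and 2 together, a drop-in file might contain entries like the following. All values here are illustrative, not recommendations; see Table 7.6 for each option's meaning and the constraints it must satisfy:

```shell
# Example contents of a drop-in file in
# /etc/ovirt-engine/ovirt-fence-kdump-listener.conf.d/
# (for example, my-fence-kdump.conf). Values are illustrative only.
LISTENER_ADDRESS=10.0.0.5
LISTENER_PORT=7410
HEARTBEAT_INTERVAL=15
```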
The following options can be customized if required:

Table 7.6. fence_kdump Listener Configuration Options

Variable Description Default Note
LISTENER_ADDRESS Defines the IP address to receive fence_kdump messages on. 0.0.0.0 If the value of this parameter is changed, it must match the value of FenceKdumpDestinationAddress in engine-config.
LISTENER_PORT Defines the port to receive fence_kdump messages on. 7410 If the value of this parameter is changed, it must match the value of FenceKdumpDestinationPort in engine-config.
HEARTBEAT_INTERVAL Defines the interval in seconds of the listener's heartbeat updates. 30 If the value of this parameter is changed, it must be half the size or smaller than the value of FenceKdumpListenerTimeout in engine-config.
SESSION_SYNC_INTERVAL Defines the interval in seconds to synchronize the listener's host kdumping sessions in memory to the database. 5 If the value of this parameter is changed, it must be half the size or smaller than the value of KdumpStartedTimeout in engine-config.
REOPEN_DB_CONNECTION_INTERVAL Defines the interval in seconds to reopen the database connection which was previously unavailable. 30 -
KDUMP_FINISHED_TIMEOUT Defines the maximum timeout in seconds after the last received message from kdumping hosts after which the host kdump flow is marked as FINISHED. 60 If the value of this parameter is changed, it must be double the size or higher than the value of FenceKdumpMessageInterval in engine-config.
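The interval relationships described in the Note column of Table 7.6 can be checked arithmetically before applying a change. The following shell sketch uses the documented default values; substitute your own values to verify a custom configuration:

```shell
# Verify the documented timing constraints using the default values.
# HEARTBEAT_INTERVAL must be at most half of FenceKdumpListenerTimeout;
# KDUMP_FINISHED_TIMEOUT must be at least double FenceKdumpMessageInterval;
# SESSION_SYNC_INTERVAL must be at most half of KdumpStartedTimeout.
HEARTBEAT_INTERVAL=30
SESSION_SYNC_INTERVAL=5
KDUMP_FINISHED_TIMEOUT=60
FenceKdumpListenerTimeout=90
FenceKdumpMessageInterval=5
KdumpStartedTimeout=30
ok=yes
[ $(( HEARTBEAT_INTERVAL * 2 )) -le "$FenceKdumpListenerTimeout" ] || ok=no
[ $(( FenceKdumpMessageInterval * 2 )) -le "$KDUMP_FINISHED_TIMEOUT" ] || ok=no
[ $(( SESSION_SYNC_INTERVAL * 2 )) -le "$KdumpStartedTimeout" ] || ok=no
echo "$ok"
```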

7.7.4.2. Configuring fence_kdump on the Manager

Edit the Manager's kdump configuration. This is only necessary in cases where the default configuration is not sufficient. The current configuration values can be found using:
# engine-config -g OPTION

Procedure 7.23. Manually Configuring Kdump with engine-config

  1. Edit kdump's configuration using the engine-config command:
    # engine-config -s OPTION=value

    Important

    The edited values must also be changed in the fence_kdump listener configuration file as outlined in the Kdump Configuration Options table. See Section 7.7.4.1, “fence_kdump listener Configuration”.
  2. Restart the ovirt-engine service:
    # service ovirt-engine restart
  3. Reinstall all hosts with Kdump integration enabled, if required (see the table below).
The following options can be configured using engine-config:

Table 7.7. Kdump Configuration Options

Variable Description Default Note
FenceKdumpDestinationAddress Defines the hostname(s) or IP address(es) to send fence_kdump messages to. If empty, the Manager's FQDN is used. Empty string (Manager FQDN is used) If the value of this parameter is changed, it must match the value of LISTENER_ADDRESS in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled.
FenceKdumpDestinationPort Defines the port to send fence_kdump messages to. 7410 If the value of this parameter is changed, it must match the value of LISTENER_PORT in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled.
FenceKdumpMessageInterval Defines the interval in seconds between messages sent by fence_kdump. 5 If the value of this parameter is changed, it must be no more than half the value of KDUMP_FINISHED_TIMEOUT in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled.
FenceKdumpListenerTimeout Defines the maximum timeout in seconds since the last heartbeat to consider the fence_kdump listener alive. 90 If the value of this parameter is changed, it must be at least double the value of HEARTBEAT_INTERVAL in the fence_kdump listener configuration file.
KdumpStartedTimeout Defines the maximum timeout in seconds to wait until the first message from the kdumping host is received (to detect that the host kdump flow has started). 30 If the value of this parameter is changed, it must be at least double the values of both SESSION_SYNC_INTERVAL in the fence_kdump listener configuration file and FenceKdumpMessageInterval.
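
The cross-file constraints in the two tables can be checked mechanically before you apply new values. The following sketch is not a product tool; the values shown are the documented defaults, supplied as illustrative shell variables:

```shell
# Proposed values: engine-config options and their listener-file counterparts.
FENCE_KDUMP_MESSAGE_INTERVAL=5     # engine-config FenceKdumpMessageInterval
FENCE_KDUMP_LISTENER_TIMEOUT=90    # engine-config FenceKdumpListenerTimeout
KDUMP_STARTED_TIMEOUT=30           # engine-config KdumpStartedTimeout
HEARTBEAT_INTERVAL=30              # listener HEARTBEAT_INTERVAL
SESSION_SYNC_INTERVAL=5            # listener SESSION_SYNC_INTERVAL
KDUMP_FINISHED_TIMEOUT=60          # listener KDUMP_FINISHED_TIMEOUT

# check NAME ACTUAL HALF: ACTUAL must be at least double HALF.
check() {
    if [ "$2" -ge $(( $3 * 2 )) ]; then echo "$1: OK"; else echo "$1: VIOLATION"; fi
}

check "KDUMP_FINISHED_TIMEOUT vs FenceKdumpMessageInterval" \
      "$KDUMP_FINISHED_TIMEOUT" "$FENCE_KDUMP_MESSAGE_INTERVAL"
check "FenceKdumpListenerTimeout vs HEARTBEAT_INTERVAL" \
      "$FENCE_KDUMP_LISTENER_TIMEOUT" "$HEARTBEAT_INTERVAL"
check "KdumpStartedTimeout vs SESSION_SYNC_INTERVAL" \
      "$KDUMP_STARTED_TIMEOUT" "$SESSION_SYNC_INTERVAL"
```

With the defaults shown, all three checks print OK.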

7.7.5. Soft-Fencing Hosts

Sometimes a host becomes non-responsive due to an unexpected problem: though VDSM is unable to respond to requests, the virtual machines that depend on VDSM remain alive and accessible. In this situation, restarting VDSM returns it to a responsive state and resolves the issue.
Red Hat Enterprise Virtualization 3.3 introduced soft-fencing over SSH. Prior to Red Hat Enterprise Virtualization 3.3, non-responsive hosts could be fenced only by external fencing devices. The fencing process has since been expanded to include SSH soft fencing, a process whereby the Manager attempts to restart VDSM via SSH on non-responsive hosts. If the Manager fails to restart VDSM via SSH, responsibility for fencing falls to the external fencing agent, if one has been configured.
Soft-fencing over SSH works as follows. Fencing must be configured and enabled on the host, and a valid proxy host (a second host, in an UP state, in the data center) must exist. When the connection between the Manager and the host times out, the following happens:
  1. On the first network failure, the status of the host changes to "connecting".
  2. The Manager then either makes three attempts to ask VDSM for its status, or waits for an interval determined by the load on the host. The interval is calculated from the configuration values as: TimeoutToResetVdsInSeconds (default 60 seconds) + [DelayResetPerVmInSeconds (default 0.5 seconds)] × (the number of virtual machines running on the host) + [DelayResetForSpmInSeconds (default 20 seconds)] × 1 if the host runs as the Storage Pool Manager (SPM), or × 0 if it does not. To give VDSM the maximum amount of time to respond, the Manager chooses the longer of these two options (three attempts to retrieve the status of VDSM, or the interval determined by the formula).
  3. If the host does not respond when that interval has elapsed, vdsm restart is executed via SSH.
  4. If vdsm restart does not succeed in re-establishing the connection between the host and the Manager, the status of the host changes to Non Responsive and, if power management is configured, fencing is handed off to the external fencing agent.
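
The interval formula in step 2 can be illustrated with a short calculation. This sketch uses the default configuration values and an assumed host running ten virtual machines as the SPM; it is not a Manager command:

```shell
TIMEOUT_TO_RESET=60   # TimeoutToResetVdsInSeconds (default)
DELAY_PER_VM=0.5      # DelayResetPerVmInSeconds (default)
DELAY_FOR_SPM=20      # DelayResetForSpmInSeconds (default)
RUNNING_VMS=10        # assumed number of running virtual machines
IS_SPM=1              # 1 if the host runs as SPM, otherwise 0

awk -v t="$TIMEOUT_TO_RESET" -v d="$DELAY_PER_VM" -v s="$DELAY_FOR_SPM" \
    -v n="$RUNNING_VMS" -v p="$IS_SPM" \
    'BEGIN { printf "Soft-fencing interval: %g seconds\n", t + d * n + s * p }'
```

For this host the interval is 60 + 0.5 × 10 + 20 × 1 = 85 seconds; the Manager waits for the longer of this interval and its three status attempts before restarting VDSM over SSH.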

Note

Soft-fencing over SSH can be executed on hosts that have no power management configured. This is distinct from "fencing": fencing can be executed only on hosts that have power management configured.

7.7.6. Using Host Power Management Functions

Summary
When power management has been configured for a host, you can access a number of options from the Administration Portal interface. While each power management device has its own customizable options, they all support the basic options to start, stop, and restart a host.

Procedure 7.24. Using Host Power Management Functions

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click the Power Management drop-down menu.
  3. Select one of the following options:
    • Restart: This option stops the host and waits until the host's status changes to Down. When the agent has verified that the host is down, the highly available virtual machines are restarted on another host in the cluster. The agent then restarts this host. When the host is ready for use, its status displays as Up.
    • Start: This option starts the host and lets it join a cluster. When it is ready for use, its status displays as Up.
    • Stop: This option powers off the host. Before using this option, ensure that the virtual machines running on the host have been migrated to other hosts in the cluster. Otherwise, the virtual machines will crash and only the highly available virtual machines will be restarted on another host. When the host has been stopped, its status displays as Non-Operational.

    Important

    When two fencing agents are defined on a host, they can be used concurrently or sequentially. For concurrent agents, both agents must respond to the Stop command for the host to be stopped, while the host starts when either agent responds to the Start command. For sequential agents, the primary agent is used first to start or stop a host; if it fails, the secondary agent is used.
  4. Selecting one of the above options opens a confirmation window. Click OK to confirm and proceed.
Result
The selected action is performed.

7.7.7. Manually Fencing or Isolating a Non Responsive Host

Summary
If a host unexpectedly goes into a non-responsive state, for example due to a hardware failure, it can significantly affect the performance of the environment. If you do not have a power management device, or it is incorrectly configured, you can reboot the host manually.

Warning

Do not use the Confirm host has been rebooted option unless you have manually rebooted the host. Using this option while the host is still running can lead to virtual machine image corruption.

Procedure 7.25. Manually fencing or isolating a non-responsive host

  1. On the Hosts tab, select the host. The status must display as non-responsive.
  2. Manually reboot the host. This could mean physically entering the lab and rebooting the host.
  3. In the Administration Portal, right-click the host entry and select the Confirm Host has been rebooted option.
  4. A message displays prompting you to ensure that the host has been shut down or rebooted. Select the Approve Operation check box and click OK.
Result
You have manually rebooted your host, allowing highly available virtual machines to be started on active hosts. You confirmed your manual fencing action in the Administration Portal, and the host is back online.

7.8. Hosts and Permissions

7.8.1. Managing System Permissions for a Host

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A host administrator is a system administration role for a specific host only. This is useful in clusters with multiple hosts, where each host requires a system administrator. You can use the Configure button in the header bar to assign a host administrator for all hosts in the environment.
The host administrator role permits the following actions:
  • Edit the configuration of the host.
  • Set up the logical networks.
  • Remove the host.
You can also change the system administrator of a host by removing the existing system administrator and adding the new system administrator.

7.8.2. Host Administrator Roles Explained

Host Permission Roles
The table below describes the administrator roles and privileges applicable to host administration.

Table 7.8. Red Hat Enterprise Virtualization System Administrator Roles

Role Privileges Notes
HostAdmin Host Administrator Can configure, manage, and remove a specific host. Can also perform network-related operations on a specific host.

7.8.3. Assigning an Administrator or User Role to a Resource

Summary
Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 7.26. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add to open the Add Permission to User window.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down menu.
  6. Click OK to assign the role and close the window.
Result
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

7.8.4. Removing an Administrator or User Role from a Resource

Summary
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 7.27. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK to remove the user role.
Result
You have removed the user's role, and the associated permissions, from the resource.

Chapter 8. Storage

Red Hat Enterprise Virtualization uses a centralized storage system for virtual machine disk images, ISO files and snapshots. Storage networking can be implemented using:
  • Network File System (NFS)
  • GlusterFS exports
  • Other POSIX compliant file systems
  • Internet Small Computer System Interface (iSCSI)
  • Local storage attached directly to the virtualization hosts
  • Fibre Channel Protocol (FCP)
  • Parallel NFS (pNFS)
Setting up storage is a prerequisite for a new data center because a data center cannot be initialized unless storage domains are attached and activated.
As a Red Hat Enterprise Virtualization system administrator, you need to create, configure, attach and maintain storage for the virtualized enterprise. You should be familiar with the storage types and their use. Read your storage array vendor's guides, and refer to the Red Hat Enterprise Linux Storage Administration Guide for more information on the concepts, protocols, requirements or general usage of storage.
Red Hat Enterprise Virtualization enables you to assign and manage storage using the Administration Portal's Storage tab. The Storage results list displays all the storage domains, and the details pane shows general information about the domain.
To add storage domains you must be able to successfully access the Administration Portal, and there must be at least one host connected with a status of Up.
Red Hat Enterprise Virtualization has three types of storage domains:
  • Data Domain: A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center. In addition, snapshots of the virtual machines are also stored in the data domain.
    The data domain cannot be shared across data centers. Data domains of multiple types (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center, provided they are all shared, rather than local, domains.
    You must attach a data domain to a data center before you can attach domains of other types to it.
  • ISO Domain: ISO domains store ISO files (or logical CDs) used to install and boot operating systems and applications for the virtual machines. An ISO domain removes the data center's need for physical media. An ISO domain can be shared across different data centers. ISO domains can only be NFS-based. Only one ISO domain can be added to a data center.
  • Export Domain: Export domains are temporary storage repositories that are used to copy and move images between data centers and Red Hat Enterprise Virtualization environments. Export domains can be used to back up virtual machines. An export domain can be moved between data centers; however, it can be active in only one data center at a time. Export domains can only be NFS-based. Only one export domain can be added to a data center.

Important

Do not begin configuring and attaching storage for your Red Hat Enterprise Virtualization environment until you have determined the storage needs of your data center(s).

8.1. Understanding Storage Domains

A storage domain is a collection of images that have a common storage interface. A storage domain contains complete images of templates and virtual machines (including snapshots), or ISO files. A storage domain can be made of either block devices (SAN - iSCSI or FCP) or a file system (NAS - NFS, GlusterFS, or other POSIX compliant file systems).
On NFS, all virtual disks, templates, and snapshots are files.
On SAN (iSCSI/FCP), each virtual disk, template or snapshot is a logical volume. Block devices are aggregated into a logical entity called a volume group, and then divided by LVM (Logical Volume Manager) into logical volumes for use as virtual hard disks. See Red Hat Enterprise Linux Logical Volume Manager Administration Guide for more information on LVM.
Virtual disks can have one of two formats, either Qcow2 or RAW. The allocation policy can be either sparse or preallocated. Snapshots are always sparse, but can be taken for disks created in either format.
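
The difference between sparse and preallocated storage can be demonstrated on any Linux file system. This is a general illustration, not a RHEV command; the file names are arbitrary:

```shell
# A sparse file reports its full apparent size but allocates blocks only as
# data is written; a preallocated file consumes its full size immediately.
truncate -s 100M /tmp/sparse.img                                  # sparse
dd if=/dev/zero of=/tmp/prealloc.img bs=1M count=10 status=none   # preallocated

# stat %s = apparent size in bytes, %b = 512-byte blocks actually allocated.
echo "sparse:   apparent $(stat -c %s /tmp/sparse.img) bytes, allocated $(( $(stat -c %b /tmp/sparse.img) * 512 )) bytes"
echo "prealloc: apparent $(stat -c %s /tmp/prealloc.img) bytes, allocated $(( $(stat -c %b /tmp/prealloc.img) * 512 )) bytes"
```

The sparse file shows a large apparent size with little or no space allocated on disk, while the preallocated file's allocated size matches what was written.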
Virtual machines that share the same storage domain can be migrated between hosts that belong to the same cluster.

8.2. Storage Metadata Versions in Red Hat Enterprise Virtualization

Red Hat Enterprise Virtualization stores information about storage domains as metadata on the storage domains themselves. Each major release of Red Hat Enterprise Virtualization has seen improved implementations of storage metadata.
  • V1 metadata (Red Hat Enterprise Virtualization 2.x series)
    Each storage domain contains metadata describing its own structure, and all of the names of physical volumes that are used to back virtual machine disk images.
    Master domains additionally contain metadata for all the domains and physical volume names in the storage pool. The total size of this metadata is limited to 2 KB, limiting the number of storage domains that can be in a pool.
    Template and virtual machine base images are read only.
    V1 metadata is applicable to NFS, iSCSI, and FC storage domains.
  • V2 metadata (Red Hat Enterprise Virtualization 3.0)
    All storage domain and pool metadata is stored as logical volume tags rather than written to a logical volume. Metadata about virtual machine disk volumes is still stored in a logical volume on the domains.
    Physical volume names are no longer included in the metadata.
    Template and virtual machine base images are read only.
    V2 metadata is applicable to iSCSI and FC storage domains.
  • V3 metadata (Red Hat Enterprise Virtualization 3.1+)
    All storage domain and pool metadata is stored as logical volume tags rather than written to a logical volume. Metadata about virtual machine disk volumes is still stored in a logical volume on the domains.
    Virtual machine and template base images are no longer read only. This change enables live snapshots, live storage migration, and clone from snapshot.
    Support for Unicode metadata is added, allowing non-English volume names.
    V3 metadata is applicable to NFS, GlusterFS, POSIX, iSCSI, and FC storage domains.

8.3. Preparing and Adding NFS Storage

8.3.1. Preparing NFS Storage

Set up NFS shares that will serve as a data domain and an export domain on a Red Hat Enterprise Linux 6 server. It is not necessary to create an ISO domain if one was created during the Red Hat Enterprise Virtualization Manager installation procedure.
  1. Install nfs-utils, the package that provides NFS tools:
    # yum install nfs-utils
  2. Configure the boot scripts to make shares available every time the system boots:
    # chkconfig --add rpcbind
    # chkconfig --add nfs
    # chkconfig rpcbind on
    # chkconfig nfs on
  3. Start the rpcbind service and the nfs service:
    # service rpcbind start
    # service nfs start
    
  4. Create the data directory and the export directory:
    # mkdir -p /exports/data
    # mkdir -p /exports/export
  5. Add the newly created directories to the /etc/exports file. Add the following to /etc/exports:
    /exports/data *(rw)
    /exports/export *(rw)
  6. Export the storage domains:
    # exportfs -r
  7. Reload the NFS service:
    # service nfs reload
  8. Create the group kvm:
    # groupadd kvm -g 36
  9. Create the user vdsm in the group kvm:
    # useradd vdsm -u 36 -g 36
  10. Set the ownership of your exported directories to 36:36, which gives vdsm:kvm ownership. This makes it possible for the Manager to store data in the storage domains represented by these exported directories:
    # chown -R 36:36 /exports/data
    # chown -R 36:36 /exports/export
  11. Change the mode of the directories so that read and write access is granted to the owner, and so that read and execute access is granted to the group and other users:
    # chmod 0755 /exports/data
    # chmod 0755 /exports/export
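
The directory modes set in step 11 can be verified with stat. This sketch reproduces the layout under /tmp so it is safe to run anywhere; a real deployment uses /exports and also requires the chown to 36:36 from step 10, which is omitted here because it needs root:

```shell
# Recreate the export layout in a scratch location and apply the 0755 mode
# (owner rwx; group and others r-x) that the Manager expects.
mkdir -p /tmp/exports/data /tmp/exports/export
chmod 0755 /tmp/exports/data /tmp/exports/export

# Print the octal mode and path for each directory to confirm the result.
stat -c '%a %n' /tmp/exports/data /tmp/exports/export
```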

8.3.2. Attaching NFS Storage

Attach an NFS storage domain to the data center in your Red Hat Enterprise Virtualization environment. This storage domain provides storage for virtualized guest images and ISO boot media. This procedure assumes that you have already exported shares. You must create the data domain before creating the export domain. Use the same procedure to create the export domain, selecting Export / NFS in the Domain Function / Storage Type list.
  1. In the Red Hat Enterprise Virtualization Manager Administration Portal, click the Storage resource tab.
  2. Click New Domain.
    The New Domain Window

    Figure 8.1. The New Domain Window

  3. Enter a Name for the storage domain.
  4. Accept the default values for the Data Center, Domain Function / Storage Type, Format, and Use Host lists.
  5. Enter the Export Path to be used for the storage domain.
    The export path should be in the format of 192.168.0.10:/data or domain.example.com:/data.
  6. Click OK.
    The new NFS data domain is displayed in the Storage tab with a status of Locked until the disk is prepared. The data domain is then automatically attached to the data center.

8.3.3. Increasing NFS Storage

To increase the amount of NFS storage, you can either create a new storage domain and add it to an existing data center, or increase the available free space on the NFS server. For the former option, see Section 8.3.2, “Attaching NFS Storage”. The following procedure explains how to increase the available free space on the existing NFS server.

Procedure 8.1. Increasing an Existing NFS Storage Domain

  1. Click the Storage resource tab and select an NFS storage domain.
  2. In the details pane, click the Data Center tab and click the Maintenance button to place the storage domain into maintenance mode. This unmounts the existing share and makes it possible to resize the storage domain.
  3. On the NFS server, resize the storage. For Red Hat Enterprise Linux 6 systems, see Red Hat Enterprise Linux 6 Storage Administration Guide. For Red Hat Enterprise Linux 7 systems, see Red Hat Enterprise Linux 7 Storage Administration Guide.
  4. In the details pane, click the Data Center tab and click the Activate button to mount the storage domain.

8.4. Preparing and Adding Local Storage

8.4.1. Preparing Local Storage

Summary
A local storage domain can be set up on a host. When you set up a host to use local storage, the host is automatically added to a new data center and cluster to which no other hosts can be added. Multiple-host clusters require that all hosts have access to all storage domains, which is not possible with local storage. Virtual machines created in a single-host cluster cannot be migrated, fenced, or scheduled.

Important

On Red Hat Enterprise Virtualization Hypervisors the only path permitted for use as local storage is /data/images. This directory already exists with the correct permissions on Hypervisor installations. The steps in this procedure are only required when preparing local storage on Red Hat Enterprise Linux virtualization hosts.

Procedure 8.2. Preparing Local Storage

  1. On the virtualization host, create the directory to be used for the local storage.
    # mkdir -p /data/images
  2. Ensure that the directory has permissions allowing read/write access to the vdsm user (UID 36) and kvm group (GID 36).
    # chown 36:36 /data /data/images
    # chmod 0755 /data /data/images
Result
Your local storage is ready to be added to the Red Hat Enterprise Virtualization environment.

8.4.2. Adding Local Storage

Summary
Storage local to your host has been prepared. Now use the Manager to add it to the host.
Adding local storage to a host in this manner causes the host to be put in a new data center and cluster. The local storage configuration window combines the creation of a data center, a cluster, and storage into a single process.

Procedure 8.3. Adding Local Storage

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Maintenance to open the Maintenance Host(s) confirmation window.
  3. Click OK to initiate maintenance mode.
  4. Click Configure Local Storage to open the Configure Local Storage window.
    Configure Local Storage Window

    Figure 8.2. Configure Local Storage Window

  5. Click the Edit buttons next to the Data Center, Cluster, and Storage fields to configure and name the local storage domain.
  6. Set the path to your local storage in the text entry field.
  7. If applicable, select the Optimization tab to configure the memory optimization policy for the new local storage cluster.
  8. Click OK to save the settings and close the window.
Result
Your host comes online in a data center of its own.

8.5. Preparing and Adding POSIX Compliant File System Storage

Red Hat Enterprise Virtualization 3.1 and higher supports the use of POSIX (native) file systems for storage. POSIX file system support allows you to mount file systems using the same mount options that you would normally use when mounting them manually from the command line. This functionality is intended to allow access to storage not exposed using NFS, iSCSI, or FCP.
Any POSIX compliant file system used as a storage domain in Red Hat Enterprise Virtualization must support sparse files and direct I/O. The Common Internet File System (CIFS), for example, does not support direct I/O, making it incompatible with Red Hat Enterprise Virtualization.

Important

Do not mount NFS storage by creating a POSIX compliant file system Storage Domain. Always create an NFS Storage Domain instead.

8.5.1. Attaching POSIX Compliant File System Storage

Summary
You want to use a POSIX compliant file system that is not exposed using NFS, iSCSI, or FCP as a storage domain.

Procedure 8.4. Attaching POSIX Compliant File System Storage

  1. Click the Storage resource tab to list the existing storage domains in the results list.
  2. Click New Domain to open the New Domain window.
    POSIX Storage

    Figure 8.3. POSIX Storage

  3. Enter the Name for the storage domain.
  4. Select the Data Center to be associated with the storage domain. The Data Center selected must be of type POSIX (POSIX compliant FS). Alternatively, select (none).
  5. Select Data / POSIX compliant FS from the Domain Function / Storage Type drop-down menu.
    If applicable, select the Format from the drop-down menu.
  6. Select a host from the Use Host drop-down menu. Only hosts within the selected data center will be listed. The host that you select will be used to connect the storage domain.
  7. Enter the Path to the POSIX file system, as you would normally provide it to the mount command.
  8. Enter the VFS Type, as you would normally provide it to the mount command using the -t argument. See man mount for a list of valid VFS types.
  9. Enter additional Mount Options, as you would normally provide them to the mount command using the -o argument. The mount options should be provided in a comma-separated list. See man mount for a list of valid mount options.
  10. Click OK to attach the new Storage Domain and close the window.
Result
You have used a supported mechanism to attach an unsupported file system as a storage domain.

8.6. Preparing and Adding Block Storage

8.6.1. Preparing iSCSI Storage

Use the following steps to export an iSCSI storage device from a server running Red Hat Enterprise Linux 6 to use as a storage domain with Red Hat Enterprise Virtualization.

Procedure 8.5. Preparing iSCSI Storage

  1. Install the scsi-target-utils package using the yum command as root on your storage server.
    # yum install -y scsi-target-utils
  2. Add the devices or files you want to export to the /etc/tgt/targets.conf file. Here is a generic example of a basic addition to the targets.conf file:
    <target iqn.YEAR-MONTH.com.EXAMPLE:SERVER.targetX>
              backing-store /PATH/TO/DEVICE1 # Becomes LUN 1
              backing-store /PATH/TO/DEVICE2 # Becomes LUN 2
              backing-store /PATH/TO/DEVICE3 # Becomes LUN 3
    </target>
    Targets are conventionally defined using the year and month they are created, the reversed fully qualified domain name of the server, the server name, and a target number.
  3. Start the tgtd service.
    # service tgtd start
  4. Make the tgtd service start persistently across reboots.
    # chkconfig tgtd on
  5. Open an iptables firewall port to allow clients to access your iSCSI export. By default, iSCSI uses port 3260. This example inserts a firewall rule at position 6 in the INPUT chain.
    # iptables -I INPUT 6 -p tcp --dport 3260 -j ACCEPT
  6. Save the iptables rule you just created.
    # service iptables save
You have created a basic iSCSI export. You can use it as an iSCSI data domain.
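
As an illustration of the naming convention in step 2, a target created in March 2015 on a server named diskserver in the example.com domain, exporting a single hypothetical block device, would be defined as:

```
<target iqn.2015-03.com.example:diskserver.target1>
          backing-store /dev/vdb # Becomes LUN 1
</target>
```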

8.6.2. Adding iSCSI Storage

The Red Hat Enterprise Virtualization platform supports iSCSI storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time.
For information regarding the setup and configuration of iSCSI on Red Hat Enterprise Linux, see Set up an iSCSI Target and Initiator in the Red Hat Enterprise Linux Storage Administration Guide.

Procedure 8.6. Adding iSCSI Storage

  1. Click the Storage resource tab to list the existing storage domains in the results list.
  2. Click the New Domain button to open the New Domain window.
  3. Enter the Name of the new storage domain.
    New iSCSI Domain

    Figure 8.4. New iSCSI Domain

  4. Use the Data Center drop-down menu to select a data center.
    If you do not yet have an appropriate iSCSI data center, select (none).
  5. Use the drop-down menus to select the Domain Function / Storage Type and the Format. The storage domain types that are not compatible with the chosen domain function are not available.
  6. Select an active host in the Use Host field. If this is not the first data domain in a data center, you must select the data center's SPM host.

    Important

    All communication to the storage domain is through the selected host and not directly from the Red Hat Enterprise Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured.
  7. The Red Hat Enterprise Virtualization Manager can map either iSCSI targets to LUNs or LUNs to iSCSI targets. The New Domain window automatically displays known targets with unused LUNs when iSCSI is selected as the storage type. If the target from which you are adding storage is not listed, you can use target discovery to find it; otherwise, proceed to the next step.

    iSCSI Target Discovery

    1. Click Discover Targets to enable target discovery options. When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment.

      Note

      LUNs used externally to the environment are also displayed.
      You can use the Discover Targets options to add LUNs on many targets, or multiple paths to the same LUNs.
    2. Enter the fully qualified domain name or IP address of the iSCSI host in the Address field.
    3. Enter the port to connect to the host on when browsing for targets in the Port field. The default is 3260.
    4. If the Challenge Handshake Authentication Protocol (CHAP) is being used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password.
    5. Click the Discover button.
    6. Select the target to use from the discovery results and click the Login button.
      Alternatively, click Login All to log in to all of the discovered targets.

      Important

      If more than one path access is required, ensure that you discover and log in to the target through all the required paths. Modifying a storage domain to add additional paths is currently not supported.
  8. Click the + button next to the desired target. This will expand the entry and display all unused LUNs attached to the target.
  9. Select the check box for each LUN that you are using to create the storage domain.
  10. Click OK to create the storage domain and close the window.
If you have configured multiple storage connection paths to the same target, follow the procedure in Section 8.6.3, “Configuring iSCSI Multipathing” to complete iSCSI bonding.

8.6.3. Configuring iSCSI Multipathing

iSCSI multipathing enables you to create and manage groups of logical networks and iSCSI storage connections. To prevent host downtime due to network path failure, configure multiple network paths between hosts and iSCSI storage. Once configured, the Manager connects each host in the data center to each bonded target via the NICs or VLANs assigned to the logical networks in the same iSCSI bond. You can also specify which networks to use for storage traffic, instead of allowing hosts to route traffic through a default network. This option is only available in the Administration Portal after at least one iSCSI storage domain has been attached to a data center.

Prerequisites

  • Ensure you have created an iSCSI storage domain and discovered and logged into all the paths to the iSCSI target(s).
  • Ensure you have created Non-Required logical networks to bond with the iSCSI storage connections. You can configure multiple logical networks or bond networks to allow network failover.

Procedure 8.7. Configuring iSCSI Multipathing

  1. Click the Data Centers tab and select a data center from the results list.
  2. In the details pane, click the iSCSI Multipathing tab.
  3. Click Add.
  4. In the Add iSCSI Bond window, enter a Name and a Description for the bond.
  5. Select the networks to be used for the bond from the Logical Networks list. The networks must be Non-Required networks.
  6. Select the storage domain to be accessed via the chosen networks from the Storage Targets list. Ensure that you select all paths to the same target.
  7. Click OK.
All hosts in the data center are connected to the selected iSCSI target through the selected logical networks.
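Once the bond is in place, you can verify from a host that each LUN is reachable over multiple paths. A sketch using standard Red Hat Enterprise Linux tools:

```shell
# List multipath devices and their paths; each LUN used by the storage domain
# should show one active path per configured logical network.
multipath -ll

# Show the iSCSI sessions carrying those paths, including the interface
# and portal each session uses (output fields vary by version).
iscsiadm -m session -P 1
```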

8.6.4. Adding FCP Storage

Red Hat Enterprise Virtualization platform supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time.
Red Hat Enterprise Virtualization system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage.
For information regarding the setup and configuration of FCP or multipathing on Red Hat Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide.

Note

You can only add an FCP storage domain to a data center that is set up for FCP storage type.

Procedure 8.8. Adding FCP Storage

  1. Click the Storage resource tab to list all storage domains in the virtualized environment.
  2. Click New Domain to open the New Domain window.
  3. Enter the Name of the storage domain.

    Figure 8.5. Adding FCP Storage

  4. Use the Data Center drop-down menu to select an FCP data center.
    If you do not yet have an appropriate FCP data center, select (none).
  5. Use the drop-down menus to select the Domain Function / Storage Type and the Format. The storage domain types that are not compatible with the chosen data center are not available.
  6. Select an active host in the Use Host field. If this is not the first data domain in a data center, you must select the data center's SPM host.

    Important

    All communication to the storage domain is through the selected host and not directly from the Red Hat Enterprise Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured.
  7. The New Domain window automatically displays known targets with unused LUNs when Data / Fibre Channel is selected as the storage type. Select the LUN ID check box to select all of the available LUNs.
  8. Click OK to create the storage domain and close the window.
The new FCP data domain displays on the Storage tab. It will remain with a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center.

8.6.5. Increasing iSCSI or FCP Storage

To increase the size of iSCSI or FCP storage, you can either create a new storage domain with new LUNs and add it to an existing data center, or create new LUNs and add them to an existing storage domain. For the former option, see Section 8.6.2, “Adding iSCSI Storage”. The following procedure explains how to expand storage area network (SAN) storage by adding a new LUN to an existing storage domain.

Note

It is also possible to expand the storage domain by resizing the underlying LUNs. For more information, see the knowledgebase article on the Red Hat Customer Portal: https://access.redhat.com/solutions/376873.

Procedure 8.9. Increasing an Existing iSCSI or FCP Storage Domain

  1. Create a new LUN on the SAN. For Red Hat Enterprise Linux 6 systems, see Red Hat Enterprise Linux 6 Storage Administration Guide. For Red Hat Enterprise Linux 7 systems, see Red Hat Enterprise Linux 7 Storage Administration Guide.
  2. Click the Storage resource tab and select an iSCSI or FCP domain. Click the Edit button.
  3. Click on Targets > LUNs, and click the Discover Targets expansion button.
  4. Enter the connection information for the storage server and click the Discover button to initiate the connection.
  5. Click on LUNs > Targets and select the check box of the newly available LUN.
  6. Click OK to add the LUN to the selected storage domain.
This will increase the storage domain by the size of the added LUN.
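If you instead resize the underlying LUNs, as described in the Note above, each host must rescan its storage before the new capacity is visible. A sketch of the host-side commands, assuming an iSCSI domain (the FCP line shows host0 as an example adapter):

```shell
# Rescan all iSCSI sessions on the host so the kernel sees the new LUN size.
iscsiadm -m session --rescan

# Reload the multipath maps to pick up the resized paths.
multipath -r

# For FCP storage, rescan the Fibre Channel host adapter instead (example: host0).
echo "- - -" > /sys/class/scsi_host/host0/scan
```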

8.6.6. Unusable LUNs in Red Hat Enterprise Virtualization

In certain circumstances, the Red Hat Enterprise Virtualization Manager will not allow you to use a LUN to create a storage domain or virtual machine hard disk.
  • LUNs that are already part of the current Red Hat Enterprise Virtualization environment are automatically prevented from being used.

    Figure 8.6. Unusable LUNs in the Red Hat Enterprise Virtualization Administration Portal

  • LUNs that are already being used by the SPM host will also display as in use. You can choose to forcefully override the contents of these LUNs, but the operation is not guaranteed to succeed.

8.7. Importing Existing Storage Domains

8.7.1. Overview of Importing Existing Storage Domains

In addition to adding new storage domains that contain no data, you can also import existing storage domains and access the data they contain. The ability to import storage domains allows you to recover data in the event of a failure in the engine database, and to migrate data from one data center or environment to another.
The following is an overview of importing each storage domain type:
Data
Importing an existing data storage domain allows you to access all of the virtual machines and templates that the data storage domain contains. After you import the storage domain, you must manually import each virtual machine and template into the destination data center. The process for importing the virtual machines and templates that a data storage domain contains is similar to that for an export storage domain. However, because data storage domains contain all the virtual machines and templates in a given data center, importing data storage domains is recommended for data recovery or large-scale migration of virtual machines between data centers or environments.

Important

You can only import existing data storage domains that were attached to data centers with a compatibility level of 3.5 or higher.
ISO
Importing an existing ISO storage domain allows you to access all of the ISO files and virtual diskettes that the ISO storage domain contains. No additional action is required after importing the storage domain to access these resources; you can attach them to virtual machines as required.
Export
Importing an existing export storage domain allows you to access all of the virtual machine images and templates that the export storage domain contains. Because export domains are designed for exporting and importing virtual machine images and templates, importing export storage domains is the recommended method of migrating small numbers of virtual machines and templates inside an environment or between environments. For information on exporting and importing virtual machines and templates to and from export storage domains, see Section 10.14, “Exporting and Importing Virtual Machines and Templates”.

8.7.2. Importing a Storage Domain

Summary
Import a storage domain that was previously attached to a data center in the same environment or in a different environment. This procedure assumes the storage domain is not attached to a data center in any environment. Moreover, to import and attach an existing data storage domain to a data center, the target data center must be initialized, and must have a compatibility level of 3.5 or higher.

Procedure 8.10. Importing a Storage Domain

  1. Click the Storage resource tab.
  2. Click Import Domain.

    Figure 8.7. The Import Pre-Configured Domain window

  3. Select the data center to which to attach the storage domain from the Data Center list.
  4. Optionally, select the Activate Domain in Data Center check box to activate the storage domain after attaching it to the selected data center.
  5. Select the function and type of the domain from the Domain Function / Storage Type list.
  6. Select the SPM host from the Use host list.

    Important

    All communication to the storage domain is through the selected host and not directly from the Red Hat Enterprise Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured.
  7. Enter the details of the storage domain.

    Note

    The fields for specifying the details of the storage domain change in accordance with the value you select in the Domain Function / Storage Type list. These options are the same as those available for adding a new storage domain. For more information on these options, see Section 8.1, “Understanding Storage Domains”.
  8. Click OK.
Result
The storage domain is imported, and is displayed in the Storage tab.

8.7.3. Importing Virtual Machines from an Imported Data Storage Domain

Summary
Import a virtual machine from a data storage domain you have imported into your Red Hat Enterprise Virtualization environment. This procedure assumes that the imported data storage domain has been attached to a data center and has been activated.

Procedure 8.11. Importing Virtual Machines from an Imported Data Storage Domain

  1. Click the Storage resource tab.
  2. Click the imported data storage domain.
  3. Click the VM Import tab in the details pane.
  4. Select one or more virtual machines to import.
  5. Click Import.
  6. Select the cluster into which the virtual machines are imported from the Cluster list.
  7. Click OK.
Result
You have imported one or more virtual machines into your environment. The imported virtual machines no longer appear in the list under the VM Import tab.

8.7.4. Importing Templates from Imported Data Storage Domains

Summary
Import a template from a data storage domain you have imported into your Red Hat Enterprise Virtualization environment. This procedure assumes that the imported data storage domain has been attached to a data center and has been activated.

Procedure 8.12. Importing Templates from an Imported Data Storage Domain

  1. Click the Storage resource tab.
  2. Click the imported data storage domain.
  3. Click the Template Import tab in the details pane.
  4. Select one or more templates to import.
  5. Click Import.
  6. Select the cluster into which the templates are imported from the Cluster list.
  7. Click OK.
Result
You have imported one or more templates into your environment. The imported templates no longer appear in the list under the Template Import tab.

8.8. Storage Tasks

8.8.1. Populating the ISO Storage Domain

Summary
After an ISO storage domain is attached to a data center, ISO images must be uploaded to it before they can be used. Red Hat Enterprise Virtualization provides an ISO uploader tool that ensures the images are uploaded into the correct directory path, with the correct user permissions.
The creation of ISO images from physical media is not described in this document. It is assumed that you have access to the images required for your environment.

Procedure 8.13. Populating the ISO Storage Domain

  1. Copy the required ISO image to a temporary directory on the system running Red Hat Enterprise Virtualization Manager.
  2. Log in to the system running Red Hat Enterprise Virtualization Manager as the root user.
  3. Use the engine-iso-uploader command to upload the ISO image. This action will take some time. The amount of time varies depending on the size of the image being uploaded and available network bandwidth.

    Example 8.1. ISO Uploader Usage

    In this example, the ISO image RHEL6.iso is uploaded to the ISO domain called ISODomain using NFS. The command will prompt for an administrative user name and password. The user name must be provided in the form user name@domain.
    # engine-iso-uploader --iso-domain=ISODomain upload RHEL6.iso
Result
The ISO image is uploaded and appears in the ISO storage domain specified. It is also available in the list of available boot media when creating virtual machines in the data center to which the storage domain is attached.
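Besides upload, the tool provides a list command, and upload accepts more than one image per invocation. A sketch, reusing the ISODomain domain from Example 8.1:

```shell
# List the ISO storage domains known to the Manager
# (prompts for the administrative password, as in Example 8.1).
engine-iso-uploader list

# Upload several images in a single invocation.
engine-iso-uploader --iso-domain=ISODomain upload RHEL6.iso RHEL7.iso
```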

8.8.2. Moving Storage Domains to Maintenance Mode

Detaching and removing storage domains requires that they be in maintenance mode. Maintenance mode is also required to redesignate another data domain as the master data domain.
Expanding iSCSI domains by adding more LUNs can only be done when the domain is active.

Procedure 8.14. Moving storage domains to maintenance mode

  1. Shut down all the virtual machines running on the storage domain.
  2. Click the Storage resource tab and select a storage domain.
  3. Click the Data Centers tab in the details pane.
  4. Click Maintenance to open the Maintenance Storage Domain(s) confirmation window.
  5. Click OK to initiate maintenance mode. The storage domain is deactivated and has an Inactive status in the results list.
You can now edit, detach, remove, or reactivate the inactive storage domains from the data center.

Note

You can also activate, detach, and place domains into maintenance mode using the Storage tab in the details pane of the data center with which they are associated.

8.8.3. Editing Storage Domains

You can edit storage domain parameters through the Administration Portal. Depending on the state of the storage domain, either active or inactive, different fields are available for editing. Fields such as Data Center, Domain Function / Storage Type, and Format cannot be changed.
  • Active: When the storage domain is in an active state, the Name, Description, and Comment fields can be edited. This is supported for all storage types.
  • Inactive: When the storage domain is in maintenance mode or unattached, thus in an inactive state, you can edit storage connections, mount options, and other advanced parameters. This is only supported for NFS, POSIX, and Local storage types.

    Note

    iSCSI storage connections cannot be edited via the Administration Portal, but can be edited via the REST API. See Updating an iSCSI Storage Connection.
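As a sketch of the REST API approach the Note describes, the connection can be located and updated with curl. The Manager address, certificate file, and connection ID below are example values, and the exact element names should be checked against the REST API Guide for your version:

```shell
# Find the ID of the iSCSI storage connection to edit
# (example Manager address; prompts for the admin password).
curl --cacert ca.crt -u admin@internal \
  https://rhevm.example.com/api/storageconnections

# Update the address and port of that connection (hypothetical connection ID shown).
curl --cacert ca.crt -u admin@internal -X PUT \
  -H "Content-Type: application/xml" \
  -d '<storage_connection><address>192.0.2.20</address><port>3260</port></storage_connection>' \
  https://rhevm.example.com/api/storageconnections/aaaa-bbbb-cccc
```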

Procedure 8.15. Editing an Active Storage Domain

  1. Click the Storage resource tab and select a storage domain in the results list.
  2. Click Edit.
  3. Edit the Name, Description, and Comment fields as required.
  4. Click OK.

Procedure 8.16. Editing an Inactive Storage Domain

  1. Click the Storage resource tab and select a storage domain in the results list. If the storage domain is active, put it into maintenance mode.
  2. Click Edit.
  3. Edit the storage path and other details as required. The new connection details must be of the same storage type as the original connection.
  4. Click OK.
  5. Click the Data Center tab in the details pane and click Activate.

8.8.4. Activating Storage Domains

If you have been making changes to a data center's storage, you must put its storage domains into maintenance mode. When the changes are complete, activate a storage domain to resume using it.
  1. Click the Storage resource tab and select an inactive storage domain in the results list.
  2. Click the Data Centers tab in the details pane.
  3. Select the appropriate storage domain and click Activate.

    Important

    If you attempt to activate the ISO domain before activating the data domain, an error message displays and the domain is not activated.

8.8.5. Removing a Storage Domain

Summary
You have a storage domain in your data center that you want to remove from the virtualized environment.

Procedure 8.17. Removing a Storage Domain

  1. Use the Storage resource tab, tree mode, or the search function to find and select the appropriate storage domain in the results list.
  2. Move the domain into maintenance mode to deactivate it.
  3. Detach the domain from the data center.
  4. Click Remove to open the Remove Storage confirmation window.
  5. Select a host from the list.
  6. Click OK to remove the storage domain and close the window.
Result
The storage domain is permanently removed from the environment.

8.8.6. Destroying a Storage Domain

Summary
A storage domain encountering errors may not be removable through the normal procedure. Destroying a storage domain will forcibly remove the storage domain from the virtualized environment without reference to the export directory.
When the storage domain is destroyed, you are required to manually fix the export directory of the storage domain before it can be used again.

Procedure 8.18. Destroying a Storage Domain

  1. Use the Storage resource tab, tree mode, or the search function to find and select the appropriate storage domain in the results list.
  2. Right-click the storage domain and select Destroy to open the Destroy Storage Domain confirmation window.
  3. Select the Approve operation check box and click OK to destroy the storage domain and close the window.
Result
The storage domain has been destroyed. Manually clean the export directory for the storage domain to recycle it.

8.8.7. Detaching the Export Domain

Summary
Detach the export domain from the data center to import the templates to another data center.

Procedure 8.19. Detaching an Export Domain from the Data Center

  1. Use the Storage resource tab, tree mode, or the search function to find and select the export domain in the results list.
  2. Click the Data Centers tab in the details pane and select the export domain.
  3. Click Maintenance to open the Maintenance Storage Domain(s) confirmation window.
  4. Click OK to initiate maintenance mode.
  5. Click Detach to open the Detach Storage confirmation window.
  6. Click OK to detach the export domain.
Result
The export domain has been detached from the data center, ready to be attached to another data center.

8.8.8. Attaching an Export Domain to a Data Center

Summary
Attach the export domain to a data center.

Procedure 8.20. Attaching an Export Domain to a Data Center

  1. Use the Storage resource tab, tree mode, or the search function to find and select the export domain in the results list.
  2. Click the Data Centers tab in the details pane.
  3. Click Attach to open the Attach to Data Center window.
  4. Select the radio button of the appropriate data center.
  5. Click OK to attach the export domain.
Result
The export domain is attached to the data center and is automatically activated.

8.8.9. Disk Profiles

Disk profiles define the maximum level of throughput and the maximum level of input and output operations for a virtual disk in a storage domain. Disk profiles are created based on storage profiles defined under data centers, and must be manually assigned to individual virtual disks for the profile to take effect.

8.8.9.1. Creating a Disk Profile

Create a disk profile. This procedure assumes you have already defined one or more storage quality of service entries under the data center to which the storage domain belongs.

Procedure 8.21. Creating a Disk Profile

  1. Click the Storage resource tab and select a data storage domain.
  2. Click the Disk Profiles sub tab in the details pane.
  3. Click New.
  4. Enter a name for the disk profile in the Name field.
  5. Enter a description for the disk profile in the Description field.
  6. Select the quality of service to apply to the disk profile from the QoS list.
  7. Click OK.
You have created a disk profile, and that disk profile can be applied to new virtual disks hosted in the data storage domain.

8.8.9.2. Removing a Disk Profile

Remove an existing disk profile from your Red Hat Enterprise Virtualization environment.

Procedure 8.22. Removing a Disk Profile

  1. Click the Storage resource tab and select a data storage domain.
  2. Click the Disk Profiles sub tab in the details pane.
  3. Select the disk profile to remove.
  4. Click Remove.
  5. Click OK.
You have removed a disk profile, and that disk profile is no longer available. If the disk profile was assigned to any virtual disks, the disk profile is removed from those virtual disks.

8.9. Storage and Permissions

8.9.1. Managing System Permissions for a Storage Domain

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A storage administrator is a system administration role for a specific storage domain only. This is useful in data centers with multiple storage domains, where each storage domain requires a system administrator. Use the Configure button in the header bar to assign a storage administrator for all storage domains in the environment.
The storage domain administrator role permits the following actions:
  • Edit the configuration of the storage domain.
  • Move the storage domain into maintenance mode.
  • Remove the storage domain.

Note

You can only assign roles and permissions to existing users.
You can also change the system administrator of a storage domain by removing the existing system administrator and adding the new system administrator.

8.9.2. Storage Administrator Roles Explained

Storage Domain Permission Roles
The table below describes the administrator roles and privileges applicable to storage domain administration.

Table 8.1. Red Hat Enterprise Virtualization System Administrator Roles

Role Privileges Notes
StorageAdmin Storage Administrator Can create, delete, configure and manage a specific storage domain.
GlusterAdmin Gluster Storage Administrator Can create, delete, configure and manage Gluster storage volumes.

8.9.3. Assigning an Administrator or User Role to a Resource

Summary
Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 8.23. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add to open the Add Permission to User window.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down menu.
  6. Click OK to assign the role and close the window.
Result
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

8.9.4. Removing an Administrator or User Role from a Resource

Summary
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 8.24. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK to remove the user role.
Result
You have removed the user's role, and the associated permissions, from the resource.

Chapter 9. Working with Red Hat Gluster Storage

9.1. Red Hat Gluster Storage Nodes

9.1.1. Adding Red Hat Gluster Storage Nodes

Add Red Hat Gluster Storage nodes to Gluster-enabled clusters and incorporate GlusterFS volumes and bricks into your Red Hat Enterprise Virtualization environment.
This procedure presumes that you have a Gluster-enabled cluster of the appropriate Compatibility Version and a Red Hat Gluster Storage node already set up. For information on setting up a Red Hat Gluster Storage node, see the Red Hat Gluster Storage Installation Guide. For more information on the compatibility matrix, see the Configuring Red Hat Enterprise Virtualization with Red Hat Storage Guide.

Procedure 9.1. Adding a Red Hat Gluster Storage Node

  1. Click the Hosts resource tab to list the hosts in the results list.
  2. Click New to open the New Host window.
  3. Use the drop-down menus to select the Data Center and Host Cluster for the Red Hat Gluster Storage node.
  4. Enter the Name, Address, and SSH Port of the Red Hat Gluster Storage node.
  5. Select an authentication method to use with the Red Hat Gluster Storage node.
    • Enter the root user's password to use password authentication.
    • Copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the Red Hat Gluster Storage node to use public key authentication.
  6. Click OK to add the node and close the window.
You have added a Red Hat Gluster Storage node to your Red Hat Enterprise Virtualization environment. You can now use the volume and brick resources of the node in your environment.
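The public key step in the procedure above can be scripted from the Manager machine. This is a sketch; the node address and the manager_key.pub file (containing the key shown in the SSH PublicKey field) are hypothetical:

```shell
# Append the Manager's public key to the node's authorized_keys file
# (prompts for the node's root password one last time).
ssh root@gluster1.example.com \
  "mkdir -p /root/.ssh && cat >> /root/.ssh/authorized_keys" < manager_key.pub

# Confirm that key-based login works before clicking OK in the New Host window.
ssh root@gluster1.example.com true
```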

9.1.2. Removing a Red Hat Gluster Storage Node

Remove a Red Hat Gluster Storage node from your Red Hat Enterprise Virtualization environment.

Procedure 9.2. Removing a Red Hat Gluster Storage Node

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the Red Hat Gluster Storage node in the results list.
  2. Click Maintenance to open the Maintenance Host(s) confirmation window.
  3. Click OK to move the host to maintenance mode.
  4. Click Remove to open the Remove Host(s) confirmation window.
  5. Select the Force Remove check box if the node has volume bricks on it, or if the node is non-responsive.
  6. Click OK to remove the node and close the window.
Your Red Hat Gluster Storage node has been removed from the environment and is no longer visible in the Hosts tab.

9.2. Using Red Hat Gluster Storage as a Storage Domain

9.2.1. Introduction to Red Hat Gluster Storage (GlusterFS) Volumes

Red Hat Gluster Storage volumes combine storage from more than one Red Hat Gluster Storage server into a single global namespace. A volume is a collection of bricks, where each brick is a mountpoint or directory on a Red Hat Gluster Storage Server in the trusted storage pool.
Most of the management operations of Red Hat Gluster Storage happen on the volume.
You can use the Administration Portal to create and start new volumes. You can monitor volumes in your Red Hat Gluster Storage cluster from the Volumes tab.
While volumes can be created and managed from the Administration Portal, bricks must be created on the individual Red Hat Gluster Storage nodes before they can be added to volumes using the Administration Portal.

9.2.2. Gluster Storage Terminology

Table 9.1. Gluster Storage Terminology

Term
Definition
Brick
A brick is the GlusterFS basic unit of storage, represented by an export directory on a server in the trusted storage pool. A brick is expressed by combining a server with an export directory in the following format:
SERVER:EXPORT
For example:
myhostname:/exports/myexportdir/
Block Storage
Block special files or block devices correspond to devices through which the system moves data in the form of blocks. These device nodes often represent addressable devices such as hard disks, CD-ROM drives, or memory regions. Red Hat Gluster Storage supports the XFS file system with extended attributes.
Cluster
A trusted pool of linked computers working together closely, thus in many respects forming a single computer. In Red Hat Gluster Storage terminology, a cluster is called a trusted storage pool.
Client
The machine that mounts the volume (this may also be a server).
Distributed File System
A file system that allows multiple clients to concurrently access data spread across multiple servers/bricks in a trusted storage pool. Data sharing among multiple locations is fundamental to all distributed file systems.
Geo-Replication
Geo-replication provides a continuous, asynchronous, and incremental replication service from one site to another over Local Area Networks (LANs), Wide Area Networks (WANs), and across the Internet.
glusterd
The Gluster management daemon that needs to run on all servers in the trusted storage pool.
Metadata
Metadata is data providing information about one or more other pieces of data.
N-way Replication
Local synchronous data replication typically deployed across campus or Amazon Web Services Availability Zones.
Namespace
Namespace is an abstract container or environment created to hold a logical grouping of unique identifiers or symbols. Each Red Hat Gluster Storage trusted storage pool exposes a single namespace as a POSIX mount point that contains every file in the trusted storage pool.
POSIX
Portable Operating System Interface (for Unix) is the name of a family of related standards specified by the IEEE to define the application programming interface (API), along with shell and utilities interfaces for software compatible with variants of the UNIX operating system. Red Hat Gluster Storage exports a fully POSIX compatible file system.
RAID
Redundant Array of Inexpensive Disks (RAID) is a technology that provides increased storage reliability through redundancy, combining multiple low-cost, less-reliable disk drive components into a logical unit where all drives in the array are interdependent.
RRDNS
Round Robin Domain Name Service (RRDNS) is a method to distribute load across application servers. RRDNS is implemented by creating multiple A records with the same name and different IP addresses in the zone file of a DNS server.
Server
The machine (virtual or bare-metal) which hosts the actual file system in which data will be stored.
Scale-Up Storage
Increases the capacity of the storage device, but only in a single dimension. An example might be adding additional disk capacity to a single computer in a trusted storage pool.
Scale-Out Storage
Increases the capability of a storage device in multiple dimensions. For example adding a server to a trusted storage pool increases CPU, disk capacity, and throughput for the trusted storage pool.
Subvolume
A subvolume is a brick after being processed by at least one translator.
Translator
A translator connects to one or more subvolumes, does something with them, and offers a subvolume connection.
Trusted Storage Pool
A storage pool is a trusted network of storage servers. When you start the first server, the storage pool consists of that server alone.
User Space
Applications running in user space do not directly interact with hardware, instead using the kernel to moderate access. User Space applications are generally more portable than applications in kernel space. Gluster is a user space application.
Virtual File System (VFS)
VFS is a kernel software layer that handles all system calls related to the standard Linux file system. It provides a common interface to several kinds of file systems.
Volume File
The volume file is a configuration file used by a GlusterFS process. The volume file will usually be located at: /var/lib/glusterd/vols/VOLNAME.
Volume
A volume is a logical collection of bricks. Most of the Gluster management operations happen on the volume.

9.2.3. Creating a Storage Volume

You can create new volumes using the Administration Portal. When creating a new volume, you must specify the bricks that comprise the volume and specify whether the volume is to be distributed, replicated, or striped.
You must create brick directories or mountpoints before you can add them to volumes.

Important

It is recommended that you use replicated volumes, where bricks exported from different hosts are combined into a volume. Replicated volumes create copies of files across multiple bricks in the volume, preventing data loss when a host is fenced.

Procedure 9.3. Creating A Storage Volume

  1. Click the Volumes resource tab to list existing volumes in the results list.
  2. Click New to open the New Volume window.
  3. Use the drop-down menus to select the Data Center and Volume Cluster.
  4. Enter the Name of the volume.
  5. Use the drop-down menu to select the Type of the volume.
  6. If active, select the appropriate Transport Type check box.
  7. Click the Add Bricks button to select bricks to add to the volume. Bricks must be created externally on the Red Hat Gluster Storage nodes.
  8. If active, use the Gluster, NFS, and CIFS check boxes to select the appropriate access protocols used for the volume.
  9. Enter the volume access control as a comma-separated list of IP addresses or hostnames in the Allow Access From field.
    You can use the * wildcard to specify ranges of IP addresses or hostnames.
  10. Select the Optimize for Virt Store option to set the parameters to optimize your volume for virtual machine storage. Select this if you intend to use this volume as a storage domain.
  11. Click OK to create the volume. The new volume is added and displays on the Volume tab.
You have added a Red Hat Gluster Storage volume. You can now use it for storage.
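
As noted above, brick directories or mountpoints must be prepared on the Red Hat Gluster Storage nodes before the volume is created. The following is a minimal sketch of that preparation; the device name, mount point, and volume name are hypothetical and must be adjusted to your environment:

```shell
# Format the block device that will back the brick (XFS with 512-byte
# inodes is commonly recommended for Gluster bricks).
mkfs.xfs -i size=512 /dev/rhgs_vg/brick1_lv

# Mount the file system and create the brick directory inside it.
mkdir -p /rhgs/brick1
mount /dev/rhgs_vg/brick1_lv /rhgs/brick1
mkdir -p /rhgs/brick1/myvolume

# Optionally persist the mount across reboots.
echo "/dev/rhgs_vg/brick1_lv /rhgs/brick1 xfs defaults 0 0" >> /etc/fstab
```

The resulting brick path (for example, rhgs1.example.com:/rhgs/brick1/myvolume) is what you supply in the Add Bricks window.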

9.2.4. Adding Bricks to a Volume

Summary
You can expand your volumes by adding new bricks. When expanding your storage space, you must add at least one brick to a distributed volume, bricks in multiples of two to replicated volumes, and bricks in multiples of four to striped volumes.

Procedure 9.4. Adding Bricks to a Volume

  1. On the Volumes tab on the navigation pane, select the volume to which you want to add bricks.
  2. Click the Bricks tab from the Details pane.
  3. Click Add Bricks to open the Add Bricks window.
  4. Use the Server drop-down menu to select the server on which the brick resides.
  5. Enter the path of the Brick Directory. The directory must already exist.
  6. Click Add. The brick appears in the list of bricks in the volume, with server addresses and brick directory names.
  7. Click OK.
Result
The new bricks are added to the volume and the bricks display in the volume's Bricks tab.
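
For reference, the equivalent operation on a Red Hat Gluster Storage node uses the gluster CLI. The volume name, server, and brick path below are illustrative:

```shell
# Add one brick to a distributed volume (replicated volumes take
# bricks in multiples of the replica count).
gluster volume add-brick myvolume rhgs2.example.com:/rhgs/brick2/myvolume

# Confirm the new brick is listed in the volume layout.
gluster volume info myvolume
```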

9.2.5. Explanation of Settings in the Add Bricks Window

Table 9.2. Add Bricks Tab Properties

Field Name
Description
Volume Type
Displays the type of volume. This field cannot be changed; it was set when you created the volume.
Server
The server where the bricks are hosted.
Brick Directory
The brick directory or mountpoint.

9.2.6. Optimizing Red Hat Gluster Storage Volumes to Store Virtual Machine Images

Optimize a Red Hat Gluster Storage volume to store virtual machine images using the Administration Portal.
To optimize a volume for storing virtual machines, the Manager sets a number of virtualization-specific parameters for the volume.

Important

Red Hat Gluster Storage currently supports Red Hat Enterprise Virtualization 3.1 and above. All Gluster clusters and hosts must be attached to data centers with a compatibility version of 3.1 or higher.
Volumes can be optimized to store virtual machines during creation by selecting the Optimize for Virt Store check box, or after creation using the Optimize for Virt Store button from the Volumes resource tab.

Important

If a volume is replicated across three or more nodes, ensure the volume is optimized for virtual storage to avoid data inconsistencies across the nodes.
An alternate method is to access one of the Red Hat Gluster Storage nodes and apply the virt option group to the volume. This sets the cluster.quorum-type parameter to auto, and the cluster.server-quorum-type parameter to server.
# gluster volume set VOLUME_NAME group virt
Verify the status of the volume by listing the volume information:
# gluster volume info VOLUME_NAME
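To confirm that the quorum parameters were applied, you can filter the volume information for the relevant options. The volume name is illustrative:

```shell
# The reconfigured options should include cluster.quorum-type: auto
# and cluster.server-quorum-type: server.
gluster volume info myvolume | grep quorum
```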

9.2.7. Starting Volumes

Summary
After a volume has been created or an existing volume has been stopped, it needs to be started before it can be used.

Procedure 9.5. Starting Volumes

  1. In the Volumes tab, select the volume to be started.
    You can select multiple volumes to start by holding the Shift or Ctrl key.
  2. Click the Start button.
The volume status changes to Up.
Result
You can now use your volume for virtual machine storage.
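
The same operation can be performed from the command line of a storage node; the volume name below is illustrative:

```shell
# Start the volume, then verify that its bricks are online.
gluster volume start myvolume
gluster volume status myvolume
```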

9.2.8. Tuning Volumes

Summary
Tuning volumes allows you to adjust their performance. To tune volumes, you add options to them.

Procedure 9.6. Tuning Volumes

  1. Click the Volumes tab.
    A list of volumes displays.
  2. Select the volume that you want to tune, and click the Volume Options tab from the Details pane.
    The Volume Options tab displays a list of options set for the volume.
  3. Click Add to set an option. The Add Option dialog box displays. Select the Option Key from the drop-down list and enter the option value.
  4. Click OK.
    The option is set and displays in the Volume Options tab.
Result
You have tuned the options for your storage volume.
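
On a storage node, the same tuning is done with gluster volume set. The option and value below are just one example; the volume name is illustrative:

```shell
# Example: raise the read cache used by the io-cache translator.
gluster volume set myvolume performance.cache-size 256MB

# List the options currently set on the volume.
gluster volume info myvolume
```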

9.2.9. Editing Volume Options

Summary
You have tuned your volume by adding options to it. You can change the options for your storage volume.

Procedure 9.7. Editing Volume Options

  1. Click the Volumes tab.
    A list of volumes displays.
  2. Select the volume that you want to edit, and click the Volume Options tab from the Details pane.
    The Volume Options tab displays a list of options set for the volume.
  3. Select the option you want to edit. Click Edit. The Edit Option dialog box displays. Enter a new value for the option.
  4. Click OK.
    The edited option displays in the Volume Options tab.
Result
You have changed the options on your volume.

9.2.10. Reset Volume Options

Summary
You can reset options to revert them to their default values.
  1. Click the Volumes tab.
    A list of volumes displays.
  2. Select the volume and click the Volume Options tab from the Details pane.
    The Volume Options tab displays a list of options set for the volume.
  3. Select the option you want to reset. Click Reset. A dialog box displays, prompting you to confirm the reset.
  4. Click OK.
    The selected option is reset.

Note

You can reset all volume options by clicking the Reset All button. A dialog box displays, prompting you to confirm the action. Click OK. All volume options are reset for the selected volume.
Result
You have reset volume options to default.
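
From the command line of a storage node, the equivalent uses gluster volume reset; the volume name and option are illustrative:

```shell
# Reset a single option to its default value.
gluster volume reset myvolume performance.cache-size

# Omit the option name to reset all options on the volume.
gluster volume reset myvolume
```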

9.2.11. Removing Bricks from a Volume

Summary
You can shrink volumes, as needed, while the cluster is online and available. For example, you might need to remove a brick that has become inaccessible in a distributed volume due to hardware or network failure.

Procedure 9.8. Removing Bricks from a Volume

  1. On the Volumes tab on the navigation pane, select the volume from which you wish to remove bricks.
  2. Click the Bricks tab from the Details pane.
  3. Select the bricks you wish to remove. Click Remove Bricks.
  4. A window opens, prompting you to confirm the deletion. Click OK to confirm.
Result
The bricks are removed from the volume.
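
On a storage node, brick removal is a multi-step CLI operation when the brick is still reachable, so that data can be migrated off it first. The names below are illustrative:

```shell
# Start the removal; Gluster migrates data off the brick.
gluster volume remove-brick myvolume rhgs2.example.com:/rhgs/brick2/myvolume start

# Watch the migration, then commit once it completes.
gluster volume remove-brick myvolume rhgs2.example.com:/rhgs/brick2/myvolume status
gluster volume remove-brick myvolume rhgs2.example.com:/rhgs/brick2/myvolume commit
```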

9.2.12. Stopping Red Hat Gluster Storage Volumes

After a volume has been started, it can be stopped.

Procedure 9.9. Stopping Volumes

  1. In the Volumes tab, select the volume to be stopped.
    You can select multiple volumes to stop by holding the Shift or Ctrl key.
  2. Click Stop.
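
The CLI equivalent on a storage node, with an illustrative volume name:

```shell
# The CLI asks for confirmation before stopping the volume.
gluster volume stop myvolume
```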

9.2.13. Deleting Red Hat Gluster Storage Volumes

You can delete a volume or multiple volumes from your cluster.
  1. In the Volumes tab, select the volume to be deleted.
  2. Click Remove. A dialog box displays, prompting you to confirm the deletion. Click OK.
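
From the command line of a storage node, a volume must be stopped before it can be deleted; the volume name is illustrative:

```shell
# Stop the volume, then delete it. Both commands prompt for confirmation.
gluster volume stop myvolume
gluster volume delete myvolume
```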

9.2.14. Rebalancing Volumes

Summary
If a volume has been expanded or shrunk by adding or removing bricks, the data on the volume must be rebalanced among the servers.

Procedure 9.10. Rebalancing a Volume

  1. Click the Volumes tab.
    A list of volumes displays.
  2. Select the volume to rebalance.
  3. Click Rebalance.
Result
The selected volume is rebalanced.
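
The corresponding CLI operation on a storage node runs in the background and can be monitored; the volume name is illustrative:

```shell
# Start rebalancing data across all bricks.
gluster volume rebalance myvolume start

# Check progress; repeat until the status reports completed.
gluster volume rebalance myvolume status
```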

9.3. Clusters and Gluster Hooks

9.3.1. Managing Gluster Hooks

Gluster hooks are volume life cycle extensions. You can manage Gluster hooks from the Manager. The content of the hook can be viewed if the hook content type is Text.
Through the Manager, you can perform the following:
  • View a list of hooks available in the hosts.
  • View the content and status of hooks.
  • Enable or disable hooks.
  • Resolve hook conflicts.
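
On the storage nodes themselves, hook scripts are stored under the glusterd working directory, organized by volume event and by whether they run before (pre) or after (post) the event. A sketch for inspecting them (paths may vary slightly between Gluster versions):

```shell
# List the hooks that run after a volume is created or started.
ls /var/lib/glusterd/hooks/1/create/post/
ls /var/lib/glusterd/hooks/1/start/post/
```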

9.3.2. Listing Hooks

Summary
List the Gluster hooks in your environment.

Procedure 9.11. Listing a Hook

  1. Use the Cluster resource tab, tree mode, or the search function to find and select a cluster in the results list.
  2. Select the Gluster Hooks sub-tab to list the hooks in the details pane.
Result
You have listed the Gluster hooks in your environment.

9.3.3. Viewing the Content of Hooks

Summary
View the content of a Gluster hook in your environment.

Procedure 9.12. Viewing the Content of a Hook

  1. Use the Cluster resource tab, tree mode, or the search function to find and select a cluster in the results list.
  2. Select the Gluster Hooks sub-tab to list the hooks in the details pane.
  3. Select a hook with content type Text and click the View Content button to open the Hook Content window.
Result
You have viewed the content of a hook in your environment.

9.3.4. Enabling or Disabling Hooks

Summary
Toggle the activity of a Gluster hook by enabling or disabling it.

Procedure 9.13. Enabling or Disabling a Hook

  1. Use the Cluster resource tab, tree mode, or the search function to find and select a cluster in the results list.
  2. Select the Gluster Hooks sub-tab to list the hooks in the details pane.
  3. Select a hook and click one of the Enable or Disable buttons. The hook is enabled or disabled on all nodes of the cluster.
Result
You have toggled the activity of a Gluster hook in your environment.

9.3.5. Refreshing Hooks

Summary
By default, the Manager checks the status of installed hooks on the engine and on all servers in the cluster and detects new hooks by running a periodic job every hour. You can refresh hooks manually by clicking the Sync button.

Procedure 9.14. Refreshing a Hook

  1. Use the Cluster resource tab, tree mode, or the search function to find and select a cluster in the results list.
  2. Select the Gluster Hooks sub-tab to list the hooks in the details pane.
  3. Click the Sync button.
Result
The hooks are synchronized and updated in the details pane.

9.3.6. Resolving Conflicts

The hooks are displayed in the Gluster Hooks sub-tab of the Cluster tab. Hooks that cause a conflict are displayed with an exclamation mark. This denotes either that there is a conflict in the content or the status of the hook across the servers in the cluster, or that the hook script is missing on one or more servers. These conflicts can be resolved via the Manager. The hooks on the servers are periodically synchronized with the engine database, and the following conflicts can occur:
  • Content Conflict - the content of the hook is different across servers.
  • Missing Conflict - one or more servers of the cluster do not have the hook.
  • Status Conflict - the status of the hook is different across servers.
  • Multiple Conflicts - a hook has a combination of two or more of the aforementioned conflicts.

9.3.7. Resolving Content Conflicts

Summary
A hook that is not consistent across the servers and engine will be flagged as having a conflict. To resolve the conflict, you must select a version of the hook to be copied across all servers and the engine.

Procedure 9.15. Resolving a Content Conflict

  1. Use the Cluster resource tab, tree mode, or the search function to find and select a cluster in the results list.
  2. Select the Gluster Hooks sub-tab to list the hooks in the details pane.
  3. Select the conflicting hook and click the Resolve Conflicts button to open the Resolve Conflicts window.
  4. Select the engine or a server from the list of sources to view the content of that hook and establish which version of the hook to copy.

    Note

    The content of the hook will be overwritten in all servers and in the engine.
  5. Use the Use content from drop-down menu to select the preferred server or the engine.
  6. Click OK to resolve the conflict and close the window.
Result
The hook from the selected server is copied across all servers and the engine to be consistent across the environment.

9.3.8. Resolving Missing Hook Conflicts

Summary
A hook that is not present on all the servers and the engine will be flagged as having a conflict. To resolve the conflict, either select a version of the hook to be copied across all servers and the engine, or remove the missing hook entirely.

Procedure 9.16. Resolving a Missing Hook Conflict

  1. Use the Cluster resource tab, tree mode, or the search function to find and select a cluster in the results list.
  2. Select the Gluster Hooks sub-tab to list the hooks in the details pane.
  3. Select the conflicting hook and click the Resolve Conflicts button to open the Resolve Conflicts window.
  4. Select any source with a status of Enabled to view the content of the hook.
  5. Select the appropriate radio button, either Copy the hook to all the servers or Remove the missing hook. The latter will remove the hook from the engine and all servers.
  6. Click OK to resolve the conflict and close the window.
Result
Depending on your chosen resolution, the hook has either been removed from the environment entirely, or has been copied across all servers and the engine to be consistent across the environment.

9.3.9. Resolving Status Conflicts

Summary
A hook that does not have a consistent status across the servers and engine will be flagged as having a conflict. To resolve the conflict, select a status to be enforced across all servers in the environment.

Procedure 9.17. Resolving a Status Conflict

  1. Use the Cluster resource tab, tree mode, or the search function to find and select a cluster in the results list.
  2. Select the Gluster Hooks sub-tab to list the hooks in the details pane.
  3. Select the conflicting hook and click the Resolve Conflicts button to open the Resolve Conflicts window.
  4. Set Hook Status to Enable or Disable.
  5. Click OK to resolve the conflict and close the window.
Result
The selected status for the hook is enforced across the engine and the servers to be consistent across the environment.

9.3.10. Resolving Multiple Conflicts

Summary
A hook may have a combination of two or more conflicts. These can all be resolved concurrently or independently through the Resolve Conflicts window. This procedure will resolve all conflicts for the hook so that it is consistent across the engine and all servers in the environment.

Procedure 9.18. Resolving Multiple Conflicts

  1. Use the Cluster resource tab, tree mode, or the search function to find and select a cluster in the results list.
  2. Select the Gluster Hooks sub-tab to list the hooks in the details pane.
  3. Select the conflicting hook and click the Resolve Conflicts button to open the Resolve Conflicts window.
  4. Choose a resolution to each of the affecting conflicts, as per the appropriate procedure.
  5. Click OK to resolve the conflicts and close the window.
Result
You have resolved all of the conflicts so that the hook is consistent across the engine and all servers.

9.3.11. Managing Gluster Sync

The Gluster Sync feature periodically fetches the latest cluster configuration from GlusterFS and synchronizes it with the engine database. This process can be performed through the Manager. When a cluster is selected, you are given the option to import hosts into or detach existing hosts from the selected cluster. You can perform Gluster Sync if there is a host in the cluster.

Note

The Manager continuously monitors whether hosts are added to or removed from the storage cluster. If the addition or removal of a host is detected, an action item is shown in the General tab for the cluster, where you can choose either to Import the host into the cluster or to Detach the host from it.

Chapter 10. Virtual Machines

10.1. Introduction to Virtual Machines

A virtual machine is a software implementation of a computer. The Red Hat Enterprise Virtualization environment enables you to create virtual desktops and virtual servers.
Virtual machines consolidate computing tasks and workloads. In traditional computing environments, workloads usually run on individually administered and upgraded servers. Virtual machines reduce the amount of hardware and administration required to run the same computing tasks and workloads.

10.2. Supported Virtual Machine Operating Systems

The operating systems that can be virtualized as guest operating systems in Red Hat Enterprise Virtualization are as follows:

Table 10.1. Operating systems that can be used as guest operating systems

Operating System Architecture SPICE option support
Red Hat Enterprise Linux 3
32-bit, 64-bit
Yes
Red Hat Enterprise Linux 4
32-bit, 64-bit
Yes
Red Hat Enterprise Linux 5
32-bit, 64-bit
Yes
Red Hat Enterprise Linux 6
32-bit, 64-bit
Yes
Red Hat Enterprise Linux 7
64-bit
Yes
SUSE Linux Enterprise Server 10 (select Other Linux for the guest type in the user interface)
32-bit, 64-bit
No
SUSE Linux Enterprise Server 11 (SPICE drivers (QXL) are not supplied by Red Hat. However, the distribution's vendor may provide SPICE drivers as part of their distribution.)
32-bit, 64-bit
No
Ubuntu 12.04 (Precise Pangolin LTS)
32-bit, 64-bit
Yes
Ubuntu 12.10 (Quantal Quetzal)
32-bit, 64-bit
Yes
Ubuntu 13.04 (Raring Ringtail)
32-bit, 64-bit
No
Ubuntu 13.10 (Saucy Salamander)
32-bit, 64-bit
Yes
Windows XP Service Pack 3 and newer
32-bit
Yes
Windows 7
32-bit, 64-bit
Yes
Windows 8
32-bit, 64-bit
Yes
Windows 8.1
32-bit, 64-bit
Yes
Windows Server 2003 Service Pack 2 and newer
32-bit, 64-bit
Yes
Windows Server 2003 R2
32-bit, 64-bit
Yes
Windows Server 2008
32-bit, 64-bit
Yes
Windows Server 2008 R2
64-bit
Yes
Windows Server 2012
64-bit
Yes
Windows Server 2012 R2
64-bit
No
Of the operating systems that can be virtualized as guest operating systems in Red Hat Enterprise Virtualization, the operating systems that are supported by Global Support Services are as follows:

Table 10.2. Guest operating systems that are supported by Global Support Services

Operating System Architecture
Red Hat Enterprise Linux 3
32-bit, 64-bit
Red Hat Enterprise Linux 4
32-bit, 64-bit
Red Hat Enterprise Linux 5
32-bit, 64-bit
Red Hat Enterprise Linux 6
32-bit, 64-bit
Red Hat Enterprise Linux 7
64-bit
SUSE Linux Enterprise Server 10 (select Other Linux for the guest type in the user interface)
32-bit, 64-bit
SUSE Linux Enterprise Server 11 (SPICE drivers (QXL) are not supplied by Red Hat. However, the distribution's vendor may provide SPICE drivers as part of their distribution.)
32-bit, 64-bit
Windows XP Service Pack 3 and newer
32-bit
Windows 7
32-bit, 64-bit
Windows 8
32-bit, 64-bit
Windows 8.1
32-bit, 64-bit
Windows Server 2003 Service Pack 2 and newer
32-bit, 64-bit
Windows Server 2003 R2
32-bit, 64-bit
Windows Server 2008
32-bit, 64-bit
Windows Server 2008 R2
64-bit
Windows Server 2012
64-bit
Windows Server 2012 R2
64-bit
Remote Desktop Protocol (RDP) is the default connection protocol for accessing Windows 8 and Windows 2012 guests from the user portal as Microsoft introduced changes to the Windows Display Driver Model that prevent SPICE from performing optimally.

Note

While Red Hat Enterprise Linux 3 and Red Hat Enterprise Linux 4 are supported, virtual machines running the 32-bit version of these operating systems cannot be shut down gracefully from the administration portal because there is no ACPI support in the 32-bit x86 kernel. To terminate virtual machines running the 32-bit version of Red Hat Enterprise Linux 3 or Red Hat Enterprise Linux 4, right-click the virtual machine and select the Power Off option.

10.3. Virtual Machine Performance Parameters

Red Hat Enterprise Virtualization virtual machines can support the following parameters:

Table 10.3. Supported virtual machine parameters

Parameter Number Note
Virtualized CPUs 160 Per virtual machine running on a Red Hat Enterprise Linux 6 host.
Virtualized CPUs 240 Per virtual machine running on a Red Hat Enterprise Linux 7 host.
Virtualized RAM 4000 GB For a 64-bit virtual machine.
Virtualized RAM 4 GB Per 32-bit virtual machine. Note that the virtual machine may not register the entire 4 GB. The amount of RAM that the virtual machine recognizes is limited by its operating system.
Virtualized storage devices 8 Per virtual machine.
Virtualized network interface controllers 8 Per virtual machine.
Virtualized PCI devices 32 Per virtual machine.

10.4. Creating Virtual Machines

10.4.1. Creating a Virtual Machine

Summary
You can create a virtual machine using a blank template and configure all of its settings.

Procedure 10.1. Creating a Virtual Machine

  1. Click the Virtual Machines tab.
  2. Click the New VM button to open the New Virtual Machine window.
    The New Virtual Machine Window

    Figure 10.1. The New Virtual Machine Window

  3. On the General tab, fill in the Name and Operating System fields. You can accept the default settings for other fields, or change them if required.
  4. Alternatively, click the Initial Run, Console, Host, Resource Allocation, Boot Options, Random Generator, and Custom Properties tabs in turn to define options for your virtual machine.
  5. Click OK to create the virtual machine and close the window.
    The New Virtual Machine - Guide Me window opens.
  6. Use the Guide Me buttons to complete configuration or click Configure Later to close the window.
Result
The new virtual machine is created and displays in the list of virtual machines with a status of Down. Before you can use this virtual machine, add at least one network interface and one virtual disk, and install an operating system.

10.4.2. Creating a Virtual Machine Based on a Template

Create virtual machines based on templates. This allows you to create virtual machines that are pre-configured with an operating system, network interfaces, applications and other resources.

Note

Virtual machines created based on a template depend on that template. This means that you cannot remove that template from the Manager if there is a virtual machine that was created based on that template. However, you can clone a virtual machine from a template to remove the dependency on that template.

Procedure 10.2. Creating a Virtual Machine Based on a Template

  1. Click the Virtual Machines tab.
  2. Click the New VM button to open the New Virtual Machine window.
  3. Select the Cluster on which the virtual machine will run.
  4. Select a template from the Based on Template list.
  5. Select a template sub version from the Template Sub Version list.
  6. Enter a Name, Description, and any Comments, and accept the default values inherited from the template in the rest of the fields. You can change them if needed.
  7. Click the Resource Allocation tab.
  8. Select the Thin radio button in the Storage Allocation area.
  9. Select the disk provisioning policy from the Allocation Policy list. This policy affects the speed of the clone operation and the amount of disk space the new virtual machine initially requires.
    • Selecting Thin Provision results in a faster clone operation and provides optimized usage of storage capacity. Disk space is allocated only as it is required. This is the default selection.
    • Selecting Preallocated results in a slower clone operation and provides optimized virtual machine read and write operations. All disk space requested in the template is allocated at the time of the clone operation.
  10. Use the Target list to select the storage domain on which the virtual machine's virtual disk will be stored.
  11. Click OK.
The virtual machine is displayed in the list in the Virtual Machines tab.

10.4.3. Creating a Cloned Virtual Machine Based on a Template

Summary
Cloned virtual machines are similar to virtual machines based on templates. However, while a cloned virtual machine inherits settings in the same way as a virtual machine based on a template, a cloned virtual machine does not depend on the template on which it was based after it has been created.

Note

If you clone a virtual machine from a template, the name of the template on which that virtual machine was based is displayed in the General tab of the Edit Virtual Machine window for that virtual machine. If you change the name of that template, the name of the template in the General tab will also be updated. However, if you delete the template from the Manager, the original name of that template will be displayed instead.

Procedure 10.3. Cloning a Virtual Machine Based on a Template

  1. Click the Virtual Machines tab.
  2. Click the New VM button to open the New Virtual Machine window.
  3. Select the Cluster on which the virtual machine will run.
  4. Select a template from the Based on Template drop-down menu.
  5. Select a template sub version from the Template Sub Version drop-down menu.
  6. Enter a Name, Description and any Comments. You can accept the default values inherited from the template in the rest of the fields, or change them if required.
  7. Click the Resource Allocation tab.
  8. Select the Clone radio button in the Storage Allocation area.
  9. Select the disk provisioning policy from the Allocation Policy drop-down menu. This policy affects the speed of the clone operation and the amount of disk space the new virtual machine initially requires.
    • Selecting Thin Provision results in a faster clone operation and provides optimized usage of storage capacity. Disk space is allocated only as it is required. This is the default selection.
    • Selecting Preallocated results in a slower clone operation and provides optimized virtual machine read and write operations. All disk space requested in the template is allocated at the time of the clone operation.
  10. Use the Target drop-down menu to select the storage domain on which the virtual machine's virtual disk will be stored.
  11. Click OK.

Note

Cloning a virtual machine may take some time. A new copy of the template's disk must be created. During this time, the virtual machine's status is first Image Locked, then Down.
Result
The virtual machine is created and displayed in the list in the Virtual Machines tab. You can now assign users to it, and can begin using it when the clone operation is complete.

10.5. Explanation of Settings and Controls in the New Virtual Machine and Edit Virtual Machine Windows

10.5.1. Virtual Machine General Settings Explained

The following table details the options available on the General tab of the New Virtual Machine and Edit Virtual Machine windows.

Table 10.4. Virtual Machine: General Settings

Field Name
Description
Cluster
The name of the host cluster to which the virtual machine is attached. Virtual machines are hosted on any physical machine in that cluster in accordance with policy rules.
Based on Template
The template on which the virtual machine can be based. This field is set to Blank by default, which allows you to create a virtual machine on which an operating system has not yet been installed.
Template Sub Version
The version of the template on which the virtual machine can be based. This field is set to the most recent version for the given template by default. If no versions other than the base template are available, this field is set to base template by default. Each version is marked by a number in brackets that indicates the relative order of the versions, with higher numbers indicating more recent versions.
Operating System
The operating system. Valid values include a range of Red Hat Enterprise Linux and Windows variants.
Instance Type
The instance type on which the virtual machine's hardware configuration can be based. This field is set to Custom by default, which means the virtual machine is not connected to an instance type. The other options available from this drop-down menu are Large, Medium, Small, Tiny, XLarge, and any custom instance types that the Administrator has created.
Other settings that have a chain link icon next to them are pre-filled by the selected instance type. If one of these values is changed, the virtual machine will be detached from the instance type and the chain icon will appear broken. However, if the changed setting is restored to its original value, the virtual machine will be reattached to the instance type and the links in the chain icon will rejoin.
Optimized for
The type of system for which the virtual machine is to be optimized. There are two options: Server and Desktop; by default, the field is set to Server. Virtual machines optimized to act as servers have no sound card, use a cloned disk image, and are not stateless. In contrast, virtual machines optimized to act as desktop machines do have a sound card, use an image (thin allocation), and are stateless.
Name
The name of the virtual machine. Names must not contain any spaces, and must contain at least one character from A-Z or 0-9. The maximum length of a virtual machine name is 255 characters.
Description
A meaningful description of the new virtual machine.
Comment
A field for adding plain text human-readable comments regarding the virtual machine.
Stateless
Select this check box to run the virtual machine in stateless mode. This mode is used primarily for desktop VMs. Running a stateless desktop or server creates a new COW layer on the VM hard disk image where new and changed data is stored. Shutting down the stateless VM deletes the new COW layer, which returns the VM to its original state. Stateless VMs are useful when creating machines that need to be used for a short time, or by temporary staff.
Start in Pause Mode
Select this check box to always start the virtual machine in pause mode. This option is suitable for VMs which require a long time to establish a SPICE connection; for example, VMs in remote locations.
Delete Protection
Select this check box to make it impossible to delete the virtual machine. It is only possible to delete the virtual machine if this check box is not selected.
At the bottom of the General tab is a drop-down box that allows you to assign network interfaces to the new virtual machine. Use the plus and minus buttons to add or remove additional network interfaces.

10.5.2. Virtual Machine System Settings Explained

The following table details the options available on the System tab of the New Virtual Machine and Edit Virtual Machine windows.

Table 10.5. Virtual Machine: System Settings

Field Name
Description
Memory Size
The amount of memory assigned to the virtual machine. When allocating memory, consider the processing and storage needs of the applications that are intended to run on the virtual machine.
Maximum guest memory is constrained by the selected guest architecture and the cluster compatibility level.
Total Virtual CPUs
The processing power allocated to the virtual machine as CPU Cores. Do not assign more cores to a virtual machine than are present on the physical host.
Cores per Virtual Socket
The number of cores assigned to each virtual socket.
Virtual Sockets
The number of CPU sockets for the virtual machine. Do not assign more sockets to a virtual machine than are present on the physical host.
Time Zone
This option sets the time zone offset of the guest hardware clock. For Windows, this should correspond to the time zone set in the guest. Most default Linux installations expect the hardware clock to be GMT+00:00.
Provide custom serial number policy
This check box allows you to specify a serial number for the virtual machine. Select one of the following:
  • Host ID: Sets the host's UUID as the virtual machine's serial number.
  • Vm ID: Sets the virtual machine's UUID as its serial number.
  • Custom serial number: Allows you to specify a custom serial number.

10.5.3. Virtual Machine Initial Run Settings Explained

The following table details the options available on the Initial Run tab of the New Virtual Machine and Edit Virtual Machine windows. The settings in this table are only visible if the Use Cloud-Init/Sysprep check box is selected, and certain options are only visible when either a Linux-based or Windows-based option has been selected in the Operating System list in the General tab, as outlined below.

Table 10.6. Virtual Machine: Initial Run Settings

Field Name
Operating System
Description
Use Cloud-Init/Sysprep
Linux, Windows
This check box toggles whether Cloud-Init or Sysprep will be used to initialize the virtual machine.
VM Hostname
Linux, Windows
The host name of the virtual machine.
Domain
Windows
The Active Directory domain to which the virtual machine belongs.
Organization Name
Windows
The name of the organization to which the virtual machine belongs. This option corresponds to the text field for setting the organization name displayed when a machine running Windows is started for the first time.
Active Directory OU
Windows
The organizational unit in the Active Directory domain to which the virtual machine belongs.
Configure Time Zone
Linux, Windows
The time zone for the virtual machine. Select this check box and select a time zone from the Time Zone list.
Admin Password
Windows
The administrative user password for the virtual machine. Click the disclosure arrow to display the settings for this option.
  • Use already configured password: This check box is automatically selected after you specify an initial administrative user password. You must clear this check box to enable the Admin Password and Verify Admin Password fields and specify a new password.
  • Admin Password: The administrative user password for the virtual machine. Enter the password in this text field and the Verify Admin Password text field to verify the password.
Authentication
Linux
The authentication details for the virtual machine. Click the disclosure arrow to display the settings for this option.
  • Use already configured password: This check box is automatically selected after you specify an initial root password. You must clear this check box to enable the Password and Verify Password fields and specify a new password.
  • Password: The root password for the virtual machine. Enter the password in this text field and the Verify Password text field to verify the password.
  • SSH Authorized Keys: SSH keys to be added to the authorized keys file of the virtual machine. You can specify multiple SSH keys by entering each SSH key on a new line.
  • Regenerate SSH Keys: Regenerates SSH keys for the virtual machine.
Custom Locale
Windows
Custom locale options for the virtual machine. Locales must be in a format such as en-US. Click the disclosure arrow to display the settings for this option.
  • Input Locale: The locale for user input.
  • UI Language: The language used for user interface elements such as buttons and menus.
  • System Locale: The locale for the overall system.
  • User Locale: The locale for users.
Networks
Linux
Network-related settings for the virtual machine. Click the disclosure arrow to display the settings for this option.
  • DNS Servers: The DNS servers to be used by the virtual machine.
  • DNS Search Domains: The DNS search domains to be used by the virtual machine.
  • Network: Configures network interfaces for the virtual machine. Select this check box and click + or - to add or remove network interfaces to or from the virtual machine. When you click +, a set of fields becomes visible in which you can specify whether to use DHCP, configure an IP address, netmask, and gateway, and specify whether the network interface starts on boot.
Custom Script
Linux
Custom scripts that will be run on the virtual machine when it starts. The scripts entered in this field are custom YAML sections that are added to those produced by the Manager, and allow you to automate tasks such as creating users and files, configuring yum repositories and running commands. For more information on the format of scripts that can be entered in this field, see the Custom Script documentation.
Sysprep
Windows
A custom Sysprep definition. The definition must be in the format of a complete unattended installation answer file. You can copy and paste the default answer files from the /usr/share/ovirt-engine/conf/sysprep/ directory on the machine on which the Red Hat Enterprise Virtualization Manager is installed, and alter the fields as required.
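Scripts entered in the Custom Script field described above are standard cloud-init cloud-config YAML. A minimal hedged sketch follows; the file path, repository URL, and command are placeholder values for illustration, not defaults produced by the Manager:

```yaml
#cloud-config
# Hypothetical example: all values below are placeholders.
write_files:
  - path: /etc/motd
    content: |
      Provisioned by Red Hat Enterprise Virtualization.
yum_repos:
  example-repo:
    name: Example repository
    baseurl: http://repo.example.com/rhel
    enabled: true
    gpgcheck: false
runcmd:
  - [ touch, /var/tmp/provisioned ]
```

As the table notes, sections such as these are added to the cloud-init data that the Manager itself generates.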

10.5.4. Virtual Machine Console Settings Explained

The following table details the options available on the Console tab of the New Virtual Machine and Edit Virtual Machine windows.

Table 10.7. Virtual Machine: Console Settings

Field Name
Description
Protocol
Defines which display protocol to use. SPICE is the recommended protocol for Linux and Windows virtual machines. Optionally, select VNC for Linux virtual machines. A VNC client is required to connect to a virtual machine using the VNC protocol.
VNC Keyboard Layout
Defines the keyboard layout for the virtual machine. This option is only available when using the VNC protocol.
USB Support
Defines whether USB devices can be used on the virtual machine. This option is only available for virtual machines using the SPICE protocol. Select one of the following:
  • Disabled - Does not allow USB redirection from the client machine to the virtual machine.
  • Legacy - Enables the SPICE USB redirection policy used in Red Hat Enterprise Virtualization 3.0. This option can only be used on Windows virtual machines, and will not be supported in future versions of Red Hat Enterprise Virtualization.
  • Native - Enables native KVM/SPICE USB redirection for Linux and Windows virtual machines. Virtual machines do not require any in-guest agents or drivers for native USB. This option can only be used if the virtual machine's cluster compatibility version is set to 3.1 or higher.
Monitors
The number of monitors for the virtual machine. This option is only available for virtual desktops using the SPICE display protocol. You can choose 1, 2 or 4. Note that multiple monitors are not supported for Windows 8 and Windows Server 2012 virtual machines.
Smartcard Enabled
Smart cards are an external hardware security feature, most commonly seen in credit cards, but also used by many businesses as authentication tokens. Smart cards can be used to protect Red Hat Enterprise Virtualization virtual machines. Tick or untick the check box to activate and deactivate Smart card authentication for individual virtual machines.
Disable strict user checking
Click the Advanced Parameters arrow and select the check box to use this option. With this option selected, the virtual machine does not need to be rebooted when a different user connects to it.
By default, strict checking is enabled so that only one user can connect to the console of a virtual machine. No other user is able to open a console to the same virtual machine until it has been rebooted. The exception is that a SuperUser can connect at any time and replace an existing connection. When a SuperUser has connected, no normal user can connect again until the virtual machine is rebooted.
Disable strict checking with caution, because you can expose the previous user's session to the new user.
Soundcard Enabled
A sound card device is not necessary for all virtual machine use cases. If it is necessary for yours, select this check box to enable a sound card.
VirtIO Console Device Enabled
The VirtIO console device is a console over VirtIO transport for communication between the host user space and guest user space. It has two parts: device emulation in QEMU that presents a virtio-pci device to the guest, and a guest driver that presents a character device interface to user space applications. Tick the check box to attach a VirtIO console device to your virtual machine.
Enable SPICE clipboard copy and paste
Defines whether a user is able to copy and paste content from an external host into the virtual machine's SPICE console. This option is only available for virtual machines using the SPICE protocol. This check box is selected by default.

10.5.5. Virtual Machine Host Settings Explained

The following table details the options available on the Host tab of the New Virtual Machine and Edit Virtual Machine windows.

Table 10.8. Virtual Machine: Host Settings

Field Name
Description
Start Running On
Defines the preferred host on which the virtual machine is to run. Select either:
  • Any Host in Cluster - The virtual machine can start and run on any available host in the cluster.
  • Specific - The virtual machine will start running on a particular host in the cluster. However, the Manager or an administrator can migrate the virtual machine to a different host in the cluster depending on the migration and high-availability settings of the virtual machine. Select the specific host from the drop-down list of available hosts.
Migration Options
Defines options to run and migrate the virtual machine. If the options here are not used, the virtual machine will run or migrate according to its cluster's policy.
  • Allow manual and automatic migration - The virtual machine can be automatically migrated from one host to another in accordance with the status of the environment, or manually by an administrator.
  • Allow manual migration only - The virtual machine can only be migrated from one host to another manually by an administrator.
  • Do not allow migration - The virtual machine cannot be migrated, either automatically or manually.
The Use custom migration downtime check box allows you to specify the maximum number of milliseconds the virtual machine can be down during live migration. Configure different maximum downtimes for each virtual machine according to its workload and SLA requirements. The VDSM default value is 0.
The Pass-Through Host CPU check box allows virtual machines to take advantage of the features of the physical CPU of the host on which they are situated. This option can only be enabled when Do not allow migration is selected.
Configure NUMA
Defines options for virtual NUMA nodes. These options are only available if the virtual machine's host has at least two NUMA nodes.
NUMA Node Count allows you to specify the number of virtual NUMA nodes to assign to the virtual machine. If the Tune Mode is Preferred, this value must be set to 1.
Tune Mode defines the method used to allocate memory. Choose one of the following from the drop-down list:
  • Strict: Memory allocation will fail if the memory cannot be allocated on the target node.
  • Preferred: Memory is allocated from a single preferred node. If sufficient memory is not available, memory can be allocated from other nodes.
  • Interleave: Memory is allocated across nodes in a round-robin algorithm.
The NUMA Pinning button opens the NUMA Topology window. This window shows the host's total CPUs, memory, and NUMA nodes, and the virtual machine's virtual NUMA nodes. Pin virtual NUMA nodes to host NUMA nodes by clicking and dragging each vNUMA from the box on the right to a NUMA node on the left.
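These tune modes correspond to the mode attribute of the libvirt numatune element in the domain XML that backs the virtual machine. The following is a sketch of such a fragment; the nodeset value is illustrative, and the exact XML the Manager generates may differ:

```xml
<domain type='kvm'>
  <!-- Illustrative fragment only: the nodeset value is a placeholder. -->
  <numatune>
    <!-- mode is one of: strict, preferred, interleave -->
    <memory mode='strict' nodeset='0-1'/>
  </numatune>
</domain>
```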

10.5.6. Virtual Machine High Availability Settings Explained

The following table details the options available on the High Availability tab of the New Virtual Machine and Edit Virtual Machine windows.

Table 10.9. Virtual Machine: High Availability Settings

Field Name
Description
Highly Available
Select this check box if the virtual machine is to be highly available. For example, during host maintenance, all virtual machines are automatically live migrated to another host. If the host crashes and enters a non-responsive state, only highly available virtual machines are restarted on another host. If the host is manually shut down by the system administrator, the virtual machine is not automatically live migrated to another host.
Note that this option is unavailable if the Migration Options setting in the Hosts tab is set to either Allow manual migration only or Do not allow migration. For a virtual machine to be highly available, it must be possible for the Manager to migrate the virtual machine to other available hosts as necessary.
Priority for Run/Migration queue
Sets the priority level for the virtual machine to be migrated or restarted on another host.
Watchdog
Allows users to attach a watchdog card to a virtual machine. A watchdog is a timer that is used to automatically detect and recover from failures. Once set, a watchdog timer continually counts down to zero while the system is in operation, and is periodically restarted by the system to prevent it from reaching zero. If the timer reaches zero, it signifies that the system has been unable to reset the timer and is therefore experiencing a failure. Corrective actions are then taken to address the failure. This functionality is especially useful for servers that demand high availability.
Watchdog Model: The model of watchdog card to assign to the virtual machine. Currently, the only supported model is i6300esb.
Watchdog Action: The action to take if the watchdog timer reaches zero. The following actions are available:
  • none - No action is taken. However, the watchdog event is recorded in the audit log.
  • reset - The virtual machine is reset and the Manager is notified of the reset action.
  • poweroff - The virtual machine is immediately shut down.
  • dump - A dump is performed and the virtual machine is paused.
  • pause - The virtual machine is paused, and can be resumed by users.
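The countdown-and-reset behavior described above can be sketched in a few lines of Python. This is a hypothetical illustration of the pattern, not the i6300esb device or the Manager's implementation:

```python
import threading
import time

class Watchdog:
    """Illustration of the countdown-and-reset pattern: the timer counts
    down, a healthy system keeps resetting it, and the corrective action
    runs only if the countdown is ever allowed to reach zero."""

    def __init__(self, timeout, on_expire):
        self.timeout = timeout          # seconds until the corrective action
        self.on_expire = on_expire      # corrective action (reset, poweroff, ...)
        self._timer = None

    def start(self):
        self._timer = threading.Timer(self.timeout, self.on_expire)
        self._timer.daemon = True
        self._timer.start()

    def kick(self):
        # A healthy system periodically resets the countdown
        # before it reaches zero.
        self._timer.cancel()
        self.start()

actions = []
dog = Watchdog(0.5, lambda: actions.append("reset"))
dog.start()
for _ in range(3):                      # system is healthy: keep kicking
    time.sleep(0.1)
    dog.kick()
assert actions == []                    # never expired while being reset
time.sleep(1.0)                         # system "hangs": no more kicks
assert actions == ["reset"]             # corrective action fired once
```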

10.5.7. Virtual Machine Resource Allocation Settings Explained

The following table details the options available on the Resource Allocation tab of the New Virtual Machine and Edit Virtual Machine windows.

Table 10.10. Virtual Machine: Resource Allocation Settings

Field Name
Sub-element
Description
CPU Allocation
CPU Profile
The CPU profile assigned to the virtual machine. CPU profiles define the maximum amount of processing capability a virtual machine can access on the host on which it runs, expressed as a percentage of the total processing capability available to that host. CPU profiles are defined on the cluster level based on quality of service entries created for data centers.
CPU Shares
Allows users to set the level of CPU resources a virtual machine can demand relative to other virtual machines.
  • Low - 512
  • Medium - 1024
  • High - 2048
  • Custom - A custom level of CPU shares defined by the user.
 
CPU Pinning topology
Enables the virtual machine's virtual CPU (vCPU) to run on a specific physical CPU (pCPU) in a specific host. This option is not supported if the virtual machine's cluster compatibility version is set to 3.0. The syntax of CPU pinning is v#p[_v#p], for example:
  • 0#0 - Pins vCPU 0 to pCPU 0.
  • 0#0_1#3 - Pins vCPU 0 to pCPU 0, and pins vCPU 1 to pCPU 3.
  • 1#1-4,^2 - Pins vCPU 1 to one of the pCPUs in the range of 1 to 4, excluding pCPU 2.
In order to pin a virtual machine to a host, you must also select the following on the Host tab:
  • Start Running On: Specific
  • Migration Options: Do not allow migration
  • Pass-Through Host CPU
Memory Allocation
The amount of physical memory guaranteed for this virtual machine.
Storage Allocation
The Template Provisioning option is only available when the virtual machine is created from a template.
Thin
Provides optimized usage of storage capacity. Disk space is allocated only as it is required.
Clone
Optimized for the speed of guest read and write operations. All disk space requested in the template is allocated at the time of the clone operation.
VirtIO-SCSI Enabled
Allows users to enable or disable the use of VirtIO-SCSI on the virtual machine.
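The CPU pinning syntax shown in the CPU Pinning topology row above can be illustrated with a small parser. This is a hypothetical helper for working through the syntax, not part of the Manager:

```python
def parse_cpu_pinning(spec):
    """Expand a pinning string of the form v#p[_v#p] into a dict
    mapping each vCPU to the set of pCPUs it may run on.

    pCPU expressions may combine single IDs, ranges such as 1-4,
    and exclusions such as ^2, separated by commas.
    """
    pinning = {}
    for pair in spec.split("_"):
        vcpu, pexpr = pair.split("#")
        allowed, excluded = set(), set()
        for token in pexpr.split(","):
            # Tokens prefixed with ^ remove pCPUs from the allowed set.
            target = excluded if token.startswith("^") else allowed
            token = token.lstrip("^")
            if "-" in token:
                low, high = token.split("-")
                target.update(range(int(low), int(high) + 1))
            else:
                target.add(int(token))
        pinning[int(vcpu)] = allowed - excluded
    return pinning

# The examples from the table above:
assert parse_cpu_pinning("0#0") == {0: {0}}
assert parse_cpu_pinning("0#0_1#3") == {0: {0}, 1: {3}}
assert parse_cpu_pinning("1#1-4,^2") == {1: {1, 3, 4}}
```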

10.5.8. Virtual Machine Boot Options Settings Explained

The following table details the options available on the Boot Options tab of the New Virtual Machine and Edit Virtual Machine windows.

Table 10.11. Virtual Machine: Boot Options Settings

Field Name
Description
First Device
Select the first device that the virtual machine attempts to use when it boots:
  • Hard Disk
  • CD-ROM
  • Network (PXE)
Second Device
Select the second device for the virtual machine to use to boot if the first device is not available. The first device selected in the previous option does not appear in the options.
Attach CD
If you have selected CD-ROM as a boot device, tick this check box and select a CD-ROM image from the drop-down menu. The images must be available in the ISO domain.

10.5.9. Virtual Machine Random Generator Settings Explained

The following table details the options available on the Random Generator tab of the New Virtual Machine and Edit Virtual Machine windows.

Table 10.12. Virtual Machine: Random Generator

Field Name
Description
Random Generator enabled
Selecting this check box enables a paravirtualized Random Number Generator PCI device (virtio-rng). This device allows entropy to be passed from the host to the virtual machine, providing a higher-quality source of random numbers in the guest. Note that this check box can only be selected if the RNG device exists on the host and is enabled in the host's cluster.
Period duration (ms)
Specifies the duration of a period in milliseconds. If omitted, the libvirt default of 1000 milliseconds (1 second) is used. If this field is filled, Bytes per period must be filled also.
Bytes per period
Specifies how many bytes are permitted to be consumed per period.
Device source
The source of the random number generator. This is automatically selected depending on the source supported by the host's cluster.
  • /dev/random source - The Linux-provided random number generator.
  • /dev/hwrng source - An external hardware generator.

Important

This feature is only supported with a host running Red Hat Enterprise Linux 6.6 and later or Red Hat Enterprise Linux 7.0 and later.
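On the host, these settings correspond to a libvirt virtio RNG device definition in the domain XML. The following is a sketch of such a fragment; the period and bytes values are illustrative:

```xml
<rng model='virtio'>
  <!-- Illustrative values: 1024 bytes permitted per 1000 ms period -->
  <rate period='1000' bytes='1024'/>
  <backend model='random'>/dev/random</backend>
</rng>
```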

10.5.10. Virtual Machine Custom Properties Settings Explained

The following table details the options available on the Custom Properties tab of the New Virtual Machine and Edit Virtual Machine windows.

Table 10.13. Virtual Machine: Custom Properties Settings

Field Name
Description
Recommendations and Limitations
sap_agent
Enables SAP monitoring on the virtual machine. Set to true or false.
-
sndbuf
Enter the size of the buffer for sending the virtual machine's outgoing data over the socket. Default value is 0.
-
vhost
Disables vhost-net, which is the kernel-based virtio network driver on virtual network interface cards attached to the virtual machine. To disable vhost, the format for this property is:
LogicalNetworkName: false
This will explicitly start the virtual machine without the vhost-net setting on the virtual NIC attached to LogicalNetworkName.
vhost-net provides better performance than virtio-net, and if it is present, it is enabled on all virtual machine NICs by default. Disabling this property makes it easier to isolate and diagnose performance issues, or to debug vhost-net errors; for example, if migration fails for virtual machines on which vhost does not exist.
viodiskcache
Caching mode for the virtio disk. writethrough writes data to the cache and the disk in parallel, writeback does not copy modifications from the cache to the disk, and none disables caching.
For Red Hat Enterprise Virtualization 3.1, if viodiskcache is enabled, the virtual machine cannot be live migrated.

Warning

Increasing the value of the sndbuf custom property results in increased occurrences of communication failure between hosts and unresponsive virtual machines.

10.6. Configuring Virtual Machines

10.6.1. Completing the Configuration of a Virtual Machine by Defining Network Interfaces and Hard Disks

Summary
Before you can use your newly created virtual machine, the Guide Me window prompts you to configure at least one network interface and one virtual disk for the virtual machine.

Procedure 10.4. Completing the Configuration of a Virtual Machine by Defining Network Interfaces and Hard Disks

  1. On the New Virtual Machine - Guide Me window, click the Configure Network Interfaces button to open the New Network Interface window. You can accept the default values or change them as necessary.
    New Network Interface window

    Figure 10.2. New Network Interface window

    Enter the Name of the network interface.
  2. Use the drop-down menus to select the Network and the Type of network interface for the new virtual machine. The Link State is set to Up by default when the NIC is defined on the virtual machine and connected to the network.

    Note

    The options in the Network and Type fields are populated by the networks available to the cluster, and the NICs available to the virtual machine.
  3. If applicable, select the Specify custom MAC address check box and enter the network interface's MAC address.
  4. Click the arrow next to Advanced Parameters to configure the Port Mirroring and Card Status fields, if necessary.
  5. Click OK to close the New Network Interface window and open the New Virtual Machine - Guide Me window.
  6. Click the Configure Virtual Disk button to open the Add Virtual Disk window.
  7. Add either an Internal virtual disk or an External LUN to the virtual machine.
    Add Virtual Disk Window

    Figure 10.3. Add Virtual Disk Window

  8. Click OK to close the Add Virtual Disk window. The New Virtual Machine - Guide Me window reopens with the changed context. There is no further mandatory configuration.
  9. Click Configure Later to close the window.
Result
You have added a network interface and a virtual disk to your virtual machine.

10.6.2. Installing Windows on VirtIO-Optimized Hardware

Summary
The virtio-win.vfd diskette image contains Windows drivers for VirtIO-optimized disk and network devices. These drivers provide a performance improvement over emulated device drivers.
The virtio-win.vfd image is placed automatically on ISO storage domains that are hosted on the Manager server. It must be uploaded manually to other ISO storage domains using the engine-iso-uploader tool.
You can install the VirtIO-optimized device drivers during your Windows installation by attaching a diskette to your virtual machine.
This procedure presumes that you added a Red Hat VirtIO network interface and a disk that uses the VirtIO interface to your virtual machine.

Procedure 10.5. Installing VirtIO Drivers during Windows Installation

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click the Run Once button to open the Run Once window.
  3. Click Boot Options to expand the Boot Options configuration options.
  4. Select the Attach Floppy check box, and select virtio-win.vfd from the drop-down list.
  5. Select the Attach CD check box, and select the ISO containing the version of Windows you want to install from the drop-down list.
  6. Move CD-ROM up in the Boot Sequence field.
  7. Configure the rest of your Run Once options as required, click OK to start your virtual machine, and then click the Console button to open a graphical console to the virtual machine.
Result
Windows installations include an option to load additional drivers early in the installation process. Use this option to load drivers from the virtio-win.vfd diskette that was attached to your virtual machine as A:.
For each supported virtual machine architecture and Windows version, there is a folder on the disk containing optimized hardware device drivers.

10.6.3. Virtual Machine Run Once Settings Explained

The Run Once window defines one-off boot options for a virtual machine. For persistent boot options, use the Boot Options tab in the New Virtual Machine window.

Boot Options

Defines the virtual machine's boot sequence, running options, and source images for installing the operating system and required drivers.
Attach Floppy
Attaches a diskette image to the virtual machine. Use this option to install Windows drivers. The diskette image must reside in the ISO domain.
Attach CD
Attaches an ISO image to the virtual machine. Use this option to install the virtual machine's operating system and applications. The CD image must reside in the ISO domain.
Boot Sequence
Determines the order in which the boot devices are used to boot the virtual machine. Select either Hard Disk, CD-ROM or Network, and use Up and Down to move the option up or down in the list.
Run Stateless
Deletes all changes to the virtual machine upon shutdown. This option is only available if a virtual disk is attached to the virtual machine.
Start in Pause Mode
Starts and then pauses the virtual machine, enabling you to connect to the console; this is suitable for virtual machines in remote locations.

Linux Boot Options

The following options boot a Linux kernel directly instead of through the BIOS bootloader.
kernel path
A fully qualified path to a kernel image to boot the virtual machine. The kernel image must be stored on either the ISO domain (path name in the format of iso://path-to-image) or on the host's local storage domain (path name in the format of /data/images).
initrd path
A fully qualified path to a ramdisk image to be used with the previously specified kernel. The ramdisk image must be stored on the ISO domain (path name in the format of iso://path-to-image) or on the host's local storage domain (path name in the format of /data/images).
kernel parameters
Kernel command line parameter strings to be used with the defined kernel on boot.

Initial Run

Specifies whether to use Cloud-Init or Sysprep to initialize the virtual machine. For Linux-based virtual machines, you must select the Use Cloud-Init check box in the Initial Run tab to view the available options. For Windows-based virtual machines, you must attach the [sysprep] floppy by selecting the Attach Floppy check box in the Boot Options tab and selecting the floppy from the list. Certain options are only available on Linux-based or Windows-based virtual machines, as noted below.
VM Hostname
The host name of the virtual machine.
Domain
The Active Directory domain to which the virtual machine belongs. (Windows)
Organization Name
The name of the organization to which the virtual machine belongs. This option corresponds to the text field for setting the organization name displayed when a machine running Windows is started for the first time. (Windows)
Active Directory OU
The organizational unit in the Active Directory domain to which the virtual machine belongs. (Windows)
Configure Time Zone
The time zone for the virtual machine. Select this check box and select a time zone from the Time Zone list.
Alternate Credentials
Selecting this check box allows you to set a User Name and Password as alternative credentials. (Windows)

Admin Password

The administrative user password for the virtual machine. Click the disclosure arrow to display the settings for this option. (Windows)
Use already configured password
This check box is automatically selected after you specify an initial administrative user password. You must clear this check box to enable the Admin Password and Verify Admin Password fields and specify a new password.
Admin Password
The administrative user password for the virtual machine. Enter the password in this text field and the Verify Admin Password text field to verify the password.

Authentication

The authentication details for the virtual machine. Click the disclosure arrow to display the settings for this option. (Linux)
User Name
Creates a new user account on the virtual machine. If this field is not filled in, the default user is root.
Use already configured password
This check box is automatically selected after you specify an initial root password. You must clear this check box to enable the Password and Verify Password fields and specify a new password.
Password
The root password for the virtual machine. Enter the password in this text field and the Verify Password text field to verify the password.
SSH Authorized Keys
SSH keys to be added to the authorized keys file of the virtual machine.
Regenerate SSH Keys
Regenerates SSH keys for the virtual machine.

Custom Locale

Custom locale options for the virtual machine. Locales must be in a format such as en-US. Click the disclosure arrow to display the settings for this option. (Windows)
Input Locale
The locale for user input.
UI Language
The language used for user interface elements such as buttons and menus.
System Locale
The locale for the overall system.
User Locale
The locale for users.

Networks

Network-related settings for the virtual machine. Click the disclosure arrow to display the settings for this option. (Linux)
DNS Servers
The DNS servers to be used by the virtual machine.
DNS Search Domains
The DNS search domains to be used by the virtual machine.
Network
Configures network interfaces for the virtual machine. Select this check box and click + or - to add or remove network interfaces to or from the virtual machine. When you click +, a set of fields becomes visible in which you can specify whether to use DHCP, configure an IP address, netmask, and gateway, and specify whether the network interface starts on boot.

Custom Script

Custom scripts that will be run on the virtual machine when it starts.
Custom Scripts
The scripts entered in this field are custom YAML sections that are added to those produced by the Manager, and allow you to automate tasks such as creating users and files, configuring yum repositories and running commands. For more information on the format of scripts that can be entered in this field, see the Custom Script documentation. (Linux)

Sysprep

A custom Sysprep definition.
Sysprep
The definition must be in the format of a complete unattended installation answer file. You can copy and paste the default answer files from the /usr/share/ovirt-engine/conf/sysprep/ directory on the machine on which the Red Hat Enterprise Virtualization Manager is installed, and alter the fields as required. (Windows)

Host

Defines the virtual machine's host.
Any host in cluster
Allocates the virtual machine to any available host.
Specific
Specifies a user-defined host for the virtual machine.

Display Protocol

Defines the protocol to connect to virtual machines.
VNC
Can be used for Linux virtual machines. Requires a VNC client to connect to a virtual machine using VNC. Optionally, specify VNC Keyboard Layout from the drop-down list.
SPICE
Recommended protocol for Linux and Windows virtual machines. Using SPICE protocol without QXL drivers is a Tech Preview feature for Windows 8 and Server 2012 virtual machines (BZ#1217494).

Custom Properties

Additional VDSM options for running virtual machines.
sap_agent
Enables SAP monitoring on the virtual machine. Set to true or false.
sndbuf
Enter the size of the buffer for sending the virtual machine's outgoing data over the socket.
vhost
Disables vhost-net, the kernel-based virtio network driver, on virtual network interface cards attached to the virtual machine. To disable vhost, specify the property in the format LogicalNetworkName: false.
viodiskcache
Caching mode for the virtio disk. writethrough writes data to the cache and the disk in parallel, writeback does not copy modifications from the cache to the disk, and none disables caching.

10.6.4. Configuring a Watchdog

10.6.4.1. Adding a Watchdog Card to a Virtual Machine

Summary
Add a watchdog card to a virtual machine.

Procedure 10.6. Adding a Watchdog Card to a Virtual Machine

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click Edit to open the Edit Virtual Machine window.
  3. Click Show Advanced Options to display all tabs and click the High Availability tab.
  4. Select the watchdog model to use from the Watchdog Model drop-down menu.
  5. Select an action from the Watchdog Action drop-down menu. This is the action that the virtual machine takes when the watchdog is triggered.
  6. Click OK.
Result
You have added a watchdog card to the virtual machine.

10.6.4.2. Installing a Watchdog

Summary
To activate a watchdog card attached to a virtual machine, you must install the watchdog package on that virtual machine and start the watchdog service.

Procedure 10.7. Installing a Watchdog

  1. Log on to the virtual machine to which the watchdog card is attached.
  2. Run the following command to install the watchdog package and dependencies:
    # yum install watchdog
  3. Edit the /etc/watchdog.conf file and uncomment the following line:
    watchdog-device = /dev/watchdog
  4. Save the changes.
  5. Run the following commands to start the watchdog service and ensure this service starts on boot:
    # service watchdog start
    # chkconfig watchdog on
Result
You have installed and started the watchdog service on a virtual machine.
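Step 3 above can also be scripted. The sketch below performs the uncomment edit against a temporary copy so it can be tried safely; on a real guest, point the same sed command at /etc/watchdog.conf as root:

```shell
# Sketch: uncomment the watchdog-device line. Shown here against a
# temporary copy; on a real guest, run the sed command on /etc/watchdog.conf.
conf=$(mktemp)
printf '#watchdog-device = /dev/watchdog\n' > "$conf"
sed -i 's|^#watchdog-device|watchdog-device|' "$conf"
active=$(grep -c '^watchdog-device' "$conf")
echo "active watchdog-device lines: $active"   # prints 1
```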

10.6.4.3. Confirming Watchdog Functionality

Summary
Confirm that a watchdog card has been attached to a virtual machine and that the watchdog service is active.

Warning

This procedure is provided for testing the functionality of watchdogs only and must not be run on production machines.

Procedure 10.8. Confirming Watchdog Functionality

  1. Log on to the virtual machine to which the watchdog card is attached.
  2. Run the following command to confirm that the watchdog card has been identified by the virtual machine:
    # lspci | grep -i watchdog
  3. Run one of the following commands to confirm that the watchdog is active:
    • Run the following command to trigger a kernel panic:
      # echo c > /proc/sysrq-trigger
    • Run the following command to terminate the watchdog service:
      # kill -9 `pgrep watchdog`
Result
The watchdog timer can no longer be reset, so the watchdog counter reaches zero after a short period of time. When the watchdog counter reaches zero, the action specified in the Watchdog Action drop-down menu for that virtual machine is performed.

10.6.4.4. Parameters for Watchdogs in watchdog.conf

The following is a list of options for configuring the watchdog service that are available in the /etc/watchdog.conf file. To configure an option, ensure that the option is uncommented, and restart the watchdog service after saving the changes.

Note

For a more detailed explanation of options for configuring the watchdog service and using the watchdog command, see the watchdog man page.

Table 10.14. watchdog.conf variables

Variable name Default Value Remarks
ping N/A An IP address that the watchdog attempts to ping to verify whether that address is reachable. You can specify multiple IP addresses by adding additional ping lines.
interface N/A A network interface that the watchdog will monitor to verify the presence of network traffic. You can specify multiple network interfaces by adding additional interface lines.
file /var/log/messages A file on the local system that the watchdog will monitor for changes. You can specify multiple files by adding additional file lines.
change 1407 The number of watchdog intervals after which the watchdog checks for changes to files. A change line must be specified on the line directly after each file line, and applies to the file line directly above that change line.
max-load-1 24 The maximum average load that the virtual machine can sustain over a one-minute period. If this average is exceeded, then the watchdog is triggered. A value of 0 disables this feature.
max-load-5 18 The maximum average load that the virtual machine can sustain over a five-minute period. If this average is exceeded, then the watchdog is triggered. A value of 0 disables this feature. By default, the value of this variable is set to a value approximately three quarters that of max-load-1.
max-load-15 12 The maximum average load that the virtual machine can sustain over a fifteen-minute period. If this average is exceeded, then the watchdog is triggered. A value of 0 disables this feature. By default, the value of this variable is set to a value approximately one half that of max-load-1.
min-memory 1 The minimum amount of virtual memory that must remain free on the virtual machine. This value is measured in pages. A value of 0 disables this feature.
repair-binary /usr/sbin/repair The path and file name of a binary file on the local system that will be run when the watchdog is triggered. If the specified file resolves the issues preventing the watchdog from resetting the watchdog counter, then the watchdog action is not triggered.
test-binary N/A The path and file name of a binary file on the local system that the watchdog will attempt to run during each interval. A test binary allows you to specify a file for running user-defined tests.
test-timeout N/A The time limit, in seconds, for which user-defined tests can run. A value of 0 allows user-defined tests to continue for an unlimited duration.
temperature-device N/A The path to and name of a device for checking the temperature of the machine on which the watchdog service is running.
max-temperature 120 The maximum allowed temperature for the machine on which the watchdog service is running. The machine will be halted if this temperature is reached. Unit conversion is not taken into account, so you must specify a value that matches the watchdog card being used.
admin root The email address to which email notifications are sent.
interval 10 The interval, in seconds, between updates to the watchdog device. The watchdog device expects an update at least once every minute, and if there are no updates over a one-minute period, then the watchdog is triggered. This one-minute period is hard-coded into the drivers for the watchdog device, and cannot be configured.
logtick 1 When verbose logging is enabled for the watchdog service, the watchdog service periodically writes log messages to the local system. The logtick value represents the number of watchdog intervals after which a message is written.
realtime yes Specifies whether the watchdog is locked in memory. A value of yes locks the watchdog in memory so that it is not swapped out of memory, while a value of no allows the watchdog to be swapped out of memory. If the watchdog is swapped out of memory and is not swapped back in before the watchdog counter reaches zero, then the watchdog is triggered.
priority 1 The schedule priority when the value of realtime is set to yes.
pidfile /var/run/syslogd.pid The path and file name of a PID file that the watchdog monitors to see if the corresponding process is still active. If the corresponding process is not active, then the watchdog is triggered.
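Combining several of the variables above, a minimal /etc/watchdog.conf might look like the following. The values shown are illustrative, not recommendations:

```
watchdog-device = /dev/watchdog
interval = 10
realtime = yes
priority = 1
max-load-1 = 24
min-memory = 1
admin = root
```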

10.6.5. Configuring Virtual NUMA

You can configure virtual NUMA nodes on a virtual machine and pin them to physical NUMA nodes on a host. The host's default policy is to schedule and run virtual machines on any available resources on the host. As a result, the resources backing a large virtual machine that cannot fit within a single host socket could be spread out across multiple NUMA nodes, and over time may be moved around, leading to poor and unpredictable performance. Configure and pin virtual NUMA nodes to avoid this outcome and improve performance.
Configuring virtual NUMA requires a NUMA-enabled host. To confirm whether NUMA is enabled on a host, log in to the host and run numactl --hardware. The output of this command should show at least two NUMA nodes. You can also view the host's NUMA topology by selecting the host from the Hosts tab and clicking NUMA Support. This button is only available when the selected host has at least two NUMA nodes.
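The node count can be pulled out of the numactl --hardware output programmatically. The following is a sketch; the sample output inlined below is illustrative, and on a real host the command itself would be piped through awk instead:

```shell
# Illustrative sample of `numactl --hardware` output; on a real host,
# pipe the command's output in place of this here-string.
sample='available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3
node 1 cpus: 4 5 6 7'
nodes=$(printf '%s\n' "$sample" | awk '/^available:/ {print $2}')
if [ "$nodes" -ge 2 ]; then
    echo "NUMA-enabled host: $nodes nodes"
else
    echo "not NUMA-enabled"
fi
```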

Procedure 10.9. Configuring Virtual NUMA

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click Edit.
  3. Click the Host tab.
  4. Select the Specific radio button and select a host from the list. The selected host must have at least two NUMA nodes.
  5. Select Do not allow migration from the Migration Options drop-down list.
  6. Enter a number into the NUMA Node Count field to assign virtual NUMA nodes to the virtual machine.
  7. Select Strict, Preferred, or Interleave from the Tune Mode drop-down list. If the selected mode is Preferred, the NUMA Node Count must be set to 1.
  8. Click NUMA Pinning.
    The NUMA Topology Window

    Figure 10.4. The NUMA Topology Window

  9. In the NUMA Topology window, click and drag virtual NUMA nodes from the box on the right to host NUMA nodes on the left as required, and click OK.
  10. Click OK.

Note

Automatic NUMA balancing is available in Red Hat Enterprise Linux 7, but is not currently configurable through the Red Hat Enterprise Virtualization Manager.

10.7. Editing Virtual Machines

10.7.1. Editing Virtual Machine Properties

Changes to storage, operating system, or networking parameters can adversely affect the virtual machine. Ensure that you have the correct details before attempting to make any changes. Virtual machines can be edited while running, and some changes (listed in the procedure below) are applied immediately; to apply all other changes, the virtual machine must be restarted or shut down.

Procedure 10.10. Editing a Virtual Machine

  1. Select the virtual machine to be edited.
  2. Click the Edit button to open the Edit Virtual Machine window.
  3. Change the General, System, Initial Run, Console, Host, High Availability, Resource Allocation, Boot Options, Random Generator, Custom Properties, and Icon fields as required.
    Changes to the following fields are applied immediately:
    • Name
    • Description
    • Comment
    • Optimized for (Desktop/Server)
    • Delete Protection
    • Network Interfaces
    • Use custom migration downtime
    • Highly Available
    • Priority for Run/Migration queue
    • Disable strict user checking
    • Virtual Sockets (On supported guest operating systems only. For more information on hot plugging CPUs, see https://access.redhat.com/articles/1339413)
    To apply changes to all other settings, the virtual machine must be restarted or shut down.
  4. If the Next Restart Configuration pop-up window appears, click OK.
  5. Click OK.
Changes from the list in step 3 are applied immediately. All other changes are applied when you restart your virtual machine. Until then, an orange icon appears as a reminder of the pending changes.

10.7.2. Network Interfaces

10.7.2.1. Adding and Editing Virtual Machine Network Interfaces

Add network interfaces to virtual machines. Doing so allows you to put your virtual machine on multiple logical networks. You can also edit a virtual machine's network interface card to change the details of that network interface card. This procedure can be performed on virtual machines that are running, but some actions can be performed only on virtual machines that are not running.

Procedure 10.11. Adding Network Interfaces to Virtual Machines

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Select the Network Interfaces tab in the details pane to display a list of network interfaces that are currently associated with the virtual machine.
  3. Click New to open the New Network Interface window.
    New Network Interface window

    Figure 10.5. New Network Interface window

  4. Enter the Name of the network interface.
  5. Use the drop-down lists to select the Profile and the Type of network interface for the new network interface. The Link State is set to Up by default when the network interface card is defined on the virtual machine and connected to the network.

    Note

    The Profile and Type fields are populated in accordance with the profiles and network types available to the cluster and the network interface cards available to the virtual machine.
  6. Select the Custom MAC address check box and enter a MAC address for the network interface card as required.
  7. Click OK.
The new network interface is listed in the Network Interfaces tab in the details pane of the virtual machine.

10.7.2.2. Editing a Network Interface

Summary
This procedure describes editing a network interface. To change any network settings, you must edit the network interface.

Procedure 10.12. Editing a Network Interface

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click the Network Interfaces tab of the details pane and select the network interface to edit.
  3. Click Edit to open the Edit Network Interface window. This dialog contains the same fields as the New Network Interface dialog.
  4. Click OK to save your changes once you are finished.
Result
You have updated the settings of the network interface.

10.7.2.3. Removing a Network Interface

Summary
This procedure describes how to remove a network interface.

Procedure 10.13. Removing a Network Interface

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click the Network Interfaces tab of the details pane and select the network interface to remove.
  3. Click Remove and click OK when prompted.
Result
You have removed a network interface from a virtual machine.

10.7.2.4. Explanation of Settings in the Virtual Machine Network Interface Window

These settings apply when you are adding or editing a virtual machine network interface. If you have more than one network interface attached to a virtual machine, you can put the virtual machine on more than one logical network.

Table 10.15. Add a network interface to a virtual machine entries

Field Name
Description
Name
The name of the network interface. This text field has a 21-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
Network
Logical network that the network interface is placed on. By default, all network interfaces are put on the rhevm management network.
Link State
Whether or not the network interface is connected to the logical network.
  • Up: The network interface is located on its slot.
    • When the Card Status is Plugged, it means the network interface is connected to a network cable, and is active.
    • When the Card Status is Unplugged, the network interface will automatically be connected to the network and become active once plugged.
  • Down: The network interface is located on its slot, but it is not connected to any network. Virtual machines will not be able to run in this state.
Type
The virtual interface the network interface presents to virtual machines. VirtIO is faster but requires VirtIO drivers. Red Hat Enterprise Linux 5 and higher includes VirtIO drivers. Windows does not include VirtIO drivers, but they can be installed from the guest tools ISO or virtual floppy disk. rtl8139 and e1000 device drivers are included in most operating systems.
Specify custom MAC address
Choose this option to set a custom MAC address. The Red Hat Enterprise Virtualization Manager automatically generates a MAC address that is unique to the environment to identify the network interface. Having two devices with the same MAC address online in the same network causes networking conflicts.
Port Mirroring
A security feature that allows all network traffic going to or leaving from virtual machines on a given logical network and host to be copied (mirrored) to the network interface. If the host also uses the network, then traffic going to or leaving from the host is also copied.
Port mirroring only works on network interfaces with IPv4 addresses.
Card Status
Whether or not the network interface is defined on the virtual machine.
  • Plugged: The network interface has been defined on the virtual machine.
    • If its Link State is Up, it means the network interface is connected to a network cable, and is active.
    • If its Link State is Down, the network interface is not connected to a network cable.
  • Unplugged: The network interface is only defined on the Manager, and is not associated with a virtual machine.
    • If its Link State is Up, when the network interface is plugged it will automatically be connected to a network and become active.
    • If its Link State is Down, the network interface is not connected to any network until it is defined on a virtual machine.

10.7.2.5. Hot Plugging Network Interfaces

Summary
You can hot plug network interfaces. Hot plugging means enabling and disabling network interfaces while a virtual machine is running.

Procedure 10.14. Hot plugging network interfaces

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Select the Network Interfaces tab from the details pane of the virtual machine.
  3. Select the network interface you would like to hot plug and click Edit to open the Edit Network Interface window.
  4. Click the Advanced Parameters arrow to access the Card Status option. Set the Card Status to Plugged to enable the network interface, or set it to Unplugged to disable the network interface.
Result
You have enabled or disabled a virtual network interface.

10.7.2.6. Removing Network Interfaces From Virtual Machines

Summary
You can remove network interfaces from virtual machines.

Procedure 10.15. Removing Network Interfaces From Virtual Machines

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Select the Network Interfaces tab in the virtual machine details pane.
  3. Select the network interface to remove.
  4. Click the Remove button and click OK when prompted.
Result
The network interface is no longer attached to the virtual machine.

10.7.3. Virtual Disks

10.7.3.1. Adding and Editing Virtual Machine Disks

Summary
It is possible to add disks to virtual machines. You can add new disks or attach previously created floating disks to a virtual machine. This allows you to provide additional space to virtual machines and to share disks between them. You can also edit disks to change some of their details.
An Internal disk is the default type of disk. You can also add an External (Direct Lun) disk. Internal disk creation is managed entirely by the Manager; external disks require externally prepared targets that already exist. Existing disks are either floating disks or shareable disks attached to virtual machines.

Procedure 10.16. Adding Disks to Virtual Machines

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click the Disks tab in the details pane to display a list of virtual disks currently associated with the virtual machine.
  3. Click Add to open the Add Virtual Disk window.
    Add Virtual Disk Window

    Figure 10.6. Add Virtual Disk Window

  4. Use the appropriate radio buttons to switch between an Internal disk and an External (Direct Lun) disk.
  5. Select the Attach Disk check box to choose an existing disk from the list and select the Activate check box.
    Alternatively, enter the Size, Alias, and Description of a new disk and use the drop-down menus and check boxes to configure the disk.
  6. Click OK to add the disk and close the window.
Result
Your new disk is listed in the Virtual Disks tab in the details pane of the virtual machine.

10.7.3.2. Hot Plugging Virtual Machine Disks

Summary
You can hot plug virtual machine disks. Hot plugging means enabling or disabling devices while a virtual machine is running.

Procedure 10.17. Hot Plugging Virtual Machine Disks

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Select the Disks tab from the details pane of the virtual machine.
  3. Select the virtual machine disk you would like to hot plug.
  4. Click the Activate button, or click the Deactivate button and click OK when prompted.
Result
You have enabled or disabled a virtual machine disk.

10.7.3.3. Removing Virtual Disks From Virtual Machines

Summary
You can remove virtual disks from virtual machines.

Procedure 10.18. Removing Virtual Disks From Virtual Machines

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Select the Disks tab in the virtual machine details pane.
  3. Select the virtual disk to remove.
  4. Click the Deactivate button and click OK when prompted.
  5. Click the Remove button and click OK when prompted. Optionally, select the Remove Permanently option to completely remove the virtual disk from the environment. If you do not select this option, for example because the disk is a shared disk, the virtual disk remains in the Disks resource tab.
Result
The disk is no longer attached to the virtual machine.

10.7.4. Extending the Available Size of a Virtual Disk

This procedure explains how to extend the available size of a virtual disk while the virtual disk is attached to a virtual machine. Resizing a virtual disk does not resize the underlying partitions or file systems on that virtual disk. Use the fdisk utility to resize the partitions and file systems as required. See How to Resize a Partition using fdisk for more information.

Procedure 10.19. Extending the Available Size of a Virtual Disk

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Select the Disks tab in the details pane.
  3. Select a target disk from the list in the details pane.
  4. Click Edit in the details pane.
  5. Enter a value in the Extend size by (GB) field.
  6. Click OK.
The target disk's status becomes locked for a short time, during which the drive is resized. When the resizing of the drive is complete, the status of the drive becomes OK.
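The guest-side follow-up is manual. On a Linux guest, a typical sequence is sketched below; it is shown commented out because the commands are destructive, and device names such as /dev/vda are assumptions that must be adjusted to match the guest:

```shell
# Illustrative only -- adjust device and partition names before use.
# fdisk /dev/vda        # delete and re-create the last partition at the larger size
# partprobe /dev/vda    # have the kernel re-read the partition table
# resize2fs /dev/vda1   # grow an ext3/ext4 file system to fill the partition
```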

10.7.5. Floating Disks

Floating disks are disks that are not associated with any virtual machine.
Floating disks can minimize the amount of time required to set up virtual machines. Designating a floating disk as storage for a virtual machine makes it unnecessary to wait for disk preallocation when the virtual machine is created.
Floating disks can be attached to virtual machines or designated as shareable disks, which can be used with one or more virtual machines.

10.7.6. Associating a Virtual Disk with a Virtual Machine

Summary
This procedure explains how to associate a virtual disk with a virtual machine. Once the virtual disk is associated with the virtual machine, the virtual machine is able to access it.

Procedure 10.20. Associating a Virtual Disk with a Virtual Machine

  1. Click the Virtual Machines tab and select a virtual machine.
  2. In the details pane, select the Disks tab.
  3. Click Add in the menu at the top of the details pane.
  4. Type the size in GB of the disk into the Size (GB) field.
  5. Type the disk alias into the Alias field.
  6. Click OK in the bottom right corner of the Add Virtual Disk window.
    The disk you have associated with the virtual machine appears in the details pane after a short time.
Result
The virtual disk is associated with the virtual machine.

Note

No Quota resources are consumed by attaching virtual disks to, or detaching virtual disks from, virtual machines.

Note

Using the above procedure, it is possible to attach a virtual disk to more than one virtual machine.

10.7.7. Changing the CD for a Virtual Machine

Summary
You can change the CD accessible to a virtual machine while that virtual machine is running.

Note

You can only use ISO files that have been added to the ISO domain of the cluster in which the virtual machine is a member. Therefore, you must upload ISO files to that domain before you can make those ISO files accessible to virtual machines.
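ISO files are usually uploaded from the Manager machine with the engine-iso-uploader tool. The following is a minimal sketch; the ISO domain name "ISODomain" and the image path are placeholders:

```shell
# Illustrative only -- run on the Manager machine. "ISODomain" and the
# image path are placeholders for your ISO storage domain and ISO file.
# engine-iso-uploader upload -i ISODomain /path/to/image.iso
```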

Procedure 10.21. Changing the CD for a Virtual Machine

  1. From the Virtual Machines tab, select a virtual machine that is currently running.
  2. Click Change CD to open the Change CD window.
  3. In the Change CD window do one of the following:
    • Select an ISO file from the list to eject the CD currently accessible to the virtual machine and mount that ISO file as a CD.
      Or,
    • Select [Eject] from the ISO list to eject the CD currently accessible to the virtual machine.
  4. Click OK.
Result
You have ejected the CD previously accessible to the virtual machine, or both ejected that CD and made a new CD accessible to the virtual machine.

10.7.8. Smart card Authentication

Smart cards are an external hardware security feature, most commonly seen in credit cards, but also used by many businesses as authentication tokens. Smart cards can be used to protect Red Hat Enterprise Virtualization virtual machines.

10.7.9. Enabling and Disabling Smart cards

Summary
The following procedures explain how to enable and disable the Smart card feature for virtual machines.

Procedure 10.22. Enabling Smart cards

  1. Ensure that the Smart card hardware is plugged into the client machine and is installed according to the manufacturer's directions.
  2. Select the desired virtual machine.
  3. Click the Edit button. The Edit Virtual Machine window appears.
  4. Select the Console tab, and select the check box labeled Smartcard enabled, then click OK.
  5. Run the virtual machine by clicking the Console icon or through the User Portal. Smart card authentication is now passed from the client hardware to the virtual machine.
Result
You have enabled Smart card authentication for the virtual machine.

Important

If the Smart card hardware is not correctly installed, enabling the Smart card feature will result in the virtual machine failing to load properly.

Procedure 10.23. Disabling Smart cards

  1. Select the desired virtual machine.
  2. Click the Edit button. The Edit Virtual Machine window appears.
  3. Select the Console tab, and clear the check box labeled Smartcard enabled, then click OK.
Result
You have disabled Smart card authentication for the virtual machine.

10.8. Running Virtual Machines

10.8.1. Installing Console Components

10.8.1.1. Console Components

A console is a graphical window that allows you to view the startup screen, shutdown screen, and desktop of a virtual machine, and to interact with that virtual machine in a similar way to a physical machine. In Red Hat Enterprise Virtualization, the default application for opening a console to a virtual machine is Remote Viewer, which must be installed on the client machine prior to use.

10.8.1.2. Installing Remote Viewer on Linux

Remote Viewer is a SPICE client that is included in the virt-viewer package provided by the Red Hat Enterprise Linux Workstation (v. 6 for x86_64) repository.

Procedure 10.24. Installing Remote Viewer on Linux

  1. Install the spice-xpi package and dependencies:
    # yum install spice-xpi
  2. Check whether the virt-viewer package has already been installed on your system:
    # rpm -q virt-viewer
    virt-viewer-0.5.2-18.el6_4.2.x86_64
  3. If the virt-viewer package has not been installed, install the package and its dependencies:
    # yum install virt-viewer
  4. Restart your browser for the changes to take effect.
Remote Viewer is installed. You can connect to your virtual machines using either the SPICE or VNC protocol.

10.8.1.3. Installing Remote Viewer for Internet Explorer on Windows

The SPICE ActiveX component is required to run Remote Viewer, which opens a graphical console to virtual machines. Remote Viewer is a SPICE client installed together with the SPICE ActiveX component; both are provided in the SpiceX.cab file.

Procedure 10.25. Installing Remote Viewer for Internet Explorer on Windows

  1. Open Internet Explorer and log in to the User Portal.
  2. Start a virtual machine and attempt to connect to the virtual machine using the Browser plugin console option.
  3. Click the warning banner and click Install This Add-on when prompted.
  4. Click Install when prompted.
  5. Restart Internet Explorer for your changes to take effect.
You have installed Remote Viewer for Internet Explorer on Windows, and can now connect to virtual machines using the SPICE protocol from within Internet Explorer.

10.8.1.4. Installing Remote Viewer on Windows

The Remote Viewer application provides users with a graphical console for connecting to virtual machines. Once installed, it is called automatically when attempting to open a SPICE session with a virtual machine. It can also be used as a standalone application.

Procedure 10.26. Installing Remote Viewer on Windows

  1. Open a web browser and download one of the following installers according to the architecture of your system.
    • Virt Viewer for 32-bit Windows:
      https://your-manager-fqdn/ovirt-engine/services/files/spice/virt-viewer-x86.msi
    • Virt Viewer for 64-bit Windows:
      https://your-manager-fqdn/ovirt-engine/services/files/spice/virt-viewer-x64.msi
  2. Open the folder where the file was saved.
  3. Double-click the file.
  4. Click Run if prompted by a security warning.
  5. Click Yes if prompted by User Account Control.
Remote Viewer is installed and can be accessed via Remote Viewer in the VirtViewer folder of All Programs in the Start menu.

10.8.2. Guest Agents and Drivers

10.8.2.1. Red Hat Enterprise Virtualization Guest Agents and Drivers

The Red Hat Enterprise Virtualization guest agents and drivers are a set of components that you can install on Red Hat Enterprise Linux and Windows virtual machines in your Red Hat Enterprise Virtualization environment to provide additional information about and functionality for those virtual machines. Key features include the ability to monitor resource usage and gracefully shut down or reboot virtual machines from the User Portal and Administration Portal. To access this functionality, you must install the Red Hat Enterprise Virtualization guest agents and drivers on each virtual machine on which this functionality is to be available.

Table 10.16. Red Hat Enterprise Virtualization Guest Drivers

Driver
Description
Works on
virtio-net
Paravirtualized network driver that provides enhanced performance over emulated devices like rtl8139.
Server and Desktop.
virtio-block
Paravirtualized HDD driver offers increased I/O performance over emulated devices like IDE by optimizing the coordination and communication between the guest and the hypervisor. The driver complements the software implementation of the virtio-device used by the host to play the role of a hardware device.
Server and Desktop.
virtio-scsi
Paravirtualized SCSI HDD driver that offers similar functionality to the virtio-block device, with some additional enhancements. In particular, this driver supports adding hundreds of devices, and names devices using the standard SCSI device naming scheme.
Server and Desktop.
virtio-serial
Virtio-serial provides support for multiple serial ports. Its improved performance enables fast communication between the guest and the host without network complications. This fast communication is required for the guest agents and for other features such as clipboard copy-and-paste between the guest and the host, and logging.
Server and Desktop.
virtio-balloon
Virtio-balloon is used to control the amount of memory a guest actually accesses. It offers improved memory over-commitment. The balloon drivers are installed for future compatibility but not used by default in Red Hat Enterprise Virtualization 3.1 or higher.
Server and Desktop.
qxl
A paravirtualized display driver that reduces CPU usage on the host and provides better performance through reduced network bandwidth on most workloads.
Server and Desktop.

Table 10.17. Red Hat Enterprise Virtualization Guest Agents and Tools

Guest agent/tool
Description
Works on
rhevm-guest-agent-common
Allows the Red Hat Enterprise Virtualization Manager to receive guest internal events and information such as IP address and installed applications. Also allows the Manager to execute specific commands, such as shut down or reboot, on a guest.
On Red Hat Enterprise Linux 6 and higher guests, the rhevm-guest-agent-common package installs tuned on your virtual machine and configures it to use an optimized, virtualized-guest profile.
Server and Desktop.
spice-agent
The SPICE agent supports multiple monitors and is responsible for client-mouse-mode support, which provides a better user experience and improved responsiveness than the QEMU emulation; cursor capture is not needed in client-mouse-mode. When used over a wide area network, the SPICE agent reduces bandwidth usage by lowering the display level, including color depth, and by disabling wallpaper, font smoothing, and animation. The SPICE agent also enables clipboard support, allowing cut-and-paste operations for both text and images between client and guest, and automatic guest display settings according to client-side settings. On Windows guests, the SPICE agent consists of vdservice and vdagent.
Server and Desktop.
rhev-sso
An agent that enables users to automatically log in to their virtual machines based on the credentials used to access the Red Hat Enterprise Virtualization Manager.
Desktop.
rhev-usb
A component that contains drivers and services for Legacy USB support (version 3.0 and earlier) on guests. It is needed for accessing a USB device that is plugged into the client machine. RHEV-USB Client is needed on the client side.
Desktop.

10.8.2.2. Installing the Guest Agents and Drivers on Red Hat Enterprise Linux

The Red Hat Enterprise Virtualization guest agents and drivers are installed on Red Hat Enterprise Linux virtual machines using the rhevm-guest-agent package provided by the Red Hat Enterprise Virtualization Agent repository.

Procedure 10.27. Installing the Guest Agents and Drivers on Red Hat Enterprise Linux

  1. Log in to the Red Hat Enterprise Linux virtual machine.
  2. Enable the Red Hat Enterprise Virtualization Agent repository:
    • For Red Hat Enterprise Linux 6
      # subscription-manager repos --enable=rhel-6-server-rhev-agent-rpms
    • For Red Hat Enterprise Linux 7
      # subscription-manager repos --enable=rhel-7-server-rh-common-rpms
  3. Install the rhevm-guest-agent-common package and dependencies:
    # yum install rhevm-guest-agent-common
  4. Start and enable the service:
    • For Red Hat Enterprise Linux 6
      # service ovirt-guest-agent start
      # chkconfig ovirt-guest-agent on
    • For Red Hat Enterprise Linux 7
      # systemctl start ovirt-guest-agent.service
      # systemctl enable ovirt-guest-agent.service
You have installed the guest agent, which now passes usage information to the Red Hat Enterprise Virtualization Manager. The Red Hat Enterprise Virtualization agent runs as a service called ovirt-guest-agent that you can configure via the ovirt-guest-agent.conf configuration file in the /etc/ directory.

10.8.2.3. Installing the Guest Agents and Drivers on Windows

The Red Hat Enterprise Virtualization guest agents and drivers are installed on Windows virtual machines using the rhev-tools-setup.iso ISO file, which is provided by the rhev-guest-tools-iso package installed as a dependency to the Red Hat Enterprise Virtualization Manager. This ISO file is located in /usr/share/rhev-guest-tools-iso/rhev-tools-setup.iso on the system on which the Red Hat Enterprise Virtualization Manager is installed.

Note

The rhev-tools-setup.iso ISO file is automatically copied to the default ISO storage domain, if one exists, when you run engine-setup. Otherwise, the file must be manually uploaded to an ISO storage domain.

Note

Updated versions of the rhev-tools-setup.iso ISO file must be manually attached to running Windows virtual machines to install updated versions of the tools and drivers. However, if the APT service is enabled on a virtual machine, updated ISO files are attached to it automatically.

Note

If you install the guest agents and drivers from the command line or as part of a deployment tool such as Windows Deployment Services, you can append the options ISSILENTMODE and ISNOREBOOT to RHEV-toolsSetup.exe to silently install the guest agents and drivers and prevent the machine on which they have been installed from rebooting immediately after installation. The machine can then be rebooted later once the deployment process is complete.
D:\RHEV-toolsSetup.exe ISSILENTMODE ISNOREBOOT

Procedure 10.28. Installing the Guest Agents and Drivers on Windows

  1. Log in to the virtual machine.
  2. Select the CD Drive containing the rhev-tools-setup.iso file.
  3. Double-click RHEV-toolsSetup.
  4. Click Next at the welcome screen.
  5. Follow the prompts on the RHEV-Tools InstallShield Wizard window. Ensure all check boxes in the list of components are selected.
    Selecting All Components of Red Hat Enterprise Virtualization Tools for Installation

    Figure 10.7. Selecting All Components of Red Hat Enterprise Virtualization Tools for Installation

  6. Once installation is complete, select Yes, I want to restart my computer now and click Finish to apply the changes.
You have installed the guest agents and drivers, which now pass usage information to the Red Hat Enterprise Virtualization Manager and enable functionality such as USB device access and single sign-on to virtual machines. The Red Hat Enterprise Virtualization guest agent runs as a service called RHEV Agent that you can configure using the rhev-agent configuration file located in C:\Program Files\Redhat\RHEV\Drivers\Agent.

10.8.2.4. Updating the Guest Agents and Drivers on Red Hat Enterprise Linux

Update the guest agents and drivers on your Red Hat Enterprise Linux virtual machines to use the latest version.

Procedure 10.29. Updating the Guest Agents and Drivers on Red Hat Enterprise Linux

  1. Log in to the Red Hat Enterprise Linux virtual machine.
  2. Update the rhevm-guest-agent-common package:
    # yum update rhevm-guest-agent-common
  3. Restart the service:
    • For Red Hat Enterprise Linux 6
      # service ovirt-guest-agent restart
    • For Red Hat Enterprise Linux 7
      # systemctl restart ovirt-guest-agent.service

10.8.2.5. Updating the Guest Agents and Drivers on Windows

The guest tools comprise software that allows Red Hat Enterprise Virtualization Manager to communicate with the virtual machines it manages, providing information such as the IP addresses, memory usage, and applications installed on those virtual machines. The guest tools are distributed as an ISO file that can be attached to guests. This ISO file is packaged as an RPM file that can be installed and upgraded from the machine on which the Red Hat Enterprise Virtualization Manager is installed.

Procedure 10.30. Updating the Guest Agents and Drivers on Windows

  1. On the Red Hat Enterprise Virtualization Manager, update the Red Hat Enterprise Virtualization Guest Tools to the latest version:
    # yum update -y rhev-guest-tools-iso*
    
  2. Upload the ISO file to your ISO domain, replacing [ISODomain] with the name of your ISO domain:
    engine-iso-uploader --iso-domain=[ISODomain] upload /usr/share/rhev-guest-tools-iso/rhev-tools-setup.iso
    

    Note

    The rhev-tools-setup.iso file is a symbolic link to the most recently updated ISO file. The link is automatically changed to point to the newest ISO file every time you update the rhev-guest-tools-iso package.
  3. In the Administration or User Portal, if the virtual machine is running, use the Change CD button to attach the latest rhev-tools-setup.iso file to each of your virtual machines. If the virtual machine is powered off, click the Run Once button and attach the ISO as a CD.
  4. Select the CD Drive containing the updated ISO and execute the RHEV-ToolsSetup.exe file.
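The symbolic-link behavior described in the note above can be demonstrated in isolation. A minimal sketch using throwaway paths under a temporary directory (the versioned file name rhev-tools-setup_3.5_9.iso is hypothetical; the real files live in /usr/share/rhev-guest-tools-iso/):

```shell
# Illustrate how a stable symlink name resolves to the current versioned ISO.
# All paths here are throwaway; the real package manages the link for you.
demo=$(mktemp -d)
touch "$demo/rhev-tools-setup_3.5_9.iso"                        # a versioned ISO (hypothetical name)
ln -sf rhev-tools-setup_3.5_9.iso "$demo/rhev-tools-setup.iso"  # the stable name points at it
readlink -f "$demo/rhev-tools-setup.iso"                        # resolves to the versioned file
```

After an update of the rhev-guest-tools-iso package, the stable name stays the same while its target changes, so upload commands can always reference rhev-tools-setup.iso.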

10.8.2.6. Automating Guest Additions on Windows Guests with Red Hat Enterprise Virtualization Application Provisioning Tool (APT)

Summary
Red Hat Enterprise Virtualization Application Provisioning Tool (APT) is a Windows service that can be installed on Windows virtual machines and templates. When the APT service is installed and running on a virtual machine, attached ISO files are automatically scanned. When the service recognizes a valid Red Hat Enterprise Virtualization guest tools ISO, and no other guest tools are installed, the APT service installs the guest tools. If guest tools are already installed, and the ISO image contains newer versions of the tools, the service performs an automatic upgrade. This procedure assumes you have attached the rhev-tools-setup.iso ISO file to the virtual machine.

Procedure 10.31. Installing the APT Service on Windows

  1. Log in to the virtual machine.
  2. Select the CD Drive containing the rhev-tools-setup.iso file.
  3. Double-click RHEV-Application Provisioning Tool.
  4. Click Yes in the User Account Control window.
  5. Once installation is complete, ensure the Start RHEV-apt Service check box is selected in the RHEV-Application Provisioning Tool InstallShield Wizard window, and click Finish to apply the changes.
Result
You have installed and started the APT service.
Once the APT service has successfully installed or upgraded the guest tools on a virtual machine, the virtual machine is automatically rebooted; this happens without confirmation from the user logged in to the machine. The APT service also performs these operations when a virtual machine created from a template with the APT service already installed is booted for the first time.

Note

The RHEV-apt service can be stopped immediately after install by clearing the Start RHEV-apt Service check box. You can stop, start, or restart the service at any time using the Services window.

10.8.3. Subscribing to Channels

10.8.3.1. Subscribing to the Required Entitlements

To install packages signed by Red Hat you must register the target system to the Content Delivery Network. Then, use an entitlement from your subscription pool and enable the required repositories.

Procedure 10.32. Subscribing to the Required Entitlements Using Subscription Manager

  1. Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:
    # subscription-manager register
  2. Locate the relevant subscription pools and note down the pool identifiers.
    # subscription-manager list --available
  3. Use the pool identifiers located in the previous step to attach the required entitlements.
    # subscription-manager attach --pool=pool_id
  4. Disable all existing repositories:
    # subscription-manager repos --disable='*'
  5. When a system is subscribed to a subscription pool with multiple repositories, only the main repository is enabled by default. Others are available, but disabled. Enable any additional repositories:
    # subscription-manager repos --enable=repository
  6. Ensure that all packages currently installed are up to date:
    # yum update
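Step 4 disables all repositories with a * wildcard. It is safest to quote that wildcard (--disable='*') so the shell cannot expand it against files in the current working directory before subscription-manager runs. A self-contained sketch of the pitfall (no subscription-manager needed; the file name is contrived):

```shell
# Demonstrate why an unquoted * in --disable=* is risky: if the current
# directory happens to contain a matching file, the shell rewrites the
# argument before the command ever sees it.
workdir=$(mktemp -d) && cd "$workdir"
touch -- '--disable=rhel-6-server-rpms'   # contrived file that matches the glob
set -- --disable=*                        # unquoted: the shell expands the glob
echo "$1"                                 # prints --disable=rhel-6-server-rpms
set -- --disable='*'                      # quoted: the argument is passed literally
echo "$1"                                 # prints --disable=*
```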

10.8.4. Accessing Virtual Machines

10.8.4.1. Starting a Virtual Machine

Summary
You can start a virtual machine from the Administration Portal.

Procedure 10.33. Starting a Virtual Machine

  1. Click the Virtual Machines tab and select a virtual machine with a status of Down.
  2. Click the run ( ) button.
    Alternatively, right-click the virtual machine and select Run.
Result
The Status of the virtual machine changes to Up, and the console protocol of the selected virtual machine is displayed. If the guest agent is installed on the virtual machine, the IP address of that virtual machine is also displayed.

10.8.4.2. Opening a Console to a Virtual Machine

Use Remote Viewer to connect to a virtual machine.

Procedure 10.34. Connecting to a Virtual Machine

  1. Install Remote Viewer if it is not already installed. See Installing Console Components.
  2. Click the Virtual Machines tab and select a virtual machine.
  3. Click the console button or right-click the virtual machine and select Console.
    Connection Icon on the Virtual Machine Menu

    Figure 10.8. Connection Icon on the Virtual Machine Menu

    • If the connection protocol is set to SPICE, a console window will automatically open for the virtual machine.
    • If the connection protocol is set to VNC, a console.vv file will be downloaded. Click on the file and a console window will automatically open for the virtual machine.
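The console.vv file is a small INI-style connection descriptor that Remote Viewer consumes. An illustrative example of its contents (all values here are placeholders, and the exact fields present depend on the protocol and Manager configuration):

```ini
[virt-viewer]
type=vnc
host=host1.example.com
port=5900
password=abcDEF123
delete-this-file=1
```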

10.8.4.3. Shutting Down a Virtual Machine

Summary
If the guest agent is installed on a virtual machine or that virtual machine supports Advanced Configuration and Power Interface (ACPI), you can shut that virtual machine down from within the Administration Portal.

Procedure 10.35. Shutting Down a Virtual Machine

  1. Click the Virtual Machines tab and select a running virtual machine.
  2. Click the shut down ( ) button.
    Alternatively, right-click the virtual machine and select Shutdown.
Result
The virtual machine shuts down gracefully and the Status of the virtual machine changes to Down.

10.8.4.4. Pausing a Virtual Machine

Summary
If the guest agent is installed on a virtual machine or that virtual machine supports Advanced Configuration and Power Interface (ACPI), you can pause that virtual machine from within the Administration Portal. This is equivalent to placing that virtual machine in Hibernate mode.

Procedure 10.36. Pausing a Virtual Machine

  1. Click the Virtual Machines tab and select a running virtual machine.
  2. Click the Suspend ( ) button.
    Alternatively, right-click the virtual machine and select Suspend.
Result
The Status of the virtual machine changes to Paused.

10.8.4.5. Rebooting a Virtual Machine

Summary
If the guest agent is installed on a virtual machine, you can reboot that virtual machine from within the Administration Portal.

Procedure 10.37. Rebooting a Virtual Machine

  1. Click the Virtual Machines tab and select a running virtual machine.
  2. Click the Reboot ( ) button.
    Alternatively, right-click the virtual machine and select Reboot.
  3. Click OK in the Reboot Virtual Machine(s) confirmation window.
Result
The Status of the virtual machine changes to Reboot In Progress before returning to Up.

10.8.5. Console Options

10.8.5.1. Introduction to Connection Protocols

Connection protocols are the underlying technology used to provide graphical consoles for virtual machines and allow users to work with virtual machines in a similar way as they would with physical machines. Red Hat Enterprise Virtualization currently supports the following connection protocols:
SPICE
Simple Protocol for Independent Computing Environments (SPICE) is the recommended connection protocol for both Linux virtual machines and Windows virtual machines. To open a console to a virtual machine using SPICE, use Remote Viewer.
VNC
Virtual Network Computing (VNC) can be used to open consoles to both Linux virtual machines and Windows virtual machines. To open a console to a virtual machine using VNC, use Remote Viewer or a VNC client.
RDP
Remote Desktop Protocol (RDP) can only be used to open consoles to Windows virtual machines, and is only available when you access a virtual machine from a Windows machine on which Remote Desktop has been installed. Before you can connect to a Windows virtual machine using RDP, you must set up remote sharing on the virtual machine and configure the firewall to allow remote desktop connections.

Note

SPICE is not currently supported on virtual machines running Windows 8. If a Windows 8 virtual machine is configured to use the SPICE protocol, it will detect the absence of the required SPICE drivers and automatically fall back to using RDP.

10.8.5.2. Accessing Console Options

In the Administration Portal, you can configure several options for opening graphical consoles for virtual machines, such as the method of invocation and whether to enable or disable USB redirection.

Procedure 10.38. Accessing Console Options

  1. Select a running virtual machine.
  2. Right-click the virtual machine and select Console Options to open the Console Options window.

Note

Further options specific to each of the connection protocols, such as the keyboard layout when using the VNC connection protocol, can be configured in the Console tab of the Edit Virtual Machine window.

10.8.5.3. SPICE Console Options

When the SPICE connection protocol is selected, the following options are available in the Console Options window.
The Console Options window

Figure 10.9. The Console Options window

Console Invocation

  • Auto: The Manager automatically selects the method for invoking the console.
  • Native client: When you connect to the console of the virtual machine, a file download dialog provides you with a file that opens a console to the virtual machine via Remote Viewer.
  • Browser plugin: When you connect to the console of the virtual machine, you are connected directly via Remote Viewer.
  • SPICE HTML5 browser client (Tech preview): When you connect to the console of the virtual machine, a browser tab is opened that acts as the console.

SPICE Options

  • Map control-alt-del shortcut to ctrl+alt+end: Select this check box to map the Ctrl+Alt+Del key combination to Ctrl+Alt+End inside the virtual machine.
  • Enable USB Auto-Share: Select this check box to automatically redirect USB devices to the virtual machine. If this option is not selected, USB devices will connect to the client machine instead of the guest virtual machine. To use the USB device on the guest machine, manually enable it in the SPICE client menu.
  • Open in Full Screen: Select this check box for the virtual machine console to automatically open in full screen when you connect to the virtual machine. Press SHIFT+F11 to toggle full screen mode on or off.
  • Enable SPICE Proxy: Select this check box to enable the SPICE proxy.
  • Enable WAN options: Select this check box to set the parameters WANDisableEffects and WANColorDepth to animation and 16 bits respectively on Windows virtual machines. Bandwidth in WAN environments is limited and this option prevents certain Windows settings from consuming too much bandwidth.

Important

The Browser plugin console option is only available when accessing the Administration and User Portals through Internet Explorer. This console option uses the version of Remote Viewer provided by the SpiceX.cab installation program. For all other browsers, the Native client console option is the default. This console option uses the version of Remote Viewer provided by the virt-viewer-x86.msi and virt-viewer-x64.msi installation files.

10.8.5.4. VNC Console Options

When the VNC connection protocol is selected, the following options are available in the Console Options window.
The Console Options window

Figure 10.10. The Console Options window

Console Invocation

  • Native Client: When you connect to the console of the virtual machine, a file download dialog provides you with a file that opens a console to the virtual machine via Remote Viewer.
  • noVNC: When you connect to the console of the virtual machine, a browser tab is opened that acts as the console.

VNC Options

  • Map control-alt-delete shortcut to ctrl+alt+end: Select this check box to map the Ctrl+Alt+Del key combination to Ctrl+Alt+End inside the virtual machine.

10.8.5.5. RDP Console Options

When the RDP connection protocol is selected, the following options are available in the Console Options window.
The Console Options window

Figure 10.11. The Console Options window

Console Invocation

  • Auto: The Manager automatically selects the method for invoking the console.
  • Native client: When you connect to the console of the virtual machine, a file download dialog provides you with a file that opens a console to the virtual machine via Remote Desktop.

RDP Options

  • Use Local Drives: Select this check box to make the drives on the client machine accessible on the guest virtual machine.

10.8.6. Remote Viewer Options

10.8.6.1. Remote Viewer Options

When you specify the Native client or Browser plugin console invocation options, you will connect to virtual machines using Remote Viewer. The Remote Viewer window provides a number of options for interacting with the virtual machine to which it is connected.
The Remote Viewer connection menu

Figure 10.12. The Remote Viewer connection menu

Table 10.18. Remote Viewer Options

Option Hotkey
File
  • Screenshot: Takes a screen capture of the active window and saves it in a location of your specification.
  • USB device selection: If USB redirection has been enabled on your virtual machine, the USB device plugged into your client machine can be accessed from this menu.
  • Quit: Closes the console. The hot key for this option is Shift+Ctrl+Q.
View
  • Full screen: Toggles full screen mode on or off. When enabled, full screen mode expands the virtual machine to fill the entire screen. When disabled, the virtual machine is displayed as a window. The hot key for enabling or disabling full screen is SHIFT+F11.
  • Zoom: Zooms in and out of the console window. Ctrl++ zooms in, Ctrl+- zooms out, and Ctrl+0 returns the screen to its original size.
  • Automatically resize: Select this check box to automatically scale the guest resolution according to the size of the console window.
  • Displays: Allows users to enable and disable displays for the guest virtual machine.
Send key
  • Ctrl+Alt+Del: On a Red Hat Enterprise Linux virtual machine, it displays a dialog with options to suspend, shut down or restart the virtual machine. On a Windows virtual machine, it displays the task manager or Windows Security dialog.
  • Ctrl+Alt+Backspace: On a Red Hat Enterprise Linux virtual machine, it restarts the X server. On a Windows virtual machine, it does nothing.
  • Ctrl+Alt+F1
  • Ctrl+Alt+F2
  • Ctrl+Alt+F3
  • Ctrl+Alt+F4
  • Ctrl+Alt+F5
  • Ctrl+Alt+F6
  • Ctrl+Alt+F7
  • Ctrl+Alt+F8
  • Ctrl+Alt+F9
  • Ctrl+Alt+F10
  • Ctrl+Alt+F11
  • Ctrl+Alt+F12
  • Printscreen: Passes the Printscreen keyboard option to the virtual machine.
Help The About entry displays the version details of Virtual Machine Viewer that you are using.
Release Cursor from Virtual Machine SHIFT+F12

10.8.6.2. Remote Viewer Hotkeys

You can access the hotkeys for a virtual machine in both full screen mode and windowed mode. If you are using full screen mode, you can display the menu containing the button for hotkeys by moving the mouse pointer to the middle of the top of the screen. If you are using windowed mode, you can access the hotkeys via the Send key menu on the virtual machine window title bar.

Note

If vdagent is not running on the guest, the mouse pointer can become captured by the virtual machine window when used inside a virtual machine that is not in full screen mode. To unlock the mouse, press Shift+F12.

10.9. Removing Virtual Machines

10.9.1. Removing a Virtual Machine

Summary
Remove a virtual machine from the Red Hat Enterprise Virtualization environment.

Important

The Remove button is disabled while virtual machines are running; you must shut down a virtual machine before you can remove it.

Procedure 10.39. Removing a Virtual Machine

  1. Click the Virtual Machines tab and select the virtual machine to remove.
  2. Click Remove to open the Remove Virtual Machine(s) window.
  3. Optionally, select the Remove Disk(s) check box to remove the virtual disks attached to the virtual machine together with the virtual machine. If the Remove Disk(s) check box is cleared, then the virtual disks remain in the environment as floating disks.
  4. Click OK.
Result
The virtual machine is removed from the environment and is no longer listed in the Virtual Machines resource tab. If you selected the Remove Disk(s) check box, then the virtual disks attached to the virtual machine are also removed.

10.10. Cloning Virtual Machines

10.10.1. Cloning a Virtual Machine

Summary
In Red Hat Enterprise Virtualization 3.5, you can now clone virtual machines without having to create a template or a snapshot first.

Important

The Clone VM button is disabled while virtual machines are running; you must shut down a virtual machine before you can clone it.

Procedure 10.40. Cloning a Virtual Machine

  1. Click the Virtual Machines tab and select the virtual machine to clone.
  2. Click the Clone VM button to open the Clone Virtual Machine window.
  3. Enter a name for the new virtual machine.
  4. Click OK.
Result
A duplicate virtual machine is generated.

10.11. Virtual Machines and Permissions

10.11.1. Managing System Permissions for a Virtual Machine

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A UserVmManager is a system administration role for virtual machines in a data center. This role can be applied to specific virtual machines, to a data center, or to the whole virtualized environment; this is useful to allow different users to manage certain virtual resources.
The user virtual machine administrator role permits the following actions:
  • Create, edit, and remove virtual machines.
  • Run, suspend, shutdown, and stop virtual machines.

Note

You can only assign roles and permissions to existing users.
Many end users are concerned solely with the virtual machine resources of the virtualized environment. As a result, Red Hat Enterprise Virtualization provides several user roles which enable the user to manage virtual machines specifically, but not other resources in the data center.

10.11.2. Virtual Machines Administrator Roles Explained

Virtual Machine Administrator Permission Roles
The table below describes the administrator roles and privileges applicable to virtual machine administration.

Table 10.19. Red Hat Enterprise Virtualization System Administrator Roles

Role Privileges Notes
DataCenterAdmin Data Center Administrator Possesses administrative permissions for all objects underneath a specific data center except for storage.
ClusterAdmin Cluster Administrator Possesses administrative permissions for all objects underneath a specific cluster.
NetworkAdmin Network Administrator Possesses administrative permissions for all operations on a specific logical network. Can configure and manage networks attached to virtual machines. To configure port mirroring on a virtual machine network, apply the NetworkAdmin role on the network and the UserVmManager role on the virtual machine.

10.11.3. Virtual Machine User Roles Explained

Virtual Machine User Permission Roles
The table below describes the user roles and privileges applicable to virtual machine users. These roles allow access to the User Portal for managing and accessing virtual machines, but they do not confer any permissions for the Administration Portal.

Table 10.20. Red Hat Enterprise Virtualization System User Roles

Role Privileges Notes
UserRole Can access and use virtual machines and pools. Can log in to the User Portal and use virtual machines and pools.
PowerUserRole Can create and manage virtual machines and templates. Apply this role to a user for the whole environment with the Configure window, or for specific data centers or clusters. For example, if a PowerUserRole is applied on a data center level, the PowerUser can create virtual machines and templates in the data center. Having a PowerUserRole is equivalent to having the VmCreator, DiskCreator, and TemplateCreator roles.
UserVmManager System administrator of a virtual machine. Can manage virtual machines and create and use snapshots. A user who creates a virtual machine in the User Portal is automatically assigned the UserVmManager role on the machine.
UserTemplateBasedVm Limited privileges to only use Templates. Level of privilege to create a virtual machine by means of a template.
VmCreator Can create virtual machines in the User Portal. This role is not applied to a specific virtual machine; apply this role to a user for the whole environment with the Configure window. When applying this role to a cluster, you must also apply the DiskCreator role on an entire data center, or on specific storage domains.
NetworkUser Logical network and network interface user for virtual machines. If the Allow all users to use this Network option was selected when a logical network is created, NetworkUser permissions are assigned to all users for the logical network. Users can then attach or detach virtual machine network interfaces to or from the logical network.

10.11.4. Assigning Virtual Machines to Users

If you are creating virtual machines for users other than yourself, you have to assign roles to the users before they can use the virtual machines. Note that permissions can only be assigned to existing users. See the Red Hat Enterprise Virtualization Installation Guide for details on creating user accounts.
The Red Hat Enterprise Virtualization User Portal supports three default roles: User, PowerUser and UserVmManager. However, customized roles can be configured via the Red Hat Enterprise Virtualization Manager Administration Portal. The default roles are described below.
  • A User can connect to and use virtual machines. This role is suitable for desktop end users performing day-to-day tasks.
  • A PowerUser can create virtual machines and view virtual resources. This role is suitable if you are an administrator or manager who needs to provide virtual resources for your employees.
  • A UserVmManager can edit and remove virtual machines, assign user permissions, use snapshots and use templates. It is suitable if you need to make configuration changes to your virtual environment.
When you create a virtual machine, you automatically inherit UserVmManager privileges. This enables you to make changes to the virtual machine and assign permissions to the users you manage, or users who are in your Identity Management (IdM) or RHDS group.
See Red Hat Enterprise Virtualization Installation Guide for more information on directory services support in Red Hat Enterprise Virtualization.
Summary
This procedure explains how to add permissions to users.

Procedure 10.41. Assigning Permissions to Users

  1. Click the Virtual Machines tab and select a virtual machine.
  2. On the details pane, select the Permissions tab.
  3. Click New. The Add Permission to User dialog displays. Enter a Name or User Name, or part thereof, in the Search text box, and click Go. A list of possible matches displays in the results list.
  4. Select the check box of the user to be assigned the permissions. Scroll through the Role to Assign list and select UserRole. Click OK.
  5. The user's name and role display in the list of users permitted to access this virtual machine.
Result
You have added permissions to a user.

Note

If a user is assigned permissions to only one virtual machine, single sign-on (SSO) can be configured for the virtual machine. With single sign-on enabled, when a user logs in to the User Portal, and then connects to a virtual machine through, for example, a SPICE console, users are automatically logged in to the virtual machine and do not need to type in the username and password again. Single sign-on can be enabled or disabled via the User Portal on a per virtual machine basis. See the User Guide for more information on how to enable and disable single sign-on for virtual machines.

10.11.5. Removing Access to Virtual Machines from Users

Summary
This procedure explains how to remove user permissions.

Procedure 10.42. Removing Access to Virtual Machines from Users

  1. Click the Virtual Machines tab and select a virtual machine.
  2. On the details pane, select the Permissions tab.
  3. Click Remove. A warning message displays, asking you to confirm removal of the selected permissions.
  4. To proceed, click OK. To abort, click Cancel.
Result
You have now removed permissions from a user.

10.12. Snapshots

10.12.1. Creating a Snapshot of a Virtual Machine

Summary
A snapshot is a view of a virtual machine's operating system and applications on any or all available disks at a given point in time. Take a snapshot of a virtual machine before you make a change to it that may have unintended consequences. You can use a snapshot to return a virtual machine to a previous state.

Note

Live snapshots can only be created for virtual machines running on 3.1-or-higher-compatible data centers. Virtual machines in 3.0-or-lower-compatible data centers must be shut down before a snapshot can be created.

Procedure 10.43. Creating a Snapshot of a Virtual Machine

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click Create Snapshot to open the Create Snapshot window.
    Create snapshot

    Figure 10.13. Create snapshot

  3. Enter a description for the snapshot.
  4. Select Disks to include using the check boxes.
  5. Use the Save Memory check box to denote whether to include the virtual machine's memory in the snapshot.
  6. Click OK to create the snapshot and close the window.
Result
The virtual machine's operating system and applications on the selected disk(s) are stored in a snapshot that can be previewed or restored. The snapshot is created with a status of Locked, which changes to Ok. When you click on the snapshot, its details are shown on the General, Disks, Network Interfaces, and Installed Applications tabs in the right side-pane of the details pane.
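If you script snapshot creation, the same operation is also exposed through the Manager's REST API. The sketch below is illustrative only: the endpoint path and element names (in particular persist_memorystate as the counterpart of the Save Memory check box) are assumptions to verify against the Red Hat Enterprise Virtualization REST API Guide for your version.

```xml
<!-- POST /api/vms/{vm:id}/snapshots  (Content-Type: application/xml) -->
<snapshot>
    <description>Before applying updates</description>
    <!-- Assumed to correspond to the Save Memory check box -->
    <persist_memorystate>false</persist_memorystate>
</snapshot>
```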

10.12.2. Using a Snapshot to Restore a Virtual Machine

Summary
A snapshot can be used to restore a virtual machine to its previous state.

Procedure 10.44. Using a snapshot to restore a virtual machine

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click the Snapshots tab in the details pane to list the available snapshots.
  3. Select a snapshot to restore in the left side-pane. The snapshot details display in the right side-pane.
  4. Click the drop-down list beside Preview to open the Custom Preview Snapshot window.
    Custom preview snapshot

    Figure 10.14. Custom preview snapshot

  5. Use the check boxes to select the VM Configuration, Memory, and disk(s) you want to restore, then click OK. This allows you to create and restore from a customized snapshot using the configuration and disk(s) from multiple snapshots.
    Custom preview snapshot

    Figure 10.15. Custom preview snapshot

    The status of the snapshot changes to Preview Mode. The status of the virtual machine briefly changes to Image Locked before returning to Down.
  6. Start the virtual machine; it runs using the disk image of the snapshot.
  7. Click Commit to permanently restore the virtual machine to the condition of the snapshot. Any subsequent snapshots are erased.
    Alternatively, click the Undo button to deactivate the snapshot and return the virtual machine to its previous state.
Result
The virtual machine is restored to its state at the time of the snapshot, or returned to its state before the preview of the snapshot.

10.12.3. Creating a Virtual Machine from a Snapshot

Summary
You have created a snapshot from a virtual machine. Now you can use that snapshot to create another virtual machine.

Procedure 10.45. Creating a virtual machine from a snapshot

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click the Snapshots tab in the details pane to list the available snapshots for the virtual machines.
  3. Select a snapshot in the list displayed and click Clone to open the Clone VM from Snapshot window.
  4. Enter the Name and Description of the virtual machine to be created.
    Clone a Virtual Machine from a Snapshot

    Figure 10.16. Clone a Virtual Machine from a Snapshot

  5. Click OK to create the virtual machine and close the window.
Result
After a short time, the cloned virtual machine appears in the Virtual Machines tab in the navigation pane with a status of Image Locked. The virtual machine remains in this state until Red Hat Enterprise Virtualization completes the creation of the virtual machine. A virtual machine with a preallocated 20 GB hard drive takes about fifteen minutes to create. Sparsely-allocated virtual disks take less time to create than preallocated virtual disks.
When the virtual machine is ready to use, its status changes from Image Locked to Down in the Virtual Machines tab in the navigation pane.

10.12.4. Deleting a Snapshot

Delete a virtual machine snapshot and permanently remove it from your Red Hat Enterprise Virtualization environment. In data centers with a compatibility version of 3.5 and above and hosts running Red Hat Enterprise Linux 7.1 and above or Red Hat Enterprise Virtualization Hypervisor 7.1 and above, you can delete snapshots from a running virtual machine. This operation does not affect the current state of the virtual machine. Alternatively, shut down the virtual machine before continuing.

Important

When you delete a snapshot from an image chain, one of three things happens:
  • If the snapshot being deleted is contained in a RAW (preallocated) base image, a new volume is created that is the same size as the base image.
  • If the snapshot being deleted is contained in a QCOW2 (thin provisioned) base image, the successor volume is extended to the cumulative size of the successor volume and the base volume.
  • If the snapshot being deleted is contained in a QCOW2 (thin provisioned), non-base image hosted on internal storage, the successor volume is extended to the cumulative size of the successor volume and the volume containing the snapshot being deleted.
The data from the two volumes is merged in the new or resized volume. The new or resized volume grows to accommodate the total size of the two merged images, so its size will be, at most, the sum of the two merged images. Before deleting a snapshot, ensure that you have sufficient free storage space to accommodate both the snapshot being deleted and the subsequent snapshot. For detailed information on snapshot deletion for all disk formats, see https://access.redhat.com/solutions/527613.
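The worst-case space calculation in the three cases above can be sketched as a small helper. This is a restatement of the documented behavior for planning purposes, not a product API; the function names and the example sizes are illustrative.

```python
def merged_volume_size_gb(fmt, is_base, volume_gb, successor_gb):
    """Worst-case size of the volume produced when a snapshot is deleted,
    following the three cases described above. Sizes are in GB.

    fmt          -- format of the image containing the snapshot: "raw" or "qcow2"
    is_base      -- True if the snapshot is contained in the base image
    volume_gb    -- size of the volume containing the snapshot being deleted
    successor_gb -- size of the successor volume
    """
    if fmt == "raw" and is_base:
        # A new volume is created that is the same size as the base image.
        return volume_gb
    # QCOW2 cases: the successor volume is extended to the cumulative
    # size of the two volumes being merged.
    return volume_gb + successor_gb

# Deleting a snapshot held in a 20 GB QCOW2 base image with a 5 GB
# successor can require up to 25 GB while the merge is in flight.
print(merged_volume_size_gb("qcow2", True, 20, 5))  # 25
```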

Procedure 10.46. Deleting a Snapshot

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click the Snapshots tab in the details pane to list the snapshots for that virtual machine.
    Snapshot List

    Figure 10.17. Snapshot List

  3. Select the snapshot to delete.
  4. Optionally shut down the running virtual machine associated with the snapshot to be deleted.
  5. Click Delete.
  6. Click OK.

10.13. Affinity Groups

10.13.1. Introduction to Virtual Machine Affinity

Virtual machine affinity allows you to define sets of rules that specify whether certain virtual machines run together on the same host or run separately on different hosts. This allows you to create advanced workload scenarios for addressing challenges such as strict licensing requirements and workloads demanding high availability.
Virtual machine affinity is applied to virtual machines by adding virtual machines to one or more affinity groups. An affinity group is a group of two or more virtual machines for which a set of identical parameters and conditions apply. These parameters include positive (run together) affinity that ensures the virtual machines in an affinity group run on the same host, and negative (run independently) affinity that ensures the virtual machines in an affinity group run on different hosts.
A further set of conditions can then be applied to these parameters. For example, you can apply hard enforcement, which is a condition that ensures the virtual machines in the affinity group run on the same host or different hosts regardless of external conditions, or soft enforcement, which is a condition that indicates a preference for virtual machines in an affinity group to run on the same host or different hosts when possible.
The combination of an affinity group, its parameters, and its conditions is known as an affinity policy.

Note

Affinity groups are applied to virtual machines on the cluster level. When a virtual machine is moved from one cluster to another, that virtual machine is removed from all affinity groups in the source cluster.

Important

Affinity groups will only take effect when the VmAffinityGroups filter module or weights module is enabled in the cluster policy applied to clusters in which affinity groups are defined. The VmAffinityGroups filter module is used to implement hard enforcement, and the VmAffinityGroups weights module is used to implement soft enforcement.

10.13.2. Creating an Affinity Group

Summary
You can create new affinity groups for applying affinity policies to virtual machines.

Procedure 10.47. Creating an Affinity Group

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click the Affinity Groups tab in the details pane.
  3. Click the New button to open the New Affinity Group window.
  4. Enter a name and description for the affinity group in the Name text field and Description text field.
  5. Select the Positive check box to apply positive affinity, or ensure this check box is cleared to apply negative affinity.
  6. Select the Enforcing check box to apply hard enforcement, or ensure this check box is cleared to apply soft enforcement.
  7. Use the drop-down menu to select the virtual machines to be added to the affinity group. Use the + and - buttons to add or remove additional virtual machines.
  8. Click OK.
Result
You have created a virtual machine affinity group and specified the parameters and conditions to be applied to the virtual machines that are members of that group.
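Affinity groups can also be created against the REST API at the cluster level. The request body below is a sketch: the endpoint path and element names are assumptions based on the v3 API and should be verified against the Red Hat Enterprise Virtualization REST API Guide; the group name and description are placeholders.

```xml
<!-- POST /api/clusters/{cluster:id}/affinitygroups -->
<affinity_group>
    <name>license_group</name>
    <description>Keep licensed virtual machines on one host</description>
    <positive>true</positive>   <!-- run together; false = run independently -->
    <enforcing>true</enforcing> <!-- hard enforcement; false = soft -->
</affinity_group>
```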

10.13.3. Editing an Affinity Group

Summary
You can edit the settings of existing affinity groups.

Procedure 10.48. Editing an Affinity Group

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click the Affinity Groups tab in the details pane.
  3. Click the Edit button to open the Edit Affinity Group window.
  4. Change the Positive and Enforcing check boxes to the preferred values and use the + and - buttons to add or remove virtual machines to or from the affinity group.
  5. Click OK.
Result
You have edited an affinity group.

10.13.4. Removing an Affinity Group

Summary
You can remove an existing affinity group.

Procedure 10.49. Removing an Affinity Group

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click the Affinity Groups tab in the details pane.
  3. Click the Remove button and click OK when prompted to remove the affinity group.
Result
You have removed an affinity group, and the affinity policy that applied to the virtual machines that were members of that affinity group no longer applies.

10.14. Exporting and Importing Virtual Machines and Templates

Virtual machines and templates stored in Open Virtual Machine Format (OVF) can be exported from and imported to data centers in the same or different Red Hat Enterprise Virtualization environment.
To export or import virtual machines and templates, an active export domain must be attached to the data center containing the virtual machine or template to be exported or imported. An export domain acts as a temporary storage area containing two directories for each exported virtual machine or template. One directory contains the OVF files for the virtual machine or template. The other directory holds the disk image or images for the virtual machine or template.
There are three stages to exporting and importing virtual machines and templates:
  1. Export the virtual machine or template to an export domain.
  2. Detach the export domain from one data center, and attach it to another. You can attach it to a different data center in the same Red Hat Enterprise Virtualization environment, or attach it to a data center in a separate Red Hat Enterprise Virtualization environment that is managed by another installation of the Red Hat Enterprise Virtualization Manager.

    Note

    An export domain can only be active in one data center at a given time. This means that the export domain must be attached to either the source data center or the destination data center.
  3. Import the virtual machine or template into the data center to which the export domain is attached.
When you export or import a virtual machine or template, properties including basic details such as the name and description, resource allocation, and high availability settings of that virtual machine or template are preserved. However, if a virtual machine or template is imported into a different Red Hat Enterprise Virtualization environment, user roles and permissions may be different and will need to be updated to ensure appropriate access is available.
You can also import virtual machines from other virtualization providers, such as Xen or VMware, or import Windows virtual machines, using the V2V feature. V2V converts virtual machines and places them in the export domain.

10.14.1. Graphical Overview for Exporting and Importing Virtual Machines and Templates

This procedure provides a graphical overview of the steps required to export a virtual machine or template from one data center and import that virtual machine or template into another data center.

Procedure 10.50. Exporting and Importing Virtual Machines and Templates

  1. Attach the export domain to the source data center.
    Attach Export Domain

    Figure 10.18. Attach Export Domain

  2. Export the virtual machine or template to the export domain.
    Export the Virtual Resource

    Figure 10.19. Export the Virtual Resource

  3. Detach the export domain from the source data center.
    Detach Export Domain

    Figure 10.20. Detach Export Domain

  4. Attach the export domain to the destination data center.
    Attach the Export Domain

    Figure 10.21. Attach the Export Domain

  5. Import the virtual machine or template into the destination data center.
    Import the virtual resource

    Figure 10.22. Import the virtual resource

10.14.2. Exporting a Virtual Machine to the Export Domain

Export a virtual machine to the export domain so that it can be imported into a different data center. Before you begin, the export domain must be attached to the data center that contains the virtual machine to be exported. The virtual machine must be stopped.

Procedure 10.51. Exporting a Virtual Machine to the Export Domain

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click Export to open the Export Virtual Machine window.
  3. Optionally select the following check boxes:
    • Force Override: overrides existing images of the virtual machine on the export domain.
    • Collapse Snapshots: creates a single export volume per disk. This option removes snapshot restore points and includes the template in a template-based virtual machine, and removes any dependencies a virtual machine has on a template. For a virtual machine that is dependent on a template, either select this option, export the template with the virtual machine, or make sure the template exists in the destination data center.

      Note

      When you create a virtual machine from a template, two storage allocation options are available under New Virtual Machine → Resource Allocation → Storage Allocation.
      • If Clone was selected, the virtual machine is not dependent on the template. The template does not have to exist in the destination data center.
      • If Thin was selected, the virtual machine is dependent on the template, so the template must exist in the destination data center or be exported with the virtual machine. Alternatively, select the Collapse Snapshots check box to collapse the template disk and virtual machine disk into a single disk.
      To check which option was selected, select a virtual machine and click the General tab in the details pane.
  4. Click OK to export the virtual machine and close the window.
The export of the virtual machine begins. The virtual machine displays in the Virtual Machines results list with an Image Locked status while it is exported. Depending on the size of your virtual machine hard disk images, and your storage hardware, this can take up to an hour. Use the Events tab to view the progress. When complete, the virtual machine has been exported to the export domain and displays on the VM Import tab of the export domain's details pane.
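The export action is also available through the REST API. The sketch below is hedged: the endpoint path, and the mapping of the exclusive and discard_snapshots elements to the Force Override and Collapse Snapshots check boxes, are assumptions to confirm against the Red Hat Enterprise Virtualization REST API Guide; the export domain name is a placeholder.

```xml
<!-- POST /api/vms/{vm:id}/export -->
<action>
    <storage_domain>
        <name>export_domain</name> <!-- name of your export domain -->
    </storage_domain>
    <exclusive>true</exclusive>                 <!-- assumed: Force Override -->
    <discard_snapshots>true</discard_snapshots> <!-- assumed: Collapse Snapshots -->
</action>
```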

10.14.3. Importing a Virtual Machine into the Destination Data Center

You have a virtual machine on an export domain. Before the virtual machine can be imported to a new data center, the export domain must be attached to the destination data center.

Procedure 10.52. Importing a Virtual Machine into the Destination Data Center

  1. Click the Storage tab, and select the export domain in the results list. The export domain must have a status of Active.
  2. Select the VM Import tab in the details pane to list the available virtual machines to import.
  3. Select one or more virtual machines to import and click Import.
    Import Virtual Machine

    Figure 10.23. Import Virtual Machine

  4. Select the Default Storage Domain and Cluster.
  5. Select the Collapse Snapshots check box to remove snapshot restore points and include templates in template-based virtual machines.
  6. Click the virtual machine to be imported and click the Disks sub-tab. From this tab, use the Allocation Policy and Storage Domain drop-down lists to select whether the disk used by the virtual machine will be thinly provisioned or preallocated, and to select the storage domain on which the disk will be stored. An icon also indicates which of the disks to be imported acts as the boot disk for that virtual machine.
  7. Click OK to import the virtual machines.
    The Import Virtual Machine Conflict window opens if the virtual machine exists in the virtualized environment.
    Import Virtual Machine Conflict Window

    Figure 10.24. Import Virtual Machine Conflict Window

  8. Choose one of the following radio buttons:
    • Don't import
    • Import as cloned and enter a unique name for the virtual machine in the New Name field.
  9. Optionally select the Apply to all check box to import all duplicated virtual machines with the same suffix, and then enter a suffix in the Suffix to add to the cloned VMs field.
  10. Click OK.

Important

During a single import operation, you can only import virtual machines that share the same architecture. If any of the virtual machines to be imported have a different architecture from that of the other virtual machines to be imported, a warning displays and you are prompted to change your selection so that only virtual machines with the same architecture are imported.
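The import step can likewise be scripted against the REST API. The body below is a sketch only: the endpoint path, and the mapping of the clone element and the nested vm name to the Import as cloned option, are assumptions to verify against the Red Hat Enterprise Virtualization REST API Guide; the cluster, storage domain, and virtual machine names are placeholders.

```xml
<!-- POST /api/storagedomains/{export_domain:id}/vms/{vm:id}/import -->
<action>
    <cluster><name>Default</name></cluster>
    <storage_domain><name>data_domain</name></storage_domain>
    <clone>true</clone>            <!-- assumed: Import as cloned -->
    <vm><name>vm01-clone</name></vm>
</action>
```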

10.15. Migrating Virtual Machines Between Hosts

10.15.1. What is Live Migration?

Live migration provides the ability to move a running virtual machine between physical hosts with no interruption to service. The virtual machine remains powered on and user applications continue to run while the virtual machine is relocated to a new physical host. In the background, the virtual machine's RAM is copied from the source host to the destination host. Storage and network connectivity are not altered.

10.15.2. Live Migration Prerequisites

Live migration is used to seamlessly move virtual machines to support a number of common maintenance tasks. Ensure that your Red Hat Enterprise Virtualization environment is correctly configured to support live migration well in advance of using it.
At a minimum, for successful live migration of virtual machines to be possible:
  • The source and destination host should both be members of the same cluster, ensuring CPU compatibility between them.

    Note

    Live migrating virtual machines between different clusters is generally not recommended. The only use case currently supported is documented at https://access.redhat.com/articles/1390733.
  • The source and destination host must have a status of Up.
  • The source and destination host must have access to the same virtual networks and VLANs.
  • The source and destination host must have access to the data storage domain on which the virtual machine resides.
  • There must be enough CPU capacity on the destination host to support the virtual machine's requirements.
  • There must be enough RAM on the destination host that is not in use to support the virtual machine's requirements.
  • The migrating virtual machine must not have the cache!=none custom property set.
In addition, for best performance, the storage and management networks should be split to avoid network saturation. Virtual machine migration involves transferring large amounts of data between hosts.
Live migration is performed using the management network. Each live migration event is limited to a maximum transfer speed of 30 MBps, and the number of concurrent migrations supported is also limited by default. Despite these measures, concurrent migrations have the potential to saturate the management network. It is recommended that separate logical networks are created for storage, display, and virtual machine data to minimize the risk of network saturation.
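Given the 30 MBps per-migration cap described above, a rough lower bound on how long a migration will occupy the management network can be estimated from the virtual machine's RAM size. The dirty_factor below is an illustrative assumption for pages re-copied while the transfer is in flight, not a figure from the product.

```python
def migration_seconds(ram_mb, rate_mbps=30, dirty_factor=1.2):
    """Rough lower bound on live migration time over the management network.

    ram_mb       -- RAM allocated to the virtual machine, in MB
    rate_mbps    -- per-migration transfer cap (30 MBps, as stated above)
    dirty_factor -- assumed overhead for re-copying pages dirtied during
                    the transfer (illustrative, not a product figure)
    """
    return ram_mb * dirty_factor / rate_mbps

# A virtual machine with 4096 MB of RAM needs at least ~164 seconds of
# management-network time at the 30 MBps cap.
print(round(migration_seconds(4096)))  # 164
```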

10.15.3. Automatic Virtual Machine Migration

Red Hat Enterprise Virtualization Manager automatically initiates live migration of all virtual machines running on a host when the host is moved into maintenance mode. The destination host for each virtual machine is assessed as the virtual machine is migrated, in order to spread the load across the cluster.
The Manager automatically initiates live migration of virtual machines in order to maintain load balancing or power saving levels in line with cluster policy. While no cluster policy is defined by default, it is recommended that you specify the cluster policy which best suits the needs of your environment. You can also disable automatic, or even manual, live migration of specific virtual machines where required.

10.15.4. Preventing Automatic Migration of a Virtual Machine

Summary
Red Hat Enterprise Virtualization Manager allows you to disable automatic migration of virtual machines. You can also disable manual migration of virtual machines by setting the virtual machine to run only on a specific host.
The ability to disable automatic migration and require a virtual machine to run on a particular host is useful when using application high availability products, such as Red Hat High Availability or Cluster Suite.

Procedure 10.53. Preventing automatic migration of a virtual machine

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click Edit to open the Edit Virtual Machine window.
    Edit Virtual Machine Window

    Figure 10.25. Edit Virtual Machine Window

  3. Click the Host tab.
  4. Use the Start Running On radio buttons to designate the virtual machine to run on Any Host in Cluster or a Specific host. If applicable, select a specific host from the drop-down menu.

    Warning

    Explicitly assigning a virtual machine to a specific host and disabling migration is mutually exclusive with Red Hat Enterprise Virtualization high availability. Virtual machines that are assigned to a specific host can only be made highly available using third party high availability products like Red Hat High Availability.
  5. Use the Migration Options drop-down menu to select a migration option. Selecting Do not allow migration enables the Pass-Through Host CPU check box.
  6. If applicable, enter relevant CPU Pinning topology commands in the text field.
  7. Click OK to save the changes and close the window.
Result
You have changed the migration settings for the virtual machine.

10.15.5. Manually Migrating Virtual Machines

A running virtual machine can be live migrated to any host within its designated host cluster. Live migration of virtual machines does not cause any service interruption. Migrating virtual machines to a different host is especially useful if the load on a particular host is too high. For live migration prerequisites, see Section 10.15.2, “Live Migration Prerequisites”.

Note

When you place a host into maintenance mode, the virtual machines running on that host are automatically migrated to other hosts in the same cluster. You do not need to manually migrate these virtual machines.

Note

Live migrating virtual machines between different clusters is generally not recommended. The only use case currently supported is documented at https://access.redhat.com/articles/1390733.

Procedure 10.54. Manually Migrating Virtual Machines

  1. Click the Virtual Machines tab and select a running virtual machine.
  2. Click Migrate to open the Migrate Virtual Machine(s) window.
  3. Use the radio buttons to select whether to Select Host Automatically or to Select Destination Host, specifying the host using the drop-down list.

    Note

    When the Select Host Automatically option is selected, the system determines the host to which the virtual machine is migrated according to the load balancing and power management rules set up in the cluster policy.
  4. Click OK to commence migration and close the window.
During migration, progress is shown in the Migration progress bar. Once migration is complete, the Host column updates to display the host to which the virtual machine has been migrated.
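A manual migration can also be triggered through the REST API. The sketch below is illustrative: the endpoint path and element names are assumptions to confirm against the Red Hat Enterprise Virtualization REST API Guide, and the host ID is a placeholder.

```xml
<!-- POST /api/vms/{vm:id}/migrate -->
<action>
    <!-- Omit the host element entirely to let the cluster policy select
         the destination host automatically -->
    <host id="destination_host_id"/>
</action>
```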

10.15.6. Setting Migration Priority

Summary
Red Hat Enterprise Virtualization Manager queues concurrent requests for migration of virtual machines off of a given host. The load balancing process runs every minute. Hosts already involved in a migration event are not included in the migration cycle until their migration event has completed. When there is a migration request in the queue and available hosts in the cluster to action it, a migration event is triggered in line with the load balancing policy for the cluster.
You can influence the ordering of the migration queue, for example by setting mission-critical virtual machines to migrate before others. The Red Hat Enterprise Virtualization Manager allows you to set the priority of each virtual machine to facilitate this. Virtual machine migrations are ordered by priority; virtual machines with the highest priority are migrated first.

Procedure 10.55. Setting Migration Priority

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click Edit to open the Edit Virtual Machine window.
  3. Select the High Availability tab.
  4. Use the radio buttons to set the Priority for Run/Migrate Queue of the virtual machine to one of Low, Medium, or High.
  5. Click OK to save changes and close the window.
Result
The virtual machine's migration priority has been modified.

10.15.7. Canceling Ongoing Virtual Machine Migrations

Summary
A virtual machine migration is taking longer than you expected. You'd like to be sure where all virtual machines are running before you make any changes to your environment.

Procedure 10.56. Canceling Ongoing Virtual Machine Migrations

  1. Select the migrating virtual machine. It is displayed in the Virtual Machines resource tab with a status of Migrating from.
  2. Click the Cancel Migration button at the top of the results list. Alternatively, right-click on the virtual machine and select Cancel Migration from the context menu.
Result
The virtual machine's status returns from Migrating from to Up.

10.15.8. Event and Log Notification upon Automatic Migration of Highly Available Virtual Servers

When a virtual server is automatically migrated because of the high availability function, the details of an automatic migration are documented in the Events tab and in the engine log to aid in troubleshooting, as illustrated in the following examples:

Example 10.1. Notification in the Events Tab of the Web Admin Portal

Highly Available Virtual_Machine_Name failed. It will be restarted automatically.
Virtual_Machine_Name was restarted on Host Host_Name

Example 10.2. Notification in the Manager engine.log

This log can be found on the Red Hat Enterprise Virtualization Manager at /var/log/ovirt-engine/engine.log:
Failed to start Highly Available VM. Attempting to restart. VM Name: Virtual_Machine_Name, VM Id:Virtual_Machine_ID_Number
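When troubleshooting, it can help to pull all HA restart attempts out of the log at once. The snippet below writes a sample file modelled on the message format shown above so it is self-contained; on a live Manager you would read /var/log/ovirt-engine/engine.log instead, where real entries also carry timestamps and thread details.

```python
from pathlib import Path

# Sample file modelled on the engine.log message shown above; on a live
# Manager the real file is /var/log/ovirt-engine/engine.log.
sample = Path("/tmp/engine-sample.log")
sample.write_text(
    "Failed to start Highly Available VM. Attempting to restart. "
    "VM Name: vm01, VM Id:0001\n"
)

# Collect every HA restart attempt recorded in the log.
matches = [line for line in sample.read_text().splitlines()
           if "Failed to start Highly Available" in line]
print(len(matches))  # 1
```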

10.16. Improving Uptime with Virtual Machine High Availability

10.16.1. Why Use High Availability?

High availability is recommended for virtual machines running critical workloads.
High availability can ensure that virtual machines are restarted in the following scenarios:
  • When a host becomes non-operational due to hardware failure.
  • When a host is put into maintenance mode for scheduled downtime.
  • When a host becomes unavailable because it has lost communication with an external storage resource.
A high availability virtual machine is automatically restarted, either on its original host or another host in the cluster.

10.16.2. What is High Availability?

High availability means that a virtual machine will be automatically restarted if its process is interrupted. This happens if the virtual machine is terminated by methods other than powering off from within the guest or sending the shutdown command from the Manager. When these events occur, the highly available virtual machine is automatically restarted, either on its original host or another host in the cluster.
High availability is possible because the Red Hat Enterprise Virtualization Manager constantly monitors the hosts and storage, and automatically detects hardware failure. If host failure is detected, any virtual machine configured to be highly available is automatically restarted on another host in the cluster.
With high availability, interruption to service is minimal because virtual machines are restarted within seconds with no user intervention required. High availability keeps your resources balanced by restarting guests on a host with low current resource utilization, or based on any workload balancing or power saving policies that you configure. This ensures that there is sufficient capacity to restart virtual machines at all times.

10.16.3. High Availability Considerations

A highly available host requires a power management device and configured fencing parameters. In addition, for a virtual machine to be highly available when its host becomes non-operational, it needs to be started on another available host in the cluster. To enable the migration of highly available virtual machines:
  • Power management must be configured for the hosts running the highly available virtual machines.
  • The host running the highly available virtual machine must be part of a cluster which has other available hosts.
  • The destination host must be running.
  • The source and destination host must have access to the data domain on which the virtual machine resides.
  • The source and destination host must have access to the same virtual networks and VLANs.
  • There must be enough CPUs on the destination host that are not in use to support the virtual machine's requirements.
  • There must be enough RAM on the destination host that is not in use to support the virtual machine's requirements.

10.16.4. Configuring a Highly Available Virtual Machine

Summary
High availability must be configured individually for each virtual machine.

Procedure 10.57. Configuring a Highly Available Virtual Machine

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click Edit to open the Edit Virtual Machine window.
  3. Click the High Availability tab.
    Set virtual machine high availability

    Figure 10.26. Set virtual machine high availability

  4. Select the Highly Available check box to enable high availability for the virtual machine.
  5. Use the radio buttons to set the Priority for Run/Migrate Queue of the virtual machine to one of Low, Medium, or High. When migration is triggered, a queue is created in which the high priority virtual machines are migrated first. If a cluster is running low on resources, only the high priority virtual machines are migrated.
  6. Click OK.
Result
You have configured high availability for a virtual machine. You can check if a virtual machine is highly available by selecting the virtual machine and clicking on the General tab in the details pane.
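High availability can also be configured by updating the virtual machine through the REST API. The sketch below is hedged: the element names, and in particular the assumed mapping of the numeric priority to the Low/Medium/High radio buttons, should be verified against the Red Hat Enterprise Virtualization REST API Guide.

```xml
<!-- PUT /api/vms/{vm:id} -->
<vm>
    <high_availability>
        <enabled>true</enabled>
        <!-- Assumed mapping: 1 = Low, 50 = Medium, 100 = High -->
        <priority>100</priority>
    </high_availability>
</vm>
```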

10.17. Other Virtual Machine Tasks

10.17.1. Enabling SAP monitoring for a virtual machine from the Administration Portal

Summary
Enable SAP monitoring on a virtual machine so that the virtual machine can be recognized by SAP monitoring systems.

Procedure 10.58. Enabling SAP monitoring for a Virtual Machine from the Administration Portal

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click the Edit button to open the Edit Virtual Machine window.
  3. Select the Custom Properties tab.
    Enable SAP

    Figure 10.27. Enable SAP

  4. Use the drop-down menu to select sap_agent. Ensure the secondary drop-down menu is set to True.
    If previous properties have been set, select the plus sign to add a new property rule and select sap_agent.
  5. Click OK to save changes and close the window.
Result
You have enabled SAP monitoring for your virtual machine.

10.17.2. Configuring Red Hat Enterprise Linux 5.4 or Higher Virtual Machines to use SPICE

10.17.2.1. Using SPICE on virtual machines running versions of Red Hat Enterprise Linux released prior to 5.4

SPICE is a remote display protocol designed for virtual environments, which enables you to view a virtualized desktop or server. SPICE delivers a high quality user experience, keeps CPU consumption low, and supports high quality video streaming.
Using SPICE on a Linux machine significantly improves the movement of the mouse cursor on the console of the virtual machine. To use SPICE, the X-Windows system requires additional QXL drivers. The QXL drivers are provided with Red Hat Enterprise Linux 5.4 and newer. Older versions are not supported. Installing SPICE on a virtual machine running Red Hat Enterprise Linux significantly improves the performance of the graphical user interface.

Note

Typically, this is most useful for virtual machines where the user requires the use of the graphical user interface. System administrators who are creating virtual servers may prefer not to configure SPICE if their use of the graphical user interface is minimal.

10.17.2.2. Installing QXL drivers on virtual machines

Summary
This procedure installs QXL drivers on virtual machines running Red Hat Enterprise Linux 5.4 or higher. This is unnecessary for virtual machines running Red Hat Enterprise Linux 6.0 and higher as the QXL drivers are installed by default.

Procedure 10.59. Installing QXL drivers on a virtual machine

  1. Log in to a Red Hat Enterprise Linux virtual machine.
  2. Open a terminal.
  3. Run the following command as root:
    # yum install xorg-x11-drv-qxl
Result
The QXL drivers have been installed and must now be configured.

10.17.2.3. Configuring QXL drivers on virtual machines

Summary
You can configure QXL drivers using either a graphical interface or the command line. Perform only one of the following procedures.

Note

QXL drivers are installed and configured by default in Red Hat Enterprise Linux 6.0 and higher.

Procedure 10.60. Configuring QXL drivers in GNOME

  1. Click System.
  2. Click Administration.
  3. Click Display.
  4. Click the Hardware tab.
  5. Click Video Cards Configure.
  6. Select qxl and click OK.
  7. Restart X-Windows by logging out of the virtual machine and logging back in.

Procedure 10.61. Configuring QXL drivers on the command line

  1. Back up /etc/X11/xorg.conf:
    # cp /etc/X11/xorg.conf /etc/X11/xorg.conf.$$.backup
  2. Make the following change to the Device section of /etc/X11/xorg.conf:
    Section "Device"
    Identifier  "Videocard0"
    Driver      "qxl"
    EndSection
    
Result
You have configured QXL drivers to enable your virtual machine to use SPICE.

10.17.2.4. Configuring a virtual machine's tablet and mouse to use SPICE

Summary
Edit the /etc/X11/xorg.conf file to enable SPICE for your virtual machine's tablet devices.

Procedure 10.62. Configuring a virtual machine's tablet and mouse to use SPICE

  1. Verify that the tablet device is available on your guest:
    # /sbin/lsusb -v | grep 'QEMU USB Tablet'
    If there is no output from the command, do not continue configuring the tablet.
  2. Back up /etc/X11/xorg.conf by running this command:
    # cp /etc/X11/xorg.conf /etc/X11/xorg.conf.$$.backup
  3. Make the following changes to /etc/X11/xorg.conf:
    Section "ServerLayout"
    Identifier     "single head configuration"
    Screen      0  "Screen0" 0 0
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Tablet" "SendCoreEvents"
    InputDevice    "Mouse" "CorePointer"
    EndSection
    							 
    Section "InputDevice"
    Identifier  "Mouse"
    Driver      "void"
    #Option      "Device" "/dev/input/mice"
    #Option      "Emulate3Buttons" "yes"
    EndSection
    							 
    Section "InputDevice"
    Identifier  "Tablet"
    Driver      "evdev"
    Option      "Device" "/dev/input/event2"
    Option "CorePointer" "true"
    EndSection
    
  4. Log out and log back into the virtual machine to restart X-Windows.
Result
You have enabled a tablet and a mouse device on your virtual machine to use SPICE.

10.17.3. KVM virtual machine timing management

Virtualization poses various challenges for virtual machine timekeeping. Virtual machines that use the Time Stamp Counter (TSC) as a clock source may suffer timing issues, as some CPUs do not have a constant Time Stamp Counter. Virtual machines running without accurate timekeeping can seriously affect some networked applications, because the virtual machine runs faster or slower than the actual time.
KVM works around this issue by providing virtual machines with a paravirtualized clock. The KVM pvclock provides a stable source of timing for KVM guests that support it.
Presently, only Red Hat Enterprise Linux 5.4 and higher virtual machines fully support the paravirtualized clock.
Virtual machines can have several problems caused by inaccurate clocks and counters:
  • Clocks can fall out of synchronization with the actual time which invalidates sessions and affects networks.
  • Virtual machines with slower clocks may have issues migrating.
These problems exist on other virtualization platforms and timing should always be tested.

Important

The Network Time Protocol (NTP) daemon should be running on the host and the virtual machines. Enable the ntpd service:
# service ntpd start
Add the ntpd service to the default startup sequence:
# chkconfig ntpd on
Using the ntpd service should minimize the effects of clock skew in all cases.
The NTP servers you are trying to use must be operational and accessible to your hosts and virtual machines.
Determining if your CPU has the constant Time Stamp Counter
Your CPU has a constant Time Stamp Counter if the constant_tsc flag is present. To determine if your CPU has the constant_tsc flag, run the following command:
$ grep constant_tsc /proc/cpuinfo
If any output is given, your CPU has the constant_tsc bit. If no output is given, follow the instructions below.
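The check above can be wrapped in a small helper for scripting. This is a sketch: `check_constant_tsc` is a hypothetical function name, and the optional file argument exists only to make the function easy to test (it defaults to /proc/cpuinfo).

```shell
# check_constant_tsc [FILE] - report whether the cpuinfo file (default
# /proc/cpuinfo) advertises a constant Time Stamp Counter.
check_constant_tsc() {
    cpuinfo="${1:-/proc/cpuinfo}"
    if grep -q constant_tsc "$cpuinfo" 2>/dev/null; then
        echo "constant_tsc present"
    else
        echo "constant_tsc absent"
    fi
}
```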
Configuring hosts without a constant Time Stamp Counter
Systems without constant time stamp counters require additional configuration. Power management features interfere with accurate time keeping and must be disabled for virtual machines to accurately keep time with KVM.

Important

These instructions are for AMD revision F CPUs only.
If the CPU lacks the constant_tsc bit, disable all power management features (BZ#513138). Each system has several timers it uses to keep time. The TSC can become unstable on the host, sometimes as a result of cpufreq changes, deep C states, or migration to a host with a faster TSC. Deep C sleep states can stop the TSC, so to prevent the kernel from using deep C states, append "processor.max_cstate=1" to the kernel boot options in the grub.conf file on the host:
title Red Hat Enterprise Linux Server (2.6.18-159.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-159.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet processor.max_cstate=1
Disable cpufreq (only necessary on hosts without the constant_tsc flag) by editing the /etc/sysconfig/cpuspeed configuration file and changing the MIN_SPEED and MAX_SPEED variables to the highest frequency available. Valid limits can be found in the /sys/devices/system/cpu/cpu*/cpufreq/scaling_available_frequencies files.
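The edit to /etc/sysconfig/cpuspeed can be sketched as a helper that pins both variables to the highest listed frequency. `pin_cpuspeed` is a hypothetical name; it takes the frequencies file and the configuration file as arguments so the system paths above can be substituted.

```shell
# pin_cpuspeed FREQS_FILE CONFIG_FILE - set MIN_SPEED and MAX_SPEED in
# CONFIG_FILE to the highest frequency listed in FREQS_FILE.
pin_cpuspeed() {
    max=$(tr ' ' '\n' < "$1" | sort -n | tail -1)
    sed -i -e "s/^MIN_SPEED=.*/MIN_SPEED=$max/" \
           -e "s/^MAX_SPEED=.*/MAX_SPEED=$max/" "$2"
}
# Typical invocation on a host, using the paths from the text above:
# pin_cpuspeed /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies \
#              /etc/sysconfig/cpuspeed
```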
Using the engine-config tool to receive alerts when hosts drift out of sync
You can use the engine-config tool to configure alerts when your hosts drift out of sync.
There are two relevant parameters for time drift on hosts: EnableHostTimeDrift and HostTimeDriftInSec. EnableHostTimeDrift, with a default value of false, can be enabled to receive alert notifications of host time drift. The HostTimeDriftInSec parameter sets the maximum allowable drift before alerts start being sent.
Alerts are sent once per hour per host.
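Both parameters can be set with the engine-config tool on the Manager. The following is a hedged sketch: the 300-second threshold is an illustrative value, and the engine service is restarted so the new values take effect.

```
# engine-config -s EnableHostTimeDrift=true
# engine-config -s HostTimeDriftInSec=300
# service ovirt-engine restart
```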
Using the paravirtualized clock with Red Hat Enterprise Linux virtual machines
For certain Red Hat Enterprise Linux virtual machines, additional kernel parameters are required. These parameters can be set by appending them to the end of the /kernel line in the /boot/grub/grub.conf file of the virtual machine.

Note

The process of configuring kernel parameters can be automated using the ktune package.
The ktune package provides an interactive Bourne shell script, fix_clock_drift.sh. When run as the superuser, this script inspects various system parameters to determine if the virtual machine on which it is run is susceptible to clock drift under load. If so, it creates a new grub.conf.kvm file in the /boot/grub/ directory. This file contains a kernel boot line with additional kernel parameters that allow the kernel to account for and prevent significant clock drift on the KVM virtual machine. After fix_clock_drift.sh has created the grub.conf.kvm file, back up the virtual machine's current grub.conf file, inspect the new grub.conf.kvm file to ensure that it is identical to grub.conf except for the additional boot line parameters, rename grub.conf.kvm to grub.conf, and reboot the virtual machine.
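The manual steps after running fix_clock_drift.sh can be sketched as a helper. `apply_grub_kvm` is a hypothetical name; it takes the grub directory as an argument so it can be tested against a copy, and the two files should be compared with diff before the rename.

```shell
# apply_grub_kvm [DIR] - back up DIR/grub.conf and promote DIR/grub.conf.kvm
# in its place. Inspect both files before running; reboot afterwards.
apply_grub_kvm() {
    dir="${1:-/boot/grub}"
    cp "$dir/grub.conf" "$dir/grub.conf.backup"
    mv "$dir/grub.conf.kvm" "$dir/grub.conf"
}
```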
The table below lists versions of Red Hat Enterprise Linux and the parameters required for virtual machines on systems without a constant Time Stamp Counter.
Red Hat Enterprise Linux Additional virtual machine kernel parameters
5.4 AMD64/Intel 64 with the paravirtualized clock Additional parameters are not required
5.4 AMD64/Intel 64 without the paravirtualized clock notsc lpj=n
5.4 x86 with the paravirtualized clock Additional parameters are not required
5.4 x86 without the paravirtualized clock clocksource=acpi_pm lpj=n
5.3 AMD64/Intel 64 notsc
5.3 x86 clocksource=acpi_pm
4.8 AMD64/Intel 64 notsc
4.8 x86 clock=pmtmr
3.9 AMD64/Intel 64 Additional parameters are not required
3.9 x86 Additional parameters are not required
Using the Real-Time Clock with Windows virtual machines
Windows uses both the Real-Time Clock (RTC) and the Time Stamp Counter (TSC). For Windows virtual machines, the Real-Time Clock can be used instead of the TSC for all time sources, which resolves virtual machine timing issues.
To enable the Real-Time Clock for the PMTIMER clock source (the PMTIMER usually uses the TSC), add the following switch to the Windows boot settings, which are stored in the boot.ini file:
/usepmtimer
For more information on Windows boot settings and the pmtimer option, refer to Available switch options for the Windows XP and the Windows Server 2003 Boot.ini files.
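A typical boot.ini entry with the /usepmtimer switch appended might look as follows. This is a sketch; the ARC path and operating system name will differ on your system.

```
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /usepmtimer
```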

10.17.4. Monitoring Virtual Machine Login Activity Using the Sessions Tab

Client virtual machines connecting to Red Hat Enterprise Virtualization will require maintenance and/or updates on occasion. The information contained in the Sessions tab allows you to monitor virtual machine login activity to avoid performing maintenance tasks on machines in active use.

Procedure 10.63. Monitoring a virtual machine's activity

  1. Select a virtual machine with any of the following tabs or modes: the Virtual Machines resource tab, Tree mode, or Search function.
  2. Select the Sessions tab in the details pane to display Logged-in User, Console User, and Console Client IP.
Virtual machines sessions tab

Figure 10.28. Virtual machines sessions tab

Chapter 11. Templates

11.1. Introduction to Templates

A template is a copy of a virtual machine that you can use to simplify the subsequent, repeated creation of similar virtual machines. Templates capture the software configuration, the hardware configuration, and the software installed on the virtual machine on which the template is based. The virtual machine on which a template is based is known as the source virtual machine.
When you create a template based on a virtual machine, a read-only copy of the virtual machine's disk is created. This read-only disk becomes the base disk image of the new template, and of any virtual machines created based on the template. As such, the template cannot be deleted while any virtual machines created based on the template exist in the environment.
Virtual machines created based on a template use the same NIC type and driver as the original virtual machine, but are assigned separate, unique MAC addresses.

11.2. Sealing Virtual Machines in Preparation for Deployment as Templates

This section describes procedures for sealing Linux virtual machines and Windows virtual machines. Sealing is the process of removing all system-specific details from a virtual machine before creating a template based on that virtual machine. Sealing is necessary to prevent the same details from appearing on multiple virtual machines created based on the same template. It is also necessary to ensure the functionality of other features, such as predictable vNIC order.

11.2.1. Sealing a Linux Virtual Machine for Deployment as a Template

There are two main methods for sealing a Linux virtual machine in preparation for using that virtual machine to create a template: manually, or using the sys-unconfig command. Sealing a Linux virtual machine manually requires you to create a file on the virtual machine that acts as a flag for initiating various configuration tasks the next time you start that virtual machine. The sys-unconfig command allows you to automate this process. However, both of these methods also require you to manually delete files on the virtual machine that are specific to that virtual machine or might cause conflicts amongst virtual machines created based on the template you will create based on that virtual machine. As such, both are valid methods for sealing a Linux virtual machine and will achieve the same result.

11.2.1.1. Sealing a Linux Virtual Machine Manually for Deployment as a Template

Summary
You must generalize (seal) a Linux virtual machine before creating a template based on that virtual machine.

Procedure 11.1. Sealing a Linux Virtual Machine

  1. Log in to the virtual machine.
  2. Flag the system for re-configuration by running the following command as root:
    # touch /.unconfigured
  3. Run the following command to remove ssh host keys:
    # rm -rf /etc/ssh/ssh_host_*
  4. Set HOSTNAME=localhost.localdomain in /etc/sysconfig/network for Red Hat Enterprise Linux 6 or /etc/hostname for Red Hat Enterprise Linux 7.
  5. Run the following command to remove /etc/udev/rules.d/70-*:
    # rm -rf /etc/udev/rules.d/70-*
  6. Remove the HWADDR line and UUID line from /etc/sysconfig/network-scripts/ifcfg-eth*.
  7. Optionally, delete all the logs from /var/log and build logs from /root.
  8. Run the following command to shut down the virtual machine:
    # poweroff
Result
The virtual machine is sealed and can be made into a template. You can deploy Linux virtual machines from this template without experiencing configuration file conflicts.

Note

The steps provided are the minimum steps required to seal a Red Hat Enterprise Linux virtual machine for use as a template. Additional host and site-specific custom steps are available.
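The file-removal steps above can be collected into one helper. This is a sketch: `seal_linux_vm` is a hypothetical name, the optional root-prefix argument exists only to make the function testable, and the host name edit and log cleanup are left manual because they differ between releases.

```shell
# seal_linux_vm [ROOT] - apply the minimal sealing steps to the filesystem
# rooted at ROOT (defaults to /). Run as root, then power the machine off.
# Setting HOSTNAME=localhost.localdomain (/etc/sysconfig/network on RHEL 6,
# /etc/hostname on RHEL 7) and clearing /var/log are left manual.
seal_linux_vm() {
    root="${1:-}"
    touch "$root/.unconfigured"             # flag the system for re-configuration
    rm -f "$root"/etc/ssh/ssh_host_*        # remove SSH host keys
    rm -f "$root"/etc/udev/rules.d/70-*     # remove persistent udev rules
    # Strip machine-specific HWADDR and UUID lines from the NIC configuration.
    for cfg in "$root"/etc/sysconfig/network-scripts/ifcfg-eth*; do
        if [ -f "$cfg" ]; then
            sed -i '/^HWADDR=/d;/^UUID=/d' "$cfg"
        fi
    done
}
```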

11.2.1.2. Sealing a Linux Virtual Machine for Deployment as a Template using sys-unconfig

Summary
You must generalize (seal) a Linux virtual machine before creating a template based on that virtual machine.

Procedure 11.2. Sealing a Linux Virtual Machine using sys-unconfig

  1. Log in to the virtual machine.
  2. Run the following command to remove ssh host keys:
    # rm -rf /etc/ssh/ssh_host_*
  3. Set HOSTNAME=localhost.localdomain in /etc/sysconfig/network for Red Hat Enterprise Linux 6 or /etc/hostname for Red Hat Enterprise Linux 7.
  4. Remove the HWADDR line and UUID line from /etc/sysconfig/network-scripts/ifcfg-eth*.
  5. Optionally, delete all the logs from /var/log and build logs from /root.
  6. Run the following command:
    # sys-unconfig
Result
The virtual machine shuts down; it is now sealed and can be made into a template. You can deploy Linux virtual machines from this template without experiencing configuration file conflicts.

11.2.2. Sealing a Windows Virtual Machine for Deployment as a Template

A template created for Windows virtual machines must be generalized (sealed) before being used to deploy virtual machines. This ensures that machine-specific settings are not reproduced in the template.
The Sysprep tool is used to seal Windows templates before use.

Important

Do not reboot the virtual machine during this process.
Before starting the Sysprep process, verify that the following settings are configured:
  • The Windows Sysprep parameters have been correctly defined.
    If not, click Edit and enter the required information in the Operating System and Domain fields.
  • The correct product key has been defined in an override file on the Manager.
    The override file must be created under /etc/ovirt-engine/osinfo.conf.d/, must have a file name that sorts after /etc/ovirt-engine/osinfo.conf.d/00-defaults.properties, and must end in .properties. For example, /etc/ovirt-engine/osinfo.conf.d/10-productkeys.properties. The last file in sort order takes precedence and overrides any earlier file.
    If not, copy the default values for your Windows operating system from /etc/ovirt-engine/osinfo.conf.d/00-defaults.properties into the override file, and input your values in the productKey.value and sysprepPath.value fields.

    Example 11.1. Windows 7 Default Configuration Values

    # Windows7(11, OsType.Windows, false),false
    os.windows_7.id.value = 11
    os.windows_7.name.value = Windows 7
    os.windows_7.derivedFrom.value = windows_xp
    os.windows_7.sysprepPath.value = ${ENGINE_USR}/conf/sysprep/sysprep.w7
    os.windows_7.productKey.value =
    os.windows_7.devices.audio.value = ich6
    os.windows_7.devices.diskInterfaces.value.3.3 = IDE, VirtIO_SCSI, VirtIO
    os.windows_7.devices.diskInterfaces.value.3.4 = IDE, VirtIO_SCSI, VirtIO
    os.windows_7.devices.diskInterfaces.value.3.5 = IDE, VirtIO_SCSI, VirtIO
    os.windows_7.isTimezoneTypeInteger.value = false
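Creating the override file can be sketched as follows. `write_productkey_override` is a hypothetical helper, and the product key shown is a placeholder to be replaced with your own.

```shell
# write_productkey_override FILE - write a minimal Windows 7 product-key
# override. The key below is a placeholder.
write_productkey_override() {
    cat > "$1" <<'EOF'
os.windows_7.productKey.value = XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
EOF
}
# On the Manager, the file name must sort after 00-defaults.properties:
# write_productkey_override /etc/ovirt-engine/osinfo.conf.d/10-productkeys.properties
```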
    

11.2.2.1. Sealing a Windows XP Template

Summary
Seal a Windows XP template using the Sysprep tool before using the template to deploy virtual machines.

Note

You can also use the following procedure to seal a Windows 2003 template. The Windows 2003 Sysprep tool is available at http://www.microsoft.com/download/en/details.aspx?id=14830.

Procedure 11.3. Sealing a Windows XP Template

  1. Download sysprep to the virtual machine to be used as a template.
    The Windows XP Sysprep tool is available at http://www.microsoft.com/download/en/details.aspx?id=11282
  2. Create a new directory: c:\sysprep.
  3. Open the deploy.cab file and add its contents to c:\sysprep.
  4. Execute sysprep.exe from within the folder and click OK on the welcome message to display the Sysprep tool.
  5. Select the following check boxes:
    • Don't reset grace period for activation
    • Use Mini-Setup
  6. Ensure that the shutdown mode is set to Shut down and click Reseal.
  7. Acknowledge the pop-up window to complete the sealing process; the virtual machine shuts down automatically upon completion.
Result
The Windows XP template is sealed and ready for deploying virtual machines.

11.2.2.2. Sealing a Windows 7, Windows 2008, or Windows 2012 Template

Seal a Windows 7, Windows 2008, or Windows 2012 template before using the template to deploy virtual machines.

Procedure 11.4. Sealing a Windows 7, Windows 2008, or Windows 2012 Template

  1. Launch Sysprep from C:\Windows\System32\sysprep\sysprep.exe.
  2. Enter the following information into the Sysprep tool:
    • Under System Cleanup Action, select Enter System Out-of-Box-Experience (OOBE).
    • Select the Generalize check box if you need to change the computer's system identification number (SID).
    • Under Shutdown Options, select Shutdown.
    Click OK to complete the sealing process; the virtual machine shuts down automatically upon completion.
The Windows 7, Windows 2008, or Windows 2012 template is sealed and ready for deploying virtual machines.

11.2.3. Using Cloud-Init to Automate the Configuration of Virtual Machines

Cloud-Init is a tool for automating the initial setup of virtual machines such as configuring the host name, network interfaces, and authorized keys. It can be used when provisioning virtual machines that have been deployed based on a template to avoid conflicts on the network.
To use this tool, the cloud-init package must first be installed on the virtual machine. Once installed, the Cloud-Init service starts during the boot process to search for instructions on what to configure. You can then use options in the Run Once window to provide these instructions one time only, or options in the New Virtual Machine, Edit Virtual Machine and Edit Template windows to provide these instructions every time the virtual machine starts.

11.2.3.1. Cloud-Init Use Case Scenarios

Cloud-Init can be used to automate the configuration of virtual machines in a variety of scenarios. Several common scenarios are as follows:
Virtual Machines Created Based on Templates
You can use the Cloud-Init options in the Initial Run section of the Run Once window to initialize a virtual machine that was created based on a template. This allows you to customize the virtual machine the first time that virtual machine is started.
Virtual Machine Templates
You can use the Use Cloud-Init/Sysprep options in the Initial Run tab of the New Template and Edit Template windows to specify options for customizing virtual machines created based on that template.
Virtual Machine Pools
You can use the Use Cloud-Init/Sysprep options in the Initial Run tab of the New Pool window to specify options for customizing virtual machines taken from that virtual machine pool. This allows you to specify a set of standard settings that will be applied every time a virtual machine is taken from that virtual machine pool. You can inherit or override the options specified for the template on which the virtual machine is based, or specify options for the virtual machine pool itself.

11.2.3.2. Installing Cloud-Init

This procedure describes how to install Cloud-Init on a virtual machine.

Procedure 11.5. Installing Cloud-Init

  1. Log on to the virtual machine.
  2. Enable the Red Hat Common repository.
    # subscription-manager repos --enable=rhel-6-server-rh-common-rpms
  3. Install the cloud-init package and dependencies:
    # yum install cloud-init

11.2.3.3. Using Cloud-Init to Initialize a Virtual Machine

Summary
Use Cloud-Init to automate the initial configuration of a Linux virtual machine that has been provisioned based on a template.

Procedure 11.6. Using Cloud-Init to Initialize a Virtual Machine

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click Run Once to open the Run Virtual Machine(s) window.
  3. Expand the Initial Run section and select the Cloud-Init check box.
  4. Enter a host name in the VM Hostname text field.
  5. Select the Configure Time Zone check box and select a time zone from the Time Zone drop-down menu.
  6. Select the Use already configured password check box to use the existing credentials, or clear that check box and enter a root password in the Password and Verify Password text fields to specify a new root password.
  7. Enter any SSH keys to be added to the authorized hosts file on the virtual machine in the SSH Authorized Keys text area.
  8. Select the Regenerate SSH Keys check box to regenerate SSH keys for the virtual machine.
  9. Enter any DNS servers in the DNS Servers text field.
  10. Enter any DNS search domains in the DNS Search Domains text field.
  11. Select the Network check box and use the + and - buttons to add or remove network interfaces to or from the virtual machine.
  12. Enter any custom scripts in the Custom Script text area.
  13. Click OK.

Important

Cloud-Init is only supported on cluster compatibility version 3.3 and higher.
Result
The virtual machine boots and the specified settings are applied.
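The Custom Script text area accepts cloud-init directives. The following is a minimal hedged example in cloud-config syntax; the file path and the command are illustrative only.

```
#cloud-config
write_files:
  - path: /etc/motd
    content: |
      Provisioned with Cloud-Init.
runcmd:
  - service ntpd start
```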

11.2.3.4. Using Cloud-Init to Prepare a Template

Summary
Use Cloud-Init to specify a set of standard settings to be included in a template.

Note

While the following procedure outlines how to use Cloud-Init when preparing a template, the same settings are also available in the New Virtual Machine and Edit Template windows.

Procedure 11.7. Using Cloud-Init to Prepare a Template

  1. Click the Virtual Machines tab and select a virtual machine.
  2. Click Edit to open the Edit Virtual Machine window.
  3. Click the Initial Run tab and select the Use Cloud-Init/Sysprep check box.
  4. Enter a host name in the VM Hostname text field.
  5. Select the Configure Time Zone check box and select a time zone from the Time Zone drop-down menu.
  6. Expand the Authentication section and select the Use already configured password check box to use the existing credentials, or clear that check box and enter a root password in the Password and Verify Password text fields to specify a new root password.
  7. Enter any SSH keys to be added to the authorized hosts file on the virtual machine in the SSH Authorized Keys text area.
  8. Select the Regenerate SSH Keys check box to regenerate SSH keys for the virtual machine.
  9. Expand the Networks section and enter any DNS servers in the DNS Servers text field.
  10. Enter any DNS search domains in the DNS Search Domains text field.
  11. Select the Network check box and use the + and - buttons to add or remove network interfaces to or from the virtual machine.
  12. Expand the Custom Script section and enter any custom scripts in the Custom Script text area.
  13. Click OK.

Important

Cloud-Init is only supported on cluster compatibility version 3.3 and higher.
Result
The virtual machine boots and the specified settings are applied.

11.3. Template Tasks

11.3.1. Creating a Template

Summary
Create a template from an existing virtual machine to use as a blueprint for creating additional virtual machines.

Important

Before you create a template, you must seal the source virtual machine to ensure all system-specific details are removed from the virtual machine. This is necessary to prevent the same details from appearing on multiple virtual machines created based on the same template.

Procedure 11.8. Creating a Template

  1. Click the Virtual Machines tab.
  2. Select the source virtual machine.
  3. Ensure the virtual machine is powered down and has a status of Down.
  4. Click Make Template.
    The New Template window

    Figure 11.1. The New Template window

  5. Enter a Name, Description, and Comment for the template.
  6. Select the cluster with which to associate the template from the Cluster list. By default, this is the same as that of the source virtual machine.
  7. Optionally, select a CPU profile for the template from the CPU Profile list.
  8. Optionally, select the Create as a Sub Template version check box, select a Root Template, and enter a Sub Version Name to create the new template as a sub template of an existing template.
  9. In the Disks Allocation section, enter an alias for the disk in the Alias text field, and select the storage domain on which to store the disk from the Target list. By default, these are the same as those of the source virtual machine.
  10. Select the Allow all users to access this Template check box to make the template public.
  11. Select the Copy VM permissions check box to copy the permissions of the source virtual machine to the template.
  12. Click OK.
Result
The virtual machine displays a status of Image Locked while the template is being created. The process of creating a template may take up to an hour depending on the size of the virtual machine disk and the capabilities of your storage hardware. When complete, the template is added to the Templates tab. You can now create new virtual machines based on the template.

Note

When a template is made, the virtual machine is copied so that both the existing virtual machine and its template are usable after template creation.

11.3.2. Explanation of Settings and Controls in the New Template Window

The following table details the settings for the New Template window.

Table 11.1. New Template and Edit Template Settings

Field
Description/Action
Name
The name of the template. This is the name by which the template is listed in the Templates tab in the Administration Portal and is accessed via the REST API. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
Description
A description of the template. This field is recommended but not mandatory.
Comment
A field for adding plain text, human-readable comments regarding the template.
Cluster
The cluster with which the template is associated. By default, this is the same as the cluster of the source virtual machine. You can select any cluster in the data center.
CPU Profile
The CPU profile assigned to the template. CPU profiles define the maximum amount of processing capability a virtual machine can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. CPU profiles are defined on the cluster level based on quality of service entries created for data centers.
Create as a Sub Template version
Specifies whether the template is created as a new version of an existing template. Select this check box to access the settings for configuring this option.
  • Root Template: The template under which the sub template is added.
  • Sub Version Name: The name of the template. This is the name by which the template is accessed when creating a new virtual machine based on the template.
Disks Allocation
Alias - An alias for the virtual machine disk used by the template. By default, the alias is set to the same value as that of the source virtual machine.
Virtual Size - The total amount of disk space that a virtual machine based on the template can use. This value cannot be edited, and is provided for reference only. This value corresponds with the size, in GB, that was specified when the disk was created or edited.
Target - The storage domain on which the virtual disk used by the template is stored. By default, the storage domain is set to the same value as that of the source virtual machine. You can select any storage domain in the cluster.
Allow all users to access this Template
Specifies whether a template is public or private. A public template can be accessed by all users, whereas a private template can only be accessed by users with the TemplateAdmin or SuperUser roles.
Copy VM permissions
Copies explicit permissions that have been set on the source virtual machine to the template.
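The constraints on the Name field can be expressed as a small check. `valid_template_name` is a hypothetical helper sketching the portal's validation: at most 40 characters, drawn only from letters, digits, hyphens, and underscores.

```shell
# valid_template_name NAME - succeed if NAME is a valid template name:
# no more than 40 characters, containing only letters, digits, hyphens,
# and underscores.
valid_template_name() {
    [ ${#1} -le 40 ] || return 1
    case "$1" in
        ''|*[!A-Za-z0-9_-]*) return 1 ;;
    esac
    return 0
}
```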

11.3.3. Editing a Template

Summary
Once a template has been created, its properties can be edited. Because a template is a copy of a virtual machine, the options available when editing a template are identical to those in the Edit Virtual Machine window.

Procedure 11.9. Editing a Template

  1. Use the Templates resource tab, tree mode, or the search function to find and select the template in the results list.
  2. Click Edit to open the Edit Template window.
  3. Change the necessary properties and click OK.
Result
The properties of the template are updated. The Edit Template window does not close if a property field is invalid.

11.3.4. Deleting a Template

Summary
Delete a template from your Red Hat Enterprise Virtualization environment.

Warning

If you have used a template to create a virtual machine with the thin provisioning storage allocation option, do not delete that template; the virtual machine needs it to continue running.

Procedure 11.10. Deleting a Template

  1. Use the resource tabs, tree mode, or the search function to find and select the template in the results list.
  2. Click Remove to open the Remove Template(s) window.
  3. Click OK to remove the template.
Result
You have removed the template.

11.3.5. Exporting Templates

11.3.5.1. Migrating Templates to the Export Domain

Summary
Export templates into the export domain to move them to another data domain, either in the same Red Hat Enterprise Virtualization environment, or another one.

Procedure 11.11. Exporting Individual Templates to the Export Domain

  1. Use the Templates resource tab, tree mode, or the search function to find and select the template in the results list.
  2. Click Export to open the Export Template window.

    Note

    Select the Force Override check box to replace any earlier version of the template on the export domain.
  3. Click OK to begin exporting the template; this may take up to an hour, depending on the virtual machine disk image size and your storage hardware.
  4. Repeat the preceding steps for each template you want to migrate, so that the export domain contains all of them before you start the import process.
    Use the Storage resource tab, tree mode, or the search function to find and select the export domain in the results list and click the Template Import tab in the details pane to view all exported templates in the export domain.
Result
The templates have been exported to the export domain.

11.3.5.2. Copying a Template's Virtual Hard Disk

Summary
If you are moving a virtual machine that was created from a template with the thin provisioning storage allocation option selected, the template's disks must be copied to the same storage domain as that of the virtual machine disk.

Procedure 11.12. Copying a Virtual Hard Disk

  1. Select the Disks tab.
  2. Select the template disk or disks to copy.
  3. Click the Copy button to display the Copy Disk window.
  4. Use the drop-down menu or menus to select the Target data domain.
  5. Click OK.
Result
A copy of the template's virtual hard disk has been created, either on the same, or a different, storage domain. If you were copying a template disk in preparation for moving a virtual hard disk, you can now move the virtual hard disk.

11.3.6. Importing Templates

11.3.6.1. Importing a Template into a Data Center

Summary
Import templates from a newly attached export domain.

Procedure 11.13. Importing a Template into a Data Center

  1. Use the resource tabs, tree mode, or the search function to find and select the newly attached export domain in the results list.
  2. Select the Template Import tab of the details pane to display the templates that migrated across with the export domain.
  3. Select a template and click Import to open the Import Template(s) window.
  4. Select the templates to import.
  5. Use the drop-down menus to select the Destination Cluster and Storage domain. Alter the Suffix if applicable.
    Alternatively, clear the Clone All Templates check box.
  6. Click OK to import templates and open a notification window. Click Close to close the notification window.
Result
The template is imported into the destination data center. This can take up to an hour, depending on your storage hardware. You can view the import progress in the Events tab.
Once the importing process is complete, the templates are visible in the Templates resource tab. You can use these templates to create new virtual machines, or to run existing imported virtual machines that are based on them.

11.3.6.2. Importing a Virtual Disk Image from an OpenStack Image Service as a Template

Summary
Virtual disk images managed by an OpenStack Image Service can be imported into the Red Hat Enterprise Virtualization Manager if that OpenStack Image Service has been added to the Manager as an external provider.
  1. Click the Storage resource tab and select the OpenStack Image Service domain from the results list.
  2. Select the image to import in the Images tab of the details pane.
  3. Click Import to open the Import Image(s) window.
    Figure 11.2. The Import Image(s) Window

  4. From the Data Center drop-down menu, select the data center into which the virtual disk image will be imported.
  5. From the Domain Name drop-down menu, select the storage domain in which the virtual disk image will be stored.
  6. Optionally, select a quota from the Quota drop-down menu to apply a quota to the virtual disk image.
  7. Select the Import as Template check box.
  8. From the Cluster drop-down menu, select the cluster in which the virtual disk image will be made available as a template.
  9. Click OK to import the virtual disk image.
Result
The image is imported as a template and is displayed in the results list of the Templates resource tab. You can now create virtual machines based on the template.

11.4. Templates and Permissions

11.4.1. Managing System Permissions for a Template

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A template administrator is a system administration role for templates in a data center. This role can be applied to specific templates, to a data center, or to the whole virtualized environment; this is useful to allow different users to manage certain virtual resources.
The template administrator role permits the following actions:
  • Create, edit, and remove associated templates.
  • Import and export templates.

Note

You can only assign roles and permissions to existing users.

11.4.2. Template Administrator Roles Explained

Template Administrator Permission Roles
The table below describes the administrator roles and privileges applicable to template administration.

Table 11.2. Red Hat Enterprise Virtualization System Administrator Roles

Role Privileges Notes
TemplateAdmin Can perform all operations on templates. Has privileges to create, delete and configure a template's storage domain and network details, and to move templates between domains.
NetworkAdmin Network Administrator Can configure and manage networks attached to templates.

11.4.3. Template User Roles Explained

Template User Permission Roles
The table below describes the user roles and privileges applicable to using and administrating templates in the User Portal.

Table 11.3. Red Hat Enterprise Virtualization Template User Roles

Role Privileges Notes
TemplateCreator Can create, edit, manage and remove virtual machine templates within assigned resources. The TemplateCreator role is not applied to a specific template; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers, clusters, or storage domains.
TemplateOwner Can edit and delete the template, assign and manage user permissions for the template. The TemplateOwner role is automatically assigned to the user who creates a template. Other users who do not have TemplateOwner permissions on a template cannot view or use the template.
UserTemplateBasedVm Can use the template to create virtual machines. Cannot edit template properties.
NetworkUser Logical network and network interface user for templates. If the Allow all users to use this Network option was selected when a logical network is created, NetworkUser permissions are assigned to all users for the logical network. Users can then attach or detach template network interfaces to or from the logical network.

11.4.4. Assigning an Administrator or User Role to a Resource

Summary
Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 11.14. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add to open the Add Permission to User window.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down menu.
  6. Click OK to assign the role and close the window.
Result
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

11.4.5. Removing an Administrator or User Role from a Resource

Summary
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 11.15. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK to remove the user role.
Result
You have removed the user's role, and the associated permissions, from the resource.

Chapter 12. Pools

12.1. Introduction to Virtual Machine Pools

A virtual machine pool is a group of virtual machines that are all clones of the same template and that can be used on demand by any user in a given group. Virtual machine pools allow administrators to rapidly configure a set of generalized virtual machines for users.
Users access a virtual machine pool by taking a virtual machine from the pool. When a user takes a virtual machine from a pool, they are provided with any one of the virtual machines in the pool if any are available. That virtual machine will have the same operating system and configuration as that of the template on which the pool was based, but users may not receive the same member of the pool each time they take a virtual machine. Users can also take multiple virtual machines from the same virtual machine pool depending on the configuration of that pool.
Virtual machines in a virtual machine pool are stateless, meaning that data is not persistent across reboots. However, if a user configures console options for a virtual machine taken from a virtual machine pool, those options will be set as the default for that user for that virtual machine pool.
In principle, virtual machines in a pool are started when taken by a user, and shut down when the user is finished. However, virtual machine pools can also contain pre-started virtual machines. Pre-started virtual machines are kept in an up state, and remain idle until they are taken by a user. This allows users to start using such virtual machines immediately, but these virtual machines will consume system resources even while not in use due to being idle.

Note

Virtual machines taken from a pool are not stateless when accessed from the Administration Portal. This is because administrators need to be able to write changes to the disk if necessary.

12.2. Virtual Machine Pool Tasks

12.2.1. Creating a Virtual Machine Pool

Summary
You can create a virtual machine pool that contains multiple virtual machines that have been created based on a common template.

Procedure 12.1. Creating a Virtual Machine Pool

  1. Click the Pools tab.
  2. Click the New button to open the New Pool window.
    • Use the drop-down list to select the Cluster or use the selected default.
    • Use the Based on Template drop-down menu to select a template or use the selected default. If you have selected a template, optionally use the Template Sub Version drop-down menu to select a version of that template. A template provides standard settings for all the virtual machines in the pool.
    • Use the Operating System drop-down list to select an Operating System or use the default provided by the template.
    • Use the Optimized for drop-down list to optimize virtual machines for either Desktop use or Server use.
  3. Enter a Name and Description, any Comments and the Number of VMs for the pool.
  4. Select the Maximum number of VMs per user that a single user is allowed to run in a session. The minimum is one.
  5. Optionally, click the Show Advanced Options button and perform the following steps:
    1. Select the Console tab. At the bottom of the tab window, select the Override SPICE Proxy check box to enable the Overridden SPICE proxy address text field and specify the address of a SPICE proxy to override the global SPICE proxy, if any.
    2. Click the Pool tab and select a Pool Type:
      • Manual - The administrator is responsible for explicitly returning the virtual machine to the pool. The virtual machine reverts to the original base image after the administrator returns it to the pool.
      • Automatic - When the virtual machine is shut down, it automatically reverts to its base image and is returned to the virtual machine pool.
  6. Click OK.
Result
You have created and configured a virtual machine pool with the specified number of identical virtual machines. You can view these virtual machines in the Virtual Machines resource tab, or in the details pane of the Pools resource tab; a virtual machine in a pool is distinguished from independent virtual machines by its icon.
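As a hedged alternative to the Administration Portal steps above, a pool can also be created through the Manager's REST API. The sketch below assumes the vmpools collection as exposed by Red Hat Enterprise Virtualization 3.5; the host name, credentials, certificate path, cluster, template, and pool names are all example values, not values from this guide:

```shell
# POST an XML description of the new pool to the Manager's vmpools collection.
# Every name, the password, and the CA certificate path are examples only.
curl -X POST \
  -H "Content-Type: application/xml" \
  -u "admin@internal:password" \
  --cacert /etc/pki/ovirt-engine/ca.pem \
  -d '<vmpool>
        <name>example_pool</name>
        <cluster><name>Default</name></cluster>
        <template><name>example_template</name></template>
        <size>5</size>
      </vmpool>' \
  "https://rhevm.example.com/api/vmpools"
```

The `<size>` element corresponds to the Number of VMs field in the New Pool window.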

12.2.2. Explanation of Settings and Controls in the New Pool Window

12.2.2.1. New Pool General Settings Explained

The following table details the information required on the General tab of the New Pool window that is specific to virtual machine pools. All other settings are identical to those in the New Virtual Machine window.

Table 12.1. General settings

Field Name
Description
Number of VMs
Allows you to specify the number of virtual machines to be created and made available in the virtual machine pool. By default, the maximum number of virtual machines you can create in a pool is 1000. This value can be configured using the MaxVmsInPool key of the engine-config command.
Maximum number of VMs per user
Allows you to specify the maximum number of virtual machines a single user can take from the virtual machine pool at any one time. The value of this field must be between 1 and 32,767.
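The MaxVmsInPool key mentioned in the table is adjusted with the engine configuration tool on the Manager. A minimal sketch follows; the value 2000 is an example only, and the engine must be restarted for the change to take effect:

```shell
# Query the current maximum number of virtual machines permitted in a pool
engine-config --get MaxVmsInPool
# Raise the limit to an example value of 2000
engine-config --set MaxVmsInPool=2000
# Restart the engine so the new value takes effect
service ovirt-engine restart
```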

12.2.2.2. New Pool Pool Settings Explained

The following table details the information required on the Pool tab of the New Pool window.

Table 12.2. Console settings

Field Name
Description
Pool Type
This drop-down menu allows you to specify the type of the virtual machine pool. The following options are available:
  • Automatic: After a user finishes using a virtual machine taken from a virtual machine pool, that virtual machine is automatically returned to the virtual machine pool.
  • Manual: After a user finishes using a virtual machine taken from a virtual machine pool, that virtual machine is only returned to the virtual machine pool when an administrator manually returns the virtual machine.

12.2.2.3. New Pool and Edit Pool Console Settings Explained

The following table details the information required on the Console tab of the New Pool or Edit Pool window that is specific to virtual machine pools. All other settings are identical to those in the New Virtual Machine and Edit Virtual Machine windows.

Table 12.3. Console settings

Field Name
Description
Override SPICE proxy
Select this check box to enable overriding the SPICE proxy defined in global configuration. This feature is useful in a case where the user (who is, for example, connecting via the User Portal) is outside of the network where the hypervisors reside.
Overridden SPICE proxy address
The proxy by which the SPICE client will connect to virtual machines. This proxy overrides both the global SPICE proxy defined for the Red Hat Enterprise Virtualization environment and the SPICE proxy defined for the cluster to which the virtual machine pool belongs, if any. The address must be in the following format:
protocol://[host]:[port]
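For example, a valid overridden proxy address for a proxy listening on port 3128 might look as follows. The host name and port are examples only; the small check below simply confirms that the value matches the protocol://[host]:[port] pattern:

```shell
proxy="http://proxy.example.com:3128"   # example value only
# Verify the address matches the required protocol://[host]:[port] format
echo "$proxy" | grep -Eq '^[a-z]+://[^/:]+:[0-9]+$' && echo "valid"
```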

12.2.3. Editing a Virtual Machine Pool

After a virtual machine pool has been created, its properties can be edited. The properties available when editing a virtual machine pool are identical to those available when creating a new virtual machine pool except that the Number of VMs property is replaced by Increase number of VMs in pool by.

Procedure 12.2. Editing a Virtual Machine Pool

  1. Click the Pools resource tab and select a virtual machine pool in the results list.
  2. Click Edit to open the Edit Pool window.
  3. Edit the properties of the virtual machine pool.
  4. Click OK.

12.2.4. Explanation of Settings and Controls in the Edit Pool Window

12.2.4.1. Edit Pool General Settings Explained

The following table details the editable fields on the General tab of the Edit Pool window.

Table 12.4. General settings

Field Name
Description
Description
A meaningful description of the virtual machine pool.
Prestarted VMs
Allows you to specify the number of virtual machines in the virtual machine pool that will be started before they are taken and kept in that state to be taken by users. The value of this field must be between 0 and the total number of virtual machines in the virtual machine pool.
Increase number of VMs in pool by
Allows you to increase the number of virtual machines in the virtual machine pool by the specified number.
Maximum number of VMs per user
Allows you to specify the maximum number of virtual machines a single user can take from the virtual machine pool at any one time. The value of this field must be between 1 and 32,767.

12.2.5. Prestarting Virtual Machines in a Pool

The virtual machines in a virtual machine pool are powered down by default. When a user requests a virtual machine from a pool, a machine is powered up and assigned to the user. In contrast, a prestarted virtual machine is already running and waiting to be assigned to a user, decreasing the amount of time a user has to wait before being able to access a machine. When a prestarted virtual machine is shut down it is returned to the pool and restored to its original state. The maximum number of prestarted virtual machines is the number of virtual machines in the pool.
Summary
Prestarted virtual machines are suitable for environments in which users require immediate access to virtual machines which are not specifically assigned to them. Only automatic pools can have prestarted virtual machines.

Procedure 12.3. Prestarting Virtual Machines in a Pool

  1. Use the Pools resource tab, tree mode, or the search function to find and select the virtual machine pool in the results list.
  2. Click Edit to open the Edit Pool window.
  3. Enter the number of virtual machines to be prestarted in the Prestarted VMs field.
  4. Select the Pool tab. Ensure Pool Type is set to Automatic.
  5. Click OK.
Result
You have set a number of prestarted virtual machines in a pool. The prestarted machines are running and available for use.

12.2.6. Adding Virtual Machines to a Virtual Machine Pool

Summary
If you require more virtual machines than originally provisioned in a virtual machine pool, add more machines to the pool.

Procedure 12.4. Adding Virtual Machines to a Virtual Machine Pool

  1. Use the Pools resource tab, tree mode, or the search function to find and select the virtual machine pool in the results list.
  2. Click Edit to open the Edit Pool window.
  3. Enter the number of additional virtual machines to add in the Increase number of VMs in pool by field.
  4. Click OK.
Result
You have added more virtual machines to the virtual machine pool.

12.2.7. Detaching Virtual Machines from a Virtual Machine Pool

Summary
You can detach virtual machines from a virtual machine pool. Detaching a virtual machine removes it from the pool to become an independent virtual machine.

Procedure 12.5. Detaching Virtual Machines from a Virtual Machine Pool

  1. Use the Pools resource tab, tree mode, or the search function to find and select the virtual machine pool in the results list.
  2. Ensure the virtual machine has a status of Down because you cannot detach a running virtual machine.
    Click the Virtual Machines tab in the details pane to list the virtual machines in the pool.
  3. Select one or more virtual machines and click Detach to open the Detach Virtual Machine(s) confirmation window.
  4. Click OK to detach the virtual machine from the pool.

Note

The virtual machine still exists in the environment and can be viewed and accessed from the Virtual Machines resource tab. Note that the icon changes to denote that the detached virtual machine is an independent virtual machine.
Result
You have detached a virtual machine from the virtual machine pool.

12.2.8. Removing a Virtual Machine Pool

Summary
You can remove a virtual machine pool from a data center. You must first either delete or detach all of the virtual machines in the pool. Detaching virtual machines from the pool will preserve them as independent virtual machines.

Procedure 12.6. Removing a Virtual Machine Pool

  1. Use the Pools resource tab, tree mode, or the search function to find and select the virtual machine pool in the results list.
  2. Click Remove to open the Remove Pool(s) confirmation window.
  3. Click OK to remove the pool.
Result
You have removed the pool from the data center.

12.3. Pools and Permissions

12.3.1. Managing System Permissions for a Virtual Machine Pool

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A virtual machine pool administrator is a system administration role for virtual machine pools in a data center. This role can be applied to specific virtual machine pools, to a data center, or to the whole virtualized environment; this is useful to allow different users to manage certain virtual machine pool resources.
The virtual machine pool administrator role permits the following actions:
  • Create, edit, and remove pools.
  • Add and detach virtual machines from the pool.

Note

You can only assign roles and permissions to existing users.

12.3.2. Virtual Machine Pool Administrator Roles Explained

Pool Permission Roles
The table below describes the administrator roles and privileges applicable to pool administration.

Table 12.5. Red Hat Enterprise Virtualization System Administrator Roles

Role Privileges Notes
VmPoolAdmin System Administrator role of a virtual pool. Can create, delete, and configure a virtual pool, assign and remove virtual pool users, and perform basic operations on a virtual machine.
ClusterAdmin Cluster Administrator Can use, create, delete, and manage all virtual machine pools in a specific cluster.

12.3.3. Assigning an Administrator or User Role to a Resource

Summary
Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 12.7. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add to open the Add Permission to User window.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down menu.
  6. Click OK to assign the role and close the window.
Result
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

12.3.4. Removing an Administrator or User Role from a Resource

Summary
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 12.8. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK to remove the user role.
Result
You have removed the user's role, and the associated permissions, from the resource.

12.4. Trusted Compute Pools

12.4.1. Creating a Trusted Cluster

Note

If no OpenAttestation server is properly configured, this procedure will fail.
Summary
This procedure explains how to set up a trusted computing pool. Trusted computing pools permit the deployment of virtual machines on trusted hosts. With the addition of attestation, administrators can ensure that verified measurement of software is running in the hosts. This provides the foundation of the secure enterprise stack.

Procedure 12.9. Creating a Trusted Cluster

  1. In the navigation pane, select the Clusters tab.
  2. Click the New button.
  3. In the General tab, set the cluster name.
  4. In the General tab, select the Enable Virt Service radio button.
  5. In the Cluster Policy tab, select the Enable Trusted Service check box.
  6. Click OK.
Result
You have built a trusted computing pool.

12.4.2. Adding a Trusted Host

Summary
This procedure explains how to add a trusted host to your Red Hat Enterprise Virtualization environment.

Procedure 12.10. Adding a Trusted Host

  1. Select the Hosts tab.
  2. Click the New button.
  3. In the General tab, set the host's name.
  4. In the General tab, set the host's address.

    Note

    The host designated here must be trusted by the attestation server.
  5. In the General tab, in the Host Cluster drop-down menu, select a trusted cluster.
  6. Click OK.
Result
You have added a trusted host to your Red Hat Enterprise Virtualization environment.

Chapter 13. Virtual Machine Disks

13.1. Understanding Virtual Machine Storage

Red Hat Enterprise Virtualization supports three storage types: NFS, iSCSI and FCP.
In each type, a host known as the Storage Pool Manager (SPM) manages access between hosts and storage. The SPM host is the only node that has full access within the storage pool; the SPM can modify the storage domain metadata, and the pool's metadata. All other hosts can only access virtual machine hard disk image data.
By default in an NFS, local, or POSIX compliant data center, the SPM creates the virtual disk using a thin provisioned format as a file in a file system.
In iSCSI and other block-based data centers, the SPM creates a volume group on top of the Logical Unit Numbers (LUNs) provided, and makes logical volumes to use as virtual machine disks. Virtual machine disks on block-based storage are preallocated by default.
If the virtual disk is preallocated, a logical volume of the specified size in GB is created. The virtual machine can be mounted on a Red Hat Enterprise Linux server using kpartx, vgscan, vgchange or mount to investigate the virtual machine's processes or problems.
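The commands named above might be combined as follows to inspect a preallocated disk from a Red Hat Enterprise Linux server. This is a hedged sketch: the volume group name, logical volume name, and the resulting device-mapper partition name are placeholders that depend on your environment:

```shell
# Scan for volume groups on the attached storage
vgscan
# Activate the volume group that contains the virtual machine's disk
# ("vg_name" and "lv_name" are placeholder names)
vgchange -ay vg_name
# Create device mappings for the partitions inside the guest disk image
kpartx -a /dev/vg_name/lv_name
# Mount the first mapped guest partition read-only for inspection;
# the exact /dev/mapper name varies with your volume naming
mount -o ro /dev/mapper/vg_name-lv_name1 /mnt
```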
If the virtual disk is thin provisioned, a 1 GB logical volume is created. The logical volume is continuously monitored by the host on which the virtual machine is running. As soon as usage nears a threshold, the host notifies the SPM, and the SPM extends the logical volume by 1 GB. The host is responsible for resuming the virtual machine after the logical volume has been extended. If the virtual machine goes into a paused state, it means that the SPM could not extend the disk in time. This occurs if the SPM is too busy or if there is not enough storage space.
A virtual disk with a preallocated (RAW) format has significantly faster write speeds than a virtual disk with a thin provisioning (Qcow2) format. Thin provisioning takes significantly less time to create a virtual disk. The thin provision format is suitable for non-I/O intensive virtual machines. The preallocated format is recommended for virtual machines with high I/O writes. If a virtual machine is able to write more than 1 GB every four seconds, use preallocated disks where possible.

13.2. Understanding Virtual Disks

Red Hat Enterprise Virtualization features Preallocated (thick provisioned) and Sparse (thin provisioned) storage options.
  • Preallocated
    A preallocated virtual disk allocates all the storage required for a virtual machine up front. For example, a 20 GB preallocated logical volume created for the data partition of a virtual machine will take up 20 GB of storage space immediately upon creation.
  • Sparse
    A sparse allocation allows an administrator to define the total storage to be assigned to the virtual machine, but the storage is only allocated when required.
    For example, a 20 GB thin provisioned logical volume would take up 0 GB of storage space when first created. When the operating system is installed it may take up the size of the installed file, and would continue to grow as data is added up to a maximum of 20 GB size.
The size of a disk is listed in the Disks sub-tab for each virtual machine and template. The Virtual Size of a disk is the total amount of disk space that the virtual machine can use; it is the number that you enter in the Size(GB) field when a disk is created or edited. The Actual Size of a disk is the amount of disk space that has been allocated to the virtual machine so far. Preallocated disks show the same value for both fields. Sparse disks may show a different value in the Actual Size field from the value in the Virtual Size field, depending on how much of the disk space has been allocated.
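The distinction between virtual and actual size can be seen with plain files on any Linux host. The following sketch (file names are examples) creates a sparse file and a fully written file of the same apparent size, then compares their allocation:

```shell
# A 20 MB sparse file: apparent ("virtual") size is 20 MB,
# but almost no blocks are allocated, like a Sparse disk
truncate -s 20M sparse.img
# A 20 MB fully written file: every block allocated up front,
# like a Preallocated disk
dd if=/dev/zero of=prealloc.img bs=1M count=20 status=none
# %s is the apparent size in bytes; %b is the number of 512-byte blocks allocated
stat -c '%n: %s bytes apparent, %b blocks allocated' sparse.img prealloc.img
```

Both files report the same apparent size, but the sparse file occupies far fewer blocks until data is written to it.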
The possible combinations of storage types and formats are described in the following table.

Table 13.1. Permitted Storage Combinations

Storage Format Type Note
NFS or iSCSI/FCP RAW or Qcow2 Sparse or Preallocated  
NFS RAW Preallocated A file with an initial size which equals the amount of storage defined for the virtual disk, and has no formatting.
NFS RAW Sparse A file with an initial size which is close to zero, and has no formatting.
NFS Qcow2 Sparse A file with an initial size which is close to zero, and has Qcow2 formatting. Subsequent layers will be Qcow2 formatted.
SAN RAW Preallocated A block device with an initial size which equals the amount of storage defined for the virtual disk, and has no formatting.
SAN Qcow2 Sparse A block device with an initial size which is much smaller than the size defined for the VDisk (currently 1 GB), and has Qcow2 formatting for which space is allocated as needed (currently in 1 GB increments).

13.3. Settings to Wipe Virtual Disks After Deletion

The wipe_after_delete flag, viewed in the Administration Portal as the Wipe After Delete check box, enables the initialization of the virtual disk upon deletion. If it is set to false, which is the default, deleting the disk will open up those blocks for re-use but will not specifically wipe the data. It is possible for this data to be recovered because the blocks have not been returned to zero.
Enabling wipe_after_delete for virtual disks will wipe the blocks when the virtual disk is deleted, reverting the blocks to zero. This is more secure, and is recommended if the virtual disk has contained any sensitive data. However, wiping is a more intensive operation, and users may experience degradation in performance and prolonged delete times.
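At the file level, the difference is analogous to removing a file versus overwriting it before removal. A minimal illustration with plain files (the file name is an example):

```shell
# Plain deletion frees the blocks but does not zero the data,
# like wipe_after_delete set to false
echo "sensitive data" > secret.txt
rm secret.txt
# shred overwrites the contents before unlinking,
# like wipe_after_delete set to true
echo "sensitive data" > secret.txt
shred -u secret.txt
```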
The wipe_after_delete flag default can be changed to true using the engine configuration tool on the Red Hat Enterprise Virtualization Manager. Restart the engine for the setting change to take effect.

Procedure 13.1. Setting SANWipeAfterDelete to Default to True Using the Engine Configuration Tool

  1. Run the engine configuration tool with the --set action:
    # engine-config --set SANWipeAfterDelete=true
    
  2. Restart the engine for the change to take effect:
    # service ovirt-engine restart
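You can confirm the new default with the --get action of the same tool; this is a Manager-side sketch and must be run on the machine hosting the Red Hat Enterprise Virtualization Manager:

```shell
# Confirm the current default for the wipe-after-delete flag.
engine-config --get SANWipeAfterDelete
```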
    

13.4. Shareable Disks in Red Hat Enterprise Virtualization

Some applications require storage to be shared between servers. Red Hat Enterprise Virtualization allows you to mark virtual machine hard disks as Shareable and attach those disks to virtual machines. That way a single virtual disk can be used by multiple cluster-aware guests.
Shared disks are not suitable for every situation. They are appropriate for applications such as clustered database servers and other highly available services. Attaching a shared disk to multiple guests that are not cluster-aware is likely to cause data corruption, because their reads and writes to the disk are not coordinated.
You cannot take a snapshot of a shared disk. Virtual disks that have snapshots taken of them cannot later be marked shareable.
You can mark a disk shareable either when you create it, or by editing the disk later.

13.5. Read Only Disks in Red Hat Enterprise Virtualization

Some applications require administrators to share data with read-only rights. To do this, select the Read Only check box when creating or editing a disk through the Disks tab in the details pane of the virtual machine. That way, a single disk can be read by multiple cluster-aware guests while an administrator maintains write privileges.
You cannot change the read-only status of a disk while the virtual machine is running.

Important

Mounting a journaled file system requires read-write access. Using the Read Only option is not appropriate for virtual machine disks that contain such file systems (e.g. EXT3, EXT4, or XFS).

13.6. Virtual Disk Tasks

13.6.1. Creating Floating Virtual Disks

Summary
You can create a virtual disk that does not belong to any virtual machines. You can then attach this disk to a single virtual machine, or to multiple virtual machines if the disk is shareable.

Procedure 13.2. Creating Floating Virtual Disks

  1. Select the Disks resource tab.
  2. Click Add.
    Add Virtual Disk Window

    Figure 13.1. Add Virtual Disk Window

  3. Use the radio buttons to specify whether the virtual disk will be an Internal or External (Direct Lun) disk.
  4. Enter the Size(GB) of the virtual disk.
  5. Enter a name for the virtual disk in the Alias field.
  6. Enter a Description for the virtual disk.
  7. Select the virtual interface that the virtual disk will present to virtual machines from the Interface list.
  8. Select the provisioning policy for the virtual disk from the Allocation Policy list.
  9. Select the data center in which the virtual disk will be available from the Data Center list.
  10. Select the storage domain in which the virtual disk will be stored from the Storage Domain list.
  11. Select the disk profile to apply to the virtual disk from the Disk Profile list.
  12. Select the Wipe After Delete check box if you require the disk to be initialized after it is deleted. This increases security but is a more intensive operation and may prolong delete times.
  13. Select the Bootable check box to enable the bootable flag on the virtual disk.
  14. Select the Shareable check box to allow the virtual disk to be attached to more than one virtual machine at a time.
  15. Click OK.
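A floating disk can also be created through the REST API. The following curl sketch assumes a Manager at rhevm.example.com, an admin@internal account, and a storage domain named data1; the element names follow the version 3 REST API and should be verified against your environment before use:

```shell
# Sketch: create an 8 GB thinly provisioned floating virtual disk via the REST API.
# The host name, credentials, and storage domain name "data1" are assumptions.
curl --request POST \
     --user 'admin@internal:password' \
     --header 'Content-Type: application/xml' \
     --cacert ca.crt \
     --data '
<disk>
  <alias>floating_disk_1</alias>
  <storage_domains>
    <storage_domain><name>data1</name></storage_domain>
  </storage_domains>
  <size>8589934592</size>
  <interface>virtio</interface>
  <format>cow</format>
  <sparse>true</sparse>
  <bootable>false</bootable>
  <shareable>false</shareable>
</disk>' \
     https://rhevm.example.com/api/disks
```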

13.6.2. Explanation of Settings in the Add Virtual Disk Window

Table 13.2. Add Virtual Disk Settings: Internal

Field Name
Description
Size(GB)
The size of the new virtual disk in GB.
Alias
The name of the virtual disk, limited to 40 characters.
Description
A description of the virtual disk. This field is recommended but not mandatory.
Interface
The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Red Hat Enterprise Linux 5 and higher include these drivers. Windows does not include these drivers, but they can be installed from the guest tools ISO or virtual floppy disk. IDE devices do not require special drivers.
Allocation Policy
The provisioning policy for the new virtual disk.
  • Preallocated allocates the entire size of the disk on the storage domain at the time the virtual disk is created. The virtual size and the actual size of a preallocated disk are the same. Preallocated virtual disks take more time to create than thinly provisioned virtual disks, but have better read and write performance. Preallocated virtual disks are recommended for servers and other I/O intensive virtual machines. If a virtual machine is able to write more than 1 GB every four seconds, use preallocated disks where possible.
  • Thin Provision allocates 1 GB at the time the virtual disk is created and sets a maximum limit on the size to which the disk can grow. The virtual size of the disk is the maximum limit; the actual size of the disk is the space that has been allocated so far. Thinly provisioned disks are faster to create than preallocated disks and allow for storage over-commitment. Thinly provisioned virtual disks are recommended for desktops.
Data Center
The data center in which the virtual disk will be available.
Storage Domain
The storage domain in which the virtual disk will be stored. The drop-down list shows all storage domains available in the given cluster, and also shows the total space and currently available space in the storage domain.
Disk Profile
The disk profile assigned to the virtual disk. Disk profiles define the maximum amount of throughput and the maximum level of input and output operations for a virtual disk in a storage domain. Disk profiles are defined on the storage domain level based on storage quality of service entries created for data centers.
Wipe After Delete
Allows you to enable enhanced security for deletion of sensitive material when the virtual disk is deleted.
Bootable
Allows you to enable the bootable flag on the virtual disk.
Shareable
Allows you to attach the virtual disk to more than one virtual machine at a time.
Read Only
Allows you to set the disk as read-only. The same disk can be attached as read-only to one virtual machine, and as rewritable to another.
The External (Direct Lun) settings can be displayed in either Targets > LUNs or LUNs > Targets. Targets > LUNs sorts available LUNs according to the host on which they are discovered, whereas LUNs > Targets displays a single list of LUNs.

Table 13.3. Add Virtual Disk Settings: External (Direct Lun)

Field Name
Description
Alias
The name of the virtual disk, limited to 40 characters.
Description
A description of the virtual disk. This field is recommended but not mandatory.
Interface
The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Red Hat Enterprise Linux 5 and higher include these drivers. Windows does not include these drivers, but they can be installed from the guest tools ISO or virtual floppy disk. IDE devices do not require special drivers.
Data Center
The data center in which the virtual disk will be available.
Use Host
The host on which the LUN will be mounted. You can select any host in the data center.
Storage Type
The type of external LUN to add. You can select from either iSCSI or Fibre Channel.
Discover Targets
This section can be expanded when you are using iSCSI external LUNs and Targets > LUNs is selected.
Address - The host name or IP address of the target server.
Port - The port by which to attempt a connection to the target server. The default port is 3260.
User Authentication - Select this check box if the iSCSI server requires user authentication. This field is visible when you are using iSCSI external LUNs.
CHAP user name - The user name of a user with permission to log in to LUNs. This field is accessible when the User Authentication check box is selected.
CHAP password - The password of a user with permission to log in to LUNs. This field is accessible when the User Authentication check box is selected.
Bootable
Allows you to enable the bootable flag on the virtual disk.
Shareable
Allows you to attach the virtual disk to more than one virtual machine at a time.
Read Only
Allows you to set the disk as read-only. The same disk can be attached as read-only to one virtual machine, and as rewritable to another.
Fill in the fields in the Discover Targets section and click the Discover button to discover the target server. You can then click the Login All button to list the available LUNs on the target server and, using the radio buttons next to each LUN, select the LUN to add.
Using LUNs directly as virtual machine hard disk images removes a layer of abstraction between your virtual machines and their data.
The following considerations must be made when using a direct LUN as a virtual machine hard disk image:
  • Live storage migration of direct LUN hard disk images is not supported.
  • Direct LUN disks are not included in virtual machine exports.
  • Direct LUN disks are not included in virtual machine snapshots.

Important

Mounting a journaled file system requires read-write access. Using the Read Only option is not appropriate for virtual machine disks that contain such file systems (e.g. EXT3, EXT4, or XFS).

13.6.3. Overview of Live Storage Migration

Virtual machine disks can be migrated from one storage domain to another while the virtual machine to which they are attached is running. This is referred to as live storage migration. When a disk attached to a running virtual machine is migrated, a snapshot of that disk's image chain is created in the source storage domain, and the entire image chain is replicated in the destination storage domain. As such, ensure that you have sufficient storage space in both the source storage domain and the destination storage domain to host both the disk image chain and the snapshot. A new snapshot is created on each live storage migration attempt, even when the migration fails.

Important

Live deletion of snapshots is not supported in data centers with a compatibility version below 3.5, or on hosts running an operating system earlier than Red Hat Enterprise Linux 7.1 or Red Hat Enterprise Virtualization Hypervisor 7.1. In those data center and host configurations, you must delete the live storage migration snapshot manually while the virtual machine is shut down. For detailed information on snapshot deletion, see the Snapshot Deletion section of the Red Hat Enterprise Virtualization Technical Guide.
Consider the following when using live storage migration:
  • Live storage migration creates a snapshot.
  • You can live migrate multiple disks at one time.
  • Multiple disks for the same virtual machine can reside across more than one storage domain, but the image chain for each disk must reside on a single storage domain.
  • You can live migrate disks only between two file-based domains (NFS, POSIX, and GlusterFS) or between two block-based domains (FCP and iSCSI).
  • You cannot live migrate direct LUN hard disk images or disks marked as shareable.

13.6.4. Moving a Virtual Disk

Move a virtual disk from one storage domain to another, whether the disk is attached to a virtual machine or is a floating disk. You can move a virtual disk that is attached to a running virtual machine; this is referred to as live storage migration. Alternatively, shut down the virtual machine before moving the disk. For more information on live storage migration, see Section 13.6.3, “Overview of Live Storage Migration”.
Consider the following when moving a disk:
  • You can move multiple disks at the same time.
  • If the virtual machine is shut down, you can move disks between any two storage domains in the same data center. If the virtual machine is running, you can move disks only between two file-based domains (NFS, POSIX, and GlusterFS) or between two block-based domains (FCP and iSCSI).
  • If the virtual disk is attached to a virtual machine that was created based on a template and used the thin provisioning storage allocation option, you must copy the disks for the template on which the virtual machine was based to the same storage domain as the virtual disk.

Procedure 13.3. Moving a Virtual Disk

  1. Select the Disks tab.
  2. Select one or more virtual disks to move.
  3. Click Move to open the Move Disk(s) window.
  4. From the Target list, select the storage domain to which the virtual disk(s) will be moved.
  5. From the Disk Profile list, select a profile for the disk(s), if applicable.
  6. Click OK.
The virtual disks are moved to the target storage domain, and have a status of Locked while being moved. If you moved a disk that is connected to a running virtual machine, a snapshot of that disk is created automatically, and is visible in the Snapshots tab of the details pane for that virtual machine. For information on removing the snapshot, see Section 10.12.4, “Deleting a Snapshot”.
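Disk moves can also be scripted against the REST API through the disk's move action. The following curl sketch assumes a Manager at rhevm.example.com, a placeholder disk ID, and a target storage domain named data2; verify the URL path and element names against your version of the API:

```shell
# Sketch: move a floating virtual disk to the storage domain "data2"
# via the REST API. Host name, credentials, and disk ID are placeholders.
curl --request POST \
     --user 'admin@internal:password' \
     --header 'Content-Type: application/xml' \
     --cacert ca.crt \
     --data '
<action>
  <storage_domain><name>data2</name></storage_domain>
</action>' \
     https://rhevm.example.com/api/disks/disk-id/move
```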

13.6.5. Copying a Virtual Disk

Summary
You can copy a virtual disk attached to a template from one storage domain to another.

Procedure 13.4. Copying a Virtual Disk

  1. Select the Disks tab.
  2. Select the virtual disks to copy.
  3. Click the Copy button to open the Copy Disk(s) window.
  4. Use the Target drop-down menus to select the storage domain to which the virtual disk will be copied.
  5. Click OK.
Result
The virtual disks are copied to the target storage domain, and have a status of Locked while being copied.

13.6.6. Importing a Virtual Disk Image from an OpenStack Image Service

Summary
Virtual disk images managed by an OpenStack Image Service can be imported into the Red Hat Enterprise Virtualization Manager if that OpenStack Image Service has been added to the Manager as an external provider.
  1. Click the Storage resource tab and select the OpenStack Image Service domain from the results list.
  2. Select the image to import in the Images tab of the details pane.
  3. Click Import to open the Import Image(s) window.
  4. From the Data Center drop-down menu, select the data center into which the virtual disk image will be imported.
  5. From the Domain Name drop-down menu, select the storage domain in which the virtual disk image will be stored.
  6. Optionally, select a quota from the Quota drop-down menu to apply a quota to the virtual disk image.
  7. Click OK to import the image.
Result
The image is imported as a floating disk and is displayed in the results list of the Disks resource tab. It can now be attached to a virtual machine.

13.6.7. Exporting a Virtual Machine Disk to an OpenStack Image Service

Summary
Virtual machine disks can be exported to an OpenStack Image Service that has been added to the Manager as an external provider.
  1. Click the Disks resource tab.
  2. Select the disks to export.
  3. Click the Export button to open the Export Image(s) window.
  4. From the Domain Name drop-down list, select the OpenStack Image Service to which the disks will be exported.
  5. From the Quota drop-down list, select a quota for the disks if a quota is to be applied.
  6. Click OK.
Result
The virtual machine disks are exported to the specified OpenStack Image Service where they are managed as virtual machine disk images.

Important

Virtual machine disks can only be exported if they do not have multiple volumes, are not thinly provisioned, and do not have any snapshots.

13.7. Virtual Disks and Permissions

13.7.1. Managing System Permissions for a Virtual Disk

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
Red Hat Enterprise Virtualization Manager provides two default virtual disk user roles, but no default virtual disk administrator roles. One of these user roles, the DiskCreator role, enables the administration of virtual disks from the User Portal. This role can be applied to specific virtual machines, to a data center, to a specific storage domain, or to the whole virtualized environment; this is useful to allow different users to manage different virtual resources.
The virtual disk creator role permits the following actions:
  • Create, edit, and remove virtual disks associated with a virtual machine or other resources.
  • Edit user permissions for virtual disks.

Note

You can only assign roles and permissions to existing users.

13.7.2. Virtual Disk User Roles Explained

Virtual Disk User Permission Roles
The table below describes the user roles and privileges applicable to using and administering virtual machine disks in the User Portal.

Table 13.4. Red Hat Enterprise Virtualization System Administrator Roles

Role Privileges Notes
DiskOperator Virtual disk user. Can use, view and edit virtual disks. Inherits permissions to use the virtual machine to which the virtual disk is attached.
DiskCreator Can create, edit, manage and remove virtual machine disks within assigned clusters or data centers. This role is not applied to a specific virtual disk; apply this role to a user for the whole environment with the Configure window. Alternatively, apply this role for specific data centers, clusters, or storage domains.

13.7.3. Assigning an Administrator or User Role to a Resource

Summary
Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 13.5. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add to open the Add Permission to User window.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down menu.
  6. Click OK to assign the role and close the window.
Result
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

13.7.4. Removing an Administrator or User Role from a Resource

Summary
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 13.6. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK to remove the user role.
Result
You have removed the user's role, and the associated permissions, from the resource.

Chapter 14. External Providers

14.1. Introduction to External Providers in Red Hat Enterprise Virtualization

In addition to resources managed by the Red Hat Enterprise Virtualization Manager itself, Red Hat Enterprise Virtualization can also take advantage of resources managed by external sources. The providers of these resources, known as external providers, can provide resources such as virtualization hosts, virtual machine images, and networks.
Red Hat Enterprise Virtualization currently supports the following external providers:
Foreman for Host Provisioning
Foreman is a tool for managing all aspects of the life cycle of both physical and virtual hosts. In Red Hat Enterprise Virtualization, hosts managed by Foreman can be added to and used by the Red Hat Enterprise Virtualization Manager as virtualization hosts. After you add a Foreman instance to the Manager, the hosts managed by the Foreman instance can be added by searching for available hosts on that Foreman instance when adding a new host.
OpenStack Image Service (Glance) for Image Management
OpenStack Image Service provides a catalog of virtual machine images. In Red Hat Enterprise Virtualization, these images can be imported into the Red Hat Enterprise Virtualization Manager and used as floating disks or attached to virtual machines and converted into templates. After you add an OpenStack Image Service to the Manager, it appears as a storage domain that is not attached to any data center. Virtual machine disks in a Red Hat Enterprise Virtualization environment can also be exported to an OpenStack Image Service as virtual machine disk images.
OpenStack Networking (Neutron) for Network Provisioning
OpenStack Networking provides software-defined networks. In Red Hat Enterprise Virtualization, networks provided by OpenStack Networking can be imported into the Red Hat Enterprise Virtualization Manager and used to carry all types of traffic and create complicated network topologies. After you add OpenStack Networking to the Manager, you can access the networks provided by OpenStack Networking by manually importing them.

Note

Before you can add external providers to your Red Hat Enterprise Virtualization environment, you must set up each of the external providers to be added. See Deploying OpenStack: Enterprise Environments (Red Hat Enterprise Linux OpenStack Platform Installer) for information on how to set up a Foreman instance and provision Red Hat Enterprise Linux OpenStack Platform components that can then be used as external providers.

14.2. Enabling the Authentication of OpenStack Providers

Summary
Before you can access the resources offered by an OpenStack provider, you must specify the location of a Keystone endpoint for that provider in the Manager to enable authentication of the resources the provider will offer.

Procedure 14.1. Configuring the Location of a Keystone Endpoint

  1. Log in to the system running Red Hat Enterprise Virtualization Manager as the root user.
  2. Configure the location of the Keystone server, including the port number and API version:
    # engine-config --set KeystoneAuthUrl=http://[address to the endpoint]:35357/v2.0
  3. Configure the Manager to only consider required networks for VM scheduling:
    # engine-config --set OnlyRequiredNetworksMandatoryForVdsSelection=true
  4. Restart the engine service:
    # service ovirt-engine restart
Result
You have configured the location of a Keystone endpoint against which the credentials of OpenStack providers can be authenticated.

Note

Currently, you can specify only a single Keystone endpoint in the Manager. As a result, only OpenStack providers that authenticate against that Keystone endpoint can be accessed.

14.3. Adding External Providers

14.3.1. Adding an External Provider

All external resource providers are added using a single window that adapts to your input. You must add the resource provider before you can use the resources it provides in your Red Hat Enterprise Virtualization environment.

14.3.2. Adding a Foreman Instance for Host Provisioning

Summary
Add a Foreman instance for host provisioning to the Red Hat Enterprise Virtualization Manager.

Procedure 14.2. Adding a Foreman Instance for Host Provisioning

  1. Select the External Providers entry in the tree pane.
  2. Click the Add button to open the Add Provider window.
    The Add Provider Window

    Figure 14.1. The Add Provider Window

  3. Enter a Name and Description.
  4. From the Type drop-down menu, ensure that Foreman is selected.
  5. Enter the URL or fully qualified domain name of the machine on which the Foreman instance is installed in the Provider URL text field. You do not need to specify a port number.
  6. Enter the Username and Password for the Foreman instance. You must use the same username and password as you would use to log in to the Foreman provisioning portal.
  7. Test the credentials:
    1. Click the Test button to test whether you can authenticate successfully with the Foreman instance using the provided credentials.
    2. If the Foreman instance uses SSL, the Import provider certificates window opens; click OK to import the certificate that the Foreman instance provides.

      Important

      You must import the certificate that the Foreman instance provides to ensure the Manager can communicate with the instance.
  8. Click OK.
Result
You have added the Foreman instance to the Red Hat Enterprise Virtualization Manager, and can work with the hosts it provides.

14.3.3. Adding an OpenStack Networking (Neutron) Instance for Network Provisioning

Summary
Add an OpenStack Networking (Neutron) instance for network provisioning to the Red Hat Enterprise Virtualization Manager.

Procedure 14.3. Adding an OpenStack Networking (Neutron) Instance for Network Provisioning

  1. Select the External Providers entry in the tree pane.
  2. Click the Add button to open the Add Provider window.
    The Add Provider Window

    Figure 14.2. The Add Provider Window

  3. Enter a Name and Description.
  4. From the Type drop-down menu, select OpenStack Networking.
  5. Click the text field for Networking Plugin and select Open vSwitch.
  6. Enter the URL or fully qualified domain name of the machine on which the OpenStack Networking instance is installed in the Provider URL text field, followed by the port number.
  7. Optionally, select the Requires Authentication check box and enter the Username, Password and Tenant Name for the OpenStack Networking instance. You must use the username and password for the OpenStack Networking user registered in Keystone, and the tenant of which the OpenStack Networking instance is a member.
  8. Test the credentials:
    1. Click the Test button to test whether you can authenticate successfully with the Neutron instance using the provided credentials.
    2. If the Neutron instance uses SSL, the Import provider certificates window opens; click OK to import the certificate that the Neutron instance provides.

      Important

      You must import the certificate that the Neutron instance provides to ensure the Manager can communicate with the instance.
  9. Click the Agent Configuration tab.
    The Agent Configuration Tab

    Figure 14.3. The Agent Configuration Tab

  10. Enter the URL or fully qualified domain name of the host on which the QPID server is hosted in the Host text field.
  11. Enter the port number by which to connect to the QPID instance. By default, this port is 5672 if QPID is not configured to use SSL, and 5671 if SSL is enabled.
  12. Enter the Username and Password of the OpenStack Networking user registered in the QPID instance.
  13. Click OK.
Result
You have added the OpenStack Networking instance to the Red Hat Enterprise Virtualization Manager, and can use the networks it provides.

14.3.4. Adding an OpenStack Image Service (Glance) Instance for Image Management

Summary
Add an OpenStack Image Service (Glance) instance for image management to the Red Hat Enterprise Virtualization Manager.

Procedure 14.4. Adding an OpenStack Image Service (Glance) Instance for Image Management

  1. Select the External Providers entry in the tree pane.
  2. Click the Add button to open the Add Provider window.
    The Add Provider Window

    Figure 14.4. The Add Provider Window

  3. Enter a Name and Description.
  4. From the Type drop-down menu, select OpenStack Image.
  5. Enter the URL or fully qualified domain name of the machine on which the Glance instance is installed in the Provider URL text field.
  6. Optionally, select the Requires Authentication check box and enter the Username, Password and Tenant Name for the Glance instance. You must use the username and password for the Glance user registered in Keystone, and the tenant of which the Glance instance is a member.
  7. Test the credentials:
    1. Click the Test button to test whether you can authenticate successfully with the Glance instance using the provided credentials.
    2. If the Glance instance uses SSL, the Import provider certificates window opens; click OK to import the certificate that the Glance instance provides.

      Important

      You must import the certificate that the Glance instance provides to ensure the Manager can communicate with the instance.
  8. Click OK.
Result
You have added the Glance instance to the Red Hat Enterprise Virtualization Manager, and can work with the images it provides.

14.3.5. Add Provider General Settings Explained

The General tab in the Add Provider window allows you to register the core details regarding the provider.

Table 14.1. Add Provider: General Settings

Setting
Explanation
Name
A name to represent the provider in the Manager.
Description
A plain text, human-readable description of the provider.
Type
The type of the provider. Changing this setting alters the available fields for configuring the provider.
Foreman
  • Provider URL: The URL or fully qualified domain name of the machine on which the Foreman instance is hosted. You do not need to add the port number to the end of the URL or fully qualified domain name.
  • Requires Authentication: Allows you to specify whether authentication is required for the provider. Authentication is mandatory when Foreman is selected.
  • Username: A username for connecting to the Foreman instance. This username must be the username used to log in to the provisioning portal on the Foreman instance. By default, this username is admin.
  • Password: The password against which the above username is to be authenticated. This password must be the password used to log in to the provisioning portal on the Foreman instance.
OpenStack Image
  • Provider URL: The URL or fully qualified domain name of the machine on which the OpenStack Image Service is hosted. You must add the port number for the OpenStack Image Service to the end of the URL or fully qualified domain name. By default, this port number is 9292.
  • Requires Authentication: Allows you to specify whether authentication is required to access the OpenStack Image Service.
  • Username: A username for connecting to the OpenStack Image Service. This username must be the username for the OpenStack Image Service registered in the Keystone instance of which the OpenStack Image Service is a member. By default, this username is glance.
  • Password: The password against which the above username is to be authenticated. This password must be the password for the OpenStack Image Service registered in the Keystone instance of which the OpenStack Image Service is a member.
  • Tenant Name: The name of the OpenStack tenant of which the OpenStack Image Service is a member. By default, this will be Services.
OpenStack Network
  • Networking Plugin: The networking plugin with which to connect to the OpenStack server. Users can currently select only Open vSwitch.
  • Provider URL: The URL or fully qualified domain name of the machine on which the OpenStack Networking instance is hosted. You must add the port number for the OpenStack Networking instance to the end of the URL or fully qualified domain name. By default, this port number is 9696.
  • Username: A username for connecting to the OpenStack Networking instance. This username must be the username for OpenStack Networking registered in the Keystone instance of which the OpenStack Networking instance is a member. By default, this username is neutron.
  • Password: The password against which the above username is to be authenticated. This password must be the password for OpenStack Networking registered in the Keystone instance of which the OpenStack Networking instance is a member.
  • Tenant Name: The name of the OpenStack tenant of which the OpenStack Networking instance is a member. By default, this will be Services.
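With the default ports listed above, the provider URLs take forms like the following (the hostnames are placeholders for your own environment):

```
http://glance.example.com:9292
http://neutron.example.com:9696
```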
Test
Allows users to test the specified credentials. This button is available to all provider types.

14.3.6. Add Provider Agent Configuration Settings Explained

The Agent Configuration tab in the Add Provider window allows users to register details regarding networking plugins. This tab is only available for the OpenStack Network provider type, and only after specifying a plugin via the Networking Plugin setting.

Table 14.2. Add Provider: Agent Configuration Settings

Setting
Explanation
Interface Mappings
A comma-separated list of mappings in the format of label:interface.
Host
The URL or fully qualified domain name of the machine on which the QPID instance is installed.
Port
The remote port by which a connection with the above host is to be made. By default, this port will be 5762 if SSL is not enabled on the host, and 5761 if SSL is enabled.
Username
A username for authenticating the OpenStack Networking instance with the above QPID instance. By default, this username is neutron.
Password
The password against which the above username is to be authenticated.
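The label:interface format of the Interface Mappings setting can be sanity-checked before it is entered in the dialog. A minimal sketch in shell (the labels and interface names here are hypothetical examples, not values from the guide):

```shell
# Validate that a value is a comma-separated list of label:interface
# pairs, e.g. "red:eth1,blue:eth2". Each pair must contain exactly one
# colon and no embedded commas.
valid_mappings() {
  echo "$1" | grep -Eq '^[^:,]+:[^:,]+(,[^:,]+:[^:,]+)*$'
}

# Example check with hypothetical labels and interfaces.
if valid_mappings "red:eth1,blue:eth2"; then
  echo "mappings look well-formed"
fi
```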

14.4. Editing External Providers

14.4.1. Editing an External Provider

Summary
This procedure describes how to edit external providers.

Procedure 14.5. Editing an External Provider

  1. Select the External Providers entry in the tree pane.
  2. Select the external provider to edit.
  3. Click the Edit button to open the Edit Provider window.
  4. Change the current values for the provider to the preferred values.
  5. Click OK.
Result
You have updated the details for an external provider.

14.5. Removing External Providers

14.5.1. Removing an External Provider

Summary
This procedure describes how to remove external providers.

Procedure 14.6. Removing an External Provider

  1. Select the External Providers entry in the tree pane.
  2. Select the external provider to remove.
  3. Click Remove.
  4. Click OK in the Remove Provider(s) window to confirm the removal of this provider.
Result
You have removed an external provider.

Part II. Administering the Environment

Table of Contents

15. Updating the Red Hat Enterprise Virtualization Environment
15.1. Updates between Minor Releases
15.2. Upgrading to Red Hat Enterprise Virtualization 3.5
15.3. Upgrading to Red Hat Enterprise Virtualization 3.4
15.4. Upgrading to Red Hat Enterprise Virtualization 3.3
15.5. Upgrading to Red Hat Enterprise Virtualization Manager 3.2
15.6. Upgrading to Red Hat Enterprise Virtualization Manager 3.1
15.7. Post-upgrade Tasks
16. Backups and Migration
16.1. Backing Up and Restoring the Red Hat Enterprise Virtualization Manager
16.2. Backing Up and Restoring Virtual Machines Using the Backup and Restore API
17. Users and Roles
17.1. Introduction to Users
17.2. Directory Users
17.3. User Authorization
17.4. Red Hat Enterprise Virtualization Manager User Tasks
18. Quotas and Service Level Agreement Policy
18.1. Introduction to Quota
18.2. Shared Quota and Individually Defined Quota
18.3. Quota Accounting
18.4. Enabling and Changing a Quota Mode in a Data Center
18.5. Creating a New Quota Policy
18.6. Explanation of Quota Threshold Settings
18.7. Assigning a Quota to an Object
18.8. Using Quota to Limit Resources by User
18.9. Editing Quotas
18.10. Removing Quotas
18.11. Service Level Agreement Policy Enforcement
19. Event Notifications
19.1. Configuring Event Notifications in the Administration Portal
19.2. Canceling Event Notifications in the Administration Portal
19.3. Parameters for Event Notifications in ovirt-engine-notifier.conf
19.4. Configuring the Red Hat Enterprise Virtualization Manager to Send SNMP Traps
20. Utilities
20.1. The oVirt Engine Rename Tool
20.2. The Domain Management Tool
20.3. The Engine Configuration Tool
20.4. The Image Uploader Tool
20.5. The USB Filter Editor
20.6. The Log Collector Tool
20.7. The ISO Uploader Tool

Chapter 15. Updating the Red Hat Enterprise Virtualization Environment

This chapter covers both updating your Red Hat Enterprise Virtualization environment between minor releases, and upgrading to the next major version. Always update to the latest minor version of your current Red Hat Enterprise Virtualization Manager version before you upgrade to the next major version.
For interactive upgrade instructions, you can also use the RHEV Upgrade Helper available at https://access.redhat.com/labs/rhevupgradehelper/. This application asks you to provide information about your upgrade path and your current environment, and presents the relevant steps for upgrade as well as steps to prevent known issues specific to your upgrade scenario.

15.1. Updates between Minor Releases

15.1.1. Checking for Red Hat Enterprise Virtualization Manager Updates

Important

Always update to the latest minor version of your current Red Hat Enterprise Virtualization Manager version before you upgrade to the next major version.

Procedure 15.1. Checking for Red Hat Enterprise Virtualization Manager Updates

  1. Run the following command on the machine on which the Red Hat Enterprise Virtualization Manager is installed:
    # engine-upgrade-check
    • If no updates are available, the command will output the text No upgrade:
      # engine-upgrade-check
      VERB: queue package rhevm-setup for update
      VERB: package rhevm-setup queued
      VERB: Building transaction
      VERB: Empty transaction
      VERB: Transaction Summary:
      No upgrade
    • If updates are available, the command will list the packages to be updated:
      # engine-upgrade-check
      VERB: queue package rhevm-setup for update
      VERB: package rhevm-setup queued
      VERB: Building transaction
      VERB: Transaction built
      VERB: Transaction Summary:
      VERB:     updated    - rhevm-lib-3.3.2-0.50.el6ev.noarch
      VERB:     update     - rhevm-lib-3.4.0-0.13.el6ev.noarch
      VERB:     updated    - rhevm-setup-3.3.2-0.50.el6ev.noarch
      VERB:     update     - rhevm-setup-3.4.0-0.13.el6ev.noarch
      VERB:     install    - rhevm-setup-base-3.4.0-0.13.el6ev.noarch
      VERB:     install    - rhevm-setup-plugin-ovirt-engine-3.4.0-0.13.el6ev.noarch
      VERB:     updated    - rhevm-setup-plugins-3.3.1-1.el6ev.noarch
      VERB:     update     - rhevm-setup-plugins-3.4.0-0.5.el6ev.noarch
      Upgrade available
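A wrapper script can key off the final summary line of this output. A minimal sketch (our assumption; check the command's documented exit status before relying on it in automation):

```shell
# Return success if engine-upgrade-check output reports a pending
# upgrade. Reads the command output on stdin.
upgrade_pending() {
  grep -q '^Upgrade available'
}

# The heredoc below stands in for a real run of engine-upgrade-check.
sample_output=$(cat <<'EOF'
VERB: Building transaction
VERB: Transaction built
Upgrade available
EOF
)
if echo "$sample_output" | upgrade_pending; then
  echo "An upgrade is available; update rhevm-setup and run engine-setup."
fi
```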

15.1.2. Updating the Red Hat Enterprise Virtualization Manager

Updates to the Red Hat Enterprise Virtualization Manager are released via the Content Delivery Network. Before installing an update from the Content Delivery Network, ensure you read the advisory text associated with it and the latest version of the Red Hat Enterprise Virtualization Release Notes and Red Hat Enterprise Virtualization Technical Notes. A number of actions must be performed to complete an upgrade, including:
  • Stopping the ovirt-engine service.
  • Downloading and installing the updated packages.
  • Backing up and updating the database.
  • Performing post-installation configuration.
  • Starting the ovirt-engine service.

Procedure 15.2. Updating Red Hat Enterprise Virtualization Manager

  1. Run the following command to update the rhevm-setup package:
    # yum update rhevm-setup
  2. Run the following command to update the Red Hat Enterprise Virtualization Manager:
    # engine-setup
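The two commands above can be wrapped for review before they are run on a production Manager. A minimal sketch with a dry-run guard (the guard is our addition, not part of the documented procedure):

```shell
# With DRY_RUN=1 (the default here), print the update commands instead
# of executing them; set DRY_RUN=0 to run them for real.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"        # show what would be executed
  else
    "$@"
  fi
}

run yum update rhevm-setup
run engine-setup
```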

Important

Active hosts are not updated by this process and must be updated separately. As a result, the virtual machines running on those hosts are not affected.

Important

The update process may take some time. Do not stop the process once initiated. Once the update is complete, you will also be instructed to separately update the Data Warehouse and Reports functionality. These additional steps are only required if you installed these features.

15.1.3. Updating Red Hat Enterprise Virtualization Hypervisors

Updating Red Hat Enterprise Virtualization Hypervisors involves reinstalling the Hypervisor with a newer version of the Hypervisor ISO image. This includes stopping and restarting the Hypervisor. If migration is enabled at cluster level, virtual machines are automatically migrated to another host in the cluster; as a result, it is recommended that Hypervisor updates are performed at a time when the Hypervisor's usage is relatively low.
Ensure that the cluster to which the host belongs has sufficient memory reserve in order for its hosts to perform maintenance. If a cluster lacks sufficient memory, the virtual machine migration operation will hang and then fail. You can reduce the memory usage of this operation by shutting down some or all virtual machines before updating the Hypervisor.
It is recommended that administrators update Red Hat Enterprise Virtualization Hypervisors regularly. Important bug fixes and security updates are included in updates. Hypervisors that are not up to date may be a security risk.

Important

Ensure that the cluster contains more than one host before performing an upgrade. Do not attempt to reinstall or upgrade all the hosts at the same time, as one host must remain available to perform Storage Pool Manager (SPM) tasks.

Procedure 15.3. Updating Red Hat Enterprise Virtualization Hypervisors

  1. Log in to the system hosting Red Hat Enterprise Virtualization Manager as the root user.
  2. Enable the Red Hat Enterprise Virtualization Hypervisor (v.6 x86_64) repository:
    # subscription-manager repos --enable=rhel-6-server-rhevh-rpms
  3. Ensure that you have the most recent version of the rhev-hypervisor6 package installed:
    # yum update rhev-hypervisor6
  4. From the Administration Portal, click the Hosts tab, and then select the Hypervisor that you intend to upgrade.
    • If the Hypervisor requires updating, an alert message in the details pane indicates that a new version of the Red Hat Enterprise Virtualization Hypervisor is available.
    • If the Hypervisor does not require updating, no alert message is displayed and no further action is required.
  5. Click Maintenance. If automatic migration is enabled, this causes any virtual machines running on the Hypervisor to be migrated to other hosts. If the Hypervisor is the SPM, this function is moved to another host.
  6. Click Upgrade to open the Upgrade Host confirmation window.
  7. Select rhev-hypervisor.iso, which is symbolically linked to the most recent Hypervisor image.
  8. Click OK to update and reinstall the Hypervisor. The details of the Hypervisor are updated in the Hosts tab, and the status will transition through these stages:
    • Maintenance
    • Installing
    • Non Responsive
    • Up
    These are all expected, and each stage will take some time.
  9. Restart the Hypervisor to ensure all updates are correctly applied.
Once successfully updated, the Hypervisor displays a status of Up. Any virtual machines that were migrated off the Hypervisor are, at this point, able to be migrated back to it. Repeat the update procedure for each Hypervisor in the Red Hat Enterprise Virtualization environment.
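Step 7 notes that rhev-hypervisor.iso is a symbolic link to the most recent image. You can confirm which build the link points to from the Manager machine; a small sketch, assuming the default install path used by the rhev-hypervisor6 package:

```shell
# Resolve the rhev-hypervisor.iso symbolic link to the concrete image
# file. The path below is the assumed package default; pass another
# directory as the first argument if your layout differs.
latest_hypervisor_iso() {
  readlink -f "${1:-/usr/share/rhev-hypervisor}/rhev-hypervisor.iso"
}

latest_hypervisor_iso || echo "no Hypervisor image found on this machine"
```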

Important

After a Red Hat Enterprise Virtualization Hypervisor is successfully registered to the Red Hat Enterprise Virtualization Manager and then upgraded, it may erroneously appear in the Administration Portal with the status of Install Failed. Click Activate, and the Hypervisor will change to an Up status and be ready for use.

15.1.4. Updating Red Hat Enterprise Linux Virtualization Hosts

Red Hat Enterprise Linux hosts use the yum command in the same way as regular Red Hat Enterprise Linux systems. It is highly recommended that you use yum to update your systems regularly, to ensure timely application of security and bug fixes. Updating a host includes stopping and restarting the host. If migration is enabled at cluster level, virtual machines are automatically migrated to another host in the cluster; as a result, it is recommended that host updates are performed at a time when the host's usage is relatively low.
The cluster to which the host belongs must have sufficient memory reserve in order for its hosts to perform maintenance. Moving a host with live virtual machines to maintenance in a cluster that lacks sufficient memory causes any virtual machine migration operations to hang and then fail. You can reduce the memory usage of this operation by shutting down some or all virtual machines before moving the host to maintenance.

Important

Ensure that the cluster contains more than one host before performing an upgrade. Do not attempt to reinstall or upgrade all the hosts at the same time, as one host must remain available to perform Storage Pool Manager (SPM) tasks.

Procedure 15.4. Updating Red Hat Enterprise Linux Hosts

  1. From the Administration Portal, click the Hosts tab and select the host to be updated.
  2. Click Maintenance to place the host into maintenance mode.
  3. On the Red Hat Enterprise Linux host machine, run the following command:
    # yum update
  4. Restart the host to ensure all updates are correctly applied.
You have successfully updated the Red Hat Enterprise Linux host. Repeat this process for each Red Hat Enterprise Linux host in the Red Hat Enterprise Virtualization environment.

15.2. Upgrading to Red Hat Enterprise Virtualization 3.5

15.2.1. Red Hat Enterprise Virtualization Manager 3.5 Upgrade Overview

Important

Always update to the latest minor version of your current Red Hat Enterprise Virtualization Manager version before you upgrade to the next major version.
The process for upgrading Red Hat Enterprise Virtualization Manager comprises three main steps:
  • Subscribing to entitlements.
  • Updating the required packages.
  • Performing the upgrade.
The command used to perform the upgrade itself is engine-setup, which provides an interactive interface. While the upgrade is in process, virtualization hosts and the virtual machines running on those virtualization hosts continue to operate independently. When the upgrade is complete, you can then upgrade your hosts to the latest versions of Red Hat Enterprise Linux or Red Hat Enterprise Virtualization Hypervisor.

15.2.2. Features Requiring a Compatibility Upgrade to Red Hat Enterprise Virtualization 3.5

Some of the features provided by Red Hat Enterprise Virtualization 3.5 are only available if your data centers, clusters, and storage have a compatibility version of 3.5.

Table 15.1. Features Requiring a Compatibility Upgrade to Red Hat Enterprise Virtualization 3.5

Feature Description
Paravirtualized random number generator (RNG) device support
This feature adds support for enabling a paravirtualized random number generator in virtual machines. To use this feature, the random number generator source must be set at cluster level to ensure all hosts support and report desired RNG device sources. This feature is supported in Red Hat Enterprise Linux hosts of version 6.6 and higher.
Serial number policy support
This feature adds support for setting a custom serial number for virtual machines. Serial number policy can be specified at cluster level, or for an individual virtual machine.
Save OVF files on any data domain
This feature adds support for Open Virtualization Format files, including virtual machine templates, to be stored on any domain in a supported pool.
Boot menu support
This feature adds support for enabling a boot device menu in a virtual machine.
Import data storage domains
This feature adds support for users to add existing data storage domains to their environment. The Manager then detects and adds all the virtual machines in that storage domain.
SPICE copy and paste support
This feature adds support for users to enable or disable SPICE clipboard copy and paste.
Storage pool metadata removal
This feature adds support for storage pool metadata to be stored and maintained in the engine database only.
Network custom properties support
This feature adds support for users to define custom properties when a network is provisioned on a host.

15.2.3. Red Hat Enterprise Virtualization 3.5 Upgrade Considerations

The following is a list of key considerations that must be made when planning your upgrade.

Important

Upgrading to version 3.5 can only be performed from version 3.4
To upgrade to Red Hat Enterprise Virtualization 3.5 from a version earlier than Red Hat Enterprise Virtualization 3.4, you must first upgrade sequentially through each intervening version. For example, if you are using Red Hat Enterprise Virtualization 3.3, you must upgrade to the latest minor version of Red Hat Enterprise Virtualization 3.4 before you can upgrade to Red Hat Enterprise Virtualization 3.5.
Red Hat Enterprise Virtualization Manager cannot be installed on the same machine as IPA
An error message displays if the ipa-server package is installed. Red Hat Enterprise Virtualization Manager 3.5 does not support installation on the same machine as Identity Management (IdM). To resolve this issue, you must migrate the IdM configuration to another system before re-attempting the upgrade.
Red Hat Enterprise Virtualization Manager 3.5 is supported on Red Hat Enterprise Linux 6.6
Upgrading to version 3.5 also involves upgrading the base operating system of the machine that hosts the Manager.
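The sequential-upgrade rule above can be sketched as a small helper that reports the next supported hop for a given installed version (the version hops are taken from this chapter; each major upgrade must start from the latest minor release of the version directly below):

```shell
# Map an installed Manager version to the next supported upgrade
# target. Any other starting version is unsupported.
next_upgrade_target() {
  case "$1" in
    3.1) echo "3.2" ;;
    3.2) echo "3.3" ;;
    3.3) echo "3.4" ;;
    3.4) echo "3.5" ;;
    3.5) echo "up to date" ;;
    *)   echo "unsupported starting version: $1" >&2; return 1 ;;
  esac
}

next_upgrade_target 3.3   # prints 3.4: a 3.3 Manager cannot jump straight to 3.5
```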

15.2.4. Upgrading to Red Hat Enterprise Virtualization Manager 3.5

The following procedure outlines the process for upgrading Red Hat Enterprise Virtualization Manager 3.4 to Red Hat Enterprise Virtualization Manager 3.5. This procedure assumes that the system on which the Manager is installed is subscribed to the entitlements for receiving Red Hat Enterprise Virtualization 3.4 packages at the start of the procedure.

Important

If the upgrade fails, the engine-setup command will attempt to roll your Red Hat Enterprise Virtualization Manager installation back to its previous state. For this reason, the repositories required by Red Hat Enterprise Virtualization 3.4 must not be removed until after the upgrade is complete as outlined below. If the upgrade fails, detailed instructions display that explain how to restore your installation.

Procedure 15.5. Upgrading to Red Hat Enterprise Virtualization Manager 3.5

  1. Subscribe the system on which the Red Hat Enterprise Virtualization Manager is installed to the required entitlements for receiving Red Hat Enterprise Virtualization Manager 3.5 packages:
    • With RHN Classic:
      # rhn-channel --add --channel=rhel-x86_64-server-6-rhevm-3.5
    • With Subscription Manager:
      # subscription-manager repos --enable=rhel-6-server-rhevm-3.5-rpms
  2. Update the base operating system:
    # yum update
  3. Update the rhevm-setup package:
    # yum update rhevm-setup
  4. Run the following command and follow the prompts to upgrade the Red Hat Enterprise Virtualization Manager:
    # engine-setup
  5. Remove or disable the Red Hat Enterprise Virtualization Manager 3.4 channel to ensure the system does not use any Red Hat Enterprise Virtualization Manager 3.4 packages:
    • With RHN Classic:
      # rhn-channel --remove --channel=rhel-x86_64-server-6-rhevm-3.4
    • With Subscription Manager:
      # subscription-manager repos --disable=rhel-6-server-rhevm-3.4-rpms

15.3. Upgrading to Red Hat Enterprise Virtualization 3.4

15.3.1. Red Hat Enterprise Virtualization Manager 3.4 Upgrade Overview

Important

Always update to the latest minor version of your current Red Hat Enterprise Virtualization Manager version before you upgrade to the next major version.
The process for upgrading Red Hat Enterprise Virtualization Manager comprises three main steps:
  • Subscribing to entitlements.
  • Updating the required packages.
  • Performing the upgrade.
The command used to perform the upgrade itself is engine-setup, which provides an interactive interface. While the upgrade is in process, virtualization hosts and the virtual machines running on those virtualization hosts continue to operate independently. When the upgrade is complete, you can then upgrade your hosts to the latest versions of Red Hat Enterprise Linux or Red Hat Enterprise Virtualization Hypervisor.

15.3.2. Features Requiring a Compatibility Upgrade to Red Hat Enterprise Virtualization 3.4

Some of the features provided by Red Hat Enterprise Virtualization 3.4 are only available if your data centers, clusters, and storage have a compatibility version of 3.4.

Table 15.2. Features Requiring a Compatibility Upgrade to Red Hat Enterprise Virtualization 3.4

Feature Description
Abort migration on error
This feature adds support for handling errors encountered during the migration of virtual machines.
Forced Gluster volume creation
This feature adds support for allowing the creation of Gluster bricks on root partitions. With this feature, you can choose to override warnings against creating bricks on root partitions.
Management of asynchronous Gluster volume tasks
This feature provides support for managing asynchronous tasks on Gluster volumes, such as rebalancing volumes or removing bricks. To use this feature, you must use GlusterFS version 3.5 or above.
Import Glance images as templates
This feature provides support for importing images from an OpenStack image service as templates.
File statistic retrieval for non-NFS ISO domains
This feature adds support for retrieving statistics on files stored in ISO domains that use a storage format other than NFS, such as a local ISO domain.
Default route support
This feature adds support for ensuring that the default route of the management network is registered in the main routing table and that registration of the default route for all other networks is disallowed. This ensures the management network gateway is set as the default gateway for hosts.
Virtual machine reboot
This feature adds support for rebooting virtual machines from the User Portal or Administration Portal via a new button. To use this action on a virtual machine, you must install the guest tools on that virtual machine.

15.3.3. Red Hat Enterprise Virtualization 3.4 Upgrade Considerations

The following is a list of key considerations that must be made when planning your upgrade.

Important

Upgrading to version 3.4 can only be performed from version 3.3
To upgrade to Red Hat Enterprise Virtualization 3.4 from a version earlier than Red Hat Enterprise Virtualization 3.3, you must first upgrade sequentially through each intervening version. For example, if you are using Red Hat Enterprise Virtualization 3.2, you must upgrade to Red Hat Enterprise Virtualization 3.3 before you can upgrade to Red Hat Enterprise Virtualization 3.4.
Red Hat Enterprise Virtualization Manager cannot be installed on the same machine as IPA
An error message displays if the ipa-server package is installed. Red Hat Enterprise Virtualization Manager 3.4 does not support installation on the same machine as Identity Management (IdM). To resolve this issue, you must migrate the IdM configuration to another system before re-attempting the upgrade.
Upgrading to JBoss Enterprise Application Platform 6.2 is recommended
Although Red Hat Enterprise Virtualization Manager 3.4 supports Enterprise Application Platform 6.1.0, upgrading to the latest supported version of JBoss is recommended.
Reports and the Data Warehouse are now installed via engine-setup
From Red Hat Enterprise Virtualization 3.4, the Reports and Data Warehouse features are configured and upgraded using the engine-setup command. If you have configured the Reports and Data Warehouse features in your Red Hat Enterprise Virtualization 3.3 environment, you must install the rhevm-reports-setup and rhevm-dwh-setup packages prior to upgrading to Red Hat Enterprise Virtualization 3.4 to ensure these features are detected by engine-setup.

15.3.4. Upgrading to Red Hat Enterprise Virtualization Manager 3.4

The following procedure outlines the process for upgrading Red Hat Enterprise Virtualization Manager 3.3 to Red Hat Enterprise Virtualization Manager 3.4. This procedure assumes that the system on which the Manager is installed is subscribed to the entitlements for receiving Red Hat Enterprise Virtualization 3.3 packages at the start of the procedure.

Important

If the upgrade fails, the engine-setup command will attempt to roll your Red Hat Enterprise Virtualization Manager installation back to its previous state. For this reason, the repositories required by Red Hat Enterprise Virtualization 3.3 must not be removed until after the upgrade is complete as outlined below. If the upgrade fails, detailed instructions display that explain how to restore your installation.

Procedure 15.6. Upgrading to Red Hat Enterprise Virtualization Manager 3.4

  1. Subscribe the system on which the Red Hat Enterprise Virtualization Manager is installed to the required entitlements for receiving Red Hat Enterprise Virtualization Manager 3.4 packages.
    • With RHN Classic:
      # rhn-channel --add --channel=rhel-x86_64-server-6-rhevm-3.4
    • With Subscription Manager:
      # subscription-manager repos --enable=rhel-6-server-rhevm-3.4-rpms
  2. Update the base operating system:
    # yum update
  3. Update the rhevm-setup package:
    # yum update rhevm-setup
  4. Run the following command and follow the prompts to upgrade the Red Hat Enterprise Virtualization Manager:
    # engine-setup
  5. Remove or disable the Red Hat Enterprise Virtualization Manager 3.3 repositories to ensure the system does not use any Red Hat Enterprise Virtualization Manager 3.3 packages.
    • With RHN Classic:
      # rhn-channel --remove --channel=rhel-x86_64-server-6-rhevm-3.3
    • With Subscription Manager:
      # subscription-manager repos --disable=rhel-6-server-rhevm-3.3-rpms

15.4. Upgrading to Red Hat Enterprise Virtualization 3.3

15.4.1. Red Hat Enterprise Virtualization Manager 3.3 Upgrade Overview

Upgrading Red Hat Enterprise Virtualization Manager is a straightforward process that comprises three main steps:
  • Subscribing to entitlements.
  • Updating the required packages.
  • Performing the upgrade.
The command used to perform the upgrade itself is engine-setup, which provides an interactive interface. While the upgrade is in process, virtualization hosts and the virtual machines running on those virtualization hosts continue to operate independently. When the upgrade is complete, you can then upgrade your hosts to the latest versions of Red Hat Enterprise Linux or Red Hat Enterprise Virtualization Hypervisor.

15.4.2. Features Requiring a Compatibility Upgrade to Red Hat Enterprise Virtualization 3.3

Some of the features in Red Hat Enterprise Virtualization are only available if your data centers, clusters, and storage have a compatibility version of 3.3.

Table 15.3. Features Requiring a Compatibility Upgrade to Red Hat Enterprise Virtualization 3.3

Feature Description
Libvirt-to-libvirt virtual machine migration
Perform virtual machine migration using libvirt-to-libvirt communication. This is safer, more secure, and has fewer host configuration requirements than native KVM migration, but has a higher overhead on the host CPU.
Isolated network to carry virtual machine migration traffic
Separates virtual machine migration traffic from other traffic types, like management and display traffic. Reduces chances of migrations causing a network flood that disrupts other important traffic types.
Define a gateway per logical network
Each logical network can have a gateway defined as separate from the management network gateway. This allows more customizable network topologies.
Snapshots including RAM
Snapshots now include the state of a virtual machine's memory as well as disk.
Optimized iSCSI device driver for virtual machines
Virtual machines can now consume iSCSI storage as virtual hard disks using an optimized device driver.
Host support for MOM management of memory overcommitment
MOM is a policy-driven tool that can be used to manage overcommitment on hosts. Currently MOM supports control of memory ballooning and KSM.
GlusterFS data domains
Native support for the GlusterFS protocol was added as a way to create storage domains, allowing Gluster data centers to be created.
Custom device property support
In addition to defining custom properties of virtual machines, you can also define custom properties of virtual machine devices.
Multiple monitors using a single virtual PCI device
Drive multiple monitors using a single virtual PCI device, rather than one PCI device per monitor.
Updatable storage server connections
It is now possible to edit the storage server connection details of a storage domain.
Check virtual hard disk alignment
Check whether a virtual disk, the filesystem installed on it, and its underlying storage are aligned. If they are not aligned, there may be a performance penalty.
Extendable virtual machine disk images
You can now grow your virtual machine disk image when it fills up.
OpenStack Image Service integration
Red Hat Enterprise Virtualization supports the OpenStack Image Service. You can import images from and export images to an Image Service repository.
Gluster hook support
You can manage Gluster hooks, which extend volume life cycle events, from Red Hat Enterprise Virtualization Manager.
Gluster host UUID support
This feature allows a Gluster host to be identified by the Gluster server UUID generated by Gluster in addition to identifying a Gluster host by IP address.
Network quality of service (QoS) support
Limit the inbound and outbound network traffic at the virtual NIC level.
Cloud-Init support
Cloud-Init allows you to automate early configuration tasks in your virtual machines, including setting hostnames, authorized keys, and more.

15.4.3. Red Hat Enterprise Virtualization 3.3 Upgrade Considerations

The following is a list of key considerations that must be made when planning your upgrade.

Important

Upgrading to version 3.3 can only be performed from version 3.2
Users of Red Hat Enterprise Virtualization 3.1 must migrate to Red Hat Enterprise Virtualization 3.2 before attempting to upgrade to Red Hat Enterprise Virtualization 3.3.
Red Hat Enterprise Virtualization Manager cannot be installed on the same machine as IPA
An error message displays if the ipa-server package is installed. Red Hat Enterprise Virtualization Manager 3.3 does not support installation on the same machine as Identity Management (IdM). To resolve this issue, you must migrate the IdM configuration to another system before re-attempting the upgrade. For further information, see https://access.redhat.com/knowledge/articles/233143.
Error: IPA was found to be installed on this machine. Red Hat Enterprise Virtualization Manager 3.3 does not support installing IPA on the same machine. Please remove ipa packages before you continue.
Upgrading to JBoss Enterprise Application Platform 6.1.0 is recommended
Although Red Hat Enterprise Virtualization Manager 3.3 supports Enterprise Application Platform 6.0.1, upgrading to the latest supported version of JBoss is recommended. For more information on upgrading to JBoss Enterprise Application Platform 6.1.0, see Upgrade the JBoss EAP 6 RPM Installation.
The rhevm-upgrade command has been replaced by engine-setup
From Version 3.3, installation of Red Hat Enterprise Virtualization Manager supports otopi, a standalone, plug-in-based installation framework for setting up system components. Under this framework, the rhevm-upgrade command previously used during the upgrade process is now obsolete, and has been replaced by engine-setup.

15.4.4. Upgrading to Red Hat Enterprise Virtualization Manager 3.3

The following procedure outlines the process for upgrading Red Hat Enterprise Virtualization Manager 3.2 to Red Hat Enterprise Virtualization Manager 3.3. This procedure assumes that the system on which the Manager is hosted is subscribed to the entitlements for receiving Red Hat Enterprise Virtualization 3.2 packages.
If the upgrade fails, the engine-setup command attempts to roll your Red Hat Enterprise Virtualization Manager installation back to its previous state. For this reason, do not remove the repositories required by Red Hat Enterprise Virtualization 3.2 until after the upgrade is complete, as outlined below. If the rollback also fails, detailed instructions for manually restoring your installation are displayed.

Procedure 15.7. Upgrading to Red Hat Enterprise Virtualization Manager 3.3

  1. Subscribe the system to the required entitlements for receiving Red Hat Enterprise Virtualization Manager 3.3 packages.
    Subscription Manager
    Red Hat Enterprise Virtualization 3.3 packages are provided by the rhel-6-server-rhevm-3.3-rpms repository associated with the Red Hat Enterprise Virtualization entitlement. Use the subscription-manager command to enable the repository in your yum configuration.
    # subscription-manager repos --enable=rhel-6-server-rhevm-3.3-rpms
    Red Hat Network Classic
    The Red Hat Enterprise Virtualization 3.3 packages are provided by the Red Hat Enterprise Virtualization Manager (v.3.3 x86_64) channel. Use the rhn-channel command or the Red Hat Network web interface to subscribe to the Red Hat Enterprise Virtualization Manager (v.3.3 x86_64) channel:
    # rhn-channel --add --channel=rhel-x86_64-server-6-rhevm-3.3
  2. Update the base operating system:
    # yum update
    In particular, if you are using the JBoss Application Server from JBoss Enterprise Application Platform 6.0.1, you must run the above command to upgrade to Enterprise Application Platform 6.1.
  3. Update the rhevm-setup package to ensure you have the most recent version of engine-setup.
    # yum update rhevm-setup
  4. Run the engine-setup command and follow the prompts to upgrade Red Hat Enterprise Virtualization Manager.
    # engine-setup
    [ INFO  ] Stage: Initializing
              
              Welcome to the RHEV 3.3.0 upgrade.
              Please read the following knowledge article for known issues and
              updated instructions before proceeding with the upgrade.
              RHEV 3.3.0 Upgrade Guide: Tips, Considerations and Roll-back Issues
                  https://access.redhat.com/articles/408623
              Would you like to continue with the upgrade? (Yes, No) [Yes]:
  5. Remove Red Hat Enterprise Virtualization Manager 3.2 repositories to ensure the system does not use any Red Hat Enterprise Virtualization Manager 3.2 packages.
    Subscription Manager
    Use the subscription-manager command to disable the Red Hat Enterprise Virtualization 3.2 repository in your yum configuration.
    # subscription-manager repos --disable=rhel-6-server-rhevm-3.2-rpms
    Red Hat Network Classic
    Use the rhn-channel command or the Red Hat Network web interface to remove the Red Hat Enterprise Virtualization Manager (v.3.2 x86_64) channels.
    # rhn-channel --remove --channel=rhel-x86_64-server-6-rhevm-3.2
Red Hat Enterprise Virtualization Manager has been upgraded. To take full advantage of all Red Hat Enterprise Virtualization 3.3 features you must also:
  • Ensure all of your virtualization hosts are up to date and running the most recent Red Hat Enterprise Linux packages or Hypervisor images.
  • Change all of your clusters to use compatibility version 3.3.
  • Change all of your data centers to use compatibility version 3.3.
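The repository changes made in steps 1 and 5 of the procedure above can be sanity-checked from the shell. The following is a minimal sketch, assuming the Subscription Manager repository IDs shown above; the repo_enabled helper is a hypothetical convenience, not part of any Red Hat tool:

```shell
# Hypothetical helper: check whether a repository id appears in the
# output of `yum repolist enabled`.
repo_enabled() {
    printf '%s\n' "$1" | grep -q "$2"
}

# Capture the enabled repository list; `|| true` keeps the script going
# on machines where yum is unavailable.
repolist=$(yum repolist enabled 2>/dev/null || true)

if repo_enabled "$repolist" "rhel-6-server-rhevm-3.3-rpms"; then
    echo "RHEV 3.3 repository is enabled"
fi
if repo_enabled "$repolist" "rhel-6-server-rhevm-3.2-rpms"; then
    echo "warning: RHEV 3.2 repository is still enabled" >&2
fi
```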

15.5. Upgrading to Red Hat Enterprise Virtualization Manager 3.2

15.5.1. Upgrading to Red Hat Enterprise Virtualization Manager 3.2

Upgrading Red Hat Enterprise Virtualization Manager to version 3.2 is performed using the rhevm-upgrade command. Virtualization hosts, and the virtual machines running upon them, continue to operate independently while the Manager is being upgraded. Once the Manager upgrade is complete, you can upgrade your hosts, if you have not already done so, to the latest versions of Red Hat Enterprise Linux and Red Hat Enterprise Virtualization Hypervisor.

Important

Users of Red Hat Enterprise Virtualization 3.0 must migrate to Red Hat Enterprise Virtualization 3.1 before attempting this upgrade.

Note

In the event that the upgrade fails, the rhevm-upgrade command attempts to roll your Red Hat Enterprise Virtualization Manager installation back to its previous state. If this also fails, detailed instructions for manually restoring the installation are displayed.

Procedure 15.8. Upgrading to Red Hat Enterprise Virtualization Manager 3.2

  1. Ensure that the system is subscribed to the required entitlements to receive Red Hat Enterprise Virtualization Manager 3.2 packages. This procedure assumes that the system is already subscribed to required entitlements to receive Red Hat Enterprise Virtualization 3.1 packages. These must also be available to complete the upgrade process.
    Certificate-based Red Hat Network
    The Red Hat Enterprise Virtualization 3.2 packages are provided by the rhel-6-server-rhevm-3.2-rpms repository associated with the Red Hat Enterprise Virtualization entitlement. Use the subscription-manager command to enable the repository in your yum configuration.
    # subscription-manager repos --enable=rhel-6-server-rhevm-3.2-rpms
    Red Hat Network Classic
    The Red Hat Enterprise Virtualization 3.2 packages are provided by the Red Hat Enterprise Virtualization Manager (v.3.2 x86_64) channel. Use the rhn-channel command, or the Red Hat Network Web Interface, to subscribe to the Red Hat Enterprise Virtualization Manager (v.3.2 x86_64) channel.
    # rhn-channel --add --channel=rhel-x86_64-server-6-rhevm-3.2
  2. Ensure that the system does not use any Red Hat Enterprise Virtualization Manager 3.1 packages by removing the Red Hat Enterprise Virtualization Manager 3.1 entitlements.
    Certificate-based Red Hat Network
    Use the subscription-manager command to disable the Red Hat Enterprise Virtualization 3.1 repository in your yum configuration. The subscription-manager command must be run while logged in as the root user.
    # subscription-manager repos --disable=rhel-6-server-rhevm-3.1-rpms
    Red Hat Network Classic
    Use the rhn-channel command, or the Red Hat Network Web Interface, to remove the Red Hat Enterprise Virtualization Manager (v.3.1 x86_64) channels.
    # rhn-channel --remove --channel=rhel-x86_64-server-6-rhevm-3.1
  3. Update the base operating system:
    # yum update
  4. To ensure that you have the most recent version of the rhevm-upgrade command installed you must update the rhevm-setup package.
    # yum update rhevm-setup
  5. To upgrade Red Hat Enterprise Virtualization Manager run the rhevm-upgrade command.
    # rhevm-upgrade
    Loaded plugins: product-id, rhnplugin
    Info: RHEV Manager 3.1 to 3.2 upgrade detected
    Checking pre-upgrade conditions...(This may take several minutes)
  6. If the ipa-server package is installed then an error message is displayed. Red Hat Enterprise Virtualization Manager 3.2 does not support installation on the same machine as Identity Management (IdM).
    Error: IPA was found to be installed on this machine. Red Hat Enterprise Virtualization Manager 3.2 does not support installing IPA on the same machine. Please remove ipa packages before you continue.
    To resolve this issue you must migrate the IdM configuration to another system before re-attempting the upgrade. For further information see https://access.redhat.com/knowledge/articles/233143.
Your Red Hat Enterprise Virtualization Manager installation has now been upgraded. To take full advantage of all Red Hat Enterprise Virtualization 3.2 features you must also:
  • Ensure that all of your virtualization hosts are up to date and running the most recent Red Hat Enterprise Linux packages or Hypervisor images.
  • Change all of your clusters to use compatibility version 3.2.
  • Change all of your data centers to use compatibility version 3.2.
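As a quick post-upgrade check, the installed Manager package version can be inspected from the shell. The following is a minimal sketch; the rhevm package name is taken from the packages installed above, but the manager_version helper is a hypothetical convenience:

```shell
# Hypothetical helper: extract "X.Y" from an rhevm package NVR such as
# rhevm-3.2.3-0.el6ev.
manager_version() {
    printf '%s\n' "$1" | sed -n 's/^rhevm-\([0-9]*\.[0-9]*\).*/\1/p'
}

# `|| true` keeps the script going if rpm is unavailable or the
# package is not installed.
installed=$(rpm -q rhevm 2>/dev/null || true)

if [ "$(manager_version "$installed")" = "3.2" ]; then
    echo "Manager package is at version 3.2"
else
    echo "Unexpected Manager package: ${installed:-not installed}"
fi
```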

15.6. Upgrading to Red Hat Enterprise Virtualization Manager 3.1

15.6.1. Upgrading to Red Hat Enterprise Virtualization Manager 3.1

Upgrading Red Hat Enterprise Virtualization Manager to version 3.1 is performed using the rhevm-upgrade command. Virtualization hosts, and the virtual machines running upon them, continue to operate independently while the Manager is being upgraded. Once the Manager upgrade is complete, you can upgrade your hosts, if you have not already done so, to the latest versions of Red Hat Enterprise Linux and Red Hat Enterprise Virtualization Hypervisor.

Important

Refer to https://access.redhat.com/knowledge/articles/269333 for an up-to-date list of tips and considerations to take into account when upgrading to Red Hat Enterprise Virtualization 3.1.

Important

Users of Red Hat Enterprise Virtualization 2.2 must migrate to Red Hat Enterprise Virtualization 3.0 before attempting this upgrade. For information on migrating from Red Hat Enterprise Virtualization 2.2 to Red Hat Enterprise Virtualization 3.0, refer to https://access.redhat.com/knowledge/techbriefs/migrating-red-hat-enterprise-virtualization-manager-version-22-30.

Note

In the event that the upgrade fails, the rhevm-upgrade command attempts to roll your Red Hat Enterprise Virtualization Manager installation back to its previous state. If this also fails, detailed instructions for manually restoring the installation are displayed.

Procedure 15.9. Upgrading to Red Hat Enterprise Virtualization Manager 3.1

  1. Ensure that the system is subscribed to the required entitlements to receive Red Hat JBoss Enterprise Application Platform 6 packages. Red Hat JBoss Enterprise Application Platform 6 is a required dependency of Red Hat Enterprise Virtualization Manager 3.1.
    Certificate-based Red Hat Network
    The Red Hat JBoss Enterprise Application Platform 6 packages are provided by the Red Hat JBoss Enterprise Application Platform entitlement in certificate-based Red Hat Network.
    Use the subscription-manager command to ensure that the system is subscribed to the Red Hat JBoss Enterprise Application Platform entitlement.
    # subscription-manager list
    Red Hat Network Classic
    The Red Hat JBoss Enterprise Application Platform 6 packages are provided by the Red Hat JBoss Application Platform (v 6) for 6Server x86_64 channel. The Channel Entitlement Name for this channel is Red Hat JBoss Enterprise Application Platform (v 4, zip format).
    Use the rhn-channel command, or the Red Hat Network Web Interface, to subscribe to the Red Hat JBoss Application Platform (v 6) for 6Server x86_64 channel.
  2. Ensure that the system is subscribed to the required channels and entitlements to receive Red Hat Enterprise Virtualization Manager 3.1 packages.
    Certificate-based Red Hat Network
    The Red Hat Enterprise Virtualization 3.1 packages are provided by the rhel-6-server-rhevm-3.1-rpms repository associated with the Red Hat Enterprise Virtualization entitlement. Use the subscription-manager command to enable the repository in your yum configuration. The subscription-manager command must be run while logged in as the root user.
    # subscription-manager repos --enable=rhel-6-server-rhevm-3.1-rpms
    Red Hat Network Classic
    The Red Hat Enterprise Virtualization 3.1 packages are provided by the Red Hat Enterprise Virtualization Manager (v.3.1 x86_64) channel.
    Use the rhn-channel command, or the Red Hat Network Web Interface, to subscribe to the Red Hat Enterprise Virtualization Manager (v.3.1 x86_64) channel.
  3. Ensure that the system does not use any Red Hat Enterprise Virtualization Manager 3.0 packages by removing the Red Hat Enterprise Virtualization Manager 3.0 channels and entitlements.
    Certificate-based Red Hat Network
    Use the subscription-manager command to disable the Red Hat Enterprise Virtualization 3.0 repositories in your yum configuration.
    # subscription-manager repos --disable=rhel-6-server-rhevm-3-rpms
    # subscription-manager repos --disable=jb-eap-5-for-rhel-6-server-rpms
    Red Hat Network Classic
    Use the rhn-channel command, or the Red Hat Network Web Interface, to remove the Red Hat Enterprise Virtualization Manager (v.3.0 x86_64) channels.
    # rhn-channel --remove --channel=rhel-x86_64-server-6-rhevm-3
    # rhn-channel --remove --channel=jbappplatform-5-x86_64-server-6-rpm
  4. Update the base operating system.
    # yum update
  5. To ensure that you have the most recent version of the rhevm-upgrade command installed you must update the rhevm-setup package.
    # yum update rhevm-setup
  6. To upgrade Red Hat Enterprise Virtualization Manager run the rhevm-upgrade command.
    # rhevm-upgrade
    Loaded plugins: product-id, rhnplugin
    Info: RHEV Manager 3.0 to 3.1 upgrade detected
    Checking pre-upgrade conditions...(This may take several minutes)
  7. If the ipa-server package is installed then an error message is displayed. Red Hat Enterprise Virtualization Manager 3.1 does not support installation on the same machine as Identity Management (IdM).
    Error: IPA was found to be installed on this machine. Red Hat Enterprise Virtualization Manager 3.1 does not support installing IPA on the same machine. Please remove ipa packages before you continue.
    To resolve this issue you must migrate the IdM configuration to another system before re-attempting the upgrade. For further information see https://access.redhat.com/knowledge/articles/233143.
  8. A list of packages that depend on Red Hat JBoss Enterprise Application Platform 5 is displayed. These packages must be removed to install Red Hat JBoss Enterprise Application Platform 6, which is required by Red Hat Enterprise Virtualization Manager 3.1.
     Warning: the following packages will be removed if you proceed with the upgrade:
    
        * objectweb-asm
    
     Would you like to proceed? (yes|no):
    You must enter yes to proceed with the upgrade, removing the listed packages.
Your Red Hat Enterprise Virtualization Manager installation has now been upgraded. To take full advantage of all Red Hat Enterprise Virtualization 3.1 features you must also:
  • Ensure that all of your virtualization hosts are up to date and running the most recent Red Hat Enterprise Linux packages or Hypervisor images.
  • Change all of your clusters to use compatibility version 3.1.
  • Change all of your data centers to use compatibility version 3.1.

15.7. Post-upgrade Tasks

15.7.1. Changing the Cluster Compatibility Version

Summary
Red Hat Enterprise Virtualization clusters have a compatibility version. The cluster compatibility version indicates the features of Red Hat Enterprise Virtualization supported by all of the hosts in the cluster. The cluster compatibility version is set according to the version of the least capable host operating system in the cluster.

Note

To change the cluster compatibility version, you must have first updated all the hosts in your cluster to a level that supports your desired compatibility level.

Procedure 15.10. Changing the Cluster Compatibility Version

  1. Log in to the Administration Portal as the administrative user. By default this is the admin user.
  2. Click the Clusters tab.
  3. Select the cluster to change from the list displayed. If the list of clusters is too long to filter visually then perform a search to locate the desired cluster.
  4. Click the Edit button.
  5. Change the Compatibility Version to the desired value.
  6. Click OK to open the Change Cluster Compatibility Version confirmation window.
  7. Click OK to confirm.
Result
You have updated the compatibility version of the cluster. Once you have updated the compatibility version of all clusters in a data center, you can then change the compatibility version of the data center itself.
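For environments with many clusters, the same change can also be scripted against the Manager's REST API instead of the Administration Portal. The following is a minimal sketch, using 3.3 as an example target version; the Manager address, credentials, and cluster UUID are placeholders that you must substitute:

```shell
# Request body: raise the cluster compatibility version (3.3 here as an
# example).
body='<cluster><version major="3" minor="3"/></cluster>'

# PUT the update to the cluster resource. rhevm.example.com, the
# credentials, and CLUSTER_UUID are placeholders.
curl --insecure --request PUT \
     --user "admin@internal:password" \
     --header "Content-Type: application/xml" \
     --data "$body" \
     "https://rhevm.example.com/api/clusters/CLUSTER_UUID" \
    || echo "request failed; substitute a real Manager address and credentials" >&2
```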

Warning

Upgrading the compatibility will also upgrade all of the storage domains belonging to the data center. If you are upgrading the compatibility version from below 3.1 to a higher version, these storage domains will become unusable with versions older than 3.1.

15.7.2. Changing the Data Center Compatibility Version

Summary
Red Hat Enterprise Virtualization data centers have a compatibility version. The compatibility version indicates the version of Red Hat Enterprise Virtualization that the data center is intended to be compatible with. All clusters in the data center must support the desired compatibility level.

Note

To change the data center compatibility version, you must have first updated all the clusters in your data center to a level that supports your desired compatibility level.

Procedure 15.11. Changing the Data Center Compatibility Version

  1. Log in to the Administration Portal as the administrative user. By default this is the admin user.
  2. Click the Data Centers tab.
  3. Select the data center to change from the list displayed. If the list of data centers is too long to filter visually then perform a search to locate the desired data center.
  4. Click the Edit button.
  5. Change the Compatibility Version to the desired value.
  6. Click OK.
Result
You have updated the compatibility version of the data center.

Warning

Upgrading the compatibility will also upgrade all of the storage domains belonging to the data center. If you are upgrading the compatibility version from below 3.1 to a higher version, these storage domains will become unusable with versions older than 3.1.

Chapter 16. Backups and Migration

16.1. Backing Up and Restoring the Red Hat Enterprise Virtualization Manager

16.1.1. Backing up Red Hat Enterprise Virtualization Manager - Overview

While taking a complete backup of the machine on which the Red Hat Enterprise Virtualization Manager is installed is recommended whenever you change the configuration of that machine, a utility is provided for backing up only the key files related to the Manager. This utility, the engine-backup command, can rapidly back up the engine database and configuration files into a single file that can be easily stored.

16.1.2. Syntax for the engine-backup Command

The engine-backup command works in one of two basic modes:
# engine-backup --mode=backup
# engine-backup --mode=restore
These two modes are further extended by a set of parameters that allow you to specify the scope of the backup and different credentials for the engine database. A full list of parameters and their function is as follows:

Basic Options

--mode
Specifies whether the command performs a backup operation or a restore operation. Two options are available: backup and restore. This is a required parameter.
--file
Specifies the path and name of a file into which backups are to be taken in backup mode, and the path and name of a file from which to read backup data in restore mode. This is a required parameter in both backup mode and restore mode.
--log
Specifies the path and name of a file into which logs of the backup or restore operation are to be written. This parameter is required in both backup mode and restore mode.
--scope
Specifies the scope of the backup or restore operation. There are five options: all, which backs up or restores all databases and configuration data; files, which backs up or restores only files on the system; db, which backs up or restores only the Manager database; dwhdb, which backs up or restores only the Data Warehouse database; and reportsdb, which backs up or restores only the Reports database. The default scope is all.

Manager Database Options

The following options are only available when using the engine-backup command in restore mode. The option syntax below applies to restoring the Manager database. The same options exist for restoring the Data Warehouse database and the Reports database. See engine-backup --help for the option syntax.
--change-db-credentials
Allows you to specify alternate credentials for restoring the Manager database using credentials other than those stored in the backup itself. Specifying this parameter allows you to add the following parameters.
--db-host
Specifies the IP address or fully qualified domain name of the host on which the database resides. This is a required parameter.
--db-port
Specifies the port by which a connection to the database will be made.
--db-user
Specifies the name of the user by which a connection to the database will be made. This is a required parameter.
--db-passfile
Specifies a file containing the password by which a connection to the database will be made. Either this parameter or the --db-password parameter must be specified.
--db-password
Specifies the plain text password by which a connection to the database will be made. Either this parameter or the --db-passfile parameter must be specified.
--db-name
Specifies the name of the database to which the database will be restored. This is a required parameter.
--db-secured
Specifies that the connection with the database is to be secured.
--db-secured-validation
Specifies that the connection with the host is to be validated.

Help

--help
Provides an overview of the available modes and parameters, sample usage, and instructions for creating a new database and configuring the firewall in conjunction with backing up and restoring the Red Hat Enterprise Virtualization Manager.

16.1.3. Creating a Backup with the engine-backup Command

Summary
The process for creating a backup of the Red Hat Enterprise Virtualization Manager using the engine-backup command can be performed while the Manager is active. Pass one of the following values to the --scope parameter to specify which backup to perform:
  • all: A full backup of all databases and configuration files on the Manager
  • files: A backup of only the files on the system
  • db: A backup of only the Manager database
  • dwhdb: A backup of only the Data Warehouse database
  • reportsdb: A backup of only the Reports database

Important

To restore a database to a fresh installation of Red Hat Enterprise Virtualization Manager, a database backup alone is not sufficient; the Manager also requires access to the configuration files. Any backup that specifies a scope other than the default, all, must be accompanied by another backup using the files scope, or a filesystem backup.

Procedure 16.1. Example Usage of the engine-backup Command

  1. Log on to the machine running the Red Hat Enterprise Virtualization Manager.
  2. Create a backup:

    Example 16.1. Creating a Full Backup

    # engine-backup --scope=all --mode=backup --log=log_file_name --file=file_name

    Example 16.2. Creating a Manager Database Backup

    # engine-backup --scope=files --mode=backup --log=log_file_name --file=file_name
    # engine-backup --scope=db --mode=backup --log=log_file_name --file=file_name
    Replace the db option with dwhdb or reportsdb to back up the Data Warehouse database or the Reports database.
A tar file containing a backup is created using the path and file name provided.
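Because a backup can be taken while the Manager is active, the command lends itself to scheduling. The following is a minimal sketch of a wrapper script that could be run nightly from cron; the backup directory, file naming, and retention policy are assumptions, not features of engine-backup:

```shell
#!/bin/sh
# Nightly engine-backup wrapper (sketch). The directory and 14-backup
# retention below are assumptions; adjust them to your environment.
STAMP=$(date +%Y%m%d)
BACKUP_DIR=/var/backup/rhevm

mkdir -p "$BACKUP_DIR"
engine-backup --mode=backup --scope=all \
    --file="$BACKUP_DIR/engine-backup-$STAMP.tar" \
    --log="$BACKUP_DIR/engine-backup-$STAMP.log"

# Keep only the 14 most recent backups (assumption).
ls -1t "$BACKUP_DIR"/engine-backup-*.tar 2>/dev/null | tail -n +15 | xargs -r rm -f
```

Such a script could then be referenced from /etc/cron.daily/ or a crontab entry.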

16.1.4. Restoring a Backup with the engine-backup Command

The process for restoring a backup using the engine-backup command is straightforward, but it can involve several additional steps compared to creating a backup, depending on the destination to which the backup is restored. For example, the engine-backup command can restore backups to a fresh installation of Red Hat Enterprise Virtualization, on top of an existing installation, and to local or remote databases.

Important

Backups can only be restored to environments of the same major release as that of the backup. For example, a backup of a Red Hat Enterprise Virtualization version 3.3 environment can only be restored to another Red Hat Enterprise Virtualization version 3.3 environment. To view the version of Red Hat Enterprise Virtualization contained in a backup file, unpack the backup file and read the value in the version file located in the root directory of the unpacked files.
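The version check described above can be scripted. The following is a minimal sketch, where backup.tar is a placeholder path to an engine-backup file:

```shell
# Extract only the version member from the backup archive and print it.
# backup.tar is a placeholder path; substitute your backup file.
tmpdir=$(mktemp -d)
tar -xf backup.tar -C "$tmpdir" version 2>/dev/null \
    && cat "$tmpdir/version" \
    || echo "could not read version from backup.tar" >&2
rm -rf "$tmpdir"
```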

16.1.5. Restoring a Backup to a Fresh Installation

Summary
The engine-backup command can be used to restore a backup to a fresh installation of the Red Hat Enterprise Virtualization Manager. The following procedure must be performed on a machine on which the base operating system has been installed and the required packages for the Red Hat Enterprise Virtualization Manager have been installed, but the engine-setup command has not yet been run. This procedure assumes that the backup file can be accessed from the machine on which the backup is to be restored.

Note

The engine-backup command does not handle the actual creation of the engine database or the initial configuration of the postgresql service. Therefore, these tasks must be performed manually as outlined below when restoring a backup to a fresh installation.

Procedure 16.2. Restoring a Backup to a Fresh Installation

  1. Log on to the machine on which the Red Hat Enterprise Virtualization Manager is installed. If you are restoring the engine database to a remote host, you will need to log on to and perform the relevant actions on that host. Likewise, if also restoring Reports and the Data Warehouse to a remote host, you will need to log on to and perform the relevant actions on that host.
  2. If you are using a remote database, install the postgresql-server package. This is not required for local databases as this package is included with the rhevm installation.
    # yum install postgresql-server
  3. Manually create an empty database to which the database in the backup can be restored and configure the postgresql service:
    1. Initialize the postgresql database, start the postgresql service, and ensure this service starts on boot:
      # service postgresql initdb
      # service postgresql start
      # chkconfig postgresql on
    2. Enter the postgresql command line:
      # su postgres
      $ psql
    3. Create the engine user:
      postgres=# create role engine with login encrypted password 'password';
      If you are also restoring the Reports and Data Warehouse, create the ovirt_engine_reports and ovirt_engine_history users on the relevant host:
      postgres=# create role ovirt_engine_reports with login encrypted password 'password';
      postgres=# create role ovirt_engine_history with login encrypted password 'password';
    4. Create the new database:
      postgres=# create database database_name owner engine template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';
      If you are also restoring the Reports and Data Warehouse, create the databases on the relevant host:
      postgres=# create database database_name owner ovirt_engine_reports template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';
      postgres=# create database database_name owner ovirt_engine_history template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';
    5. Exit the postgresql command line and log out of the postgres user:
      postgres=# \q
      $ exit
    6. Edit the /var/lib/pgsql/data/pg_hba.conf file as follows:
      • For each local database, replace the existing directives in the section starting with local at the bottom of the file with the following directives:
        host    database_name    user_name    0.0.0.0/0  md5
        host    database_name    user_name    ::0/0      md5
      • For each remote database:
        • Add the following line immediately underneath the line starting with local at the bottom of the file, replacing X.X.X.X with the IP address of the Manager:
          host    database_name    user_name    X.X.X.X/32   md5
        • Allow TCP/IP connections to the database. Edit the /var/lib/pgsql/data/postgresql.conf file and add the following line:
          listen_addresses='*'
          This example configures the postgresql service to listen for connections on all interfaces. You can specify an interface by giving its IP address.
        • Open the default port used for PostgreSQL database connections, and save the updated firewall rules:
          # iptables -I INPUT 5 -p tcp -s Manager_IP_Address --dport 5432 -j ACCEPT
          # service iptables save
    7. Restart the postgresql service:
      # service postgresql restart
  4. Restore a complete backup or a database-only backup with the --change-db-credentials parameter to pass the credentials of the new database. The database_location for a database local to the Manager is localhost.

    Note

    The following examples use a --*password option for each database without specifying a password, which prompts for a password for each database. Passwords can be supplied for these options in the command itself; however, this is not recommended, as the password is then stored in the shell history. Alternatively, the --*passfile=password_file options can be used for each database to pass the passwords securely to the engine-backup tool without the need for interactive prompts.
    • Restore a complete backup:
      # engine-backup --mode=restore --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password
      If Reports and Data Warehouse are also being restored as part of the complete backup, include the revised credentials for the two additional databases:
      # engine-backup --mode=restore --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password --change-reports-db-credentials --reports-db-host=database_location --reports-db-name=database_name --reports-db-user=ovirt_engine_reports --reports-db-password --change-dwh-db-credentials --dwh-db-host=database_location --dwh-db-name=database_name --dwh-db-user=ovirt_engine_history --dwh-db-password
    • Restore a database-only backup by first restoring the configuration files backup and then restoring the database backup:
      # engine-backup --mode=restore --scope=files --file=file_name --log=log_file_name
      # engine-backup --mode=restore --scope=db --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password
      The example above restores a backup of the Manager database.
      # engine-backup --mode=restore --scope=reportsdb --file=file_name --log=log_file_name --change-reports-db-credentials --reports-db-host=database_location --reports-db-name=database_name --reports-db-user=ovirt_engine_reports --reports-db-password
      The example above restores a backup of the Reports database.
      # engine-backup --mode=restore --scope=dwhdb --file=file_name --log=log_file_name --change-dwh-db-credentials --dwh-db-host=database_location --dwh-db-name=database_name --dwh-db-user=ovirt_engine_history --dwh-db-password
      The example above restores a backup of the Data Warehouse database.
    If successful, the following output displays:
    You should now run engine-setup.
    Done.
  5. Log on to the Manager machine. Run the following command and follow the prompts to configure the restored Manager:
    # engine-setup
Result
The Red Hat Enterprise Virtualization Manager has been restored to the version preserved in the backup. To change the fully qualified domain name of the new Red Hat Enterprise Virtualization system, see Section 20.1.1, “The oVirt Engine Rename Tool”.
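Before running engine-setup, it can be useful to confirm that the restored database actually accepts the new credentials. The following is a minimal sketch; the host, database, and user names are the placeholders used in the procedure above, and db_check is a hypothetical helper, not part of engine-backup:

```shell
# Hypothetical helper: returns success if a trivial query works with
# the given host, database, and user. DB_PASSWORD is a placeholder.
db_check() {
    PGPASSWORD="$DB_PASSWORD" \
        psql -h "$1" -d "$2" -U "$3" -tAc 'SELECT 1;' 2>/dev/null \
        | grep -qx 1
}

DB_PASSWORD=password    # placeholder; use the password set for the engine user
if db_check database_location database_name engine; then
    echo "database reachable; safe to run engine-setup"
else
    echo "database check failed; verify credentials and pg_hba.conf" >&2
fi
```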

16.1.6. Restoring a Backup to Overwrite an Existing Installation

Summary
The engine-backup command can restore a backup to a machine on which the Red Hat Enterprise Virtualization Manager has already been installed and set up. This is useful when you have taken a backup of an installation, made changes to that installation, and then want to restore the installation from the backup.

Important

When restoring a backup to overwrite an existing installation, you must run the engine-cleanup command to clean up the existing installation before using the engine-backup command. The engine-cleanup command only cleans the engine database; it does not drop the database or delete the user that owns it. Therefore, you do not need to create a new database or specify the database credentials, because the user and database already exist.

Procedure 16.3. Restoring a Backup to Overwrite an Existing Installation

  1. Log on to the machine on which the Red Hat Enterprise Virtualization Manager is installed.
  2. Run the following command and follow the prompts to remove the Manager's configuration files and clean its database:
    # engine-cleanup
  3. Restore a full backup or a database-only backup:
    • Restore a full backup:
      # engine-backup --mode=restore --file=file_name --log=log_file_name
    • Restore a database-only backup by first restoring the configuration files backup and then restoring the database backup:
      # engine-backup --mode=restore --scope=files --file=file_name --log=log_file_name
      # engine-backup --mode=restore --scope=db --file=file_name --log=log_file_name
      The example above restores a backup of the Manager database. If necessary, also restore the Reports and Data Warehouse databases:
      # engine-backup --mode=restore --scope=reportsdb --file=file_name --log=log_file_name
      # engine-backup --mode=restore --scope=dwhdb --file=file_name --log=log_file_name
    If successful, the following output displays:
    You should now run engine-setup.
    Done.
  4. Run the following command and follow the prompts to reconfigure the firewall and ensure the ovirt-engine service is correctly configured:
    # engine-setup
Result
The engine database and configuration files for the Red Hat Enterprise Virtualization Manager have been restored to the version in the backup.

16.1.7. Restoring a Backup with Different Credentials

Summary
The engine-backup command can restore a backup to a machine on which the Red Hat Enterprise Virtualization Manager has already been installed and set up, but where the credentials of the database in the backup differ from those of the database on the machine on which the backup is to be restored. This is useful when you have taken a backup of an installation and want to restore the installation from the backup to a different system.

Important

When restoring a backup to overwrite an existing installation, you must run the engine-cleanup command to clean up the existing installation before using the engine-backup command. The engine-cleanup command only cleans the engine database; it does not drop the database or delete the user that owns it. Therefore, you do not need to create a new database or specify the database credentials, because the user and database already exist. However, if the credentials for the owner of the engine database are not known, you must change them before you can restore the backup.

Procedure 16.4. Restoring a Backup with Different Credentials

  1. Log on to the machine on which the Red Hat Enterprise Virtualization Manager is installed.
  2. Run the following command and follow the prompts to remove the Manager's configuration files and clean its database:
    # engine-cleanup
  3. Change the password for the owner of the engine database if the credentials of that user are not known:
    1. Enter the postgresql command line:
      # su postgres
      $ psql
    2. Change the password of the user that owns the engine database:
      postgres=# alter role user_name encrypted password 'new_password';
      Repeat this for the users that own the ovirt_engine_reports and ovirt_engine_dwh databases if necessary.
  4. Restore a complete backup or a database-only backup with the --change-db-credentials parameter to pass the credentials of the new database. The database_location for a database local to the Manager is localhost.

    Note

    The following examples use a --*password option for each database without specifying a password, which prompts for a password for each database. Passwords can be supplied for these options in the command itself; however, this is not recommended, as the password is then stored in the shell history. Alternatively, the --*passfile=password_file options can be used for each database to securely pass the passwords to the engine-backup tool without the need for interactive prompts.
    • Restore a complete backup:
      # engine-backup --mode=restore --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password
      If Reports and Data Warehouse are also being restored as part of the complete backup, include the revised credentials for the two additional databases:
      # engine-backup --mode=restore --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password --change-reports-db-credentials --reports-db-host=database_location --reports-db-name=database_name --reports-db-user=ovirt_engine_reports --reports-db-password --change-dwh-db-credentials --dwh-db-host=database_location --dwh-db-name=database_name --dwh-db-user=ovirt_engine_history --dwh-db-password
      
    • Restore a database-only backup by first restoring the configuration files backup and then restoring the database backup:
      # engine-backup --mode=restore --scope=files --file=file_name --log=log_file_name
      # engine-backup --mode=restore --scope=db --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password
      The example above restores a backup of the Manager database.
      # engine-backup --mode=restore --scope=reportsdb --file=file_name --log=log_file_name --change-reports-db-credentials --reports-db-host=database_location --reports-db-name=database_name --reports-db-user=ovirt_engine_reports --reports-db-password
      The example above restores a backup of the Reports database.
      # engine-backup --mode=restore --scope=dwhdb --file=file_name --log=log_file_name --change-dwh-db-credentials --dwh-db-host=database_location --dwh-db-name=database_name --dwh-db-user=ovirt_engine_history --dwh-db-password
      The example above restores a backup of the Data Warehouse database.
    If successful, the following output displays:
    You should now run engine-setup.
    Done.
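If you prefer the --*passfile approach mentioned in the note above, the password file should be readable only by its owner. A minimal sketch of preparing one; the path /tmp/engine-db-pass and the password are placeholders, and you should check engine-backup --help for the exact file format expected by the --*passfile options:

```shell
# Create a password file that only its owner can read.
# Path and password are placeholders for illustration.
umask 077
printf '%s\n' 'ExamplePassword' > /tmp/engine-db-pass
ls -l /tmp/engine-db-pass

# The file could then be passed to engine-backup in place of --db-password,
# for example (shown commented out; requires an actual backup to restore):
#   engine-backup --mode=restore ... --db-passfile=/tmp/engine-db-pass
```

Because the umask is set before the file is created, the file is created with mode 600, so the password is not readable by other users.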
  5. Run the following command and follow the prompts to reconfigure the firewall and ensure the ovirt-engine service is correctly configured:
    # engine-setup
Result
The engine database and configuration files for the Red Hat Enterprise Virtualization Manager have been restored to the version in the backup using the supplied credentials, and the Manager has been configured to use the new database.

16.1.8. Migrating the Engine Database to a Remote Server Database

You can migrate the engine database to a remote database server after the Red Hat Enterprise Virtualization Manager has been initially configured.
This task is split into two procedures. The first procedure, preparing the remote PostgreSQL database, is a necessary prerequisite for the migration itself and presumes that the server has Red Hat Enterprise Linux installed and has been configured with the appropriate subscriptions.
The second procedure, migrating the database, uses PostgreSQL pg_dump and pg_restore commands to handle the database backup and restore. As such, it is necessary to edit the /etc/ovirt-engine/engine.conf.d/10-setup-database.conf file with the updated information. At a minimum, you must update the location of the new database server. If the database name, role name, or password are modified for the new database server, these values must also be updated in the 10-setup-database.conf file. This procedure uses the default engine database settings to minimize modification of this file.

Note

The Data Warehouse 10-setup-database.conf file also uses the address of the engine database. If Data Warehouse is installed, update the engine database values in both the /etc/ovirt-engine/engine.conf.d/10-setup-database.conf and the /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf file.

Procedure 16.5. Preparing the Remote PostgreSQL Database for use with the Red Hat Enterprise Virtualization Manager

  1. Log in to the remote database server and install the PostgreSQL server package:
    # yum install postgresql-server
  2. Initialize the PostgreSQL database, start the postgresql service, and ensure that this service starts on boot:
    # service postgresql initdb
    # service postgresql start
    # chkconfig postgresql on
  3. Connect to the psql command line interface as the postgres user:
    # su - postgres
    $ psql
  4. Create a user for the Manager to use when it writes to and reads from the database. The default user name on the Manager is engine:
    postgres=# create role user_name with login encrypted password 'password';

    Note

    The password for the engine user is located in plain text in /etc/ovirt-engine/engine.conf.d/10-setup-database.conf. Any password can be used when creating the role on the new server; however, if a different password is used, this file must be updated with the new password.
  5. Create a database in which to store data about the Red Hat Enterprise Virtualization environment. The default database name on the Manager is engine, and the default user name is engine:
    postgres=# create database database_name owner user_name template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';
    
  6. Ensure the database can be accessed remotely by enabling md5 client authentication. Edit the /var/lib/pgsql/data/pg_hba.conf file, and add the following line immediately underneath the line starting with local at the bottom of the file, replacing X.X.X.X with the IP address of the Manager:
    host    database_name    user_name    X.X.X.X/32   md5
  7. Allow TCP/IP connections to the database. Edit the /var/lib/pgsql/data/postgresql.conf file and add the following line:
    listen_addresses='*'
    This example configures the postgresql service to listen for connections on all interfaces. You can specify an interface by giving its IP address.
  8. Open the default port used for PostgreSQL database connections, and save the updated firewall rules:
    # iptables -I INPUT 5 -p tcp --dport 5432 -j ACCEPT
    # service iptables save
  9. Restart the postgresql service:
    # service postgresql restart
Optionally, set up SSL to secure database connections using the instructions at http://www.postgresql.org/docs/8.4/static/ssl-tcp.html#SSL-FILE-USAGE.

Procedure 16.6. Migrating the Database

  1. Log in to the Red Hat Enterprise Virtualization Manager machine and stop the ovirt-engine service so that it does not interfere with the engine backup:
    # service ovirt-engine stop
  2. Create the engine database backup using the PostgreSQL pg_dump command:
    # su - postgres -c 'pg_dump -F c engine -f /tmp/engine.dump'
  3. Copy the backup file to the new database server. The target directory must allow write access for the postgres user:
    # scp /tmp/engine.dump root@new.database.server.com:/tmp/engine.dump
  4. Log in to the new database server and restore the database using the PostgreSQL pg_restore command:
    # su - postgres -c 'pg_restore -d engine /tmp/engine.dump'
  5. Log in to the Manager server and update the /etc/ovirt-engine/engine.conf.d/10-setup-database.conf and replace the localhost value of ENGINE_DB_HOST with the IP address of the new database server. If the engine name, role name, or password differ on the new database server, update those values in this file.
    If Data Warehouse is installed, these values also need to be updated in the /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf file.
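The edit in step 5 can also be scripted. A sketch using a temporary copy of the file; on a real Manager you would edit /etc/ovirt-engine/engine.conf.d/10-setup-database.conf itself, and 192.0.2.10 is a placeholder for the new database server's address:

```shell
# Work on a temporary stand-in for the real configuration file.
conf=/tmp/10-setup-database.conf
printf '%s\n' 'ENGINE_DB_HOST="localhost"' 'ENGINE_DB_PORT="5432"' > "$conf"

# Point ENGINE_DB_HOST at the new database server (placeholder address).
sed -i 's/^ENGINE_DB_HOST=.*/ENGINE_DB_HOST="192.0.2.10"/' "$conf"
grep '^ENGINE_DB_HOST' "$conf"
```

If the database name, role name, or password also changed, the same sed pattern can be applied to ENGINE_DB_DATABASE, ENGINE_DB_USER, and ENGINE_DB_PASSWORD.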
  6. Now that the database has been migrated, start the ovirt-engine service:
    # service ovirt-engine start

16.1.9. Migrating the Data Warehouse Database to a Remote Server Database

You can migrate the ovirt_engine_history database to a remote database server after the Red Hat Enterprise Virtualization Manager has been initially configured.
This task is split into two procedures. The first procedure, preparing the remote PostgreSQL database, is a necessary prerequisite for the migration itself and presumes that the server has Red Hat Enterprise Linux installed and has been configured with the appropriate subscriptions.
The second procedure, migrating the database, uses PostgreSQL pg_dump and pg_restore commands to handle the database backup and restore. As such, it is necessary to edit the /etc/ovirt-engine-reports/ovirt-engine-reports.conf.d/10-setup-database.conf and /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf files with the updated information. At a minimum, you must update the location of the new database server. If the database name, role name, or password are modified for the new database server, these values must also be updated in both 10-setup-database.conf files. This procedure uses the default ovirt_engine_history database settings to minimize modification of this file.

Procedure 16.7. Preparing the Remote PostgreSQL Database for use with the Red Hat Enterprise Virtualization Manager

  1. Log in to the remote database server and install the PostgreSQL server package:
    # yum install postgresql-server
  2. Initialize the PostgreSQL database, start the postgresql service, and ensure that this service starts on boot:
    # service postgresql initdb
    # service postgresql start
    # chkconfig postgresql on
  3. Connect to the psql command line interface as the postgres user:
    # su - postgres
    $ psql
  4. Create a user for the Manager to use when it writes to and reads from the database. The default user name for the ovirt_engine_history database is ovirt_engine_history:
    postgres=# create role user_name with login encrypted password 'password';

    Note

    The password for the ovirt_engine_history user is located in plain text in /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf. Any password can be used when creating the role on the new server; however, if a different password is used, this file, and the /etc/ovirt-engine-reports/ovirt-engine-reports.conf.d/10-setup-database.conf file, must be updated with the new password.
  5. Create a database in which to store the history of the Red Hat Enterprise Virtualization environment. The default database name is ovirt_engine_history, and the default user name is ovirt_engine_history:
    postgres=# create database database_name owner user_name template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';
    
  6. Ensure the database can be accessed remotely by enabling md5 client authentication. Edit the /var/lib/pgsql/data/pg_hba.conf file, and add the following line immediately underneath the line starting with local at the bottom of the file, replacing X.X.X.X with the IP address of the Manager:
    host    database_name    user_name    X.X.X.X/32   md5
  7. Allow TCP/IP connections to the database. Edit the /var/lib/pgsql/data/postgresql.conf file and add the following line:
    listen_addresses='*'
    This example configures the postgresql service to listen for connections on all interfaces. You can specify an interface by giving its IP address.
  8. Open the default port used for PostgreSQL database connections, and save the updated firewall rules:
    # iptables -I INPUT 5 -p tcp --dport 5432 -j ACCEPT
    # service iptables save
  9. Restart the postgresql service:
    # service postgresql restart
Optionally, set up SSL to secure database connections using the instructions at http://www.postgresql.org/docs/8.4/static/ssl-tcp.html#SSL-FILE-USAGE.

Procedure 16.8. Migrating the Database

  1. Log in to the Red Hat Enterprise Virtualization Manager machine and stop the ovirt-engine-dwhd service so that it does not interfere with the engine backup:
    # service ovirt-engine-dwhd stop
  2. Create the ovirt_engine_history database backup using the PostgreSQL pg_dump command:
    # su - postgres -c 'pg_dump -F c ovirt_engine_history -f /tmp/ovirt_engine_history.dump'
  3. Copy the backup file to the new database server. The target directory must allow write access for the postgres user:
    # scp /tmp/ovirt_engine_history.dump root@new.database.server.com:/tmp/ovirt_engine_history.dump
  4. Log in to the new database server and restore the database using the PostgreSQL pg_restore command:
    # su - postgres -c 'pg_restore -d ovirt_engine_history /tmp/ovirt_engine_history.dump'
  5. Log in to the Manager server and update the /etc/ovirt-engine-reports/ovirt-engine-reports.conf.d/10-setup-database.conf and /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf files, replacing the localhost value of DWH_DB_HOST with the IP address of the new database server. If the DWH_DB_DATABASE, DWH_DB_USER, or DWH_DB_PASSWORD differ on the new database server, update those values in these files.
    If the Manager database has also been migrated, these values must also be updated in the /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf file.
  6. Use a web browser to log in to the Reports portal at https://hostname.example.com/ovirt-engine-reports using the superuser user name. Click View → Repository to open the Folders side pane.
  7. In the Folders side pane, select RHEVM Reports → Resources → JDBC → Data Sources.
  8. Select oVirt History and click Edit.
  9. Update the Host (required) field with the IP address of the new database server and click Save.
  10. Now that the database has been migrated and the Reports portal connects to it, start the ovirt-engine-dwhd service:
    # service ovirt-engine-dwhd start

16.1.10. Migrating the Reports Database to a Remote Server Database

You can migrate the ovirt_engine_reports database to a remote database server after the Red Hat Enterprise Virtualization Manager has been initially configured.
This task is split into two procedures. The first procedure, preparing the remote PostgreSQL database, is a necessary prerequisite for the migration itself and presumes that the server has Red Hat Enterprise Linux installed and has been configured with the appropriate subscriptions.
The second procedure, migrating the database, uses PostgreSQL pg_dump and pg_restore commands to handle the database backup and restore. As such, it is necessary to edit the /var/lib/ovirt-engine-reports/build-conf/master.properties file with the updated information. At a minimum, you must update the location of the new database server. If the database name, role name, or password are modified for the new database server, these values must also be updated in the master.properties file. This procedure uses the default ovirt_engine_reports database settings to minimize modification of this file.

Procedure 16.9. Preparing the Remote PostgreSQL Database for use with the Red Hat Enterprise Virtualization Manager

  1. Log in to the remote database server and install the PostgreSQL server package:
    # yum install postgresql-server
  2. Initialize the PostgreSQL database, start the postgresql service, and ensure that this service starts on boot:
    # service postgresql initdb
    # service postgresql start
    # chkconfig postgresql on
  3. Connect to the psql command line interface as the postgres user:
    # su - postgres
    $ psql
  4. Create a user for the Manager to use when it writes to and reads from the database. The default user name for the ovirt_engine_reports database is ovirt_engine_reports:
    postgres=# create role user_name with login encrypted password 'password';

    Note

    The password for the ovirt_engine_reports user is located in plain text in /var/lib/ovirt-engine-reports/build-conf/master.properties. Any password can be used when creating the role on the new server; however, if a different password is used, this file must be updated with the new password.
  5. Create a database in which to store the history of the Red Hat Enterprise Virtualization environment. The default database name is ovirt_engine_reports, and the default user name is ovirt_engine_reports:
    postgres=# create database database_name owner user_name template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';
    
  6. Ensure the database can be accessed remotely by enabling md5 client authentication. Edit the /var/lib/pgsql/data/pg_hba.conf file, and add the following line immediately underneath the line starting with local at the bottom of the file, replacing X.X.X.X with the IP address of the Manager:
    host    database_name    user_name    X.X.X.X/32   md5
  7. Allow TCP/IP connections to the database. Edit the /var/lib/pgsql/data/postgresql.conf file and add the following line:
    listen_addresses='*'
    This example configures the postgresql service to listen for connections on all interfaces. You can specify an interface by giving its IP address.
  8. Open the default port used for PostgreSQL database connections, and save the updated firewall rules:
    # iptables -I INPUT 5 -p tcp --dport 5432 -j ACCEPT
    # service iptables save
  9. Restart the postgresql service:
    # service postgresql restart
Optionally, set up SSL to secure database connections using the instructions at http://www.postgresql.org/docs/8.4/static/ssl-tcp.html#SSL-FILE-USAGE.

Procedure 16.10. Migrating the Database

  1. Log in to the Red Hat Enterprise Virtualization Manager machine and stop the ovirt-engine-reportsd service so that it does not interfere with the engine backup:
    # service ovirt-engine-reportsd stop
  2. Create the ovirt_engine_reports database backup using the PostgreSQL pg_dump command:
    # su - postgres -c 'pg_dump -F c ovirt_engine_reports -f /tmp/ovirt_engine_reports.dump'
  3. Copy the backup file to the new database server. The target directory must allow write access for the postgres user:
    # scp /tmp/ovirt_engine_reports.dump root@new.database.server.com:/tmp/ovirt_engine_reports.dump
  4. Log in to the new database server and restore the database using the PostgreSQL pg_restore command:
    # su - postgres -c 'pg_restore -d ovirt_engine_reports /tmp/ovirt_engine_reports.dump'
  5. Log in to the Manager server and update /var/lib/ovirt-engine-reports/build-conf/master.properties, replacing the localhost value of dbHost with the IP address of the new database server. If the js.dbName, dbUsername, or dbPassword values for the ovirt_engine_reports database differ on the new database server, update those values in this file.
  6. Now that the database has been migrated, you must run engine-setup to rebuild reports with the new credentials:
    # engine-setup

16.2. Backing Up and Restoring Virtual Machines Using the Backup and Restore API

16.2.1. The Backup and Restore API

The backup and restore API is a collection of functions that allows you to perform full or file-level backup and restoration of virtual machines. The API combines several components of Red Hat Enterprise Virtualization, such as live snapshots and the REST API, to create and work with temporary volumes that can be attached to a virtual machine containing backup software provided by an independent software vendor.
For supported third-party backup vendors, consult the Red Hat Enterprise Virtualization Ecosystem at Red Hat Marketplace.

Note

For information on how to work with the REST API, see "The REST Application Programming Interface" in the Technical Guide in the Red Hat Enterprise Virtualization documentation suite.

16.2.2. Backing Up a Virtual Machine

Use the backup and restore API to back up a virtual machine. This procedure assumes you have two virtual machines: the virtual machine to back up, and a virtual machine on which the software for managing the backup is installed.

Procedure 16.11. Backing Up a Virtual Machine

  1. Using the REST API, create a snapshot of the virtual machine to back up:
    POST /api/vms/11111111-1111-1111-1111-111111111111/snapshots/ HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
    
    <snapshot>
        <description>BACKUP</description>
    </snapshot>

    Note

    When you take a snapshot of a virtual machine, a copy of the configuration data of the virtual machine, as it was at the time the snapshot was taken, is stored in the data attribute of the configuration attribute in initialization under the snapshot.

    Important

    You cannot take snapshots of disks that are marked as shareable or that are based on direct LUN disks.
  2. Retrieve the configuration data of the virtual machine from the data attribute under the snapshot:
    GET /api/vms/11111111-1111-1111-1111-111111111111/snapshots/11111111-1111-1111-1111-111111111111 HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
  3. Identify the disk ID and snapshot ID of the snapshot:
    GET /api/vms/11111111-1111-1111-1111-111111111111/snapshots/11111111-1111-1111-1111-111111111111/disks HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
  4. Attach the snapshot to the backup virtual machine and activate the disk:
    POST /api/vms/22222222-2222-2222-2222-222222222222/disks/ HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
    
    <disk id="11111111-1111-1111-1111-111111111111">
        <snapshot id="11111111-1111-1111-1111-111111111111"/>
        <active>true</active>
    </disk>
    
  5. Use the backup software on the backup virtual machine to back up the data on the snapshot disk.
  6. Detach the snapshot disk from the backup virtual machine:
    DELETE /api/vms/22222222-2222-2222-2222-222222222222/disks/11111111-1111-1111-1111-111111111111 HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
    
    <action>
        <detach>true</detach>
    </action>
    
  7. Optionally, delete the snapshot:
    DELETE /api/vms/11111111-1111-1111-1111-111111111111/snapshots/11111111-1111-1111-1111-111111111111 HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
You have backed up the state of a virtual machine at a fixed point in time using backup software installed on a separate virtual machine.
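The REST calls in this procedure can be driven from any HTTP client. A sketch of step 1 using curl; the Manager URL, credentials, CA certificate path, and virtual machine ID are all placeholders, and the request itself is shown commented out because it requires a live Manager:

```shell
# Placeholder endpoint and identifiers for illustration only.
API='https://manager.example.com/api'
VM='11111111-1111-1111-1111-111111111111'
BODY='<snapshot><description>BACKUP</description></snapshot>'

# Against a live Manager, this would issue the snapshot POST from step 1:
# curl --cacert ca.crt -u admin@internal:password \
#      -H 'Content-type: application/xml' -H 'Accept: application/xml' \
#      -X POST --data "$BODY" "$API/vms/$VM/snapshots/"

printf '%s\n' "$BODY"
```

The remaining steps (attach, detach, delete) follow the same pattern, substituting the request method, path, and body shown in the procedure above.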

16.2.3. Restoring a Virtual Machine

Restore a virtual machine that has been backed up using the backup and restore API. This procedure assumes you have a backup virtual machine on which the software used to manage the previous backup is installed.

Procedure 16.12. Restoring a Virtual Machine

  1. In the Administration Portal, create a floating disk on which to restore the backup. See Section 13.6.1, “Creating Floating Virtual Disks” for details on how to create a floating disk.
  2. Attach the disk to the backup virtual machine:
    POST /api/vms/22222222-2222-2222-2222-222222222222/disks/ HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
    
    <disk id="11111111-1111-1111-1111-111111111111">
    </disk>
    
  3. Use the backup software to restore the backup to the disk.
  4. Detach the disk from the backup virtual machine:
    DELETE /api/vms/22222222-2222-2222-2222-222222222222/disks/11111111-1111-1111-1111-111111111111 HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
    
    <action>
        <detach>true</detach>
    </action>
    
  5. Create a new virtual machine using the configuration data of the virtual machine being restored:
    POST /api/vms/ HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
    
    <vm>
        <cluster>
            <name>cluster_name</name>
        </cluster>
        <name>NAME</name>
        ...
    </vm>
  6. Attach the disk to the new virtual machine:
    POST /api/vms/33333333-3333-3333-3333-333333333333/disks/ HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
    
    <disk id="11111111-1111-1111-1111-111111111111">
    </disk>
    
You have restored a virtual machine using a backup that was created using the backup and restore API.

Chapter 17. Users and Roles

17.1. Introduction to Users

Red Hat Enterprise Virtualization uses external directory services for user authentication and information. All user accounts must be created in external directory servers; these users are called directory users. The exception is the admin user, which resides in the internal domain created during installation.
After a directory server is attached to Red Hat Enterprise Virtualization Manager, the users in the directory can be added to the Administration Portal, making them Red Hat Enterprise Virtualization Manager users. Red Hat Enterprise Virtualization Manager users can be assigned different roles and permissions according to the tasks they have to perform.
There are two types of Red Hat Enterprise Virtualization Manager users: end users, who use and manage virtual resources from the User Portal, and administrators, who maintain the system infrastructure using the Administration Portal. User roles and admin roles can be assigned to Red Hat Enterprise Virtualization Manager users for individual resources like virtual machines and hosts, or on a hierarchy of objects like clusters and data centers.

17.2. Directory Users

17.2.1. Directory Services Support in Red Hat Enterprise Virtualization

During installation, Red Hat Enterprise Virtualization Manager creates its own internal administration user, admin. This account is intended for use when initially configuring the environment, and for troubleshooting. To add other users to Red Hat Enterprise Virtualization you must attach a directory server to the Manager. For directory servers implemented prior to Red Hat Enterprise Virtualization 3.5, use the Domain Management Tool with the engine-manage-domains command to manage your domains. See The Domain Management Tool section of the Red Hat Enterprise Virtualization Administration Guide for more information. With Red Hat Enterprise Virtualization 3.5, use the new generic LDAP provider implementation. See the Configuring a Generic LDAP Provider section of the Red Hat Enterprise Virtualization Administration Guide for more information.
Once at least one directory server has been attached to the Manager, you can add users that exist in the directory server and assign roles to them using the Administration Portal. Users can be identified by their User Principal Name (UPN) of the form user@domain. Attachment of more than one directory server to the Manager is also supported.
The directory servers supported for use with Red Hat Enterprise Virtualization 3.5 are:
  • Active Directory
  • Identity Management (IdM)
  • Red Hat Directory Server 9 (RHDS 9)
  • OpenLDAP
You must ensure that the correct DNS records exist for your directory server. In particular you must ensure that the DNS records for the directory server include:
  • A valid pointer record (PTR) for the directory server's reverse lookup address.
  • A valid service record (SRV) for LDAP over TCP port 389.
  • A valid service record (SRV) for Kerberos over TCP port 88.
  • A valid service record (SRV) for Kerberos over UDP port 88.
If these records do not exist in DNS, you cannot add the domain to the Red Hat Enterprise Virtualization Manager configuration using engine-manage-domains.
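The required records can be checked with dig before attempting to add the domain. A sketch for a directory server in a hypothetical domain example.com; the domain name and the 192.0.2.10 address are placeholders, and the queries themselves are shown commented out because they require network access to your DNS server:

```shell
# Placeholder domain; substitute your directory server's domain.
DOMAIN='example.com'

# The SRV record names that must resolve for LDAP and Kerberos:
LDAP_SRV="_ldap._tcp.$DOMAIN"
KRB_TCP_SRV="_kerberos._tcp.$DOMAIN"
KRB_UDP_SRV="_kerberos._udp.$DOMAIN"

# With network access, each query should return at least one record:
#   dig +short SRV "$LDAP_SRV"
#   dig +short SRV "$KRB_TCP_SRV"
#   dig +short SRV "$KRB_UDP_SRV"
#   dig +short -x 192.0.2.10   # PTR record for the server's address

printf '%s\n' "$LDAP_SRV" "$KRB_TCP_SRV" "$KRB_UDP_SRV"
```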
For more detailed information on installing and configuring a supported directory server, see the vendor's documentation.

Important

A user with permissions to browse all users and groups must be created in the directory server specifically for use as the Red Hat Enterprise Virtualization administrative user. Do not use the administrative user for the directory server as the Red Hat Enterprise Virtualization administrative user.

Important

It is not possible to install Red Hat Enterprise Virtualization Manager (rhevm) and IdM (ipa-server) on the same system. IdM is incompatible with the mod_ssl package, which is required by Red Hat Enterprise Virtualization Manager.

Important

If you are using Active Directory as your directory server, and you want to use sysprep in the creation of Templates and Virtual Machines, then the Red Hat Enterprise Virtualization administrative user must be delegated control over the Domain to:
  • Join a computer to the domain
  • Modify the membership of a group
For information on creation of user accounts in Active Directory, see http://technet.microsoft.com/en-us/library/cc732336.aspx.
For information on delegation of control in Active Directory, see http://technet.microsoft.com/en-us/library/cc732524.aspx.

Note

Red Hat Enterprise Virtualization Manager uses Kerberos to authenticate with directory servers. The Red Hat Directory Server (RHDS) does not provide native support for Kerberos. If you are using RHDS as your directory server then you must ensure that the directory server is made a service within a valid Kerberos domain. To do this you must perform these steps while referring to the relevant directory server documentation:
  • Configure the memberOf plug-in for RHDS to allow group membership. In particular ensure that the value of the memberofgroupattr attribute of the memberOf plug-in is set to uniqueMember. In OpenLDAP, the memberOf functionality is not called a "plugin". It is called an "overlay" and requires no configuration after installation.
    Consult the Red Hat Directory Server 9.0 Plug-in Guide for more information on configuring the memberOf plug-in.
  • Define the directory server as a service of the form ldap/hostname@REALMNAME in the Kerberos realm. Replace hostname with the fully qualified domain name associated with the directory server and REALMNAME with the fully qualified Kerberos realm name. The Kerberos realm name must be specified in capital letters.
  • Generate a keytab file for the directory server in the Kerberos realm. The keytab file contains pairs of Kerberos principals and their associated encrypted keys. These keys allow the directory server to authenticate itself with the Kerberos realm.
    Consult the documentation for your Kerberos implementation for more information on generating a keytab file.
  • Install the keytab file on the directory server. Then configure RHDS to recognize the keytab file and accept Kerberos authentication using GSSAPI.
    Consult the Red Hat Directory Server 9.0 Administration Guide for more information on configuring RHDS to use an external keytab file.
  • Test the configuration on the directory server by using the kinit command to authenticate as a user defined in the Kerberos realm. Once authenticated, run the ldapsearch command against the directory server. Use the -Y GSSAPI parameter to ensure the use of Kerberos for authentication.
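The final verification step can be sketched as follows; the user, realm, hostname, and base DN are placeholders for your own values:

```shell
# Authenticate as a user defined in the Kerberos realm.
kinit jsmith@REALM.COM

# Query RHDS using Kerberos: -Y GSSAPI forces SASL/GSSAPI authentication.
ldapsearch -Y GSSAPI -H ldap://rhds.example.com \
    -b "dc=example,dc=com" "(uid=jsmith)"
```

A successful search confirms that the keytab is installed correctly and that RHDS accepts Kerberos authentication.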

17.2.2. Configuring a Generic LDAP Provider

A generic LDAP provider is available to configure directory services for authenticating and authorizing users. The new provider implementation uses the LDAP protocol to access the LDAP server and is fully customizable. To configure a generic LDAP provider, you must modify the configuration file for the authentication extension, the configuration file for the authorization extension, and the LDAP configuration file that the two extensions point to.

Note

For a complete example of setting up LDAP and Kerberos for single sign-on to the Administration Portal and the User Portal, see Section 17.2.3.1, “Configuring LDAP and Kerberos for Single Sign-on”.

Procedure 17.1. Configuring a Generic LDAP Provider

  1. On the Red Hat Enterprise Virtualization Manager, install the LDAP extension package:
    # yum install ovirt-engine-extension-aaa-ldap
  2. Copy the LDAP configuration template file into the /etc/ovirt-engine directory. Template files are available for Active Directory (ad) and other directory types (simple). This example uses the simple configuration template.
    # cp -r /usr/share/ovirt-engine-extension-aaa-ldap/examples/simple/. /etc/ovirt-engine
  3. Edit the LDAP property configuration file by uncommenting an LDAP server type and updating the domain and passwords fields.
    #  vi /etc/ovirt-engine/aaa/profile1.properties

    Example 17.1. Example profile: LDAP server section

    #
    # Select one
    #
    include = <openldap.properties>
    #include = <389ds.properties>
    #include = <rhds.properties>
    #include = <ipa.properties>
    #include = <iplanet.properties>
    #include = <rfc2307.properties>
    
    #
    # Server
    #
    vars.server = ldap1.company.com
    
    #
    # Search user and its password.
    #
    vars.user = uid=search,cn=users,cn=accounts,dc=company,dc=com
    vars.password = 123456
    
    pool.default.serverset.single.server = ${global:vars.server}
    pool.default.auth.simple.bindDN = ${global:vars.user}
    pool.default.auth.simple.password = ${global:vars.password}
    
    To use TLS or SSL protocol to interact with the LDAP server, obtain the LDAP server's root CA certificate, and use it to create a public keystore file. Uncomment the following lines and specify the full path to the public keystore file and the password to access the file.

    Note

    For more information on creating a public keystore file, see Section E.2, “Setting Up SSL or TLS Connections between the Manager and an LDAP Server”.

    Example 17.2. Example profile: keystore section

    # Create keystore, import certificate chain and uncomment
    # if using tls.
    pool.default.ssl.startTLS = true
    pool.default.ssl.truststore.file = /full/path/to/myrootca.jks
    pool.default.ssl.truststore.password = changeit
  4. Review the authentication configuration file. The profile name is visible to users on the Administration Portal and the User Portal login pages. The configuration profile location must match the LDAP configuration file location. All fields can be left as default.
    # vi /etc/ovirt-engine/extensions.d/profile1-authn.properties

    Example 17.3. Example authentication configuration file

    ovirt.engine.extension.name = profile1-authn
    ovirt.engine.extension.bindings.method = jbossmodule
    ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine-extensions.aaa.ldap
    ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engineextensions.aaa.ldap.AuthnExtension
    ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn
    ovirt.engine.aaa.authn.profile.name = profile1
    ovirt.engine.aaa.authn.authz.plugin = profile1-authz
    config.profile.file.1 = ../aaa/profile1.properties
  5. Review the authorization configuration file. The configuration profile location must match the LDAP configuration file location. All fields can be left as default.
    # vi /etc/ovirt-engine/extensions.d/profile1-authz.properties

    Example 17.4. Example authorization configuration file

    ovirt.engine.extension.name = profile1-authz
    ovirt.engine.extension.bindings.method = jbossmodule
    ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine-extensions.aaa.ldap
    ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engineextensions.aaa.ldap.AuthzExtension
    ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authz
    config.profile.file.1 = ../aaa/profile1.properties
  6. Ensure that the ownership and permissions of the configuration profile are appropriate:
    # chown ovirt:ovirt /etc/ovirt-engine/aaa/profile1.properties
    # chmod 600 /etc/ovirt-engine/aaa/profile1.properties
  7. Restart the engine service.
    # service ovirt-engine restart
  8. The profile1 profile you have created is now available on the Administration Portal and the User Portal login pages. To give the user accounts on the LDAP server appropriate permissions, for example to log in to the User Portal, see the Red Hat Enterprise Virtualization Manager User Tasks section of the Red Hat Enterprise Virtualization Administration Guide.
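One way to confirm that the new profile authenticates directory users is to query the REST API, where the user name takes the form user@profile. The user name and Manager URL below are placeholders:

```shell
# Log in to the REST API as an LDAP user on the new profile.
# -k skips certificate verification; use --cacert with the Manager's
# CA certificate in production.
curl -k -u 'jsmith@profile1:password' https://rhevm.example.com/api
```

A successful request returns the API entry point as XML; an authentication failure indicates a problem in the profile or LDAP configuration.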

Note

For more information, see the LDAP authentication and authorization extension README file at /usr/share/doc/ovirt-engine-extension-aaa-ldap-version.

17.2.3. Single Sign-On to the Administration and User Portal

With Red Hat Enterprise Virtualization 3.5, single sign-on to the Administration Portal and the User Portal is supported. With this feature enabled, users are able to log in to the User Portal or the Administration Portal using credentials obtained by single sign-on methods such as Kerberos. It is up to the administrator to configure which single sign-on method to use.
To enable single sign-on to the Administration Portal and the User Portal using Kerberos, see Section 17.2.3.1, “Configuring LDAP and Kerberos for Single Sign-on”.

Note

If single sign-on to the User Portal is enabled, single sign-on to virtual machines is not possible. Because the User Portal does not need to accept a password when single sign-on is enabled, the password cannot be delegated to sign in to virtual machines.

17.2.3.1. Configuring LDAP and Kerberos for Single Sign-on

This example assumes the following:
  • The existing Key Distribution Center (KDC) server uses the MIT version of Kerberos 5.
  • You have administrative rights to the KDC server.
  • The Kerberos client is installed on the Red Hat Enterprise Virtualization Manager and user machines.
  • The kadmin utility is used to create Kerberos service principals and keytab files.
This procedure involves the following components:

On the KDC server

  • Create a service principal and a keytab file for the Apache service on the Red Hat Enterprise Virtualization Manager.

On the Red Hat Enterprise Virtualization Manager

  • Install the Manager's authentication and authorization extension packages and the Apache Kerberos authentication module.
  • Configure the extension files.

Procedure 17.2. Configuring Kerberos for the Apache Service

  1. On the KDC server, use the kadmin utility to create a service principal for the Apache service on the Red Hat Enterprise Virtualization Manager. The service principal is a reference ID to the KDC for the Apache service.
    # kadmin
    kadmin> addprinc -randkey HTTP/fqdn-of-rhevm@REALM.COM
  2. Generate a keytab file for the Apache service. The keytab file stores the shared secret key.
    kadmin> ktadd -k /tmp/http.keytab HTTP/fqdn-of-rhevm@REALM.COM
    kadmin> quit
  3. Copy the keytab file from the KDC server to the Red Hat Enterprise Virtualization Manager:
    # scp /tmp/http.keytab root@rhevm.example.com:/etc/httpd
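Before continuing, you can confirm that the copied keytab contains the expected service principal; a quick check:

```shell
# List the principals stored in the keytab; the output should include
# an HTTP/fqdn-of-rhevm@REALM.COM entry.
klist -k /etc/httpd/http.keytab
```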

Procedure 17.3. Configuring Single Sign-on to the User Portal or Administration Portal

  1. On the Red Hat Enterprise Virtualization Manager, ensure that the ownership and permissions for the keytab are appropriate:
    # chown apache /etc/httpd/http.keytab
    # chmod 400 /etc/httpd/http.keytab
  2. Install the authentication extension package, LDAP extension package, and the mod_auth_kerb authentication module:
    # yum install ovirt-engine-extension-aaa-misc ovirt-engine-extension-aaa-ldap mod_auth_kerb
  3. Copy the SSO configuration template file into the /etc/ovirt-engine directory. Template files are available for Active Directory (ad-sso) and other directory types (simple-sso). This example uses the simple SSO configuration template.
    # cp -r /usr/share/ovirt-engine-extension-aaa-ldap/examples/simple-sso/. /etc/ovirt-engine
  4. Create a symbolic link for the /etc/httpd/conf.d directory for Apache to use the SSO configuration files:
    # ln -s /etc/ovirt-engine/aaa/ovirt-sso.conf /etc/httpd/conf.d
  5. Edit the authentication method file for Apache to use Kerberos for authentication:
    # vi /etc/ovirt-engine/aaa/ovirt-sso.conf

    Example 17.5. Example authentication method file

    <LocationMatch ^(/ovirt-engine/(webadmin|userportal|api)|/api)>
        RewriteEngine on
        RewriteCond %{LA-U:REMOTE_USER} ^(.*)$
        RewriteRule ^(.*)$ - [L,P,E=REMOTE_USER:%1]
        RequestHeader set X-Remote-User %{REMOTE_USER}s
    
        AuthType Kerberos
        AuthName "Kerberos Login"
        Krb5Keytab /etc/httpd/http.keytab
        KrbAuthRealms REALM.COM
        Require valid-user
    </LocationMatch>
  6. Edit the LDAP property configuration file by uncommenting an LDAP server type and updating the domain and passwords fields:
    #  vi /etc/ovirt-engine/aaa/profile1.properties

    Example 17.6. Example profile: LDAP server section

    #
    # Select one
    #
    include = <openldap.properties>
    #include = <389ds.properties>
    #include = <rhds.properties>
    #include = <ipa.properties>
    #include = <iplanet.properties>
    #include = <rfc2307.properties>
    
    #
    # Server
    #
    vars.server = ldap1.company.com
    
    #
    # Search user and its password.
    #
    vars.user = uid=search,cn=users,cn=accounts,dc=company,dc=com
    vars.password = 123456
    
    pool.default.serverset.single.server = ${global:vars.server}
    pool.default.auth.simple.bindDN = ${global:vars.user}
    pool.default.auth.simple.password = ${global:vars.password}
    
    To use TLS or SSL protocol to interact with the LDAP server, obtain the LDAP server's root CA certificate, and use it to create a public keystore file. Uncomment the following lines and specify the full path to the public keystore file and the password to access the file.

    Note

    For more information on creating a public keystore file, see Section E.2, “Setting Up SSL or TLS Connections between the Manager and an LDAP Server”.

    Example 17.7. Example profile: keystore section

    # Create keystore, import certificate chain and uncomment
    # if using ssl/tls.
    pool.default.ssl.startTLS = true
    pool.default.ssl.truststore.file = /full/path/to/myrootca.jks
    pool.default.ssl.truststore.password = changeit
  7. Review the authentication configuration file. The profile name is visible to users on the Administration Portal and the User Portal login pages. The configuration profile location must match the LDAP configuration file location. All fields can be left as default.
    # vi /etc/ovirt-engine/extensions.d/profile1-http-authn.properties

    Example 17.8. Example authentication configuration file

    ovirt.engine.extension.name = profile1-http-authn
    ovirt.engine.extension.bindings.method = jbossmodule
    ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine-extensions.aaa.ldap
    ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engineextensions.aaa.ldap.AuthnExtension
    ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn
    ovirt.engine.aaa.authn.profile.name = profile1-http
    ovirt.engine.aaa.authn.authz.plugin = profile1-authz
    ovirt.engine.aaa.authn.mapping.plugin = http-mapping
    config.artifact.name = HEADER
    config.artifact.arg = X-Remote-User
  8. Review the authorization configuration file. The configuration profile location must match the LDAP configuration file location. All fields can be left as default.
    #  vi /etc/ovirt-engine/extensions.d/profile1-authz.properties

    Example 17.9. Example authorization configuration file

    ovirt.engine.extension.name = profile1-authz
    ovirt.engine.extension.bindings.method = jbossmodule
    ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine-extensions.aaa.ldap
    ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engineextensions.aaa.ldap.AuthzExtension
    ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authz
    config.profile.file.1 = ../aaa/profile1.properties
  9. Ensure that the ownership and permissions of the configuration profile are appropriate:
    # chown ovirt:ovirt /etc/ovirt-engine/aaa/profile1.properties
    # chmod 600 /etc/ovirt-engine/aaa/profile1.properties
  10. Restart the Apache service and the engine service:
    # service httpd restart
    # service ovirt-engine restart
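After the services restart, you can verify Kerberos single sign-on from a client machine holding a valid ticket; a sketch, assuming curl was built with GSS-Negotiate support and the host names are placeholders:

```shell
# Obtain a ticket, then let curl negotiate Kerberos with Apache.
kinit jsmith@REALM.COM
curl -k --negotiate -u : https://rhevm.example.com/ovirt-engine/api
```

A successful response confirms that Apache accepted the Kerberos ticket and forwarded the authenticated user name to the engine.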

17.3. User Authorization

17.3.1. User Authorization Model

Red Hat Enterprise Virtualization applies authorization controls based on a combination of three components:
  • The user performing the action
  • The type of action being performed
  • The object on which the action is being performed

17.3.2. User Actions

For an action to be successfully performed, the user must have the appropriate permission for the object being acted upon. Each type of action corresponds to a permission. There are many different permissions in the system, so for simplicity they are summarized in the following figure.

Figure 17.1. Actions

Important

Some actions are performed on more than one object. For example, copying a template to another storage domain will impact both the template and the destination storage domain. The user performing an action must have appropriate permissions for all objects the action impacts.

17.4. Red Hat Enterprise Virtualization Manager User Tasks

17.4.1. Adding Users

Summary
Users in Red Hat Enterprise Virtualization must be added from an external directory service before they can be assigned roles and permissions.

Procedure 17.4. Adding Users to Red Hat Enterprise Virtualization

  1. Click the Users tab to display the list of authorized users.
  2. Click Add. The Add Users and Groups window opens.
  3. In the Search drop-down menu, select the appropriate domain. Enter a name or part of a name in the search text field, and click GO. Alternatively, click GO to view a list of all users and groups.
  4. Select the check boxes for the appropriate users or groups.
  5. Click OK.
Result
The added user displays on the Users tab.

17.4.2. Viewing User Information

Summary
You can view detailed information on each user in the Users tab.

Procedure 17.5. Viewing User Information

  1. Click the Users tab to display the list of authorized users.
  2. Select the user, or perform a search if the user is not visible on the results list.
  3. The details pane displays for the selected user. The General tab shows general information, such as the domain name, email address, and status of the user.
  4. The other tabs allow you to view groups, permissions, quotas, and events for the user.
    For example, to view the groups to which the user belongs, click the Directory Groups tab.
Result
You have viewed domain, permissions, quota, group and event information for a user.

17.4.3. Viewing User Permissions on Resources

Summary
Users can be assigned permissions on specific resources or a hierarchy of resources. You can view the assigned users and their permissions on each resource.

Procedure 17.6. Viewing User Permissions on Resources

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
Result
You have viewed the assigned users and their roles for a selected resource.

17.4.4. Removing Users

Summary
When a user account is no longer required, remove it from Red Hat Enterprise Virtualization.

Procedure 17.7. Removing Users

  1. Click the Users tab to display the list of authorized users.
  2. Select the user to be removed. Ensure the user is not running any virtual machines.
  3. Click the Remove button. A message displays prompting you to confirm the removal. Click OK.
Result
The user is removed from Red Hat Enterprise Virtualization, but not from the external directory.

17.4.5. Resetting the Password for the Internal Administrative User

To change the password of the internal administrative user (admin@internal), you must use the engine configuration tool in interactive mode.

Procedure 17.8. Resetting the Password for the Internal Administrative User

  1. Log in to the machine on which the Red Hat Enterprise Virtualization Manager is installed.
  2. Change the password in interactive mode:
    # engine-config -s AdminPassword=interactive
  3. Apply the changes:
    # service ovirt-engine restart
You have changed the password for the internal administrative user, and must use this password the next time you use this account to log in to the Administration Portal, User Portal or REST API.

Chapter 18. Quotas and Service Level Agreement Policy

18.1. Introduction to Quota

Quota is a resource limitation tool provided with Red Hat Enterprise Virtualization. Quota may be thought of as a layer of limitations on top of the layer of limitations set by User Permissions.
Quota is a data center object.
Quota allows administrators of Red Hat Enterprise Virtualization environments to limit user access to memory, CPU, and storage. Quota defines the memory resources and storage resources an administrator can assign users. As a result users may draw on only the resources assigned to them. When the quota resources are exhausted, Red Hat Enterprise Virtualization does not permit further user actions.
There are two different kinds of Quota:

Table 18.1. The Two Different Kinds of Quota

Quota type Definition
Run-time Quota This quota limits the consumption of runtime resources, like CPU and memory.
Storage Quota This quota limits the amount of storage available.
Quota, like SELinux, has three modes:

Table 18.2. Quota Modes

Quota Mode Function
Enforced This mode puts into effect the Quota that you have set in audit mode, limiting resources to the group or user affected by the quota.
Audit This mode allows you to change Quota settings. Choose this mode to increase or decrease the amount of runtime quota and the amount of storage quota available to users affected by it.
Disabled This mode turns off the runtime and storage limitations defined by the quota.
When a user attempts to run a virtual machine, the specifications of the virtual machine are compared to the storage allowance and the runtime allowance set in the applicable quota.
If starting a virtual machine causes the aggregated resources of all running virtual machines covered by a quota to exceed the allowance defined in the quota, then the Manager refuses to run the virtual machine.
When a user creates a new disk, the requested disk size is added to the aggregated disk usage of all the other disks covered by the applicable quota. If the new disk takes the total aggregated disk usage above the amount allowed by the quota, disk creation fails.
Quota allows for resource sharing of the same hardware. It supports hard and soft thresholds. Administrators can use a quota to set thresholds on resources. These thresholds appear, from the user's point of view, as 100% usage of that resource. To prevent failures when the user unexpectedly exceeds this threshold, the interface supports a "grace" amount by which the threshold can be briefly exceeded. Exceeding the threshold results in a warning sent to the user.

Important

Quota imposes limitations upon the running of virtual machines. Ignoring these limitations is likely to result in a situation in which you cannot use your virtual machines and virtual disks.
When quota is running in enforced mode, virtual machines and disks that do not have quotas assigned cannot be used.
To power on a virtual machine, a quota must be assigned to that virtual machine.
To create a snapshot of a virtual machine, the disk associated with the virtual machine must have a quota assigned.
When creating a template from a virtual machine, you are prompted to select the quota that you want the template to consume. This allows you to set the template (and all future machines created from the template) to consume a different quota than the virtual machine and disk from which the template is generated.

18.2. Shared Quota and Individually Defined Quota

Users with SuperUser permissions can create quotas for individual users or quotas for groups.
Group quotas can be set for Active Directory users. If a group of ten users is given a quota of 1 TB of storage and one of the ten users fills the entire terabyte, then the entire group will be in excess of the quota and none of the ten users will be able to use any of the storage associated with their group.
An individual user's quota is set for only the individual. Once the individual user has used up all of his or her storage or runtime quota, the user will be in excess of the quota and the user will no longer be able to use the storage associated with his or her quota.

18.3. Quota Accounting

When a quota is assigned to a consumer or a resource, each action by that consumer or on the resource involving storage, vCPU, or memory results in quota consumption or quota release.
Since the quota acts as an upper bound that limits the user's access to resources, the quota calculations may differ from the actual current use of the user. The quota is calculated for the maximum growth potential and not the current usage.

Example 18.1. Accounting example

A user runs a virtual machine with 1 vCPU and 1024 MB memory. The action consumes 1 vCPU and 1024 MB of the quota assigned to that user. When the virtual machine is stopped 1 vCPU and 1024 MB of RAM are released back to the quota assigned to that user. Run-time quota consumption is accounted for only during the actual run-time of the consumer.
A user creates a thin provisioned virtual disk of 10 GB. The actual disk usage may indicate only 3 GB of that disk are actually in use. The quota consumption, however, would be 10 GB, the maximum growth potential of that disk.
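The thin provisioning case reduces to simple arithmetic; the quota size below is a hypothetical value:

```shell
# Storage quota accounting charges the maximum growth potential
# (virtual size), not the actual usage, of a thin provisioned disk.
QUOTA_GB=50          # hypothetical storage quota
DISK_VIRTUAL_GB=10   # virtual size of the thin provisioned disk
DISK_ACTUAL_GB=3     # blocks actually written so far

CONSUMED_GB=$DISK_VIRTUAL_GB                # 10 GB charged, not 3 GB
REMAINING_GB=$(( QUOTA_GB - CONSUMED_GB ))  # 40 GB left under the quota
echo "consumed ${CONSUMED_GB} GB, remaining ${REMAINING_GB} GB"
```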

18.4. Enabling and Changing a Quota Mode in a Data Center

This procedure enables or changes the quota mode in a data center. You must select a quota mode before you can define quotas. You must be logged in to the Administration Portal to follow the steps of this procedure.
Use Audit mode to test your quota to make sure it works as you expect it to. You do not need to have your quota in Audit mode to create or change a quota.

Procedure 18.1. Enabling and Changing Quota in a Data Center

  1. Click the Data Centers tab in the Navigation Pane.
  2. From the list of data centers displayed in the Navigation Pane, choose the data center whose quota policy you plan to edit.
  3. Click Edit in the top left of the Navigation Pane.
    An Edit Data Center window opens.
  4. In the Quota Mode drop-down, change the quota mode to Enforced.
  5. Click OK.
You have now enabled a quota mode at the Data Center level. If you set the quota mode to Audit during testing, then you must change it to Enforced in order for the quota settings to take effect.

18.5. Creating a New Quota Policy

Summary
You have enabled quota mode, in either Audit or Enforced mode. You want to define a quota policy to manage resource usage in your data center.

Procedure 18.2. Creating a New Quota Policy

  1. In tree mode, select the data center. The Quota tab appears in the Navigation Pane.
  2. Click the Quota tab in the Navigation Pane.
  3. Click Add in the Navigation Pane. The New Quota window opens.
  4. Fill in the Name field with a meaningful name.
    Fill in the Description field with a meaningful description.
  5. In the Memory & CPU section of the New Quota window, use the green slider to set Cluster Threshold.
  6. In the Memory & CPU section of the New Quota window, use the blue slider to set Cluster Grace.
  7. Click Edit on the bottom-right of the Memory & CPU field. An Edit Quota window opens.
  8. Under the Memory field, select either the Unlimited radio button (to allow limitless use of Memory resources in the cluster), or select the limit to radio button to set the amount of memory set by this quota. If you select the limit to radio button, input a memory quota in megabytes (MB) in the MB field.
  9. Under the CPU field, select either the Unlimited radio button or the limit to radio button to set the amount of CPU set by this quota. If you select the limit to radio button, input a number of vCPUs in the vCpus field.
  10. Click OK in the Edit Quota window.
  11. In the Storage section of the New Quota window, use the green slider to set Storage Threshold.
  12. In the Storage section of the New Quota window, use the blue slider to set Storage Grace.
  13. Click Edit in the Storage field. The Edit Quota window opens.
  14. Under the Storage Quota field, select either the Unlimited radio button (to allow limitless use of Storage) or the limit to radio button to set the amount of storage to which quota will limit users. If you select the limit to radio button, input a storage quota size in gigabytes (GB) in the GB field.
  15. Click OK in the Edit Quota window. You are returned to the New Quota window.
  16. Click OK in the New Quota window.
Result
You have created a new quota policy.

18.6. Explanation of Quota Threshold Settings

Table 18.3. Quota thresholds and grace

Setting Definition
Cluster Threshold The amount of cluster resources available per data center.
Cluster Grace The amount of the cluster available for the data center after exhausting the data center's Cluster Threshold.
Storage Threshold The amount of storage resources available per data center.
Storage Grace The amount of storage available for the data center after exhausting the data center's Storage Threshold.
If a quota is set to 100 GB with 20% Grace, then consumers are blocked from using storage after they use 120 GB of storage. If the same quota has a Threshold set at 70%, then consumers receive a warning when they exceed 70 GB of storage consumption (but they remain able to consume storage until they reach 120 GB of storage consumption.) Both "Threshold" and "Grace" are set relative to the quota. "Threshold" may be thought of as the "soft limit", and exceeding it generates a warning. "Grace" may be thought of as the "hard limit", and exceeding it makes it impossible to consume any more storage resources.
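The worked figures above follow directly from the quota size; a sketch of the arithmetic:

```shell
# A 100 GB quota with a 70% Threshold (soft limit) and 20% Grace (hard limit).
QUOTA_GB=100
THRESHOLD_PCT=70
GRACE_PCT=20

WARN_AT_GB=$(( QUOTA_GB * THRESHOLD_PCT / 100 ))       # 70 GB: warning issued
BLOCK_AT_GB=$(( QUOTA_GB * (100 + GRACE_PCT) / 100 ))  # 120 GB: further use blocked
echo "warn at ${WARN_AT_GB} GB, block at ${BLOCK_AT_GB} GB"
```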

18.7. Assigning a Quota to an Object

Summary
This procedure explains how to associate a virtual machine with a quota.

Procedure 18.3. Assigning a Quota to a Virtual Machine

  1. In the navigation pane, select the Virtual Machine to which you plan to add a quota.
  2. Click Edit. The Edit Virtual Machine window appears.
  3. Select the quota you want the virtual machine to consume. Use the Quota drop-down to do this.
  4. Click OK.
Result
You have designated a quota for the virtual machine you selected.
Summary
This procedure explains how to associate a virtual machine disk with a quota.

Procedure 18.4. Assigning a Quota to a Virtual Disk

  1. In the navigation pane, select the virtual machine to whose disk(s) you plan to add a quota.
  2. In the details pane, select the disk you plan to associate with a quota.
  3. Click Edit. The Edit Virtual Disk window appears.
  4. Select the quota you want the virtual disk to consume.
  5. Click OK.
Result
You have designated a quota for the virtual disk you selected.

Important

Quota must be selected for all objects associated with a virtual machine, in order for that virtual machine to work. If you fail to select a quota for the objects associated with a virtual machine, the virtual machine will not work. The error that the Manager throws in this situation is generic, which makes it difficult to know if the error was thrown because you did not associate a quota with all of the objects associated with the virtual machine. It is not possible to take snapshots of virtual machines that do not have an assigned quota. It is not possible to create templates of virtual machines whose virtual disks do not have assigned quotas.

18.8. Using Quota to Limit Resources by User

Summary
This procedure describes how to use quotas to limit the resources a user has access to.

Procedure 18.5. Assigning a User to a Quota

  1. In the tree, click the Data Center with the quota you want to associate with a User.
  2. Click the Quota tab in the navigation pane.
  3. Select the target quota in the list in the navigation pane.
  4. Click the Consumers tab in the details pane.
  5. Click Add at the top of the details pane.
  6. In the Search field, type the name of the user you want to associate with the quota.
  7. Click GO.
  8. Select the check box at the left side of the row containing the name of the target user.
  9. Click OK in the bottom right of the Assign Users and Groups to Quota window.
Result
After a short time, the user will appear in the Consumers tab of the details pane.

18.9. Editing Quotas

Summary
This procedure describes how to change existing quotas.

Procedure 18.6. Editing Quotas

  1. On the tree pane, click the data center whose quota you want to edit.
  2. Click the Quota tab in the navigation pane.
  3. Click the name of the quota you want to edit.
  4. Click Edit in the navigation pane. The Edit Quota window opens.
  5. If required, enter a meaningful name in the Name field.
  6. If required, enter a meaningful description in the Description field.
  7. Select either the All Clusters radio button or the Specific Clusters radio button. Move the Cluster Threshold and Cluster Grace sliders to the desired positions on the Memory & CPU slider.
  8. Select either the All Storage Domains radio button or the Specific Storage Domains radio button. Move the Storage Threshold and Storage Grace sliders to the desired positions on the Storage slider.
  9. Click OK in the Edit Quota window to confirm the new quota settings.
Result
You have changed an existing quota.

18.10. Removing Quotas

Summary
This procedure describes how to remove quotas.

Procedure 18.7. Removing Quotas

  1. On the tree pane, click the data center whose quota you want to remove.
  2. Click the Quota tab in the navigation pane.
  3. Click the name of the quota you want to remove.
  4. Click Remove at the top of the navigation pane, under the row of tabs.
  5. Click OK in the Remove Quota(s) window to confirm the removal of this quota.
Result
You have removed a quota.

18.11. Service Level Agreement Policy Enforcement

Summary
This procedure describes how to set service level agreement CPU features.

Procedure 18.8. Setting a Service Level Agreement CPU Policy

  1. Select New VM in the Navigation Pane.
  2. Select Show Advanced Options.
  3. Select the Resource Allocation tab.

    Figure 18.1. Service Level Agreement Policy Enforcement - CPU Allocation Menu

  4. Specify CPU Shares. Possible options are Low, Medium, High, Custom, and Disabled. Virtual machines set to High receive twice as many shares as Medium, and virtual machines set to Medium receive twice as many shares as virtual machines set to Low. Disabled instructs VDSM to use an older algorithm for determining share dispensation; usually the number of shares dispensed under these conditions is 1020.
Result
You have set a service level agreement CPU policy. The CPU consumption of users is now governed by the policy you have set.
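The weighting described in step 4 can be sketched with shell arithmetic. This is a minimal illustration: only the doubling relationships between Low, Medium, and High come from the policy description above; the base value of 512 shares for Low is an assumption made for the example.

```shell
# Doubling relationships from the policy description; the base value of 512
# for Low is an assumption made for illustration only.
LOW=512
MEDIUM=$((LOW * 2))   # Medium receives twice as many shares as Low
HIGH=$((MEDIUM * 2))  # High receives twice as many shares as Medium
echo "Low=${LOW} Medium=${MEDIUM} High=${HIGH}"
```

With these assumed values, the block prints Low=512 Medium=1024 High=2048, showing how a High virtual machine is weighted four times as heavily as a Low one during CPU contention.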

Chapter 19. Event Notifications

19.1. Configuring Event Notifications in the Administration Portal

Summary
The Red Hat Enterprise Virtualization Manager can notify designated users via email when specific events occur in the environment that the Red Hat Enterprise Virtualization Manager manages. To use this functionality, you must set up a mail transfer agent to deliver messages. Only email notifications can be configured through the Administration Portal. SNMP traps must be configured on the Manager machine.

Procedure 19.1. Configuring Event Notifications

  1. Ensure you have set up the mail transfer agent with the appropriate variables.
  2. Use the Users resource tab, tree mode, or the search function to find and select the user to whom event notifications will be sent.
  3. Click the Event Notifier tab in the details pane to list the events for which the user will be notified. This list is blank if you have not configured any event notifications for that user.
  4. Click Manage Events to open the Add Event Notification window.

    Figure 19.1. The Add Events Notification Window

  5. Use the Expand All button or the subject-specific expansion buttons to view the events.
  6. Select the appropriate check boxes.
  7. Enter an email address in the Mail Recipient field.
  8. Click OK to save changes and close the window.
  9. Add and start the ovirt-engine-notifier service on the Red Hat Enterprise Virtualization Manager. This activates the changes you have made:
    # chkconfig --add ovirt-engine-notifier
    # chkconfig ovirt-engine-notifier on
    # service ovirt-engine-notifier restart
Result
The specified user now receives emails based on events in the Red Hat Enterprise Virtualization environment. The selected events display on the Event Notifier tab for that user.

19.2. Canceling Event Notifications in the Administration Portal

Summary
A user has configured some unnecessary email notifications and wants them canceled.

Procedure 19.2. Canceling Event Notifications

  1. In the Users tab, select the user or the user group.
  2. Select the Event Notifier tab in the details pane to list events for which the user receives email notifications.
  3. Click Manage Events to open the Add Event Notification window.
  4. Use the Expand All button, or the subject-specific expansion buttons, to view the events.
  5. Clear the appropriate check boxes to remove notification for that event.
  6. Click OK to save changes and close the window.
Result
You have canceled unnecessary event notifications for the user.

19.3. Parameters for Event Notifications in ovirt-engine-notifier.conf

The event notifier configuration file can be found in /usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf.

Table 19.1. ovirt-engine-notifier.conf variables

Variable Name Default Remarks
SENSITIVE_KEYS none A comma-separated list of keys that will not be logged.
JBOSS_HOME /usr/share/jbossas The location of the JBoss application server used by the Manager.
ENGINE_ETC /etc/ovirt-engine The location of the etc directory used by the Manager.
ENGINE_LOG /var/log/ovirt-engine The location of the logs directory used by the Manager.
ENGINE_USR /usr/share/ovirt-engine The location of the usr directory used by the Manager.
ENGINE_JAVA_MODULEPATH ${ENGINE_USR}/modules The file path to which the JBoss modules are appended.
NOTIFIER_DEBUG_ADDRESS none The address of a machine that can be used to perform remote debugging of the Java virtual machine that the notifier uses.
NOTIFIER_STOP_TIME 30 The time, in seconds, after which the service will time out.
NOTIFIER_STOP_INTERVAL 1 The time, in seconds, by which the timeout counter will be incremented.
INTERVAL_IN_SECONDS 120 The interval in seconds between instances of dispatching messages to subscribers.
IDLE_INTERVAL 30 The interval, in seconds, at which low-priority tasks are performed.
DAYS_TO_KEEP_HISTORY 0 This variable sets the number of days dispatched events will be preserved in the history table. If this variable is not set, events remain on the history table indefinitely.
FAILED_QUERIES_NOTIFICATION_THRESHOLD 30 The number of failed queries after which a notification email is sent. A notification email is sent after the first failure to fetch notifications, and then once every time the number of failures specified by this variable is reached. If you specify a value of 0 or 1, an email will be sent with each failure.
FAILED_QUERIES_NOTIFICATION_RECIPIENTS none The email addresses of the recipients to which notification emails will be sent. Email addresses must be separated by a comma. This entry has been deprecated by the FILTER variable.
DAYS_TO_SEND_ON_STARTUP 0 The number of days of old events that will be processed and sent when the notifier starts.
FILTER exclude:* The algorithm used to determine the triggers for and recipients of email notifications. The value for this variable comprises a combination of include or exclude, the event, and the recipient. For example, include:VDC_START(smtp:mail@example.com) ${FILTER}
MAIL_SERVER none The SMTP mail server address. Required.
MAIL_PORT 25 The port used for communication. Possible values include 25 for plain SMTP, 465 for SMTP with SSL, and 587 for SMTP with TLS.
MAIL_USER none If SSL is enabled to authenticate the user, then this variable must be set. This variable is also used to specify the "from" user address when the MAIL_FROM variable is not set. Some mail servers do not support this functionality. The address is in RFC822 format.
SENSITIVE_KEYS ${SENSITIVE_KEYS},MAIL_PASSWORD Appends MAIL_PASSWORD to the list of keys that will not be logged, so that the mail password is masked in log output.
MAIL_PASSWORD none Required to authenticate the user if the mail server requires authentication or if SSL or TLS is enabled.
MAIL_SMTP_ENCRYPTION none The type of encryption to be used in communication. Possible values are none, ssl, tls.
HTML_MESSAGE_FORMAT false The mail server sends messages in HTML format if this variable is set to true.
MAIL_FROM none This variable specifies a sender address in RFC822 format, if supported by the mail server.
MAIL_REPLY_TO none This variable specifies reply-to addresses in RFC822 format on sent mail, if supported by the mail server.
MAIL_SEND_INTERVAL 1 The number of SMTP messages to be sent for each IDLE_INTERVAL.
MAIL_RETRIES 4 The number of times to attempt to send an email before failing.
SNMP_MANAGERS none The IP addresses or fully qualified domain names of machines that will act as the SNMP managers. Entries must be separated by a space and can contain a port number. For example, manager1.example.com manager2.example.com:164
SNMP_COMMUNITY public The default SNMP community.
SNMP_OID 1.3.6.1.4.1.2312.13.1.1 The default trap object identifiers for alerts. All trap types are sent, appended with event information, to the SNMP manager when this OID is defined. Note that changing the default trap prevents generated traps from complying with the Manager's management information base.
ENGINE_INTERVAL_IN_SECONDS 300 The interval, in seconds, between monitoring the machine on which the Manager is installed. The interval is measured from the time the monitoring is complete.
ENGINE_MONITOR_RETRIES 3 The number of times the notifier attempts to monitor the status of the machine on which the Manager is installed in a given interval after a failure.
ENGINE_TIMEOUT_IN_SECONDS 30 The time, in seconds, to wait before the notifier attempts to monitor the status of the machine on which the Manager is installed in a given interval after a failure.
IS_HTTPS_PROTOCOL false This entry must be set to true if JBoss is being run in secured mode.
SSL_PROTOCOL TLS The protocol used by JBoss configuration connector when SSL is enabled.
SSL_IGNORE_CERTIFICATE_ERRORS false This value must be set to true if JBoss is running in secure mode and SSL errors are to be ignored.
SSL_IGNORE_HOST_VERIFICATION false This value must be set to true if JBoss is running in secure mode and host name verification is to be ignored.
REPEAT_NON_RESPONSIVE_NOTIFICATION false This variable specifies whether repeated failure messages will be sent to subscribers if the machine on which the Manager is installed is non-responsive.
ENGINE_PID /var/lib/ovirt-engine/ovirt-engine.pid The path and file name of the PID of the Manager.
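The mail-related variables in the table above are typically set together in an override file under /etc/ovirt-engine/notifier/notifier.conf.d/ rather than by editing ovirt-engine-notifier.conf directly (the same override approach recommended for SNMP in the next section). The following sketch writes such a file to a temporary directory so it can be inspected safely; the file name 10-email.conf, the mail server, and the addresses are illustrative assumptions, not values from this guide.

```shell
# Sketch of an email override file. The file name, mail server, and
# addresses are placeholders; on a real Manager the file would live in
# /etc/ovirt-engine/notifier/notifier.conf.d/.
conf_dir=$(mktemp -d)
cat > "${conf_dir}/10-email.conf" <<'EOF'
MAIL_SERVER=smtp.example.com
MAIL_PORT=25
MAIL_FROM=rhev-notifier@example.com
# Send VDC_START events to ops@example.com and keep the existing rules;
# ${FILTER} is expanded by the notifier when the file is loaded.
FILTER="include:VDC_START(smtp:ops@example.com) ${FILTER}"
EOF
grep '^MAIL_SERVER' "${conf_dir}/10-email.conf"
```

Note that the heredoc is quoted ('EOF') so that ${FILTER} is written literally; the notifier, not the shell creating the file, expands it when the configuration is loaded.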

19.4. Configuring the Red Hat Enterprise Virtualization Manager to Send SNMP Traps

Configure your Red Hat Enterprise Virtualization Manager to send Simple Network Management Protocol traps to one or more external SNMP managers. SNMP traps contain system event information; they are used to monitor your Red Hat Enterprise Virtualization environment. The number and type of traps sent to the SNMP manager can be defined within the Red Hat Enterprise Virtualization Manager.
This procedure assumes that you have configured one or more external SNMP managers to receive traps, and that you have the following details:
  • The IP addresses or fully qualified domain names of machines that will act as SNMP managers. Optionally, determine the port through which the manager receives trap notifications; by default, this is UDP port 162.
  • The SNMP community. Multiple SNMP managers can belong to a single community. Management systems and agents can communicate only if they are within the same community. The default community is public.
  • The trap object identifier for alerts. The Red Hat Enterprise Virtualization Manager provides a default OID of 1.3.6.1.4.1.2312.13.1.1. All trap types are sent, appended with event information, to the SNMP manager when this OID is defined. Note that changing the default trap prevents generated traps from complying with the Manager's management information base.

Note

The Red Hat Enterprise Virtualization Manager provides management information bases at /usr/share/doc/ovirt-engine/mibs/OVIRT-MIB.txt and /usr/share/doc/ovirt-engine/mibs/REDHAT-MIB.txt. Load the MIBs in your SNMP manager before proceeding.
Default SNMP configuration values exist on the Manager in the events notification daemon configuration file /usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf. The values outlined in the following procedure are based on the default or example values provided in that file. It is recommended that you define an override file, rather than edit the ovirt-engine-notifier.conf file, to persist your configuration options across system changes, like upgrades.

Procedure 19.3. Configuring SNMP Traps on the Manager

  1. On the Manager, create the SNMP configuration file:
    # touch /etc/ovirt-engine/notifier/notifier.conf.d/20-snmp.conf
  2. Specify the SNMP manager(s), the SNMP community, and the OID in the following format:
    SNMP_MANAGERS="manager1.example.com manager2.example.com:162"
    SNMP_COMMUNITY=public
    SNMP_OID=1.3.6.1.4.1.2312.13.1.1
    
  3. Define which events to send to the SNMP manager:
    FILTER="include:*(snmp:) ${FILTER}"
    FILTER="include:AUDIT_LOG_MSG(snmp:) ${FILTER}"
    FILTER="exclude:AUDIT_LOG_MSG include:*(snmp:) ${FILTER}"
    FILTER="exclude:*"
    
    The first line in the example above sends all alerts to the default SNMP profile. The second line sends alerts for AUDIT_LOG_MSG to the default SNMP profile. The third line sends alerts for everything but AUDIT_LOG_MSG to the default SNMP profile. The fourth line is the default filter defined in ovirt-engine-notifier.conf; if you do not disable this filter or apply overriding filters, no notifications will be sent. A full list of audit log messages is available in /usr/share/doc/ovirt-engine/AuditLogMessages.properties. Alternatively, filter results within your SNMP manager.
  4. Save the file.
  5. Start the ovirt-engine-notifier service, and ensure that this service starts on boot:
    # service ovirt-engine-notifier start
    # chkconfig ovirt-engine-notifier on
Check your SNMP manager to ensure that traps are being received.
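The FILTER lines in step 3 compose by prepending: each assignment places a new rule in front of whatever ${FILTER} already expands to. A minimal sketch of that composition in plain shell, using the default value and one of the rules from the step above:

```shell
# Each assignment prepends a rule to the current FILTER value.
FILTER="exclude:*"                               # default from ovirt-engine-notifier.conf
FILTER="include:AUDIT_LOG_MSG(snmp:) ${FILTER}"  # rule added by the override file
echo "${FILTER}"
```

The resulting value is "include:AUDIT_LOG_MSG(snmp:) exclude:*", so the include rule for AUDIT_LOG_MSG sits ahead of the default exclude-everything rule.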

Note

SNMP_MANAGERS, MAIL_SERVER, or both must be properly defined in ovirt-engine-notifier.conf or in an override file in order for the notifier service to run.

Chapter 20. Utilities

20.1. The oVirt Engine Rename Tool

20.1.1. The oVirt Engine Rename Tool

When the engine-setup command is run in a clean environment, the command generates a number of certificates and keys that use the fully qualified domain name of the Manager supplied during the setup process. If the fully qualified domain name of the Manager must be changed later on (for example, due to migration of the machine hosting the Manager to a different domain), the records of the fully qualified domain name must be updated to reflect the new name. The ovirt-engine-rename command automates this task.
The ovirt-engine-rename command updates records of the fully qualified domain name of the Manager in the following locations:
  • /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf
  • /etc/ovirt-engine/imageuploader.conf.d/10-engine-setup.conf
  • /etc/ovirt-engine/isouploader.conf.d/10-engine-setup.conf
  • /etc/ovirt-engine/logcollector.conf.d/10-engine-setup.conf
  • /etc/pki/ovirt-engine/cert.conf
  • /etc/pki/ovirt-engine/cert.template
  • /etc/pki/ovirt-engine/certs/apache.cer
  • /etc/pki/ovirt-engine/keys/apache.key.nopass
  • /etc/pki/ovirt-engine/keys/apache.p12

Warning

While the ovirt-engine-rename command creates a new certificate for the web server on which the Manager runs, it does not affect the certificate for the engine or the certificate authority. Due to this, there is some risk involved in using the ovirt-engine-rename command, particularly in environments that have been upgraded from Red Hat Enterprise Virtualization version 3.2 and earlier. Therefore, changing the fully qualified domain name of the Manager by running engine-cleanup and engine-setup is recommended where possible.

20.1.2. Syntax for the oVirt Engine Rename Command

The basic syntax for the ovirt-engine-rename command is:
# /usr/share/ovirt-engine/setup/bin/ovirt-engine-rename
The command also accepts the following options:
--newname=[new name]
Allows you to specify the new fully qualified domain name for the Manager without user interaction.
--log=[file]
Allows you to specify the path and name of a file into which logs of the rename operation are to be written.
--config=[file]
Allows you to specify the path and file name of a configuration file to load into the rename operation.
--config-append=[file]
Allows you to specify the path and file name of a configuration file to append to the rename operation. This option can be used to specify the path and file name of an answer file.
--generate-answer=[file]
Allows you to specify the path and file name of a file into which your answers to and the values changed by the ovirt-engine-rename command are recorded.

20.1.3. Using the oVirt Engine Rename Tool

Summary
You can use the ovirt-engine-rename command to update records of the fully qualified domain name of the Manager.

Procedure 20.1. Renaming the Red Hat Enterprise Virtualization Manager

  1. Prepare all DNS and other relevant records for the new fully qualified domain name.
  2. Update the DHCP server configuration if DHCP is used.
  3. Update the host name on the Manager.
  4. Run the following command:
    # /usr/share/ovirt-engine/setup/bin/ovirt-engine-rename
  5. When prompted, press Enter to stop the engine service:
    During execution engine service will be stopped (OK, Cancel) [OK]:
  6. When prompted, enter the new fully qualified domain name for the Manager:
    New fully qualified server name:[new name]
Result
The ovirt-engine-rename command updates records of the fully qualified domain name of the Manager.
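Step 1 of the procedure can be spot-checked from the Manager before the tool is run. A minimal sketch, using localhost as a stand-in for the new fully qualified domain name (substitute the real name in practice):

```shell
# "localhost" stands in for the new fully qualified domain name.
NEW_FQDN=localhost
if getent hosts "${NEW_FQDN}" > /dev/null; then
    echo "${NEW_FQDN} resolves"
else
    echo "${NEW_FQDN} does not resolve; fix DNS before renaming" >&2
fi
```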

20.2. The Domain Management Tool

20.2.1. The Domain Management Tool

Red Hat Enterprise Virtualization Manager authenticates users using directory services. To add users to Red Hat Enterprise Virtualization Manager you must first use the internal admin user to add the directory service that the users must be authenticated against. You add and remove directory services domains using the included domain management tool, engine-manage-domains.
The engine-manage-domains command is only accessible on the machine on which Red Hat Enterprise Virtualization Manager is installed. The engine-manage-domains command must be run as the root user.

Important

With Red Hat Enterprise Virtualization 3.5, a generic LDAP provider is available to configure directory services for authenticating and authorizing users. To benefit from the latest features, use the new generic LDAP provider implementation when configuring new directory services. See Section 17.2.2, “Configuring a Generic LDAP Provider” for more information.
It is recommended that Red Hat Enterprise Virtualization environments with directory services configured by the engine-manage-domains tool continue to use that existing implementation.

20.2.2. Syntax for the Domain Management Tool

The usage syntax is:
engine-manage-domains ACTION [options]
Available actions are:
add
Add a domain to Red Hat Enterprise Virtualization Manager's directory services configuration.
edit
Edit a domain in Red Hat Enterprise Virtualization Manager's directory services configuration.
delete
Delete a domain from Red Hat Enterprise Virtualization Manager's directory services configuration.
validate
Validate Red Hat Enterprise Virtualization Manager's directory services configuration. This command attempts to authenticate each domain in the configuration using the configured user name and password.
list
List Red Hat Enterprise Virtualization Manager's current directory services configuration.
These options can be combined with the actions on the command line:
--add-permissions
Specifies that the domain user will be given the SuperUser role in Red Hat Enterprise Virtualization Manager. By default, if the --add-permissions parameter is not specified, the SuperUser role is not assigned to the domain user. The --add-permissions option is optional. It is only valid when used in combination with the add and edit actions.
--change-password-msg=[MSG]
Specifies the message that is returned to the user at login when their password has expired. This allows you to direct users to a specific URL (must begin with http or https) where their password can be changed. The --change-password-msg option is optional, and is only valid when used in combination with the add and edit actions.
--config-file=[FILE]
Specifies an alternate configuration file that the command must use. The --config-file parameter is always optional.
--domain=[DOMAIN]
The domain on which the action will be performed. The --domain parameter is mandatory for the add, edit, and delete actions.
--force
Forces the command to skip confirmation of delete operations.
--ldap-servers=[SERVERS]
A comma delimited list of LDAP servers to be set to the domain.
--log-file=[LOG_FILE]
The name of a file into which to write logs for an operation.
--log-level=[LOG_LEVEL]
The log level: DEBUG (the default), INFO, WARN, or ERROR. These options are case insensitive.
--log4j-config=[LOG4J_FILE]
A log4j.xml file from which to read logging configuration information.
--provider=[PROVIDER]
The LDAP provider type of the directory server for the domain. Valid values are:
  • ad - Microsoft Active Directory.
  • ipa - Identity Management (IdM).
  • rhds - Red Hat Directory Server. Red Hat Directory Server does not come with Kerberos. Red Hat Enterprise Virtualization requires Kerberos authentication. Red Hat Directory Server must be running as a service inside a Kerberos domain to provide directory services to the Manager.

    Note

    To use Red Hat Directory Server as your directory server, you must have the memberof plug-in installed in Red Hat Directory Server. To use the memberof plug-in, your users must be inetuser.
  • itds - IBM Tivoli Directory Server.
  • oldap - OpenLDAP.
--report
When used in conjunction with the validate action, this command outputs a report of all validation errors encountered.
--resolve-kdc
Resolve key distribution center servers using DNS.
--user=[USER]
Specifies the domain user to use. The --user parameter is mandatory for add, and optional for edit.
--password-file=[FILE]
Specifies that the domain user's password is on the first line of the provided file. This option, or the --interactive option, must be used to provide the password for use with the add action.
For further details on usage, see the engine-manage-domains command's help output:
# engine-manage-domains --help

20.2.3. Using the Domain Management Tool

The following examples demonstrate the use of the engine-manage-domains command to perform basic manipulation of the Red Hat Enterprise Virtualization Manager domain configuration.

20.2.4. Listing Domains in Configuration

The engine-manage-domains command lists the directory services domains defined in the Red Hat Enterprise Virtualization Manager configuration. This command prints the domain, the user name in User Principal Name (UPN) format, and whether the domain is local or remote for each configuration entry.

Example 20.1. engine-manage-domains List Action

# engine-manage-domains list
Domain: directory.demo.redhat.com
    User name: admin@DIRECTORY.DEMO.REDHAT.COM
    This domain is a remote domain.

20.2.5. Adding Domains to Configuration

In this example, the engine-manage-domains command is used to add the IdM domain directory.demo.redhat.com to the Red Hat Enterprise Virtualization Manager configuration. The configuration is set to use the admin user when querying the domain; the password is provided interactively.

Example 20.2. engine-manage-domains Add Action

# engine-manage-domains add --domain=directory.demo.redhat.com --provider=IPA --user=admin
loaded template krb5.conf file
setting default_tkt_enctypes
setting realms
setting domain realm
success
User guid is: 80b71bae-98a1-11e0-8f20-525400866c73
Successfully added domain directory.demo.redhat.com. oVirt Engine restart is required in order for the changes to take place (service ovirt-engine restart).

20.2.6. Editing a Domain in the Configuration

In this example, the engine-manage-domains command is used to edit the directory.demo.redhat.com domain in the Red Hat Enterprise Virtualization Manager configuration. The configuration is updated to use the admin user when querying this domain; the password is provided interactively.

Example 20.3. engine-manage-domains Edit Action

# engine-manage-domains edit --domain=directory.demo.redhat.com --user=admin --interactive
loaded template krb5.conf file
setting default_tkt_enctypes
setting realms
setting domain realm
success
User guid is: 80b71bae-98a1-11e0-8f20-525400866c73
Successfully edited domain directory.demo.redhat.com. oVirt Engine restart is required in order for the changes to take place (service ovirt-engine restart).

20.2.7. Validating Domain Configuration

In this example, the engine-manage-domains command is used to validate the Red Hat Enterprise Virtualization Manager configuration. The command attempts to log into each listed domain with the credentials provided in the configuration. The domain is reported as valid if the attempt is successful.

Example 20.4. engine-manage-domains Validate Action

# engine-manage-domains validate
User guid is: 80b71bae-98a1-11e0-8f20-525400866c73
Domain directory.demo.redhat.com is valid.

20.2.8. Deleting a Domain from the Configuration

In this example, the engine-manage-domains command is used to remove the directory.demo.redhat.com domain from the Red Hat Enterprise Virtualization Manager configuration. Users defined in the removed domain will no longer be able to authenticate with the Red Hat Enterprise Virtualization Manager. The entries for the affected users will remain defined in the Red Hat Enterprise Virtualization Manager until they are explicitly removed.
The domain being removed in this example is the last one listed in the Red Hat Enterprise Virtualization Manager configuration. A warning is displayed highlighting this fact and that only the admin user from the internal domain will be able to log in until another domain is added.

Example 20.5. engine-manage-domains Delete Action

# engine-manage-domains delete --domain=directory.demo.redhat.com
WARNING: Domain directory.demo.redhat.com is the last domain in the configuration. After deleting it you will have to either add another domain, or to use the internal admin user in order to login.
Successfully deleted domain directory.demo.redhat.com. Please remove all users and groups of this domain using the Administration portal or the API.

20.3. The Engine Configuration Tool

20.3.1. The Engine Configuration Tool

The engine configuration tool is a command-line utility for configuring global settings for your Red Hat Enterprise Virtualization environment. The tool interacts with a list of key-value mappings that are stored in the engine database, and allows you to retrieve and set the value of individual keys, and retrieve a list of all available configuration keys and values. Furthermore, different values can be stored for each configuration level in your Red Hat Enterprise Virtualization environment.

Note

Neither the Red Hat Enterprise Virtualization Manager nor Red Hat JBoss Enterprise Application Platform needs to be running to retrieve or set the value of a configuration key. Because the configuration key-value mappings are stored in the engine database, they can be updated while the postgresql service is running. Changes are then applied when the ovirt-engine service is restarted.

20.3.2. Syntax for the engine-config Command

You can run the engine configuration tool from the machine on which the Red Hat Enterprise Virtualization Manager is installed. For detailed information on usage, print the help output for the command:
# engine-config --help

Common tasks

List available configuration keys
# engine-config --list
List available configuration values
# engine-config --all
Retrieve value of configuration key
# engine-config --get [KEY_NAME]
Replace [KEY_NAME] with the name of the key whose value you want to retrieve. Use the --cver parameter to specify the configuration version of the value to be retrieved. If no version is provided, values for all existing versions are returned.
Set value of configuration key
# engine-config --set [KEY_NAME]=[KEY_VALUE] --cver=[VERSION]
Replace [KEY_NAME] with the name of the specific key to set, and replace [KEY_VALUE] with the value to be set. You must specify the [VERSION] in environments with more than one configuration version.
Restart the ovirt-engine service to load changes
The ovirt-engine service needs to be restarted for your changes to take effect.
# service ovirt-engine restart

20.3.3. Resetting the Password for the Internal Administrative User

To change the password of the internal administrative user (admin@internal), you must use the engine configuration tool in interactive mode.

Procedure 20.2. Resetting the Password for the Internal Administrative User

  1. Log in to the machine on which the Red Hat Enterprise Virtualization Manager is installed.
  2. Change the password in interactive mode:
    # engine-config -s AdminPassword=interactive
  3. Apply the changes:
    # service ovirt-engine restart
You have changed the password for the internal administrative user, and must use this password the next time you use this account to log in to the Administration Portal, User Portal, or REST API.

20.4. The Image Uploader Tool

20.4.1. The Image Uploader Tool

The engine-image-uploader command allows you to list export storage domains and to upload virtual machine images in Open Virtualization Format (OVF) to an export storage domain, where they are automatically recognized by the Red Hat Enterprise Virtualization Manager.

Note

The image uploader only supports gzip-compressed OVF files created by Red Hat Enterprise Virtualization.
The archive contains images and master directories in the following format:
|-- images
|   |-- [Image Group UUID]
|        |--- [Image UUID (this is the disk image)]
|        |--- [Image UUID (this is the disk image)].meta
|-- master
|   |---vms
|       |--- [UUID]
|             |--- [UUID].ovf
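The layout above can be reproduced with placeholder UUIDs to see what the uploader expects inside a gzip-compressed archive. The UUID values here are fabricated stand-ins; a real archive uses the actual image-group, image, and virtual machine UUIDs.

```shell
# Build the directory layout shown above with placeholder UUIDs, then pack
# and list it. Real archives use actual image-group, image, and VM UUIDs.
work=$(mktemp -d)
group=11111111-1111-1111-1111-111111111111
image=22222222-2222-2222-2222-222222222222
vm=33333333-3333-3333-3333-333333333333
mkdir -p "${work}/images/${group}" "${work}/master/vms/${vm}"
touch "${work}/images/${group}/${image}" "${work}/images/${group}/${image}.meta"
touch "${work}/master/vms/${vm}/${vm}.ovf"
tar -C "${work}" -czf "${work}/example.ovf" images master
tar -tzf "${work}/example.ovf"
```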

20.4.2. Syntax for the engine-image-uploader Command

The basic syntax for the image uploader command is:
engine-image-uploader [options] list
engine-image-uploader [options] upload [file] [file] ... [file]
The image uploader command supports two actions: list and upload.
  • The list action lists the export storage domains to which images can be uploaded.
  • The upload action uploads images to the specified export storage domain.
You must specify one of the above actions when you use the image uploader command. Moreover, you must specify at least one local file to use the upload action.
There are several parameters to further refine the engine-image-uploader command. You can set defaults for any of these parameters in the /etc/ovirt-engine/imageuploader.conf file.

General Options

-h, --help
Displays information on how to use the image uploader command.
--conf-file=[PATH]
Sets [PATH] as the configuration file the command will use. The default is /etc/ovirt-engine/imageuploader.conf.
--log-file=[PATH]
Sets [PATH] as the specific file name the command will use to write log output. The default is /var/log/ovirt-engine/ovirt-image-uploader/ovirt-image-uploader-[date].log.
--cert-file=[PATH]
Sets [PATH] as the certificate for validating the engine. The default is /etc/pki/ovirt-engine/ca.pem.
--insecure
Specifies that no attempt will be made to verify the engine.
--quiet
Sets quiet mode, reducing console output to a minimum.
-v, --verbose
Sets verbose mode, providing more console output.
-f, --force
Force mode is necessary when the source file being uploaded has the same file name as an existing file in the destination export domain. This option forces the existing file to be overwritten.

Red Hat Enterprise Virtualization Manager Options

-u [USER], --user=[USER]
Specifies the user whose credentials will be used to execute the command. The [USER] is specified in the format [username]@[domain]. The user must exist in the specified domain and be known to the Red Hat Enterprise Virtualization Manager.
-r [FQDN], --engine=[FQDN]
Specifies the IP address or fully qualified domain name of the Red Hat Enterprise Virtualization Manager to which the images will be uploaded. It is assumed that the image uploader is being run from the same machine on which the Red Hat Enterprise Virtualization Manager is installed. The default value is localhost:443.

Export Storage Domain Options

The following options specify the export domain to which the images will be uploaded. These options cannot be used together; you must use either the -e option or the -n option.
-e [EXPORT_DOMAIN], --export-domain=[EXPORT_DOMAIN]
Sets the storage domain EXPORT_DOMAIN as the destination for uploads.
-n [NFSSERVER], --nfs-server=[NFSSERVER]
Sets the NFS path [NFSSERVER] as the destination for uploads.

Import Options

The following options allow you to customize which attributes of the images being uploaded are included when the image is uploaded to the export domain.
-i, --ovf-id
Specifies that the UUID of the image will not be updated. By default, the command generates a new UUID for images that are uploaded. This ensures there is no conflict between the ID of the image being uploaded and the IDs of the images already in the environment.
-d, --disk-instance-id
Specifies that the instance ID for each disk in the image will not be renamed. By default, the command generates new UUIDs for disks in images that are uploaded. This ensures there are no conflicts between the disks on the image being uploaded and the disks already in the environment.
-m, --mac-address
Specifies that network components will not be removed from the image. By default, the command removes network interface cards from the image being uploaded to prevent conflicts with network cards on other virtual machines already in the environment. If you do not use this option, you can use the Administration Portal to add network interface cards to newly imported images, and the Manager will ensure there are no MAC address conflicts.
-N [NEW_IMAGE_NAME], --name=[NEW_IMAGE_NAME]
Specifies a new name for the image being uploaded.

20.4.3. Creating an OVF Archive That is Compatible With the Image Uploader

Summary
You can create files that can be uploaded using the engine-image-uploader tool.

Procedure 20.3. Creating an OVF Archive That is Compatible With the Image Uploader

  1. Use the Manager to create an empty export domain. An empty export domain makes it easy to see which directory contains your virtual machine.
  2. Export your virtual machine to the empty export domain you just created.
  3. Log in to the storage server that serves as the export domain, find the root of the NFS share, and change to the subdirectory under that mount point. Because you started with a new export domain, there is only one directory under the exported directory. It contains the images/ and master/ directories.
  4. Run the tar -zcvf my.ovf images/ master/ command to create the tar/gzip OVF archive.
  5. Anyone you give the resulting OVF file to (in this example, called my.ovf) can import it to Red Hat Enterprise Virtualization Manager using the engine-image-uploader command.
Result
You have created a compressed OVF image file that can be distributed. Anyone you give it to can use the engine-image-uploader command to upload your image into their Red Hat Enterprise Virtualization environment.

20.4.4. Basic engine-image-uploader Usage Examples

The following is an example of how to use the engine uploader command to list export storage domains:

Example 20.6. Listing export storage domains using the image uploader

# engine-image-uploader list
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
Export Storage Domain Name | Datacenter  | Export Domain Status
myexportdom               | Myowndc    | active
The following is an example of how to upload an Open Virtualization Format (OVF) file:

Example 20.7. Uploading a file using the image uploader

# engine-image-uploader -e myexportdom upload myrhel6.ovf
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):

20.5. The USB Filter Editor

20.5.1. Installing the USB Filter Editor

Summary
The USB Filter Editor is a Windows tool used to configure the usbfilter.txt policy file. The policy rules defined in this file allow or deny the pass-through of specific USB devices from client machines to virtual machines managed using the Red Hat Enterprise Virtualization Manager. The policy file resides on the Red Hat Enterprise Virtualization Manager in the following location:

/etc/ovirt-engine/usbfilter.txt
Changes to USB filter policies do not take effect unless the ovirt-engine service on the Red Hat Enterprise Virtualization Manager server is restarted.
Download the USBFilterEditor.msi file from the Content Delivery Network: https://rhn.redhat.com/rhn/software/channel/downloads/Download.do?cid=20703. The file works with Red Hat Enterprise Virtualization 3.0, 3.1, 3.2, 3.3, 3.4, and 3.5.

Procedure 20.4. Installing the USB Filter Editor

  1. On a Windows machine, launch the USBFilterEditor.msi installer obtained from the Content Delivery Network.
  2. Follow the steps of the installation wizard. Unless otherwise specified, the USB Filter Editor will be installed by default in either C:\Program Files\RedHat\USB Filter Editor or C:\Program Files(x86)\RedHat\USB Filter Editor depending on your version of Windows.
  3. A USB Filter Editor shortcut icon is created on your desktop.

Important

Use a Secure Copy (SCP) client to import and export filter policies from the Red Hat Enterprise Virtualization Manager. A Secure Copy tool for Windows machines is WinSCP (http://winscp.net).
Result
The default USB device policy provides virtual machines with basic access to USB devices; update the policy to allow the use of additional USB devices.

20.5.2. The USB Filter Editor Interface

  • Double-click the USB Filter Editor shortcut icon on your desktop.
    Red Hat USB Filter Editor

    Figure 20.1. Red Hat USB Filter Editor

The Red Hat USB Filter Editor interface displays the Class, Vendor, Product, Revision, and Action for each USB device. Permitted USB devices are set to Allow in the Action column; prohibited devices are set to Block.

Table 20.1. USB Editor Fields

Name Description
Class Type of USB device; for example, printers, mass storage controllers.
Vendor The manufacturer of the selected type of device.
Product The specific USB device model.
Revision The revision of the product.
Action Allow or block the specified device.
The USB device policy rules are processed in their listed order. Use the Up and Down buttons to move devices higher or lower in the list. The universal Block rule needs to remain as the lowest entry to ensure all USB devices are denied unless explicitly allowed in the USB Filter Editor.
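The ordering behavior described above can be sketched as a first-match scan over the rule list. The two-column file format below is a simplified stand-in, not the real usbfilter.txt syntax (which the editor manages for you); the point is only that the first matching rule wins, which is why the universal Block rule must stay last.

```shell
# Sketch of first-match rule processing. The two-column format is an
# illustrative stand-in for usbfilter.txt, not its real syntax. The
# first rule that matches wins, so the universal BLOCK line must be
# the last entry in the list.
cat > /tmp/usb-rules-demo.txt <<'EOF'
ALLOW vendor=08e6
BLOCK vendor=*
EOF
usb_action() {
  local vendor="$1" action pattern
  while read -r action pattern; do
    # unquoted $pattern lets the shell treat it as a glob, so the
    # trailing * in the BLOCK rule matches any vendor
    case "vendor=$vendor" in
      $pattern) echo "$action"; return ;;
    esac
  done < /tmp/usb-rules-demo.txt
}
usb_action 08e6   # matches the explicit ALLOW rule
usb_action 1d6b   # falls through to the universal BLOCK rule
```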

20.5.3. Adding a USB Policy

Summary
Add a USB policy to the USB Filter Editor.
Double-click the USB Filter Editor shortcut icon on your desktop to open the editor.

Procedure 20.5. Adding a USB Policy

  1. Click the Add button. The Edit USB Criteria window opens:
    Edit USB Criteria

    Figure 20.2. Edit USB Criteria

  2. Use the USB Class, Vendor ID, Product ID, and Revision check boxes and lists to specify the device.
    Click the Allow button to permit virtual machines to use the USB device; click the Block button to prevent virtual machines from using the USB device.
    Click OK to add the selected filter rule to the list and close the window.

    Example 20.8. Adding a Device

    The following is an example of how to add USB Class Smartcard, device EP-1427X-2 Ethernet Adapter, from manufacturer Acer Communications & Multimedia to the list of allowed devices.
  3. Click File → Save to save the changes.
Result
You have added a USB policy to the USB Filter Editor. USB filter policies need to be exported to the Red Hat Enterprise Virtualization Manager to take effect.

20.5.4. Removing a USB Policy

Summary
Remove a USB policy from the USB Filter Editor.
Double-click the USB Filter Editor shortcut icon on your desktop to open the editor.

Procedure 20.6. Removing a USB Policy

  1. Select the policy to be removed.
    Select USB Policy

    Figure 20.3. Select USB Policy

  2. Click Remove. A message displays prompting you to confirm that you want to remove the policy.
    Edit USB Criteria

    Figure 20.4. Edit USB Criteria

  3. Click Yes to confirm that you want to remove the policy.
  4. Click File → Save to save the changes.
Result
You have removed a USB policy from the USB Filter Editor. USB filter policies need to be exported to the Red Hat Enterprise Virtualization Manager to take effect.

20.5.5. Searching for USB Device Policies

Summary
Search for attached USB devices to either allow or block them in the USB Filter Editor.
Double-click the USB Filter Editor shortcut icon on your desktop to open the editor.

Procedure 20.7. Searching for USB Device Policies

  1. Click Search. The Attached USB Devices window displays a list of all the attached devices.
    Attached USB Devices

    Figure 20.5. Attached USB Devices

  2. Select the device and click Allow or Block as appropriate. Double-click the selected device to close the window. A policy rule for the device is added to the list.
  3. Use the Up and Down buttons to change the position of the new policy rule in the list.
  4. Click File → Save to save the changes.
Result
You have searched the attached USB devices. USB filter policies need to be exported to the Red Hat Enterprise Virtualization Manager to take effect.

20.5.6. Exporting a USB Policy

Summary
USB device policy changes need to be exported and uploaded to the Red Hat Enterprise Virtualization Manager for the updated policy to take effect. Upload the policy and restart the ovirt-engine service.
Double-click the USB Filter Editor shortcut icon on your desktop to open the editor.

Procedure 20.8. Exporting a USB Policy

  1. Click Export; the Save As window opens.
  2. Save the file with a file name of usbfilter.txt.
  3. Using a Secure Copy client, such as WinSCP, upload the usbfilter.txt file to the server running Red Hat Enterprise Virtualization Manager. The file must be placed in the following directory on the server:

    /etc/ovirt-engine/
  4. As the root user on the server running Red Hat Enterprise Virtualization Manager, restart the ovirt-engine service.
    # service ovirt-engine restart
Result
The USB device policy will now be implemented on virtual machines running in the Red Hat Enterprise Virtualization environment.

20.5.7. Importing a USB Policy

Summary
An existing USB device policy must be downloaded and imported into the USB Filter Editor before you can edit it.

Procedure 20.9. Importing a USB Policy

  1. Using a Secure Copy client, such as WinSCP, download the usbfilter.txt file from the server running Red Hat Enterprise Virtualization Manager. The file is located in the following directory on the server:

    /etc/ovirt-engine/
  2. Double-click the USB Filter Editor shortcut icon on your desktop to open the editor.
  3. Click Import to open the Open window.
  4. Open the usbfilter.txt file that was downloaded from the server.
Result
You are able to edit the USB device policy in the USB Filter Editor.

20.6. The Log Collector Tool

20.6.1. Log Collector

A log collection tool is included in the Red Hat Enterprise Virtualization Manager. This allows you to easily collect relevant logs from across the Red Hat Enterprise Virtualization environment when requesting support.
The log collection command is engine-log-collector. You are required to log in as the root user and provide the administration credentials for the Red Hat Enterprise Virtualization environment. The engine-log-collector -h command displays usage information, including a list of all valid options for the engine-log-collector command.

20.6.2. Syntax for engine-log-collector Command

The basic syntax for the log collector command is:
engine-log-collector [options] list [all, clusters, datacenters]
engine-log-collector [options] collect
The two supported modes of operation are list and collect.
  • The list parameter lists either the hosts, clusters, or data centers attached to the Red Hat Enterprise Virtualization Manager. You are able to filter the log collection based on the listed objects.
  • The collect parameter performs log collection from the Red Hat Enterprise Virtualization Manager. The collected logs are placed in an archive file under the /tmp/logcollector directory. The engine-log-collector command assigns each log a specific file name.
Unless another parameter is specified, the default action is to list the available hosts together with the data center and cluster to which they belong. You will be prompted to enter user names and passwords to retrieve certain logs.
There are numerous parameters to further refine the engine-log-collector command.

General options

--version
Displays the version number of the command in use and returns to prompt.
-h, --help
Displays command usage information and returns to prompt.
--conf-file=PATH
Sets PATH as the configuration file the tool is to use.
--local-tmp=PATH
Sets PATH as the directory in which logs are saved. The default directory is /tmp/logcollector.
--ticket-number=TICKET
Sets TICKET as the ticket, or case number, to associate with the SOS report.
--upload=FTP_SERVER
Sets FTP_SERVER as the destination for retrieved logs to be sent using FTP. Do not use this option unless advised to by a Red Hat support representative.
--log-file=PATH
Sets PATH as the specific file name the command should use for the log output.
--quiet
Sets quiet mode, reducing console output to a minimum. Quiet mode is off by default.
-v, --verbose
Sets verbose mode, providing more console output. Verbose mode is off by default.

Red Hat Enterprise Virtualization Manager Options

These options filter the log collection and specify authentication details for the Red Hat Enterprise Virtualization Manager.
These parameters can be combined for specific commands. For example, engine-log-collector --user=admin@internal --cluster ClusterA,ClusterB --hosts "SalesHost*" specifies the user as admin@internal and limits the log collection to hosts whose names begin with SalesHost in clusters ClusterA and ClusterB.
--no-hypervisors
Omits virtualization hosts from the log collection.
-u USER, --user=USER
Sets the user name for login. The USER is specified in the format user@domain, where user is the user name and domain is the directory services domain in use. The user must exist in directory services and be known to the Red Hat Enterprise Virtualization Manager.
-r FQDN, --rhevm=FQDN
Sets the fully qualified domain name of the Red Hat Enterprise Virtualization Manager server from which to collect logs, where FQDN is replaced by the fully qualified domain name of the Manager. It is assumed that the log collector is being run on the same local host as the Red Hat Enterprise Virtualization Manager; the default value is localhost.
-c CLUSTER, --cluster=CLUSTER
Collects logs from the virtualization hosts in the nominated CLUSTER in addition to logs from the Red Hat Enterprise Virtualization Manager. The cluster(s) for inclusion must be specified in a comma-separated list of cluster names or match patterns.
-d DATACENTER, --data-center=DATACENTER
Collects logs from the virtualization hosts in the nominated DATACENTER in addition to logs from the Red Hat Enterprise Virtualization Manager. The data center(s) for inclusion must be specified in a comma-separated list of data center names or match patterns.
-H HOSTS_LIST, --hosts=HOSTS_LIST
Collects logs from the virtualization hosts in the nominated HOSTS_LIST in addition to logs from the Red Hat Enterprise Virtualization Manager. The hosts for inclusion must be specified in a comma-separated list of host names, fully qualified domain names, or IP addresses. Match patterns are also valid.

SOS Report Options

The log collector uses the JBoss SOS plugin. Use the following options to activate data collection from the JMX console.
--jboss-home=JBOSS_HOME
JBoss installation directory path. The default is /var/lib/jbossas.
--java-home=JAVA_HOME
Java installation directory path. The default is /usr/lib/jvm/java.
--jboss-profile=JBOSS_PROFILE
Specifies a quoted and space-separated list of server profiles; limits log collection to the specified profiles. The default is 'rhevm-slimmed'.
--enable-jmx
Enables the collection of run-time metrics from Red Hat Enterprise Virtualization's JBoss JMX interface.
--jboss-user=JBOSS_USER
User with permissions to invoke JBoss JMX. The default is admin.
--jboss-logsize=LOG_SIZE
Maximum size in MB for the retrieved log files.
--jboss-stdjar=STATE
Sets collection of JAR statistics for JBoss standard JARs. Replace STATE with on or off. The default is on.
--jboss-servjar=STATE
Sets collection of JAR statistics from any server configuration directories. Replace STATE with on or off. The default is on.
--jboss-twiddle=STATE
Sets collection of twiddle data on or off. Twiddle is the JBoss tool used to collect data from the JMX invoker. Replace STATE with on or off. The default is on.
--jboss-appxml=XML_LIST
Specifies a quoted and space-separated list of applications whose XML descriptions are to be retrieved. The default is all.

SSH Configuration

--ssh-port=PORT
Sets PORT as the port to use for SSH connections with virtualization hosts.
-k KEYFILE, --key-file=KEYFILE
Sets KEYFILE as the public SSH key to be used for accessing the virtualization hosts.
--max-connections=MAX_CONNECTIONS
Sets MAX_CONNECTIONS as the maximum concurrent SSH connections for logs from virtualization hosts. The default is 10.

PostgreSQL Database Options

The database user name and database name must be specified using the --pg-user and --pg-dbname parameters if they have been changed from the default values.
Use the --pg-dbhost parameter if the database is not on the local host. Use the optional --pg-host-key parameter to collect remote logs. The PostgreSQL SOS plugin must be installed on the database server for remote log collection to be successful.
--no-postgresql
Disables collection of the database. Unless this parameter is specified, the log collector connects to the Red Hat Enterprise Virtualization Manager PostgreSQL database and includes the data in the log report.
--pg-user=USER
Sets USER as the user name to use for connections with the database server. The default is postgres.
--pg-dbname=DBNAME
Sets DBNAME as the database name to use for connections with the database server. The default is rhevm.
--pg-dbhost=DBHOST
Sets DBHOST as the host name for the database server. The default is localhost.
--pg-host-key=KEYFILE
Sets KEYFILE as the identity file (private key) for the database server. This value is not set by default; it is required only where the database does not exist on the local host.

20.6.3. Basic Log Collector Usage

When the engine-log-collector command is run without specifying any additional parameters, its default behavior is to collect all logs from the Red Hat Enterprise Virtualization Manager and its attached hosts. It will also collect database logs unless the --no-postgresql parameter is added. In the following example, log collector is run to collect all logs from the Red Hat Enterprise Virtualization Manager and three attached hosts.

Example 20.9. Log Collector Usage

# engine-log-collector
INFO: Gathering oVirt Engine information...
INFO: Gathering PostgreSQL the oVirt Engine database and log files from localhost...
Please provide REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
About to collect information from 3 hypervisors. Continue? (Y/n):
INFO: Gathering information from selected hypervisors...
INFO: collecting information from 192.168.122.250
INFO: collecting information from 192.168.122.251
INFO: collecting information from 192.168.122.252
INFO: finished collecting information from 192.168.122.250
INFO: finished collecting information from 192.168.122.251
INFO: finished collecting information from 192.168.122.252
Creating compressed archive...
INFO Log files have been collected and placed in /tmp/logcollector/sosreport-rhn-account-20110804121320-ce2a.tar.xz.
The MD5 for this file is 6d741b78925998caff29020df2b2ce2a and its size is 26.7M
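The last lines of the output report the archive path, its MD5 checksum, and its size. These can be re-checked before attaching the archive to a support ticket; the sketch below uses a mock file as a stand-in for a real sosreport archive.

```shell
# Sketch: re-verify the checksum and size that engine-log-collector
# reports. A mock file stands in for the real sosreport archive; with a
# real run, point these commands at the path printed by the tool.
printf 'mock sosreport payload\n' > /tmp/sosreport-demo.tar.xz
md5sum /tmp/sosreport-demo.tar.xz   # compare against the MD5 the tool printed
du -h /tmp/sosreport-demo.tar.xz    # compare against the reported size
```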

20.7. The ISO Uploader Tool

20.7.1. The ISO Uploader Tool

The ISO uploader is a tool for uploading ISO images to the ISO storage domain. It is installed as part of the Red Hat Enterprise Virtualization Manager.
The ISO uploader command is engine-iso-uploader. You must log in as the root user and provide the administration credentials for the Red Hat Enterprise Virtualization environment to use this command. The engine-iso-uploader -h command displays usage information, including a list of all valid options for the engine-iso-uploader command.

20.7.2. Syntax for the engine-iso-uploader Command

The basic syntax for the ISO uploader command is:
engine-iso-uploader [options] list
engine-iso-uploader [options] upload [file] [file] ... [file]
The ISO uploader command supports two actions: list and upload.
  • The list action lists the ISO storage domains to which ISO files can be uploaded. These storage domains are registered with the Red Hat Enterprise Virtualization Manager during the installation process.
  • The upload action uploads a single ISO file or multiple ISO files separated by spaces to the specified ISO storage domain. NFS is used by default, but SSH is also available.
You must specify one of the above actions when you use the ISO uploader command. Moreover, you must specify at least one local file to use the upload action.
There are several parameters to further refine the engine-iso-uploader command.

General Options

--version
Displays the version of the ISO uploader command.
-h, --help
Displays information on how to use the ISO uploader command.
--conf-file=[PATH]
Sets [PATH] as the configuration file the command will use. The default is /etc/ovirt-engine/isouploader.conf.
--log-file=[PATH]
Sets [PATH] as the specific file name the command will use to write log output. The default is /var/log/ovirt-engine/ovirt-iso-uploader/ovirt-iso-uploader[date].log.
--cert-file=[PATH]
Sets [PATH] as the certificate for validating the engine. The default is /etc/pki/ovirt-engine/ca.pem.
--insecure
Specifies that no attempt will be made to verify the engine.
--nossl
Specifies that SSL will not be used to connect to the engine.
--quiet
Sets quiet mode, reducing console output to a minimum.
-v, --verbose
Sets verbose mode, providing more console output.
-f, --force
Force mode is necessary when the source file being uploaded has the same file name as an existing file in the destination ISO domain. This option forces the existing file to be overwritten.

Red Hat Enterprise Virtualization Manager Options

-u [USER], --user=[USER]
Specifies the user whose credentials will be used to execute the command. The [USER] is specified in the format [username]@[domain]. The user must exist in the specified domain and be known to the Red Hat Enterprise Virtualization Manager.
-r [FQDN], --engine=[FQDN]
Specifies the IP address or fully qualified domain name of the Red Hat Enterprise Virtualization Manager to which the ISO files will be uploaded. It is assumed that the ISO uploader is being run from the same machine on which the Red Hat Enterprise Virtualization Manager is installed. The default value is localhost:443.

ISO Storage Domain Options

The following options specify the ISO domain to which the images will be uploaded. These options cannot be used together; you must use either the -i option or the -n option.
-i [ISODOMAIN], --iso-domain=[ISODOMAIN]
Sets the storage domain [ISODOMAIN] as the destination for uploads.
-n [NFSSERVER], --nfs-server=[NFSSERVER]
Sets the NFS path [NFSSERVER] as the destination for uploads.

Connection Options

The ISO uploader uses NFS by default to upload files. These options specify SSH file transfer instead.
--ssh-user=[USER]
Sets [USER] as the SSH user name to use for the upload. The default is root.
--ssh-port=[PORT]
Sets [PORT] as the port to use when connecting to SSH.
-k [KEYFILE], --key-file=[KEYFILE]
Sets [KEYFILE] as the public key to use for SSH authentication. You will be prompted to enter the password of the user specified with --ssh-user=[USER] if no key is set.

20.7.3. Specifying an NFS Server

Example 20.10. Uploading to an NFS Server

# engine-iso-uploader --nfs-server=storage.demo.redhat.com:/iso/path upload RHEL6.0.iso

20.7.4. Basic ISO Uploader Usage

The example below demonstrates the ISO uploader and the list parameter. The first command lists the available ISO storage domains; the admin@internal user is used because no user was specified in the command. The second command uploads an ISO file over NFS to the specified ISO domain.

Example 20.11. List Domains and Upload Image

# engine-iso-uploader list
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
ISO Storage Domain Name   | Datacenter          | ISO Domain Status
ISODomain                 | Default             | active
# engine-iso-uploader --iso-domain=[ISODomain] upload [RHEL6.iso]
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):

20.7.5. Uploading the VirtIO and Guest Tool Image Files to an ISO Storage Domain

The example below demonstrates the command to upload the virtio-win.iso, virtio-win_x86.vfd, virtio-win_amd64.vfd, and rhev-tools-setup.iso image files to the ISODomain.

Example 20.12. Uploading the VirtIO and Guest Tool Image Files

# engine-iso-uploader --iso-domain=[ISODomain] upload /usr/share/virtio-win/virtio-win.iso /usr/share/virtio-win/virtio-win_x86.vfd /usr/share/virtio-win/virtio-win_amd64.vfd /usr/share/rhev-guest-tools-iso/rhev-tools-setup.iso

20.7.6. VirtIO and Guest Tool Image Files

The virtio-win ISO and Virtual Floppy Drive (VFD) images, which contain the VirtIO drivers for Windows virtual machines, and the rhev-tools-setup ISO, which contains the Red Hat Enterprise Virtualization Guest Tools for Windows virtual machines, are copied to an ISO storage domain upon installation and configuration of the domain.
These image files provide software that can be installed on virtual machines to improve performance and usability. The most recent virtio-win and rhev-tools-setup files can be accessed via the following symbolic links on the file system of the Red Hat Enterprise Virtualization Manager:
  • /usr/share/virtio-win/virtio-win.iso
  • /usr/share/virtio-win/virtio-win_x86.vfd
  • /usr/share/virtio-win/virtio-win_amd64.vfd
  • /usr/share/rhev-guest-tools-iso/rhev-tools-setup.iso
These image files must be manually uploaded to ISO storage domains that were not created locally by the installation process. Use the engine-iso-uploader command to upload these images to your ISO storage domain. Once uploaded, the image files can be attached to and used by virtual machines.

Part III. Gathering Information About the Environment

Chapter 21. Log Files

21.1. Red Hat Enterprise Virtualization Manager Installation Log Files

Table 21.1. Installation

Log File Description
/var/log/ovirt-engine/engine-cleanup_yyyy_mm_dd_hh_mm_ss.log Log from the engine-cleanup command. This is the command used to reset a Red Hat Enterprise Virtualization Manager installation. A log is generated each time the command is run. The date and time of the run is used in the filename to allow multiple logs to exist.
/var/log/ovirt-engine/engine-db-install-yyyy_mm_dd_hh_mm_ss.log Log from the engine-setup command detailing the creation and configuration of the rhevm database.
/var/log/ovirt-engine/rhevm-dwh-setup-yyyy_mm_dd_hh_mm_ss.log Log from the rhevm-dwh-setup command. This is the command used to create the ovirt_engine_history database for reporting. A log is generated each time the command is run. The date and time of the run is used in the filename to allow multiple logs to exist concurrently.
/var/log/ovirt-engine/ovirt-engine-reports-setup-yyyy_mm_dd_hh_mm_ss.log Log from the rhevm-reports-setup command. This is the command used to install the Red Hat Enterprise Virtualization Manager Reports modules. A log is generated each time the command is run. The date and time of the run is used in the filename to allow multiple logs to exist concurrently.
/var/log/ovirt-engine/setup/ovirt-engine-setup-yyyymmddhhmmss.log Log from the engine-setup command. A log is generated each time the command is run. The date and time of the run is used in the filename to allow multiple logs to exist concurrently.
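Because each run appends its date and time to the file name, logs from several runs coexist, and a lexicographic sort of the names is also chronological. The sketch below uses mock files in a scratch directory rather than a real /var/log/ovirt-engine/setup directory to pick out the most recent run.

```shell
# Sketch: the timestamp suffix lets logs from several runs coexist, and
# sorting the names picks out the most recent run. Mock files stand in
# for a real /var/log/ovirt-engine/setup directory.
mkdir -p /tmp/engine-logs-demo/setup
touch /tmp/engine-logs-demo/setup/ovirt-engine-setup-20150101120000.log
touch /tmp/engine-logs-demo/setup/ovirt-engine-setup-20150301093000.log
# lexicographic sort == chronological sort for yyyymmddhhmmss names
ls /tmp/engine-logs-demo/setup/ovirt-engine-setup-*.log | sort | tail -n 1
```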

21.2. Red Hat Enterprise Virtualization Manager Log Files

Table 21.2. Service Activity

Log File Description
/var/log/ovirt-engine/engine.log Reflects all Red Hat Enterprise Virtualization Manager GUI crashes, Active Directory lookups, Database issues, and other events.
/var/log/ovirt-engine/host-deploy Log files from hosts deployed from the Red Hat Enterprise Virtualization Manager.
/var/lib/ovirt-engine/setup-history.txt Tracks the installation and upgrade of packages associated with the Red Hat Enterprise Virtualization Manager.

21.3. SPICE Log Files

SPICE log files are useful when troubleshooting SPICE connection issues. To start SPICE debugging, change the log level to debugging. Then, identify the log location.
Both the clients used to access the guest machines and the guest machines themselves have SPICE log files. For client-side logs, if a SPICE client was launched using a browser plug-in, debugging is generally controlled by environment variables. If a SPICE client is launched using the native client, for which a console.vv file is downloaded, use the remote-viewer command to enable debugging and generate log output.

21.3.1. SPICE Logs for Hypervisor SPICE Servers

Table 21.3. SPICE Logs for Hypervisor SPICE Servers

Log Type Log Location To Change Log Level:
Host/Hypervisor SPICE Server
/var/log/libvirt/qemu/(guest_name).log
Run export SPICE_DEBUG_LEVEL=5 on the host/hypervisor prior to launching the guest.

21.3.2. SPICE Logs for Guest Machines

Table 21.4. SPICE Logs for Guest Machines

Log Type Log Location To Change Log Level:
Windows Guest
C:\Windows\Temp\vdagent.log
C:\Windows\Temp\vdservice.log
Not applicable
Red Hat Enterprise Linux Guest
/var/log/spice-vdagent.log
Create a /etc/sysconfig/spice-vdagentd file with this entry: SPICE_VDAGENTD_EXTRA_ARGS="-d -d"
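The Red Hat Enterprise Linux guest step in the table above can be sketched as follows. A scratch path stands in for /etc/sysconfig/spice-vdagentd so the sketch is side-effect free; on a real guest, write to the actual path as root and restart the agent.

```shell
# Sketch of the RHEL guest step above: write the file that passes extra
# debug flags to spice-vdagentd. A scratch path stands in for
# /etc/sysconfig/spice-vdagentd so this sketch is side-effect free.
cat > /tmp/spice-vdagentd <<'EOF'
SPICE_VDAGENTD_EXTRA_ARGS="-d -d"
EOF
cat /tmp/spice-vdagentd
```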

21.3.3. SPICE Logs for SPICE Clients Launched Using Browser Plug-ins

For SPICE clients launched using browser plug-ins, the log location and the instructions for changing the log level differ depending on the OS type, OS version, and system type.

Table 21.5. SPICE Logs for Client Machines (Browser Plug-ins)

Log Type Log Location To Change Log Level:
SPICE Client (Windows 7)
C:\Windows\Temp\spicex.log
  1. Click the Start menu, and select Computer.
  2. Click System properties, and select Advanced system settings.
  3. Select the Advanced tab, and click Environment Variables.
  4. Under User variables or System variables, click New and add a variable called SPICEX_DEBUG_LEVEL with a value of 4.
SPICE Client (Windows XP)
C:\Documents and Settings\(User Name)\Local Settings\Temp\spicex.log
  1. Click the Start menu, and select My Computer.
  2. Click Control Panel, and then click System.
  3. Select the Advanced tab, and click Environment Variables.
  4. Under User variables or System variables, click New and add a variable called SPICEX_DEBUG_LEVEL with a value of 4.
SPICE Client (Red Hat Enterprise Linux 6)
~/.spicec/spice-xpi.log
Edit the /etc/spice/logger.ini file and change the log4j.rootCategory variable from INFO, R to DEBUG, R.
SPICE Client (Red Hat Enterprise Linux 7)
~/.xsession-errors
Launch Firefox from the command line with debug options: G_MESSAGES_DEBUG=all SPICE_DEBUG=1 firefox.
If the ~/.xsession-errors file does not already exist, create it by running touch ~/.xsession-errors.
USB Redirector on Windows Client
C:\Windows\Temp\usbclerk.log
Not applicable.

21.3.4. SPICE Logs for SPICE Clients Launched Using console.vv Files

For Linux client machines:
  1. Enable SPICE debugging by running the remote-viewer command with the --spice-debug option. When prompted, enter the connection URL, for example, spice://[virtual_machine_IP]:[port].
    # remote-viewer --spice-debug
    
  2. To view logs, download the console.vv file and run the remote-viewer command with the --spice-debug option and specify the full path to the console.vv file.
    # remote-viewer --spice-debug /path/to/console.vv
For Windows client machines:
  1. Download the debug-helper.exe file and move it to the same directory as the remote-viewer.exe file, for example, C:\Users\[user name]\AppData\Local\virt-viewer\bin.
  2. Execute the debug-helper.exe file to install the GNU Debugger (GDB).
  3. Enable SPICE debugging by executing the debug-helper.exe file.
    debug-helper.exe remote-viewer.exe --spice-controller
    
  4. To view logs, connect to the virtual machine. A command prompt running GDB opens and prints the standard output and standard error of remote-viewer.

21.4. Red Hat Enterprise Virtualization Host Log Files

Table 21.6. Red Hat Enterprise Virtualization Host Log Files

Log File Description
/var/log/vdsm/libvirt.log Log file for libvirt.
/var/log/vdsm/spm-lock.log Log file detailing the host's ability to obtain a lease on the Storage Pool Manager role. The log details when the host has acquired, released, renewed, or failed to renew the lease.
/var/log/vdsm/vdsm.log Log file for VDSM, the Manager's agent on the virtualization host(s).
/tmp/ovirt-host-deploy-@DATE@.log Host deployment log, copied to engine as /var/log/ovirt-engine/host-deploy/ovirt-@DATE@-@HOST@-@CORRELATION_ID@.log after the host has been successfully deployed.

21.5. Remotely Logging Host Activities

21.5.1. Setting Up a Virtualization Host Logging Server

Summary
Red Hat Enterprise Virtualization hosts generate and update log files, recording their actions and problems. Collecting these log files centrally simplifies debugging.
This procedure should be used on your centralized log server. You could use a separate logging server, or use this procedure to enable host logging on the Red Hat Enterprise Virtualization Manager.

Procedure 21.1. Setting up a Virtualization Host Logging Server

  1. Configure SELinux to allow rsyslog traffic.
    # semanage port -a -t syslogd_port_t -p udp 514
  2. Edit /etc/rsyslog.conf and add the following lines:
    $template TmplAuth, "/var/log/%fromhost%/secure"
    $template TmplMsg, "/var/log/%fromhost%/messages"

    $RuleSet remote
    authpriv.*   ?TmplAuth
    *.info;mail.none;authpriv.none;cron.none   ?TmplMsg
    $RuleSet RSYSLOG_DefaultRuleset
    $InputUDPServerBindRuleset remote
    
    Uncomment the following:
    #$ModLoad imudp
    #$UDPServerRun 514
  3. Restart the rsyslog service:
    # service rsyslog restart
Result
Your centralized log server is now configured to receive and store the messages and secure logs from your virtualization hosts.
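Putting the pieces of the procedure together, the relevant additions to /etc/rsyslog.conf on the log server look like the following sketch of the final state. Note that $InputUDPServerBindRuleset must appear before $UDPServerRun so that incoming UDP messages use the remote ruleset:

```
# Load the UDP input module (uncommented from the stock configuration)
$ModLoad imudp

# Per-host log file templates
$template TmplAuth, "/var/log/%fromhost%/secure"
$template TmplMsg, "/var/log/%fromhost%/messages"

# Ruleset applied to messages arriving over UDP
$RuleSet remote
authpriv.*   ?TmplAuth
*.info;mail.none;authpriv.none;cron.none   ?TmplMsg

# Back to the default ruleset for local messages
$RuleSet RSYSLOG_DefaultRuleset

# Bind the UDP input to the remote ruleset, then start listening
$InputUDPServerBindRuleset remote
$UDPServerRun 514
```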

21.5.2. Configuring Red Hat Enterprise Virtualization Hypervisor Hosts to Use a Logging Server

Summary
Red Hat Enterprise Virtualization hosts generate and update log files, recording their actions and problems. Collecting these log files centrally simplifies debugging.
Use this procedure on a Red Hat Enterprise Virtualization Hypervisor host to begin sending log files to your centralized log server.

Procedure 21.2. Configuring Red Hat Enterprise Virtualization Hypervisor Hosts to Use a Logging Server

  1. Log in to your Red Hat Enterprise Virtualization Hypervisor host as admin to access the Hypervisor's text user interface (TUI) setup screen.
  2. Select Logging from the list of options on the left of the screen.
  3. Press the Tab key to reach the text entry fields. Enter the IP address or FQDN of your centralized log server and the port it uses.
  4. Press the Tab key to reach the Apply button, and press the Enter key.
Result
Your Red Hat Enterprise Virtualization Hypervisor host has been configured to send messages to a centralized log server.

Chapter 22. Proxies

22.1. SPICE Proxy

22.1.1. SPICE Proxy Overview

The SPICE Proxy is a tool used to connect SPICE clients to virtual machines when the clients are outside the network that connects the hypervisors. To set up a SPICE Proxy, install Squid on a machine and configure iptables to allow proxy traffic through the firewall. To turn the SPICE Proxy on, use engine-config on the Manager to set the SpiceProxyDefault key to a value consisting of the name and port of the proxy. To turn the SPICE Proxy off, use engine-config on the Manager to remove the value to which the SpiceProxyDefault key has been set.

Important

The SPICE Proxy can only be used in conjunction with the standalone SPICE client, and cannot be used to connect to virtual machines using SPICE HTML5 or noVNC.

22.1.2. SPICE Proxy Machine Setup

Summary
This procedure explains how to set up a machine as a SPICE Proxy. A SPICE Proxy makes it possible to connect to the Red Hat Enterprise Virtualization network from outside the network. We use Squid in this procedure to provide proxy services.

Procedure 22.1. Installing Squid on Red Hat Enterprise Linux

  1. Install Squid on the Proxy machine:
    # yum install squid
  2. Open /etc/squid/squid.conf. Change:
    http_access deny CONNECT !SSL_ports
    to:
    http_access deny CONNECT !Safe_ports
  3. Restart the proxy:
    # service squid restart
  4. Open the default squid port:
    # iptables -A INPUT -p tcp --dport 3128 -j ACCEPT
  5. Make this iptables rule persistent:
    # service iptables save
Result
You have now set up a machine as a SPICE proxy. Before connecting to the Red Hat Enterprise Virtualization network from outside the network, activate the SPICE proxy.

22.1.3. Turning on SPICE Proxy

Summary
This procedure explains how to activate (or turn on) the SPICE proxy.

Procedure 22.2. Activating SPICE Proxy

  1. On the Manager, use the engine-config tool to set a proxy:
    # engine-config -s SpiceProxyDefault=someProxy
  2. Restart the ovirt-engine service:
    # service ovirt-engine restart
    The proxy must have this form:
    protocol://[host]:[port]

    Note

    Only the HTTP protocol is supported by SPICE clients. If HTTPS is specified, the client will ignore the proxy setting and attempt a direct connection to the hypervisor.
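    For example, for a hypothetical Squid proxy named proxy.example.com listening on Squid's default port 3128, the setting would look like the following sketch (the host name and port are placeholders, not values from this guide):

    ```
    # engine-config -s SpiceProxyDefault="http://proxy.example.com:3128"
    # service ovirt-engine restart
    ```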
Result
SPICE Proxy is now activated (turned on). It is now possible to connect to the Red Hat Enterprise Virtualization network through the SPICE proxy.

22.1.4. Turning Off a SPICE Proxy

Summary
This procedure explains how to turn off (deactivate) a SPICE proxy.

Procedure 22.3. Turning Off a SPICE Proxy

  1. Log in to the Manager:
    $ ssh root@[IP of Manager]
  2. Run the following command to clear the SPICE proxy:
    # engine-config -s SpiceProxyDefault=""
  3. Restart the Manager:
    # service ovirt-engine restart
Result
SPICE proxy is now deactivated (turned off). It is no longer possible to connect to the Red Hat Enterprise Virtualization network through the SPICE proxy.

22.2. Squid Proxy

22.2.1. Installing and Configuring a Squid Proxy

Summary
This section explains how to install and configure a Squid proxy for the User Portal. A Squid proxy server is used as a content accelerator. It caches frequently viewed content, reducing bandwidth usage and improving response times.

Procedure 22.4. Configuring a Squid Proxy

  1. Obtain a keypair and certificate for the HTTPS port of the Squid proxy server. You can obtain this keypair the same way that you would obtain a keypair for another SSL/TLS service. The keypair is in the form of two PEM files which contain the private key and the signed certificate. For this procedure, we assume that they are named proxy.key and proxy.cer.

    Note

    The keypair and certificate can also be generated using the certificate authority of the engine. If you already have the private key and certificate for the proxy and do not want to generate it with the engine certificate authority, skip to the next step.
  2. Choose a host name for the proxy. Then, choose the other components of the distinguished name of the certificate for the proxy.

    Note

    It is good practice to use the same country and same organization name used by the engine itself. Find this information by logging in to the machine where the Manager is installed and running the following command:
    # openssl x509 -in /etc/pki/ovirt-engine/ca.pem -noout -subject
    
    This command outputs something like this:
    subject= /C=US/O=Example Inc./CN=engine.example.com.81108
    
    The relevant part here is /C=US/O=Example Inc.. Use this to build the complete distinguished name for the certificate for the proxy:
    /C=US/O=Example Inc./CN=proxy.example.com
  3. Log in to the proxy machine and generate a certificate signing request:
    # openssl req -newkey rsa:2048 -subj '/C=US/O=Example Inc./CN=proxy.example.com' -nodes -keyout proxy.key -out proxy.req
    

    Important

    You must include the quotes around the distinguished name for the certificate. The -nodes option ensures that the private key is not encrypted; this means that you do not need to enter the password to start the proxy server.
    The command generates two files: proxy.key and proxy.req. proxy.key is the private key. Keep this file safe. proxy.req is the certificate signing request. proxy.req does not require any special protection.
  4. To generate the signed certificate, copy the certificate signing request file from the proxy machine to the Manager machine:
    # scp proxy.req engine.example.com:/etc/pki/ovirt-engine/requests/.
    
  5. Log in to the Manager machine and sign the certificate:
    # /usr/share/ovirt-engine/bin/pki-enroll-request.sh --name=proxy --days=3650 --subject='/C=US/O=Example Inc./CN=proxy.example.com'
    
    This signs the certificate and makes it valid for 10 years (3650 days). Set the certificate to expire earlier, if you prefer.
  6. The generated certificate file is available in the directory /etc/pki/ovirt-engine/certs and should be named proxy.cer. On the proxy machine, copy this file from the Manager machine to your current directory:
    # scp engine.example.com:/etc/pki/ovirt-engine/certs/proxy.cer .
    
  7. Ensure both proxy.key and proxy.cer are present on the proxy machine:
    # ls -l proxy.key proxy.cer
    
  8. Install the Squid proxy server package on the proxy machine:
    # yum install squid
    
  9. Move the private key and signed certificate to a place where the proxy can access them, for example to the /etc/squid directory:
    # cp proxy.key proxy.cer /etc/squid/.
    
  10. Set permissions so that the squid user can read these files:
    # chgrp squid /etc/squid/proxy.*
    # chmod 640 /etc/squid/proxy.*
    
  11. The Squid proxy must verify the certificate used by the engine. Copy the Manager certificate to the proxy machine. This example uses the file path /etc/squid:
    # scp engine.example.com:/etc/pki/ovirt-engine/ca.pem /etc/squid/.
    

    Note

    The default CA certificate is located in /etc/pki/ovirt-engine/ca.pem on the Manager machine.
  12. Set permissions so that the squid user can read the certificate file:
    # chgrp squid /etc/squid/ca.pem
    # chmod 640 /etc/squid/ca.pem
    
  13. If SELinux is in enforcing mode, change the context of port 443 using the semanage tool to permit Squid to use port 443:
    # yum install policycoreutils-python
    # semanage port -m -p tcp -t http_cache_port_t 443
    
  14. Replace the existing Squid configuration file with the following:
    https_port 443 key=/etc/squid/proxy.key cert=/etc/squid/proxy.cer ssl-bump defaultsite=engine.example.com
    cache_peer engine.example.com parent 443 0 no-query originserver ssl sslcafile=/etc/squid/ca.pem name=engine
    cache_peer_access engine allow all
    ssl_bump allow all
    http_access allow all
    
  15. Restart the Squid proxy server:
    # service squid restart
    
  16. Connect to the User Portal using the complete URL, for instance:
    https://proxy.example.com/UserPortal/org.ovirt.engine.ui.userportal.UserPortal/UserPortal.html

    Note

    Shorter URLs, for example https://proxy.example.com/UserPortal, will not work. These shorter URLs are redirected to the long URL by the application server, using the 302 response code and the Location header. The version of Squid in Red Hat Enterprise Linux does not support rewriting these headers.

Note

In its default configuration, Squid terminates idle connections after 15 minutes. To increase the amount of time before Squid terminates an idle connection, adjust the read_timeout option in squid.conf (for instance, read_timeout 10 hours).

Chapter 23. History Database, Reports, and Dashboards

23.1. Introduction

23.1.1. History Database Overview

Red Hat Enterprise Virtualization includes a comprehensive management history database, which can be used by reporting applications to generate reports at data center, cluster and host levels. This chapter provides information to enable you to set up queries against the history database and generate reports.
Red Hat Enterprise Virtualization Manager uses PostgreSQL 8.4.x as a database platform to store information about the state of the virtualization environment, its configuration and performance. At install time, Red Hat Enterprise Virtualization Manager creates a PostgreSQL database called engine.
Installing the rhevm-dwh package creates a second database called ovirt_engine_history, which contains historical configuration information and statistical metrics collected every minute over time from the engine operational database. Tracking the changes to the database provides information on the objects in the database, enabling the user to analyze activity, enhance performance, and resolve difficulties.

Warning

The replication of data in the ovirt_engine_history database is performed by the Red Hat Enterprise Virtualization Manager Extract Transform Load Service, ovirt-engine-dwhd. The service is based on Talend Open Studio, a data integration tool. This service is configured to start automatically during the data warehouse package setup. It is a Java program responsible for extracting data from the engine database, transforming the data to the history database standard and loading it to the ovirt_engine_history database.
The ovirt-engine-dwhd service must not be stopped.
The ovirt_engine_history database schema changes over time. The database includes a set of database views to provide a supported, versioned API with a consistent structure. A view is a virtual table composed of the result set of a database query. The database stores the definition of a view as a SELECT statement. The result of the SELECT statement populates the virtual table that the view returns. A user references the view name in PL/PGSQL statements the same way a table is referenced.

23.1.2. Database Names in Red Hat Enterprise Virtualization 3.0 and 3.1

If your Red Hat Enterprise Virtualization environment was upgraded from Red Hat Enterprise Virtualization 3.0 to 3.1, it retains the database naming convention of an operational database called rhevm and a history database called rhevm_history.
If your Red Hat Enterprise Virtualization environment is a new installation of Red Hat Enterprise Virtualization 3.1, the operational database is called engine and the history database is called ovirt_engine_history.
The engine database is equivalent to the rhevm database. The ovirt_engine_history database is equivalent to the rhevm_history database.

23.1.3. JasperReports and JasperServer in Red Hat Enterprise Virtualization

Red Hat Enterprise Virtualization provides a customized implementation of JasperServer, which allows web-based access to a range of pre-configured reports and dashboards, plus the ability to create ad hoc reports.
JasperReports is an open source reporting tool, capable of being embedded in Java-based applications. It produces reports which can be rendered to screen, printed, or exported to a variety of formats including PDF, Excel, CSV, Word, RTF, Flash, ODT and ODS. JasperReports integrates with JasperServer, an open source reporting server for JasperReports. Using JasperServer, reports built in JasperReports can be accessed via a web interface.

23.2. History Database

23.2.1. Red Hat Enterprise Virtualization History Database

Red Hat Enterprise Virtualization Reports uses data from the Red Hat Enterprise Virtualization History Database (called ovirt_engine_history) which tracks the engine database over time.

Important

Sufficient data must exist in the history database to produce meaningful reports. Most reports use values aggregated on a daily basis. Meaningful reports can only be produced if data for at least several days is available. In particular, because trend reports are designed to highlight long term trends in the system, a sufficient history is required to highlight meaningful trends.

23.2.2. Tracking Configuration History

The ETL service, ovirt-engine-dwhd, tracks three types of changes:
  • A new entity is added to the engine database - the ETL Service replicates the change to the ovirt_engine_history database as a new entry.
  • An existing entity is updated - the ETL Service replicates the change to the ovirt_engine_history database as a new entry.
  • An entity is removed from the engine database - A new entry in the ovirt_engine_history database flags the corresponding entity as removed. Removed entities are only flagged as removed. To maintain correctness of historical reports and representations, they are not physically removed.
The configuration tables in the ovirt_engine_history database differ from the corresponding tables in the engine database in several ways. The most apparent difference is that they contain fewer configuration columns: certain configuration items are less useful for reporting and are not kept, to limit database size. Also, columns from a few tables in the engine database appear in a single table in ovirt_engine_history, with different column names, to make viewing data more convenient and comprehensible. All configuration tables contain:
  • a history_id to indicate the configuration version of the entity;
  • a create_date field to indicate when the entity was added to the system;
  • an update_date field to indicate when the entity was changed; and
  • a delete_date field to indicate the date the entity was removed from the system.
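As a sketch of how these columns can be used, the following query lists the virtual machine configurations that were live on a given date, that is, created on or before that date and not yet flagged as removed. The table and column names are assumed from the views described later in this chapter:

```sql
-- Illustrative sketch: configuration rows for entities that existed on
-- 2011-07-15 (created on or before that date, not yet flagged removed).
SELECT vm_id, vm_name, create_date, update_date
FROM vm_configuration
WHERE create_date <= '2011-07-15'
  AND (delete_date IS NULL OR delete_date > '2011-07-15');
```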

23.2.3. Recording Statistical History

The ETL service collects data into the statistical tables every minute. Data is stored for every minute of the past 24 hours, at a minimum, but can be stored for as long as 48 hours depending on the last time a deletion job was run. Minute-by-minute data more than two hours old is aggregated into hourly data and stored for two months. Hourly data more than two days old is aggregated into daily data and stored for five years.
Hourly data and daily data can be found in the hourly and daily tables.
Each statistical datum is kept in its respective aggregation level table: samples, hourly, and daily history. All history tables also contain a history_id column to uniquely identify rows. Tables reference the configuration version of a host in order to enable reports on statistics of an entity in relation to its past configuration.

23.2.4. Application Settings for the Data Warehouse service in ovirt-engine-dwhd.conf

The following is a list of options for configuring application settings for the Data Warehouse service. These options are available in the /usr/share/ovirt-engine-dwh/services/ovirt-engine-dwhd/ovirt-engine-dwhd.conf file. Configure any changes to the default values in an override file under /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/. Restart the Data Warehouse service after saving the changes.

Table 23.1. ovirt-engine-dwhd.conf application settings variables

Variable name Default Value Remarks
DWH_DELETE_JOB_HOUR 3 The time at which a deletion job is run. Specify a value between 0 and 23, where 0 is midnight.
DWH_SAMPLING 60 The interval, in seconds, at which data is collected into statistical tables.
DWH_TABLES_KEEP_SAMPLES 24 The number of hours that data from DWH_SAMPLING is stored. Data more than two hours old is aggregated into hourly data.
DWH_TABLES_KEEP_HOURLY 1440 The number of hours that hourly data is stored. The default is 60 days. Hourly data more than two days old is aggregated into daily data.
DWH_TABLES_KEEP_DAILY 43800 The number of hours that daily data is stored. The default is five years.
DWH_ERROR_EVENT_INTERVAL 300000 The minimum interval, in milliseconds, at which errors are pushed to the Manager's audit.log.
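For example, to change the retention defaults, you might create an override file such as the following sketch (the file name is arbitrary and hypothetical; the variable names are those from Table 23.1):

```
# /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/99-custom.conf (hypothetical name)
# Run the deletion job at 01:00 instead of 03:00
DWH_DELETE_JOB_HOUR=1
# Keep hourly data for 30 days (720 hours) instead of the 60-day default
DWH_TABLES_KEEP_HOURLY=720
```

Restart the Data Warehouse service after saving the file, as described above.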

23.2.5. Tracking Tag History

The ETL Service collects tag information as displayed in the Administration Portal every minute and stores this data in the tags historical tables. The ETL Service tracks five types of changes:
  • A tag is created in the Administration Portal - the ETL Service copies the tag details, position in the tag tree and relation to other objects in the tag tree.
  • An entity is attached to the tag tree in the Administration Portal - the ETL Service replicates the addition to the ovirt_engine_history database as a new entry.
  • A tag is updated - the ETL Service replicates the change of tag details to the ovirt_engine_history database as a new entry.
  • An entity or tag branch is removed from the Administration Portal - the ovirt_engine_history database flags the corresponding tag and relations as removed in new entries. Removed tags and relations are only flagged as removed or detached. In order to maintain correctness of historical reports and representations, they are not physically removed.
  • A tag branch is moved - the corresponding tag and relations are updated as new entries. Moved tags and relations are only flagged as updated. In order to maintain correctness of historical reports and representations, they are not physically updated.

23.2.6. Allowing Read-Only Access to the History Database

Summary
To allow access to the history database without allowing edits, you must create a read-only PostgreSQL user that can log in to and read from the ovirt_engine_history database. This procedure must be executed on the system on which the history database is installed.

Procedure 23.1. Allowing Read-Only Access to the History Database

  1. Create the user to be granted read-only access to the history database:
    # psql -U postgres -c "CREATE ROLE [user name] WITH LOGIN ENCRYPTED PASSWORD '[password]';" -d ovirt_engine_history
  2. Grant the newly created user permission to connect to the history database:
    # psql -U postgres -c "GRANT CONNECT ON DATABASE ovirt_engine_history TO [user name];"
  3. Grant the newly created user usage of the public schema:
    # psql -U postgres -c "GRANT USAGE ON SCHEMA public TO [user name];" ovirt_engine_history
  4. Generate the rest of the permissions that will be granted to the newly created user and save them to a file:
    # psql -U postgres -c "SELECT 'GRANT SELECT ON ' || relname || ' TO [user name];' FROM pg_class JOIN pg_namespace ON pg_namespace.oid = pg_class.relnamespace WHERE nspname = 'public' AND relkind IN ('r', 'v');" --pset=tuples_only=on  ovirt_engine_history > grant.sql
  5. Use the file you created in the previous step to grant permissions to the newly created user:
    # psql -U postgres -f grant.sql ovirt_engine_history
  6. Remove the file you used to grant permissions to the newly created user:
    # rm grant.sql
Result
You can now access the ovirt_engine_history database with the newly created user using the following command:
# psql -U [user name] ovirt_engine_history
SELECT statements against tables and views in the ovirt_engine_history database succeed, while modifications fail.

23.2.7. Reports Examples

The following examples provide an introduction to reports produced from queries to the ovirt_engine_history database. The database gives users access to a rich data set and enables a variety of complex reporting scenarios. These examples illustrate only basic reporting requirements.
Resource Utilization on a Single Host
This example produces a resource utilization report for a single host. The resource utilization report provides CPU- and memory-usage percentage information from readings taken at one-minute intervals. This kind of report is useful for gaining insight into the load factor of an individual host over a short period of time. The report is defined by the following SQL query. Ensure the values provided for the host_name and history_datetime components of the where clause are substituted with the appropriate values for your environment and that the latest configuration is in use.

Example 23.1. Report query for resource utilization on a single host

          
 select history_datetime as DateTime, cpu_usage_percent as CPU, memory_usage_percent as Memory
 from host_configuration, host_samples_history
 where host_configuration.host_id = host_samples_history.host_id
   and host_name = 'example.labname.abc.company.com'
   and host_configuration.history_id in (select max(a.history_id)
                                         from host_configuration as a
                                         where host_configuration.host_id = a.host_id)
   and history_datetime >= '2011-07-01 18:45'
   and history_datetime <= '2011-07-31 21:45'
 

This query returns a table of data with one row per minute:

Table 23.2. Resource Utilization for a Single Host Example Data

DateTime CPU Memory
2011-07-01 18:45 42 0
2011-07-01 18:46 42 0
2011-07-01 18:47 42 1
2011-07-01 18:48 33 0
2011-07-01 18:49 33 0
2011-07-01 18:50 25 1
Compose the data into a graph or chart using third-party data analysis and visualization tools such as OpenOffice.org Calc and Microsoft Excel. For this example, a line graph showing the utilization for a single host over time is a useful visualization. Figure 23.1, “Single host utilization line graph” was produced using the Chart Wizard tool in OpenOffice.org Calc.
Single host utilization line graph

Figure 23.1. Single host utilization line graph

Resource Utilization Across All Hosts
This example produces an aggregated resource utilization report across all hosts in the Red Hat Enterprise Virtualization Manager environment. Aggregated usage percentages for CPU and memory are shown with an hourly temporal resolution. This kind of report reveals utilization trends for the entire environment over a long period of time and is useful for capacity planning purposes. The following SQL query defines the report. Ensure the values provided for the history_datetime components of the where clause are substituted with appropriate values for your environment.

Example 23.2. Report query for resource utilization across all hosts


    select extract(hour from history_datetime) as Hour, avg(cpu_usage_percent) as CPU, avg(memory_usage_percent) as Memory
    from host_hourly_history
    where history_datetime >= '2011-07-01' and history_datetime < '2011-07-31'
    group by extract(hour from history_datetime)
    order by extract(hour from history_datetime)

This query returns a table of data with one row per hour:

Table 23.3. Resource utilization across all hosts example data

Hour CPU Memory
0 39 40
1 38 38
2 37 32
3 35 45
4 35 37
5 36 37
Compose the data into a graph or chart using third party data analysis and visualization tools such as OpenOffice.org Calc and Microsoft Excel. For this example, a line graph showing the total system utilization over time is a useful visualization. Figure 23.2, “Total system utilization line graph” was produced using the Chart Wizard tool in OpenOffice.org Calc.
Total system utilization line graph

Figure 23.2. Total system utilization line graph

Tag Filter of Latest Virtual Machine Configuration
This example filters the latest virtual machine configuration list using the history tag tables. This kind of report demonstrates usage of the tags tree built in the Red Hat Enterprise Virtualization Manager to filter lists. The following SQL query defines this report. It uses a predefined function that receives a tag history ID and returns the tag path with the latest names of the tags in the Administration Portal. Ensure the value compared against the function result in the where clause is substituted with an appropriate value for your environment.

Example 23.3. Tag filter of latest virtual machine configuration

SELECT vm_name
FROM vm_configuration
  INNER JOIN latest_tag_relations_history ON (vm_configuration.vm_id = latest_tag_relations_history.entity_id)
  INNER JOIN latest_tag_details ON (latest_tag_details.tag_id = latest_tag_relations_history.parent_id)
WHERE getpathinnames(latest_tag_details.history_id) LIKE '/root/tlv%'
This query returns a table of data with all virtual machine names that are attached to this tag:

Table 23.4. Tag Filtering of Latest Virtual Machine Configuration

vm_name
RHEL6-Pool-67
RHEL6-Pool-5
RHEL6-Pool-6
RHEL6-23
List Current Virtual Machines' Names, Types, and Operating Systems
This example produces a list of all current virtual machines' names, types, and operating systems in the Red Hat Enterprise Virtualization Manager environment. This kind of report demonstrates the usage of the ENUM table. The following SQL query defines this report:

Example 23.4. Current virtual machines' names, types, and operating systems

SELECT vm_name, vm_type, operating_system
FROM vm_configuration
  INNER JOIN enum_translator AS vm_type_value ON (vm_type_value.enum_type = 'VM_TYPE' AND vm_configuration.vm_type = vm_type_value.enum_key)
  INNER JOIN enum_translator AS os_value ON (os_value.enum_type = 'OS_TYPE' AND vm_configuration.operating_system = os_value.enum_key)
This query returns a table of virtual machines with operating system and virtual machine type data:

Table 23.5. Current Virtual Machines' Names, Types, and Operating Systems

vm_name vm_type operating_system
RHEL6-Pool-2 Desktop RHEL 6 x64
RHEL6-Pool-1 Desktop RHEL 6 x64
RHEL6-Pool-3 Desktop RHEL 6 x64
RHEL6-Pool-4 Desktop RHEL 6 x64
RHEL6-Pool-5 Desktop RHEL 6 x64

23.2.8. Statistics History Views

23.2.8.1. Statistics History Views

This section describes the statistics history views available to the user for querying and generating reports.

23.2.8.2. Datacenter Statistics Views

Historical statistics for each data center in the system.

Table 23.6. Historical Statistics for Each Data Center in the System

Name Type Description
history_id bigint The unique ID of this row in the table.
history_datetime timestamp with time zone The timestamp of this history row (rounded to minute, hour, day as per the aggregation level).
datacenter_id uuid The unique ID of the data center.
datacenter_status smallint
  • -1 - Unknown Status (used only to indicate a problem with the ETL -- PLEASE NOTIFY SUPPORT)
  • 1 - Up
  • 2 - Maintenance
  • 3 - Problematic
minutes_in_status decimal The total number of minutes that the data center was in the status shown in the datacenter_status column for the aggregation period. For example, if a data center was up for 55 minutes and in maintenance mode for 5 minutes during an hour, two rows will show for this hour. One will have a datacenter_status of Up and minutes_in_status of 55, the other will have a datacenter_status of Maintenance and a minutes_in_status of 5.
datacenter_configuration_version integer The data center configuration version at the time of sample.
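Because minutes_in_status is recorded per status, these views can be aggregated directly into an availability report. The following sketch sums the minutes each data center spent in each status during January 2015; the view name v3_5_statistics_datacenters_resources_usage_hourly is an assumption based on this release's v3_5_* view naming, so verify the exact name in your history database before use:

SELECT 	datacenter_id, datacenter_status, sum(minutes_in_status) as total_minutes
  FROM 	v3_5_statistics_datacenters_resources_usage_hourly
 WHERE 	history_datetime >= '2015-01-01' and history_datetime < '2015-02-01'
 GROUP BY datacenter_id, datacenter_status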

23.2.8.3. Storage Domain Statistics Views

Table 23.7. Historical Statistics for Each Storage Domain in the System

Name Type Description
history_id bigint The unique ID of this row in the table.
history_datetime timestamp with time zone The timestamp of this history row (rounded to minute, hour, day as per the aggregation level).
storage_domain_id uuid Unique ID of the storage domain in the system.
available_disk_size_gb integer The total available (unused) capacity on the disk, expressed in gigabytes (GB).
used_disk_size_gb integer The total used capacity on the disk, expressed in gigabytes (GB).
storage_configuration_version integer The storage domain configuration version at the time of sample.
storage_domain_status smallint The storage domain status.
minutes_in_status decimal The total number of minutes that the storage domain was in the status shown in the status column for the aggregation period. For example, if a storage domain was "Active" for 55 minutes and "Inactive" for 5 minutes within an hour, two rows will be reported in the table for the same hour. One row will have a status of Active with a minutes_in_status of 55, the other will have a status of Inactive and a minutes_in_status of 5.
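For example, a simple capacity report can average the used and available sizes per storage domain. The view name v3_5_statistics_storage_domains_resources_usage_daily is assumed from the v3_5_* naming convention; confirm it in your history database before use:

SELECT 	storage_domain_id, avg(used_disk_size_gb) as avg_used_gb, avg(available_disk_size_gb) as avg_available_gb
  FROM 	v3_5_statistics_storage_domains_resources_usage_daily
 GROUP BY storage_domain_id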

23.2.8.4. Host Statistics Views

Table 23.8. Historical Statistics for Each Host in the System

Name Type Description
history_id bigint The unique ID of this row in the table.
history_datetime timestamp with time zone The timestamp of this history row (rounded to minute, hour, day as per the aggregation level).
host_id uuid Unique ID of the host in the system.
host_status smallint
  • -1 - Unknown Status (used only to indicate a problem with the ETL -- PLEASE NOTIFY SUPPORT)
  • 1 - Up
  • 2 - Maintenance
  • 3 - Problematic
minutes_in_status decimal The total number of minutes that the host was in the status shown in the status column for the aggregation period. For example, if a host was up for 55 minutes and down for 5 minutes during an hour, two rows will show for this hour. One will have a status of Up and minutes_in_status of 55, the other will have a status of Down and a minutes_in_status of 5.
memory_usage_percent smallint Percentage of used memory on the host.
max_memory_usage smallint The maximum memory usage for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
cpu_usage_percent smallint Used CPU percentage on the host.
max_cpu_usage smallint The maximum CPU usage for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
ksm_cpu_percent smallint The CPU percentage that KSM on the host is using.
max_ksm_cpu_percent smallint The maximum KSM usage for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
active_vms smallint The average number of active virtual machines for this aggregation.
max_active_vms smallint The maximum active number of virtual machines for the aggregation period. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
total_vms smallint The average number of all virtual machines on the host for this aggregation.
max_total_vms smallint The maximum total number of virtual machines for the aggregation period. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
total_vms_vcpus smallint Total number of VCPUs allocated to the host.
max_total_vms_vcpus smallint The maximum total virtual machine VCPU number for the aggregation period. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
cpu_load smallint The CPU load of the host.
max_cpu_load smallint The maximum CPU load for the aggregation period. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
system_cpu_usage_percent smallint Used system CPU percentage on the host.
max_system_cpu_usage_percent smallint The maximum system CPU usage for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
user_cpu_usage_percent smallint Used user CPU percentage on the host.
max_user_cpu_usage_percent smallint The maximum user CPU usage for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
swap_used_mb integer Used swap size usage of the host in megabytes (MB).
max_swap_used_mb integer The maximum swap size usage of the host for the aggregation period, expressed in megabytes (MB). For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
host_configuration_version integer The host configuration version at the time of sample.
ksm_shared_memory_mb bigint The Kernel Shared Memory size in megabytes (MB) that the host is using.
max_ksm_shared_memory_mb bigint The maximum KSM memory usage for the aggregation period expressed in megabytes (MB). For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
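The host_configuration_version column joins these statistics to the host configuration views described later in this chapter (history_id in v3_5_configuration_history_hosts). The following sketch reports each host's peak daily CPU and memory usage by name; the statistics view name v3_5_statistics_hosts_resources_usage_daily is an assumption from the v3_5_* naming, so verify it before use:

SELECT 	host_name, max(max_cpu_usage) as peak_cpu_percent, max(max_memory_usage) as peak_memory_percent
  FROM 	v3_5_statistics_hosts_resources_usage_daily
 		inner join v3_5_configuration_history_hosts on (v3_5_configuration_history_hosts.history_id = v3_5_statistics_hosts_resources_usage_daily.host_configuration_version)
 GROUP BY host_name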

23.2.8.5. Host Interface Statistics Views

Historical statistics for each host network interface in the system.

Table 23.9. Historical Statistics for Each Host Network Interface in the System

Name Type Description
history_id bigint The unique ID of this row in the table.
history_datetime timestamp with time zone The timestamp of this history view (rounded to minute, hour, day as per the aggregation level).
host_interface_id uuid Unique identifier of the interface in the system.
receive_rate_percent smallint Used receive rate percentage on the host.
max_receive_rate_percent smallint The maximum receive rate for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
transmit_rate_percent smallint Used transmit rate percentage on the host.
max_transmit_rate_percent smallint The maximum transmit rate for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
host_interface_configuration_version integer The host interface configuration version at the time of sample.

23.2.8.6. Virtual Machine Statistics Views

Table 23.10. Historical statistics for the virtual machines in the system

Name Type Description
history_id bigint The unique ID of this row in the table.
history_datetime timestamp with time zone The timestamp of this history row (rounded to minute, hour, day as per the aggregation level).
vm_id uuid Unique ID of the virtual machine in the system.
vm_status smallint
  • -1 - Unknown Status (used only to indicate problems with the ETL -- PLEASE NOTIFY SUPPORT)
  • 0 - Down
  • 1 - Up
  • 2 - Paused
  • 3 - Problematic
minutes_in_status decimal The total number of minutes that the virtual machine was in the status shown in the status column for the aggregation period. For example, if a virtual machine was up for 55 minutes and down for 5 minutes during an hour, two rows will show for this hour. One will have a status of Up and a minutes_in_status of 55, the other will have a status of Down and a minutes_in_status of 5.
cpu_usage_percent smallint The percentage of the CPU in use by the virtual machine.
max_cpu_usage smallint The maximum CPU usage for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
memory_usage_percent smallint Percentage of used memory in the virtual machine. The guest tools must be installed on the virtual machine for memory usage to be recorded.
max_memory_usage smallint The maximum memory usage for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value. The guest tools must be installed on the virtual machine for memory usage to be recorded.
user_cpu_usage_percent smallint Used user CPU percentage in the virtual machine.
max_user_cpu_usage_percent smallint The maximum user CPU usage for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregation, it is the maximum hourly average value.
system_cpu_usage_percent smallint Used system CPU percentage in the virtual machine.
max_system_cpu_usage_percent smallint The maximum system CPU usage for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
vm_ip varchar(255) The IP address of the first NIC. Only shown if the guest agent is installed.
current_user_name varchar(255) Name of user logged into the virtual machine console, if a guest agent is installed.
currently_running_on_host uuid The unique ID of the host the virtual machine is running on.
vm_configuration_version integer The virtual machine configuration version at the time of sample.
current_host_configuration_version integer The configuration version of the host the virtual machine is currently running on.
current_user_id uuid The unique ID of the user in the system. This ID is generated by the Manager.
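Because minutes_in_status is recorded per status, virtual machine uptime can be computed by filtering on vm_status. This sketch assumes the hourly view is named v3_5_statistics_vms_resources_usage_hourly, following the v3_5_* naming; verify the name in your deployment before use:

SELECT 	vm_id, sum(minutes_in_status) as minutes_up
  FROM 	v3_5_statistics_vms_resources_usage_hourly
 WHERE 	vm_status = 1
 GROUP BY vm_id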

23.2.8.7. Virtual Machine Interface Statistics Views

Table 23.11. Historical Statistics for the Virtual Machine Network Interfaces in the System

Name Type Description
history_id bigint The unique ID of this row in the table.
history_datetime timestamp with time zone The timestamp of this history row (rounded to minute, hour, day as per the aggregation level).
vm_interface_id uuid Unique identifier of the interface in the system.
receive_rate_percent smallint Used receive rate percentage on the host.
max_receive_rate_percent smallint The maximum receive rate for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
transmit_rate_percent smallint Used transmit rate percentage on the host.
max_transmit_rate_percent smallint The maximum transmit rate for the aggregation period, expressed as a percentage. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average rate.
vm_interface_configuration_version integer The virtual machine interface configuration version at the time of sample.

23.2.8.8. Virtual Machine Disk Statistics Views

Table 23.12. Historical Statistics for the Virtual Disks in the System

Name Type Description
history_id bigint The unique ID of this row in the table.
history_datetime timestamp with time zone The timestamp of this history row (rounded to minute, hour, day as per the aggregation level).
vm_disk_id uuid Unique ID of the disk in the system.
vm_disk_status integer
  • 0 - Unassigned
  • 1 - OK
  • 2 - Locked
  • 3 - Invalid
  • 4 - Illegal
minutes_in_status decimal The total number of minutes that the virtual machine disk was in the status shown in the status column for the aggregation period. For example, if a virtual machine disk was locked for 55 minutes and OK for 5 minutes during an hour, two rows will show for this hour. One will have a status of Locked and minutes_in_status of 55, the other will have a status of OK and a minutes_in_status of 5.
vm_disk_actual_size_mb integer The actual size allocated to the disk, expressed in megabytes (MB).
read_rate_bytes_per_second integer Read rate to disk in bytes per second.
max_read_rate_bytes_per_second integer The maximum read rate for the aggregation period. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
read_latency_seconds decimal The virtual machine disk read latency measured in seconds.
max_read_latency_seconds decimal The maximum read latency for the aggregation period, measured in seconds. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
write_rate_bytes_per_second integer Write rate to disk in bytes per second.
max_write_rate_bytes_per_second integer The maximum write rate for the aggregation period. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
write_latency_seconds decimal The virtual machine disk write latency measured in seconds.
max_write_latency_seconds decimal The maximum write latency for the aggregation period, measured in seconds. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
flush_latency_seconds decimal The virtual machine disk flush latency measured in seconds.
max_flush_latency_seconds decimal The maximum flush latency for the aggregation period, measured in seconds. For hourly aggregations, this is the maximum collected sample value. For daily aggregations, it is the maximum hourly average value.
vm_disk_configuration_version integer The virtual machine disk configuration version at the time of sample.
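For example, peak disk latencies can be extracted per disk from the daily aggregation. The view name v3_5_statistics_vms_disks_resources_usage_daily is assumed from the v3_5_* naming convention; confirm it in your history database before use:

SELECT 	vm_disk_id, max(max_read_latency_seconds) as peak_read_latency, max(max_write_latency_seconds) as peak_write_latency
  FROM 	v3_5_statistics_vms_disks_resources_usage_daily
 GROUP BY vm_disk_id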

23.2.9. Configuration History Views

23.2.9.1. Configuration History Views

This section describes the configuration views available to the user for querying and generating reports.

Note

delete_date does not appear in latest views because these views provide the latest configuration of living entities, which, by definition, have not been deleted.

23.2.9.2. Data Center Configuration

The following table shows the configuration history parameters of the data centers in the system.

Table 23.13. v3_5_configuration_history_datacenters

Name Type Description
history_id integer The ID of the configuration version in the history database.
datacenter_id uuid The unique ID of the data center in the system.
datacenter_name varchar(40) Name of the data center, as displayed in the edit dialog.
datacenter_description varchar(4000) Description of the data center, as displayed in the edit dialog.
storage_type smallint
  • 0 - Unknown
  • 1 - NFS
  • 2 - FCP
  • 3 - iSCSI
  • 4 - Local
  • 6 - All
create_date timestamp with time zone The date this entity was added to the system.
update_date timestamp with time zone The date this entity was changed in the system.
delete_date timestamp with time zone The date this entity was deleted from the system.
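Because this view records every configuration version, entities that still exist can be selected by filtering on delete_date. For example, to list data centers that have not been deleted (a data center with several configuration versions returns several rows, which the latest-configuration views avoid):

SELECT 	distinct datacenter_id, datacenter_name
  FROM 	v3_5_configuration_history_datacenters
 WHERE 	delete_date is null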

23.2.9.3. Datacenter Storage Domain Map

The following table shows the relationships between storage domains and data centers in the system.

Table 23.14. v3_5_map_history_datacenters_storage_domains

Name Type Description
history_id integer The ID of the configuration version in the history database.
storage_domain_id uuid The unique ID of this storage domain in the system.
datacenter_id uuid The unique ID of the data center in the system.
attach_date timestamp with time zone The date the storage domain was attached to the data center.
detach_date timestamp with time zone The date the storage domain was detached from the data center.
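For example, the storage domains currently attached to each data center are the rows with no detach date:

SELECT 	datacenter_id, storage_domain_id, attach_date
  FROM 	v3_5_map_history_datacenters_storage_domains
 WHERE 	detach_date is null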

23.2.9.4. Storage Domain Configuration

The following table shows the configuration history parameters of the storage domains in the system.

Table 23.15. v3_5_configuration_history_storage_domains

Name Type Description
history_id integer The ID of the configuration version in the history database.
storage_domain_id uuid The unique ID of this storage domain in the system.
storage_domain_name varchar(250) Storage domain name.
storage_domain_type smallint
  • 0 - Data (Master)
  • 1 - Data
  • 2 - ISO
  • 3 - Export
storage_type smallint
  • 0 - Unknown
  • 1 - NFS
  • 2 - FCP
  • 3 - iSCSI
  • 4 - Local
  • 6 - All
create_date timestamp with time zone The date this entity was added to the system.
update_date timestamp with time zone The date this entity was changed in the system.
delete_date timestamp with time zone The date this entity was deleted from the system.

23.2.9.5. Cluster Configuration

The following table shows the configuration history parameters of the clusters in the system.

Table 23.16. v3_5_configuration_history_clusters

Name Type Description
history_id integer The ID of the configuration version in the history database.
cluster_id uuid The unique ID of the cluster in the system.
cluster_name varchar(40) Name of the cluster, as displayed in the edit dialog.
cluster_description varchar(4000) As defined in the edit dialog.
datacenter_id uuid The unique identifier of the datacenter this cluster resides in.
cpu_name varchar(255) As displayed in the edit dialog.
compatibility_version varchar(40) As displayed in the edit dialog.
datacenter_configuration_version integer The data center configuration version at the time of creation or update.
create_date timestamp with time zone The date this entity was added to the system.
update_date timestamp with time zone The date this entity was changed in the system.
delete_date timestamp with time zone The date this entity was deleted from the system.

23.2.9.6. Host Configuration

The following table shows the configuration history parameters of the hosts in the system.

Table 23.17. v3_5_configuration_history_hosts

Name Type Description
history_id integer The ID of the configuration version in the history database.
host_id uuid The unique ID of the host in the system.
host_unique_id varchar(128) This field is a combination of the host physical UUID and one of its MAC addresses, and is used to detect hosts already registered in the system.
host_name varchar(255) Name of the host (same as in the edit dialog).
cluster_id uuid The unique ID of the cluster that this host belongs to.
host_type smallint
  • 0 - RHEL Host
  • 2 - RHEV Hypervisor Node
fqdn_or_ip varchar(255) The host's DNS name or its IP address for Red Hat Enterprise Virtualization Manager to communicate with (as displayed in the edit dialog).
memory_size_mb integer The host's physical memory capacity, expressed in megabytes (MB).
swap_size_mb integer The host swap partition size.
cpu_model varchar(255) The host's CPU model.
number_of_cores smallint Total number of CPU cores in the host.
number_of_sockets smallint Total number of CPU sockets.
cpu_speed_mh decimal The host's CPU speed, expressed in megahertz (MHz).
host_os varchar(255) The host's operating system version.
pm_ip_address varchar(255) Power Management server IP address.
kernel_version varchar(255) The host's kernel version.
kvm_version varchar(255) The host's KVM version.
vdsm_version varchar(40) The host's VDSM version.
vdsm_port integer As displayed in the edit dialog.
cluster_configuration_version integer The cluster configuration version at the time of creation or update.
create_date timestamp with time zone The date this entity was added to the system.
update_date timestamp with time zone The date this entity was changed in the system.
delete_date timestamp with time zone The date this entity was deleted from the system.
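For example, a simple hardware inventory of the hosts that have not been deleted can be produced directly from this view:

SELECT 	host_name, cpu_model, number_of_cores, memory_size_mb, host_os
  FROM 	v3_5_configuration_history_hosts
 WHERE 	delete_date is null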

23.2.9.7. Host Interface Configuration

The following table shows the configuration history parameters of the host interfaces in the system.

Table 23.18. v3_5_configuration_history_hosts_interfaces

Name Type Description
history_id integer The ID of the configuration version in the history database.
host_interface_id uuid The unique ID of this interface in the system.
host_interface_name varchar(50) The interface name as reported by the host.
host_id uuid Unique ID of the host this interface belongs to.
host_interface_type smallint
  • 0 - rtl8139_pv
  • 1 - rtl8139
  • 2 - e1000
  • 3 - pv
host_interface_speed_bps integer The interface speed in bits per second.
mac_address varchar(20) The interface MAC address.
logical_network_name varchar(50) The logical network associated with the interface.
ip_address varchar(50) As displayed in the edit dialog.
gateway varchar(20) As displayed in the edit dialog.
bond Boolean A flag to indicate if this interface is a bonded interface.
bond_name varchar(50) The name of the bond this interface is part of (if it is part of a bond).
vlan_id integer As displayed in the edit dialog.
host_configuration_version integer The host configuration version at the time of creation or update.
create_date timestamp with time zone The date this entity was added to the system.
update_date timestamp with time zone The date this entity was changed in the system.
delete_date timestamp with time zone The date this entity was deleted from the system.

23.2.9.8. Virtual Machine Configuration

The following table shows the configuration history parameters of the virtual machines in the system.

Table 23.19. v3_5_configuration_history_vms

Name Type Description
history_id integer The ID of the configuration version in the history database.
vm_id uuid The unique ID of this VM in the system.
vm_name varchar(255) The name of the VM.
vm_description varchar(4000) As displayed in the edit dialog.
vm_type smallint
  • 0 - Desktop
  • 1 - Server
cluster_id uuid The unique ID of the cluster this VM belongs to.
template_id uuid The unique ID of the template this VM is derived from. The field is for future use, as the templates are not synchronized to the history database in this version.
template_name varchar(40) Name of the template from which this VM is derived.
cpu_per_socket smallint Virtual CPUs per socket.
number_of_sockets smallint Total number of virtual CPU sockets.
memory_size_mb integer Total memory allocated to the VM, expressed in megabytes (MB).
operating_system smallint
  • 0 - Other OS
  • 1 - Windows XP
  • 3 - Windows 2003
  • 4 - Windows 2008
  • 5 - Linux
  • 7 - Red Hat Enterprise Linux 5.x
  • 8 - Red Hat Enterprise Linux 4.x
  • 9 - Red Hat Enterprise Linux 3.x
  • 10 - Windows 2003 x64
  • 11 - Windows 7
  • 12 - Windows 7 x64
  • 13 - Red Hat Enterprise Linux 5.x x64
  • 14 - Red Hat Enterprise Linux 4.x x64
  • 15 - Red Hat Enterprise Linux 3.x x64
  • 16 - Windows 2008 x64
  • 17 - Windows 2008 R2 x64
  • 18 - Red Hat Enterprise Linux 6.x
  • 19 - Red Hat Enterprise Linux 6.x x64
  • 20 - Windows 8
  • 21 - Windows 8 x64
  • 23 - Windows 2012 x64
  • 1001 - Other
  • 1002 - Linux
  • 1003 - Red Hat Enterprise Linux 6.x
  • 1004 - SUSE Linux Enterprise Server 11
  • 1193 - SUSE Linux Enterprise Server 11
  • 1252 - Ubuntu Precise Pangolin LTS
  • 1253 - Ubuntu Quantal Quetzal
  • 1254 - Ubuntu Raring Ringtail
  • 1255 - Ubuntu Saucy Salamander
default_host uuid As displayed in the edit dialog, the ID of the default host in the system.
high_availability Boolean As displayed in the edit dialog.
initialized Boolean A flag to indicate if this VM was started at least once for Sysprep initialization purposes.
stateless Boolean As displayed in the edit dialog.
fail_back Boolean As displayed in the edit dialog.
usb_policy smallint As displayed in the edit dialog.
time_zone varchar(40) As displayed in the edit dialog.
cluster_configuration_version integer The cluster configuration version at the time of creation or update.
default_host_configuration_version integer The host configuration version at the time of creation or update.
create_date timestamp with time zone The date this entity was added to the system.
update_date timestamp with time zone The date this entity was changed in the system.
delete_date timestamp with time zone The date this entity was deleted from the system.
vm_pool_id uuid The virtual machine's pool unique ID.
vm_pool_name varchar(255) The name of the virtual machine's pool.
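For example, the total virtual CPU count of each server-type virtual machine that has not been deleted can be derived from the socket and per-socket columns:

SELECT 	vm_name, memory_size_mb, cpu_per_socket * number_of_sockets as total_vcpus
  FROM 	v3_5_configuration_history_vms
 WHERE 	vm_type = 1 and delete_date is null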

23.2.9.9. Virtual Machine Interface Configuration History

The following table shows the configuration history of the virtual interfaces in the system.

Table 23.20. v3_5_configuration_history_vms_interfaces

Name Type Description
history_id integer The ID of the configuration version in the history database.
vm_interface_id uuid The unique ID of this interface in the system.
vm_interface_name varchar(50) As displayed in the edit dialog.
vm_interface_type smallint
The type of the virtual interface.
  • 0 - rtl8139_pv
  • 1 - rtl8139
  • 2 - e1000
  • 3 - pv
vm_interface_speed_bps integer The average speed of the interface during the aggregation in bits per second.
mac_address varchar(20) As displayed in the edit dialog.
logical_network_name varchar(50) As displayed in the edit dialog.
vm_configuration_version integer The virtual machine configuration version at the time of creation or update.
create_date timestamp with time zone The date this entity was added to the system.
update_date timestamp with time zone The date this entity was changed in the system.
delete_date timestamp with time zone The date this entity was deleted from the system.

23.2.9.10. Virtual Machine Device Configuration

The following table shows the relationships between virtual machines and their associated devices, including disks and virtual interfaces.

Table 23.21. v3_5_configuration_history_vms_devices

Name Type Description
history_id integer The ID of the configuration version in the history database.
vm_id uuid The unique ID of the virtual machine in the system.
type varchar(30) The VM device type. Can be "disk" or "interface".
address varchar(255) The virtual machine device's physical address.
is_managed Boolean Flag that indicates if the device is managed by the Manager.
is_plugged Boolean Flag that indicates if the device is plugged into the virtual machine.
is_readonly Boolean Flag that indicates if the device is read only.
vm_configuration_version integer The virtual machine configuration version at the time the sample was taken.
device_configuration_version integer The device configuration version at the time the sample was taken.
create_date timestamp with time zone The date this entity was added to the system.
update_date timestamp with time zone The date this entity was changed in the system.
delete_date timestamp with time zone The date this entity was deleted from the system.

23.2.9.11. Virtual Machine Disk Configuration

The following table shows the configuration history parameters of the virtual disks in the system.

Table 23.22. v3_5_configuration_history_vms_disks

Name Type Description
history_id integer The ID of the configuration version in the history database.
vm_disk_id uuid The unique ID of this disk in the system.
vm_disk_description varchar(4000) As displayed in the edit dialog.
storage_domain_id uuid The ID of the storage domain this disk image belongs to.
vm_disk_size_mb integer The defined size of the disk in megabytes (MB).
vm_disk_type integer
As displayed in the edit dialog. Only System and Data are currently used.
  • 0 - Unassigned
  • 1 - System
  • 2 - Data
  • 3 - Shared
  • 4 - Swap
  • 5 - Temp
vm_disk_format integer
As displayed in the edit dialog.
  • 3 - Unassigned
  • 4 - COW
  • 5 - RAW
vm_disk_interface integer
  • 0 - IDE
  • 1 - SCSI (not supported)
  • 2 - VirtIO
create_date timestamp with time zone The date this entity was added to the system.
update_date timestamp with time zone The date this entity was changed in the system.
delete_date timestamp with time zone The date this entity was deleted from the system.
is_shared Boolean Flag that indicates if the virtual machine's disk is shared.
image_id uuid The unique ID of the image in the system.

23.2.9.12. User Details History

User extended details in the system. The users and groups are created with the manager from either an LDAP or non-LDAP based authorization provider.

Table 23.23. v3_6_users_details_history view

Name Type Description
user_id uuid The unique ID of the user in the system, as generated by the Manager.
first_name varchar(255) The user's first name.
last_name varchar(255) The user's last name.
domain varchar(255) The name of the authorization extension.
username varchar(255) The account name.
department varchar(255) The organizational department the user belongs to.
user_role_title varchar(255) The title or role of the user within the organization.
email varchar(255) The email of the user in the organization.
external_id text The unique identifier of the user from the external system.
active Boolean Indicates whether the user is active. This is checked once an hour: if the user can be found in the authorization extension, the user remains active. A user can also be made active again by a successful login.
create_date timestamp with time zone The date this entity was added to the system.
update_date timestamp with time zone The date this entity was changed in the system.
delete_date timestamp with time zone The date this entity was deleted from the system.
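For example, to list the active users that have not been removed, together with their authorization domain:

SELECT 	username, domain, email
  FROM 	v3_6_users_details_history
 WHERE 	active and delete_date is null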

23.3. Reports

23.3.1. Online Help for JasperReports

JasperServer provides extensive online help. Use the online help to find information on common administration tasks and the JasperServer product in general. This section provides information on the reports available for Red Hat Enterprise Virtualization and the customizations that integrate JasperServer with Red Hat Enterprise Virtualization. To navigate to the online help facility, click Help in the top right-hand corner of the browser.
Red Hat Enterprise Virtualization Reports online help

Figure 23.3. Red Hat Enterprise Virtualization Reports online help

Note

Detailed user, administration, and installation guides for JasperReports can be found in /usr/share/jasperreports-server-pro/docs/

23.3.2. JasperReports System Requirements

The Red Hat Enterprise Virtualization Manager Reports tool supports the same browsers that are supported by the corresponding version of JasperReports Server. For an updated list, navigate to http://community.jaspersoft.com/documentation/v55-v551-v550/jasperreports-server-supported-platform-datasheet and click Web Browsers in the table of contents.

23.3.3. Users in the Red Hat Enterprise Virtualization Reports Portal

The Red Hat Enterprise Virtualization Reports Portal does not use your directory server for authentication.
By default, there are two Reports Portal users: admin and superuser. The passwords for these users were set during the installation of Red Hat Enterprise Virtualization Reports. Generally, additional users must be added manually.
When a domain user accesses the Reports Portal from within the Administration Portal using right-click reporting, a corresponding user is automatically created in the Reports Portal using the user's domain user name. This user cannot log in to the Reports Portal directly, but can view all the reports accessible from the Administration Portal.

Note

Previously, the admin user name was rhevm-admin. If you are performing a clean installation, the user name is now admin. If you are performing an upgrade, the user name will remain rhevm-admin.

23.3.4. Logging in to Access the Reports Portal

You were prompted to set a password for the superuser and admin accounts when you installed Red Hat Enterprise Virtualization Reports. Red Hat Enterprise Virtualization Reports does not provide default passwords.
To access reports, navigate to the reports portal at: https://YOUR.MANAGER.URL/ovirt-engine-reports/login.html. A login screen for Red Hat Enterprise Virtualization Reports is displayed.

Note

You can also access the reports portal from your Red Hat Enterprise Virtualization landing page.
Red Hat Enterprise Virtualization Reports login screen

Figure 23.4. Red Hat Enterprise Virtualization Reports login screen

Enter your login credentials. If this is the first time you are connecting to the reports portal, log in as the admin user with the password set during installation. Click the Login button.
Red Hat Enterprise Virtualization Reports main screen

Figure 23.5. Red Hat Enterprise Virtualization Reports main screen


23.3.5. Accessing the Red Hat Enterprise Virtualization Reports User Management Menu

Summary
You can add additional reports users, giving them access to the reports portal. Complete this procedure as a user with sufficient permissions to manage other users, such as admin.
  1. In the Red Hat Enterprise Virtualization reports portal, hover over the Manage button on the top menu bar.
  2. Click on Users in the drop-down menu that appears to access the Manage Users interface. It contains three panes:
    • Organizations
    • Users
    • Properties
  3. Select a user in the Users pane by clicking the name of the user. Information about the user is displayed in the Properties pane.
  4. Click the Edit button at the bottom of the user's Properties pane.
    The Properties pane contains these fields:
    • User name
    • User ID
    • Email
    • Password (required)
    • Confirm Password (required)
    • User is enabled check box
    • The user is defined externally check box
    • Roles Available to the user
    • Roles Assigned to the user
  5. Click the Save button.
Result
You have given additional users access to the reports portal.

23.3.6. Reports Portal User Roles

There are three roles, each of which provides a different level of permissions:
  1. ROLE_ADMINISTRATOR - Can create/edit/delete reports, dashboards, ad hoc reports, and manage the server.
  2. ROLE_USER - Can create/edit/delete ad hoc reports and view reports and dashboards.
  3. ROLE_ANONYMOUS - Can log in and view reports and dashboards.
Other roles can be created and assigned. For information on creating and assigning other roles, detailed information about user management, and other system functions, see the JasperServer documentation.
JasperReports user roles

Figure 23.6. JasperReports user roles

23.3.7. Navigating Reports and Dashboards

Select the Reports button on the reports portal home page.
You can use the smaller Home button in the navigation bar at the top of the reports portal to return to this page.
Use the Filter pane on the left of the screen to select a subset of reports you would like to view.
Red Hat Enterprise Virtualization Reports Filter pane

Figure 23.7. Red Hat Enterprise Virtualization Reports Filter pane

You can use filters to select from the available reports.

Table 23.24. Navigation Filters

Filter Description
Available Resources Select from All, Modified by me, or Viewed by me.
Resource type Choose from the types of available resources including Reports, Ad Hoc views, Dashboards, and more.
Timeframe Choose the time frame for which to display information.
Schedule Filter by data collection schedule.

23.3.8. Report Parameters

Report parameters are user-defined at report run time. Report parameters define the scope and timeframe of the report. When running a report, you are prompted for the parameters applicable to the report you selected.
To view the required parameters for a report, click the report in the reports list.
Red Hat Enterprise Virtualization Reports - Reports List

Figure 23.8. Red Hat Enterprise Virtualization Reports - Reports List

Select a report from the list to display the Input Controls window. The Input Controls window consists of a number of drop-down menus that allow you to define the report's parameters.

Note

The dialog is contextual and differs from report to report. Parameters marked with an asterisk (*) are required.
Report Parameter Selection

Figure 23.9. Report Parameter Selection

Cascading parameters
Many report parameters are cascading input fields. This means the selection made for one parameter changes the options available for another parameter. The Data Center and Cluster parameters are cascading. Once a user selects a data center, only clusters within that data center are available for selection. Similarly, if a user selects a cluster, the Host Type field updates to show only host types that exist in the selected cluster. Cascading parameters filter out objects that do not contain child objects relevant to the report. For example, a report pertaining to virtual machines removes the selection of clusters that do not contain virtual machines. A report pertaining to both virtual machines and hosts only provides a selection from clusters containing both virtual machines and hosts.
Deleted objects
Objects deleted (removed) from the system are still recorded in the reporting history database. Select deleted objects, such as clusters, data centers and hosts, as values for report parameters if required. The bottom of the parameter options list shows deleted objects, which are suffixed with the date of removal from the system.
You can toggle whether deleted entries are shown in the report using the Show Deleted Entities? field in the Input Controls window.
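As an illustrative sketch of this convention, a deleted entity's option label carries a removal-date suffix while a live entity's label is just its name. The function name and the exact suffix format below are assumptions for illustration, not the product's literal output:

```python
from datetime import date

def option_label(name, removed=None):
    """Build a parameter option label; deleted entities are suffixed
    with the date they were removed from the system."""
    return f"{name} (removed {removed.isoformat()})" if removed else name

print(option_label("ProductionCluster"))
print(option_label("RetiredCluster", date(2015, 3, 2)))
```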

23.3.9. Right-click Reporting Integration with the Red Hat Enterprise Virtualization Administration Portal

The Administration Portal provides integrated access to reports on most resources.
To access a report on a given resource, select the resource in the Administration Portal. Right-click the resource to show a context-sensitive menu, and select the Show Report option. This expands to show all of the available reports on the selected resource.
Right-click Reporting

Figure 23.10. Right-click Reporting

Alternatively, you can select a given resource in the Administration Portal. If there are reports on that resource, the Show Report action becomes available above the results list.
Alternative to Right-click Reporting

Figure 23.11. Alternative to Right-click Reporting

23.3.10. Executive Reports

23.3.10.1. Executive reports: Active Virtual Machines by OS

The Active Virtual Machines by OS report shows a summary of the number of active virtual machines in a given time period, broken down by operating system. The following parameters must be provided to run this report:

Table 23.25. Active Virtual Machines by OS Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The report includes only virtual machines in the selected data center. The options list shows only data centers that contain virtual machines.
Cluster The report only includes virtual machines in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the report includes all virtual machines in the selected data center.
Virtual Machine Type The report only includes virtual machines of the selected type. Possible types are Server and Desktop. The options list shows only types that exist in the selected data center and cluster. If All is selected, the report includes all virtual machine types.
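The Period Range and Dates parameters together define the interval a report covers. The mapping described above can be sketched as follows; this is an illustrative model of the documented behavior, not product code, and the function name is an assumption:

```python
from datetime import date, timedelta

def period_span(period_range, start):
    """Return the (first day, day after last day) interval a report covers.

    Assumes, per the parameter descriptions, that quarterly and yearly
    ranges begin on the month selected in the Dates parameter.
    """
    if period_range == "Daily":
        return start, start + timedelta(days=1)   # a single day
    months = {"Monthly": 1, "Quarterly": 3, "Yearly": 12}[period_range]
    first = start.replace(day=1)                  # period begins on that month
    total = first.month - 1 + months              # months since January
    return first, date(first.year + total // 12, total % 12 + 1, 1)

# A Quarterly report dated in February covers February through April.
print(period_span("Quarterly", date(2015, 2, 15)))
```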

23.3.10.2. Executive Reports: Cluster Capacity Vs Usage

The Cluster Capacity Vs Usage report shows the relationship between system capacity and usage (workload) over a given time period. Capacity is expressed in terms of CPU cores and physical memory, while usage is expressed as vCPUs and virtual machine memory. The following parameters must be provided to run this report:

Table 23.26. Cluster Capacity Vs Usage Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list contains only data centers that contain clusters.
Cluster The report only includes the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the report includes all clusters in the selected data center.

23.3.10.3. Executive Reports: Host Operating System Break Down

The Host OS Break Down report indicates the number of hosts running each operating system version over a given time period. The following parameters must be provided to run this report:

Table 23.27. Host OS Break Down Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The report includes only hosts in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the report includes all hosts in the selected data center.

23.3.10.4. Executive Reports: Summary of Host Usage Resources

The Summary of Host Usage Resources report shows a scatter plot of average host resource utilization for a given time period in terms of CPU and memory usage. The following parameters must be provided to run this report:

Table 23.28. Summary of Host Usage Resources Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The report includes only hosts in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the report includes all hosts in the selected data center.

23.3.11. Inventory Reports

23.3.11.1. Inventory Reports: Hosts Inventory

The Hosts Inventory report shows a list of all hosts in the selected data center and cluster. The following parameters must be provided to run this report:

Table 23.29. Hosts Inventory Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The report includes only hosts in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the report includes all hosts in the selected data center.
Host Type The report includes only hosts of the selected type. The options list shows only host types present in the selected data center and cluster. If All is selected, the report includes all host types.

23.3.11.2. Inventory Reports: Storage Domain Over Time

The Storage Domain Size Over Time report shows a line graph contrasting the total available and total used space for a single storage domain over time for a given period. The following parameters must be provided to run this report:

Table 23.30. Storage Domain Size Over Time Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. The list of options for the Storage Domain name parameter includes only storage domains that were attached during the specified period.
Data Center The options list for the Storage Domain Name parameter shows only storage domains in this selected data center.
Storage Domain Type The options list for the Storage Domain Name parameter shows only storage domains of this selected type.
Storage Domain Name The report refers to the selected storage domain. The report covers only a single storage domain, so you must select one. The list of options shows only storage domains that were attached to the data center during the selected period.

23.3.11.3. Inventory Reports: Virtual Machines Inventory

The Virtual Machines Inventory report shows a list of all virtual machines in the selected data center and cluster. The following parameters must be provided to run this report:

Table 23.31. Virtual Machines Inventory Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The report includes only virtual machines in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the report includes all virtual machines in the selected data center.
Virtual Machine Type The report includes only virtual machines of the selected type. The options list shows only virtual machine types present in the selected data center and cluster. If All is selected, the report includes all virtual machine types.

23.3.11.4. Inventory Reports: Cloud Provider Virtual Machine Inventory

The Cloud Provider Virtual Machine Inventory report shows a list of all virtual machines in the selected data center and cluster, and provides the information that cloud providers need to bill customers. The following parameters must be provided to run this report:

Table 23.32. Cloud Provider Virtual Machine Inventory Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The report includes only virtual machines in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the report includes all virtual machines in the selected data center.
Virtual Machine Type The report includes only virtual machines of the selected type. The options list shows only virtual machine types present in the selected data center and cluster. If All is selected, the report includes all virtual machine types.

23.3.11.5. Inventory Reports: Storage Domains

The Storage Domains Inventory report shows a list of storage domains in the selected data center and of the selected type. The following parameters must be provided to run this report:

Table 23.33. Storage Domain Inventory Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Data Center The options list for the Storage Domain Name parameter shows only storage domains in this selected data center.
Storage Domain Type The options list for the Storage Domain Name parameter shows only storage domains of this selected type.

23.3.12. Service Level Reports

23.3.12.1. Service Level Reports: Cluster Host Uptime

The Cluster Host Uptime report shows the weighted average uptime of hosts within a cluster for a given period of time. This report also provides a table listing the total planned (maintenance) and unplanned downtime for each host. The following parameters must be provided to run this report:

Table 23.34. Cluster Host Uptime Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The report includes only hosts in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the report includes all hosts in the selected data center.
Host Type The report includes only hosts of the selected type. The options list shows only host types present in the selected data center and cluster. If All is selected, the report includes all host types.
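The guide does not define how the uptime average is weighted. One plausible reading, assumed purely for illustration here, weights each host's uptime by the number of hours it was tracked in the cluster during the period, so hosts present for longer influence the result more:

```python
def weighted_uptime(hosts):
    """Weighted average uptime fraction for a cluster.

    Each host record gives total tracked hours plus planned and unplanned
    downtime hours; hosts tracked for longer carry more weight.
    """
    total = sum(h["hours"] for h in hosts)
    up = sum(h["hours"] - h["planned"] - h["unplanned"] for h in hosts)
    return up / total

cluster = [
    {"hours": 720, "planned": 24, "unplanned": 0},   # in maintenance for a day
    {"hours": 360, "planned": 0, "unplanned": 36},   # host added mid-month
]
print(round(weighted_uptime(cluster), 3))  # 0.944
```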

23.3.12.2. Service Level Reports: Cluster Quality of Service - Hosts

The Cluster Quality of Service - Hosts report shows the amount of time hosts sustain load above a specified threshold for a given time period. Load is defined in terms of CPU usage percent and memory usage percent. The following parameters must be provided to run this report:

Table 23.35. Cluster Quality of Service - Hosts Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The report includes only hosts in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the report includes all hosts in the selected data center.
Host Type The report includes only hosts of the selected type. The options list shows only host types present in the selected data center and cluster. If All is selected, the report includes all host types.
CPU Threshold The report measures the quality of service as the amount of time hosts sustain load above a given threshold. The CPU Threshold defines a load threshold as a percentage of total CPU usage on the host. The load is measured by one-minute samples, averaged over an hour. The report therefore shows sustained load, not short term peaks. A CPU Threshold of 60 per cent is a suggested starting point to produce a meaningful quality of service report.
Memory Threshold The report measures the quality of service as the amount of time hosts sustain load above a given threshold. The Memory Threshold defines a load threshold as a percentage of total memory usage on the host. The load is measured by one-minute samples, averaged over an hour. The report therefore shows sustained load, not short term peaks. A Memory Threshold of 60 per cent is a suggested starting point to produce a meaningful quality of service report.
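The threshold semantics described above (one-minute samples averaged per hour, so only sustained load counts) can be sketched as follows; the function name and data shape are assumptions for illustration:

```python
def hours_above_threshold(minute_samples, threshold):
    """Count the hours whose average load exceeds the threshold.

    minute_samples: one CPU or memory usage percentage per minute.
    Averaging over each hour means a short spike within an hour does
    not register as sustained load.
    """
    hours = [minute_samples[i:i + 60] for i in range(0, len(minute_samples), 60)]
    return sum(1 for h in hours if sum(h) / len(h) > threshold)

# Two hours of samples: a brief spike in hour one, sustained load in hour two.
spiky = [10.0] * 55 + [95.0] * 5   # hourly average ~17.1, below a 60% threshold
busy = [70.0] * 60                 # hourly average 70.0, above a 60% threshold
print(hours_above_threshold(spiky + busy, 60.0))  # 1
```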

23.3.12.3. Service Level Reports: Cluster Quality of Service - Virtual Machines

The Cluster Quality of Service - Virtual Machines report shows the amount of time virtual machines sustain load above a specified threshold for a given time period. Load is defined in terms of CPU usage percent and memory usage percent. The following parameters must be provided to run this report:

Table 23.36. Cluster Quality of Service - Virtual Machines Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The report includes only virtual machines in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the report includes all virtual machines in the selected data center.
Virtual Machine Type The report includes only virtual machines of the selected type. The options list shows only virtual machine types present in the selected data center and cluster. If All is selected, the report includes all virtual machine types.
CPU Threshold The report measures quality of service as the amount of time virtual machines sustain load above a given threshold. The CPU Threshold defines a load threshold as a percentage of total CPU usage on the virtual machine. The load is measured by one-minute samples, averaged over an hour. The report therefore shows sustained load, not short term peaks. A CPU Threshold of 60 per cent is a suggested starting point to produce a meaningful quality of service report.
Memory Threshold The report measures quality of service as the amount of time virtual machines sustain load above a given threshold. The Memory Threshold defines a load threshold as a percentage of total memory usage on the virtual machine. The load is measured by one-minute samples, averaged over an hour. The report therefore shows sustained load, not short term peaks. A Memory Threshold of 60 per cent is a suggested starting point to produce a meaningful quality of service report.

23.3.12.4. Service Level Reports: Single Host Uptime

The Single Host Uptime report shows the total proportion of uptime, planned downtime and unplanned downtime for a single host. The following parameters must be provided to run this report:

Table 23.37. Single Host Uptime Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The list of options for the Host Name parameter includes only hosts in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the list of options for the Host Name parameter includes all hosts in the selected data center.
Host Type The list of options for the Host Name parameter includes only hosts of the selected type. The options list shows only host types present in the selected data center and cluster. If All is selected, the list of options for the Host Name parameter includes all host types.
Host Name The report refers to the selected host. The report covers only a single host, so you must select a host.

23.3.12.5. Service Level Reports: Top 10 Downtime Hosts

The Top 10 Downtime Hosts report shows the total proportion of uptime, planned downtime and unplanned downtime for the 10 hosts with the greatest amount of downtime. The following parameters must be provided to run this report:

Table 23.38. Top 10 Downtime Hosts Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list contains only data centers that contain clusters.
Cluster The report includes only hosts in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the report includes all hosts in the selected data center.
Host Type The report includes only hosts of the selected type. The options list shows only host types present in the selected data center and cluster. If All is selected, the report includes all host types.

23.3.12.6. Service Level Reports: High Availability Virtual Servers Uptime

The High Availability Virtual Servers Uptime report shows the weighted average uptime of high availability virtual servers within a cluster for a given period of time. The report also provides a table listing the total uptime and unplanned downtime for each virtual server. The following parameters must be provided to run this report:

Table 23.39. High Availability Virtual Servers Uptime Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The report includes only virtual servers in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the report includes all virtual servers in the selected data center.

23.3.13. Trend Reports

23.3.13.1. Trend Reports: Five Least Utilized Hosts (Over Time)

The Five Least Utilized Hosts (Over Time) report shows the weighted average daily peak load, in terms of CPU and memory usage, for the five hosts with the lowest load factor for a given period of time. The following parameters must be provided to run this report:

Table 23.40. Five Least Utilized Hosts (Over Time) Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The report includes only hosts in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the report includes all hosts in the selected data center.
Host Type The report includes only hosts of the selected type. The options list shows only host types present in the selected data center and cluster. If All is selected, the report includes all host types.

23.3.13.2. Trend Reports: Five Least Utilized Virtual Machines (Over Time)

The Five Least Utilized Virtual Machines (Over Time) report shows the weighted average daily peak load, in terms of CPU and memory usage, for the five virtual machines with the lowest load factor for a given period of time. The following parameters must be provided to run this report:

Table 23.41. Five Least Utilized Virtual Machines (Over Time) Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The report includes only virtual machines in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the report includes all virtual machines in the selected data center.
Virtual Machine Type The report includes only virtual machines of the selected type. The options list shows only virtual machine types present in the selected data center and cluster. If All is selected, the report includes all virtual machine types.

23.3.13.3. Trend Reports: Five Most Utilized Hosts (Over Time)

The Five Most Utilized Hosts (Over Time) report shows the weighted average daily peak load, in terms of CPU and memory usage, for the five hosts with the highest load factor for a given period of time. The following parameters must be provided to run this report:

Table 23.42. Five Most Utilized Hosts (Over Time) Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The report includes only hosts in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the report includes all hosts in the selected data center.
Host Type The report includes only hosts of the selected type. The options list shows only host types present in the selected data center and cluster. If All is selected, the report includes all host types.

23.3.13.4. Trend Reports: Five Most Utilized Virtual Machines (Over Time)

The Five Most Utilized Virtual Machines (Over Time) report shows the weighted average daily peak load, in terms of CPU and memory usage, for the five virtual machines with the highest load factor for a given period of time. The following parameters must be provided to run this report:

Table 23.43. Five Most Utilized Virtual Machines (Over Time) Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The report includes only virtual machines in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the report includes all virtual machines in the selected data center.
Virtual Machine Type The report includes only virtual machines of the selected type. The options list shows only virtual machine types present in the selected data center and cluster. If All is selected, the report includes all virtual machine types.

23.3.13.5. Trend Reports: Multiple Hosts Resource Usage (Over Time)

The Multiple Hosts Resource Usage (Over Time) report shows the daily peak load, in terms of CPU and memory usage, for up to five selected hosts over a given period of time. The following parameters must be provided to run this report:

Table 23.44. Multiple Hosts Resource Usage (Over Time) Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The list of options for the Hosts list parameter includes only hosts in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the list of options for the Hosts list parameter includes all hosts in the selected data center.
Host Type The list of options for the Hosts list parameter includes only hosts of the selected type. The options list shows only host types present in the selected data center and cluster. If All is selected, the list of options for the Hosts list parameter includes all host types.
Hosts list The report includes all hosts selected in the host list. Select any number of hosts up to a maximum of five.

23.3.13.6. Trend Reports: Multiple Virtual Machines Resource Usage (Over Time)

The Multiple Virtual Machines Resource Usage (Over Time) report shows the daily peak load, in terms of CPU and memory usage, for up to five selected virtual machines over a given period of time. The following parameters must be provided to run this report:

Table 23.45. Multiple Virtual Machines Resource Usage (Over Time) Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The list of options for the VM List parameter includes only virtual machines in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the list of options for the VM List parameter includes all virtual machines in the selected data center.
Virtual Machine Type The list of options for the VM List parameter includes only virtual machines of the selected type. The options list shows only virtual machine types present in the selected data center and cluster. If All is selected, the list of options for the VM List parameter includes all virtual machine types.
Virtual Machine List The report includes all virtual machines selected in the virtual machine list. Select any number of virtual machines up to a maximum of five.

23.3.13.7. Trend Reports: Single Host Resource Usage (Days of Week)

The Single Host Resource Usage (Days of Week) report shows various resource utilization metrics for a single host over a given period of time and broken down by day of the week. The metrics include CPU usage, memory usage, number of active virtual machines and network usage. The following parameters must be provided to run this report:

Table 23.46. Single Host Resource Usage (Days of Week) Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The list of options for the Host Name parameter includes only hosts in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the list of options for the Host Name parameter includes all hosts in the selected data center.
Host Type The list of options for the Host Name parameter includes only hosts of the selected type. The options list shows only host types present in the selected data center and cluster. If All is selected, the list of options for the Host Name parameter includes all host types.
Host Name The report covers the selected host. A host must be selected; each report covers only a single host.

23.3.13.8. Trend Reports: Single Host Resource Usage (Hour of Day)

The Single Host Resource Usage (Hour of Day) report shows a variety of resource utilization metrics for a single host over a given period of time, broken down by hour of the day (0-23). The metrics include CPU usage, memory usage, number of active virtual machines and network usage. The following parameters must be provided to run this report:

Table 23.47. Single Host Resource Usage (Hour of Day) Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The list of options for the Host Name parameter includes only hosts in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the list of options for the Host Name parameter includes all hosts in the selected data center.
Host Type The list of options for the Host Name parameter includes only hosts of the selected type. The options list shows only host types present in the selected data center and cluster. If All is selected, the list of options for the Host Name parameter includes all host types.
Host Name The report covers the selected host. A host must be selected; each report covers only a single host.

23.3.13.9. Trend Reports: Single Virtual Machine Resources (Days of Week)

The Single Virtual Machine Resources (Days of Week) report shows a variety of resource utilization metrics for a single virtual machine over a given period of time, broken down by day of the week. The metrics include CPU usage, memory usage, disk usage and network usage. The following parameters must be provided to run this report:

Table 23.48. Single Virtual Machine Resources (Days of Week) Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The list of options for the VM Name parameter includes only virtual machines in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the list of options for the VM Name parameter includes all virtual machines in the selected data center.
Virtual Machine Type The list of options for the VM Name parameter includes only virtual machines of the selected type. The options list shows only virtual machine types present in the selected data center and cluster. If All is selected, the list of options for the VM Name parameter includes all virtual machine types.
Virtual Machine Name The report covers the selected virtual machine. A virtual machine must be selected; each report covers only a single virtual machine.

23.3.13.10. Trend Reports: Single Virtual Machine Resources (Hour of Day)

The Single Virtual Machine Resources (Hour of Day) report shows a variety of resource utilization metrics for a single virtual machine over a given period of time, broken down by hour of the day (0-23). The metrics include CPU usage, memory usage, disk usage and network usage. The following parameters must be provided to run this report:

Table 23.49. Single Virtual Machine Resources (Hour of Day) Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The list of options for the VM Name parameter includes only virtual machines in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the list of options for the VM Name parameter includes all virtual machines in the selected data center.
Virtual Machine Type The list of options for the VM Name parameter includes only virtual machines of the selected type. The options list shows only virtual machine types present in the selected data center and cluster. If All is selected, the list of options for the VM Name parameter includes all virtual machine types.
Virtual Machine Name The report covers the selected virtual machine. A virtual machine must be selected; each report covers only a single virtual machine.

23.3.13.11. Trend Reports: Single Virtual Machine Resources (Over Time)

The Single Virtual Machine Resources (Over Time) report shows a variety of resource utilization metrics for a single virtual machine over a given period of time. The metrics include CPU usage, memory usage, disk usage and network usage. The following parameters must be provided to run this report:

Table 23.50. Single Virtual Machine Resources (Over Time) Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The report is for the period range selected. Daily reports cover a single day. Monthly reports cover a single month. Quarterly reports cover a three-month quarter, beginning on the month specified in the Dates parameter. Yearly reports cover a year, beginning on the month specified in the Dates parameter.
Dates The report covers the selected period range, beginning on this date. Daily period ranges pass in one day increments. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month. A yearly period range also starts on the selected month.
Data Center The list of options for the Cluster parameter includes only clusters in the selected data center. The options list shows only data centers that contain clusters.
Cluster The list of options for the VM Name parameter includes only virtual machines in the selected cluster. The options list shows only clusters in the selected data center. If All is selected, the list of options for the VM Name parameter includes all virtual machines in the selected data center.
Virtual Machine Type The list of options for the VM Name parameter includes only virtual machines of the selected type. The options list shows only virtual machine types present in the selected data center and cluster. If All is selected, the list of options for the VM Name parameter includes all virtual machine types.
Virtual Machine Name The report covers the selected virtual machine. A virtual machine must be selected; each report covers only a single virtual machine.

23.3.14. Ad Hoc Reports

Red Hat Enterprise Virtualization Reports provides you with a tool to create customized ad hoc reports. This tool is a component of JasperServer. To create an Ad Hoc Report as an administrator, navigate to the Create drop-down menu on the top menu bar and select Ad Hoc View to open the Data Chooser: Source window.

Figure 23.12. Create Ad Hoc Report - Administrator's View

The Working with the Ad Hoc Editor section of the online help explains the ad hoc report interface in detail.

23.3.15. Reports Schema: Tag History and ENUM Views

This section describes the tag history and ENUM views available to the user for querying and generating reports. The latest tag views show only relations for living (undeleted) tags and the latest version of the tag details.

Note

delete_date and detach_date do not appear in latest views because these views provide the latest configuration of living entities, which, by definition, have not been deleted.
Tag relations and latest tag relations history views

Table 23.51. Tag Relations History in the System

Name Type Description
history_id integer The unique ID of this row in the table.
entity_id UUID Unique ID of the entity or tag in the system.
entity_type smallint The type of the entity or tag:
  • 2 - VM
  • 3 - Host
  • 5 - VM pool
  • 18 - Tag
parent_id UUID Unique ID of the parent entity or tag in the system.
attach_date timestamp with time zone The date the entity or tag was attached to the parent entity or tag.
detach_date timestamp with time zone The date the entity or tag was detached from the parent entity or tag.
Tag details and latest tag details views
Tag details history in the system.

Table 23.52. v3_5_tag_details_view\v3_5_latest_tag_details_view

Name Type Description
history_id integer The unique ID of this row in the table.
tag_id UUID Unique ID of the tag in the system.
tag_name varchar(50) Name of the tag, as displayed in the tag tree.
tag_description varchar(4000) Description of the tag, as displayed in the edit dialog.
tag_path varchar(4000) The path to the tag in the tree.
tag_level smallint The tag level in the tree.
create_date timestamp with time zone The date this tag was added to the system.
update_date timestamp with time zone The date this tag was changed in the system.
delete_date timestamp with time zone The date this tag was deleted from the system.
Enum translator view
The ENUM translator view maps the numeric values stored in history database columns to their meanings, listing the ENUM values used by those columns.

Table 23.53. v3_5_enum_translator_view

Name Type Description
enum_type varchar(40) The type of ENUM.
enum_key smallint The key of the ENUM.
value varchar(40) The value of the ENUM.
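The views above can be queried together so that numeric codes are reported by name. The following SQL is an illustrative sketch only: the relations view name (v3_5_latest_tag_relations_view) and the enum_type value are assumptions that should be checked against the schema of your installation's ovirt_engine_history database.

```sql
-- Sketch: list the latest tag relations, translating the numeric
-- entity_type column (2 = VM, 3 = Host, and so on) through the ENUM
-- translator view. The enum_type value below is a hypothetical example.
SELECT r.entity_id,
       e.value AS entity_type_name,
       r.attach_date
FROM   v3_5_latest_tag_relations_view r
JOIN   v3_5_enum_translator_view e
       ON  e.enum_key  = r.entity_type
       AND e.enum_type = 'TAG_RELATION_ENTITY_TYPE'  -- assumed name
ORDER BY r.attach_date;
```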

23.4. Dashboards

23.4.1. Dashboards

A dashboard is a collection of related reports that provide a summary of resource usage in the virtualized environment. Dashboards feature an active control panel, allowing quick adjustment of the parameters. Though a dashboard cannot be exported or printed, each of the reports in a dashboard can be opened separately to export, print, save, or adjust the data.
Dashboards can be created and configured using the Designer in the Reports Portal. For more information on dashboards, consult the JasperReports documentation by clicking Help in the top menu bar of the Reports Portal.

23.4.2. Inventory Dashboard

The Inventory Dashboard provides an executive summary of the inventory of a data center over a given period of time. The dashboard includes average disk use, number of active virtual machines, and a breakdown of host operating systems. The following parameters can be modified for this dashboard:

Table 23.54. Inventory Dashboard Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The dashboard shows data for the period range selected. Monthly dashboards cover a single month. Quarterly dashboards cover a three-month quarter, beginning on the month specified in the Dates parameter.
Dates The dashboard covers the selected period range, beginning on this date. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month.
Data Center The report refers to the selected data center. The list of options shows only data centers containing hosts, storage domains, or virtual machines. The list of options for the Cluster parameter includes only clusters in the selected data center.

23.4.3. Trends Dashboard

The Trends Dashboard provides an executive summary of the trends in a data center over a given period of time. The dashboard includes graphs of CPU and memory usage over time for the most highly utilized hosts and virtual machines in the data center. The following parameters can be modified for this dashboard:

Table 23.55. Trends Dashboard Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The dashboard shows data for the period range selected. Monthly dashboards cover a single month. Quarterly dashboards cover a three-month quarter, beginning on the month specified in the Dates parameter.
Dates The dashboard covers the selected period range, beginning on this date. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month.
Data Center The report refers to the selected data center. The list of options shows only data centers containing hosts, storage domains, or virtual machines. The list of options for the Cluster parameter includes only clusters in the selected data center.

23.4.4. Uptime Dashboard

The Uptime Dashboard provides an executive summary of the service level and uptime for a data center over a given period of time. The dashboard includes details on total uptime for each cluster in the data center for the period. The following parameters can be modified for this dashboard:

Table 23.56. Uptime Dashboard Parameters

Parameter Description
Show Deleted Entities? The report includes deleted objects, such as data centers, clusters, and hosts removed from the environment.
Period Range The dashboard shows data for the period range selected. Monthly dashboards cover a single month. Quarterly dashboards cover a three-month quarter, beginning on the month specified in the Dates parameter.
Dates The dashboard covers the selected period range, beginning on this date. For a Monthly period range, the selected month is used. For a Quarterly period range, the quarter is determined as beginning on the selected month.
Data Center The report refers to the selected data center. The list of options shows only data centers containing hosts, storage domains, or virtual machines. The list of options for the Cluster parameter includes only clusters in the selected data center.

23.4.5. Integrated Reporting Dashboard in the Red Hat Enterprise Virtualization Administration Portal

The Administration Portal also features dashboards for data centers, clusters, and the overall environment. Select the appropriate resource in tree mode and click the Dashboard resource tab to display the dashboard information in the results list.

Figure 23.13. Reports Dashboard

The dashboards accessible in the Administration Portal are used for viewing data; as such, they do not have an active control panel. Configure these dashboards in the Reports Portal by editing Datacenter Dashboard, Cluster Dashboard, and System Dashboard.

Appendix A. Firewalls

A.1. Red Hat Enterprise Virtualization Manager Firewall Requirements

The Red Hat Enterprise Virtualization Manager requires that a number of ports be opened to allow network traffic through the system's firewall. The engine-setup script can configure the firewall automatically, but this overwrites any pre-existing firewall configuration.
If a firewall configuration already exists, you must manually add the firewall rules required by the Manager instead. The engine-setup command saves a list of the required iptables rules in the /usr/share/ovirt-engine/conf/iptables.example file.
The firewall configuration documented here assumes a default configuration. If non-default HTTP and HTTPS ports were chosen during installation, adjust the firewall rules to allow network traffic on the selected ports rather than the default ports (80 and 443) listed here.

Table A.1. Red Hat Enterprise Virtualization Manager Firewall Requirements

Port(s) Protocol Source Destination Purpose
- ICMP
Red Hat Enterprise Virtualization Hypervisor(s)
Red Hat Enterprise Linux host(s)
Red Hat Enterprise Virtualization Manager
When registering to the Red Hat Enterprise Virtualization Manager, virtualization hosts send an ICMP ping request to the Manager to confirm that it is online.
22 TCP
Systems used for maintenance of the Manager, including backend configuration and software upgrades.
Red Hat Enterprise Virtualization Manager
Secure Shell (SSH) access.
Optional.
80, 443 TCP
Administration Portal clients
User Portal clients
Red Hat Enterprise Virtualization Hypervisor(s)
Red Hat Enterprise Linux host(s)
REST API clients
Red Hat Enterprise Virtualization Manager
Provides HTTP and HTTPS access to the Manager.
6100 TCP
Administration Portal clients
User Portal clients
Red Hat Enterprise Virtualization Manager
Provides websocket proxy access for web-based console clients (noVNC and spice-html5) when the websocket proxy is running on the Manager. If the websocket proxy is running on a different host, however, this port is not used.
7410 UDP
Red Hat Enterprise Virtualization Hypervisor(s)
Red Hat Enterprise Linux host(s)
Red Hat Enterprise Virtualization Manager
Must be open for the Manager to receive Kdump notifications.

Important

In environments where the Red Hat Enterprise Virtualization Manager is also required to export NFS storage, such as an ISO Storage Domain, additional ports must be allowed through the firewall. Grant firewall exceptions for the ports applicable to the version of NFS in use:

NFSv4

  • TCP port 2049 for NFS.

NFSv3

  • TCP and UDP port 2049 for NFS.
  • TCP and UDP port 111 (rpcbind/sunrpc).
  • TCP and UDP port specified with MOUNTD_PORT="port".
  • TCP and UDP port specified with STATD_PORT="port".
  • TCP port specified with LOCKD_TCPPORT="port".
  • UDP port specified with LOCKD_UDPPORT="port".
The MOUNTD_PORT, STATD_PORT, LOCKD_TCPPORT, and LOCKD_UDPPORT ports are configured in the /etc/sysconfig/nfs file.
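As an illustration, the NFSv3 helper daemons can be pinned to fixed ports in /etc/sysconfig/nfs so that matching firewall rules can be written. The port numbers below are example values only, not defaults; any unused ports can be chosen:

```
# /etc/sysconfig/nfs - example static port assignments (example values)
MOUNTD_PORT=892
STATD_PORT=662
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
```

After restarting the NFS services, open these ports in the firewall in addition to ports 2049 and 111.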

A.2. Virtualization Host Firewall Requirements

Red Hat Enterprise Linux hosts and Red Hat Enterprise Virtualization Hypervisors require a number of ports to be opened to allow network traffic through the system's firewall. In the case of the Red Hat Enterprise Virtualization Hypervisor, these firewall rules are configured automatically. For Red Hat Enterprise Linux hosts, however, you must configure the firewall manually.

Table A.2. Virtualization Host Firewall Requirements

Port(s) Protocol Source Destination Purpose
22 TCP
Red Hat Enterprise Virtualization Manager
Red Hat Enterprise Virtualization Hypervisor(s)
Red Hat Enterprise Linux host(s)
Secure Shell (SSH) access.
Optional.
161 UDP
Red Hat Enterprise Virtualization Hypervisor(s)
Red Hat Enterprise Linux host(s)
Red Hat Enterprise Virtualization Manager
Simple network management protocol (SNMP). Only required if you want Simple Network Management Protocol traps sent from the hypervisor to one or more external SNMP managers.
Optional.
5900 - 6923 TCP
Administration Portal clients
User Portal clients
Red Hat Enterprise Virtualization Hypervisor(s)
Red Hat Enterprise Linux host(s)
Remote guest console access via VNC and SPICE. These ports must be open to facilitate client access to virtual machines.
5989 TCP, UDP
Common Information Model Object Manager (CIMOM)
Red Hat Enterprise Virtualization Hypervisor(s)
Red Hat Enterprise Linux host(s)
Used by Common Information Model Object Managers (CIMOM) to monitor virtual machines running on the hypervisor. Only required if you want to use a CIMOM to monitor the virtual machines in your virtualization environment.
Optional.
16514 TCP
Red Hat Enterprise Virtualization Hypervisor(s)
Red Hat Enterprise Linux host(s)
Red Hat Enterprise Virtualization Hypervisor(s)
Red Hat Enterprise Linux host(s)
Virtual machine migration using libvirt.
49152 - 49216 TCP
Red Hat Enterprise Virtualization Hypervisor(s)
Red Hat Enterprise Linux host(s)
Red Hat Enterprise Virtualization Hypervisor(s)
Red Hat Enterprise Linux host(s)
Virtual machine migration and fencing using VDSM. These ports must be open to facilitate both automated and manually initiated migration of virtual machines.
54321 TCP
Red Hat Enterprise Virtualization Manager
Red Hat Enterprise Virtualization Hypervisor(s)
Red Hat Enterprise Linux host(s)
Red Hat Enterprise Virtualization Hypervisor(s)
Red Hat Enterprise Linux host(s)
VDSM communications with the Manager and other virtualization hosts.

A.3. Directory Server Firewall Requirements

Red Hat Enterprise Virtualization requires a directory server to support user authentication. A number of ports must be opened in the directory server's firewall to support GSS-API authentication as used by the Red Hat Enterprise Virtualization Manager.

Table A.3. Host Firewall Requirements

Port(s) Protocol Source Destination Purpose
88, 464 TCP, UDP
Red Hat Enterprise Virtualization Manager
Directory server
Kerberos authentication.
389, 636 TCP
Red Hat Enterprise Virtualization Manager
Directory server
Lightweight Directory Access Protocol (LDAP) and LDAP over SSL.

A.4. Database Server Firewall Requirements

Red Hat Enterprise Virtualization supports the use of a remote database server. If you plan to use a remote database server with Red Hat Enterprise Virtualization then you must ensure that the remote database server allows connections from the Manager.

Table A.4. Host Firewall Requirements

Port(s) Protocol Source Destination Purpose
5432 TCP, UDP
Red Hat Enterprise Virtualization Manager
PostgreSQL database server
Default port for PostgreSQL database connections.
If you plan to use a local database server on the Manager itself, which is the default option provided during installation, then no additional firewall rules are required.

Appendix B. VDSM and Hooks

B.1. VDSM

The VDSM service is used by the Red Hat Enterprise Virtualization Manager to manage Red Hat Enterprise Virtualization Hypervisors and Red Hat Enterprise Linux hosts. VDSM manages and monitors the host's storage, memory, and network resources. It also coordinates virtual machine creation, statistics gathering, log collection, and other host administration tasks. VDSM runs as a daemon on each hypervisor host managed by Red Hat Enterprise Virtualization Manager. It answers XML-RPC calls from clients. The Red Hat Enterprise Virtualization Manager functions as a VDSM client.

B.2. VDSM Hooks

VDSM is extensible via hooks. Hooks are scripts executed on the host when key events occur. When a supported event occurs, VDSM runs any executable hook scripts in /usr/libexec/vdsm/hooks/nn_event-name/ on the host in alphanumeric order. By convention, each hook script is assigned a two-digit number, included at the front of the file name, so that the order in which the scripts run is clear. You can create hook scripts in any programming language; however, Python is used for the examples in this chapter.
Note that all scripts defined on the host for the event are executed. If you require that a given hook is only executed for a subset of the virtual machines which run on the host then you must ensure that the hook script itself handles this requirement by evaluating the Custom Properties associated with the virtual machine.

Warning

VDSM hooks can interfere with the operation of Red Hat Enterprise Virtualization. A bug in a VDSM hook has the potential to cause virtual machine crashes and loss of data. VDSM hooks should be implemented with caution and tested rigorously. The Hooks API is new and subject to significant change in the future.

B.3. Extending VDSM with Hooks

This chapter describes how to extend VDSM with event-driven hooks. Extending VDSM with hooks is an experimental technology, and this chapter is intended for experienced developers. Note that at this time hooks cannot run on Red Hat Enterprise Virtualization Hypervisors; they must only be used on Red Hat Enterprise Linux hosts. By setting custom properties on virtual machines it is possible to pass additional parameters, specific to a given virtual machine, to the hook scripts.

B.4. Supported VDSM Events

Table B.1. Supported VDSM Events

Name Description
before_vm_start Before virtual machine starts.
after_vm_start After virtual machine starts.
before_vm_cont Before virtual machine continues.
after_vm_cont After virtual machine continues.
before_vm_pause Before virtual machine pauses.
after_vm_pause After virtual machine pauses.
before_vm_hibernate Before virtual machine hibernates.
after_vm_hibernate After virtual machine hibernates.
before_vm_dehibernate Before virtual machine dehibernates.
after_vm_dehibernate After virtual machine dehibernates.
before_vm_migrate_source Before virtual machine migration, run on the source hypervisor host from which the migration is occurring.
after_vm_migrate_source After virtual machine migration, run on the source hypervisor host from which the migration is occurring.
before_vm_migrate_destination Before virtual machine migration, run on the destination hypervisor host to which the migration is occurring.
after_vm_migrate_destination After virtual machine migration, run on the destination hypervisor host to which the migration is occurring.
after_vm_destroy After virtual machine destruction.
before_vdsm_start Before VDSM is started on the hypervisor host. before_vdsm_start hooks are executed as the user root, and do not inherit the environment of the VDSM process.
after_vdsm_stop After VDSM is stopped on the hypervisor host. after_vdsm_stop hooks are executed as the user root, and do not inherit the environment of the VDSM process.
before_nic_hotplug Before the NIC is hot plugged into the virtual machine.
after_nic_hotplug After the NIC is hot plugged into the virtual machine.
before_nic_hotunplug Before the NIC is hot unplugged from the virtual machine.
after_nic_hotunplug After the NIC is hot unplugged from the virtual machine.
after_nic_hotplug_fail After hot plugging the NIC to the virtual machine fails.
after_nic_hotunplug_fail After hot unplugging the NIC from the virtual machine fails.
before_disk_hotplug Before the disk is hot plugged into the virtual machine.
after_disk_hotplug After the disk is hot plugged into the virtual machine.
before_disk_hotunplug Before the disk is hot unplugged from the virtual machine.
after_disk_hotunplug After the disk is hot unplugged from the virtual machine.
after_disk_hotplug_fail After hot plugging the disk to the virtual machine fails.
after_disk_hotunplug_fail After hot unplugging the disk from the virtual machine fails.
before_device_create Before creating a device that supports custom properties.
after_device_create After creating a device that supports custom properties.
before_update_device Before updating a device that supports custom properties.
after_update_device After updating a device that supports custom properties.
before_device_destroy Before destroying a device that supports custom properties.
after_device_destroy After destroying a device that supports custom properties.
before_device_migrate_destination Before device migration, run on the destination hypervisor host to which the migration is occurring.
after_device_migrate_destination After device migration, run on the destination hypervisor host to which the migration is occurring.
before_device_migrate_source Before device migration, run on the source hypervisor host from which the migration is occurring.
after_device_migrate_source After device migration, run on the source hypervisor host from which the migration is occurring.

B.5. The VDSM Hook Environment

Most hook scripts are run as the vdsm user and inherit the environment of the VDSM process. The exceptions are hook scripts triggered by the before_vdsm_start and after_vdsm_stop events. Hook scripts triggered by these events run as the root user and do not inherit the environment of the VDSM process.

B.6. The VDSM Hook Domain XML Object

When hook scripts are started, the _hook_domxml variable is appended to the environment. This variable contains the path of the libvirt domain XML representation of the relevant virtual machine. Several hooks are an exception to this rule, as outlined below.
The _hook_domxml variable of the following hooks contains the XML representation of the NIC and not the virtual machine.
  • *_nic_hotplug_*
  • *_nic_hotunplug_*
  • *_update_device
  • *_device_create
  • *_device_migrate_*

Important

The before_vm_migrate_destination and before_vm_dehibernate hooks currently receive the XML of the domain from the source host. The XML of the domain at the destination will have various differences.
The libvirt domain XML format is used by VDSM to define virtual machines. Details on the libvirt domain XML format can be found at http://libvirt.org/formatdomain.html. The UUID of the virtual machine may be deduced from the domain XML, but it is also available as the environment variable vmId.
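As a sketch of how a hook script can consume the _hook_domxml variable, the following uses Python's standard xml.dom.minidom library. Because the real environment variables are only set by VDSM at hook time, a small illustrative domain XML written to a temporary file stands in for the real file here; the sample content and the read_vm_name helper are assumptions for demonstration, not part of VDSM.

```python
import os
import tempfile
import xml.dom.minidom

def read_vm_name(domxml_path):
    """Parse a libvirt domain XML file and return the <name> element text."""
    dom = xml.dom.minidom.parse(domxml_path)
    return dom.getElementsByTagName('name')[0].firstChild.data

# In a real hook script, VDSM supplies the path and the VM UUID:
#     domxml_path = os.environ['_hook_domxml']
#     vm_id = os.environ.get('vmId')
# Here a minimal, illustrative domain XML stands in for the real file.
sample = '<domain type="kvm"><name>demo-vm</name><uuid>0000-0000</uuid></domain>'
with tempfile.NamedTemporaryFile(mode='w', suffix='.xml', delete=False) as f:
    f.write(sample)
    path = f.name

vm_name = read_vm_name(path)
os.unlink(path)
```

In a deployed hook, only the two environment lookups shown in the comments change; the parsing logic is the same.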

B.7. Defining Custom Properties

The custom properties that are accepted by the Red Hat Enterprise Virtualization Manager - and in turn passed to custom hooks - are defined using the engine-config command. Run this command as the root user on the host where Red Hat Enterprise Virtualization Manager is installed.
The UserDefinedVMProperties and CustomDeviceProperties configuration keys are used to store the names of the custom properties supported. Regular expressions defining the valid values for each named custom property are also contained in these configuration keys.
Multiple custom properties are separated by a semicolon. Note that when you set the configuration key, any existing value it contained is overwritten. When combining new and existing custom properties, you must include all of the custom properties in the command used to set the key's value.
Once the configuration key has been updated, the ovirt-engine service must be restarted for the new values to take effect.

Example B.1. Virtual Machine Properties - Defining the smartcard Custom Property

  1. Check the existing custom properties defined by the UserDefinedVMProperties configuration key using the following command:
    # engine-config -g UserDefinedVMProperties
    As shown by the output below, the custom property memory is already defined. The regular expression ^[0-9]+$ ensures that the custom property will only ever contain numeric characters.
    # engine-config -g UserDefinedVMProperties
    UserDefinedVMProperties:  version: 3.0
    UserDefinedVMProperties:  version: 3.1
    UserDefinedVMProperties:  version: 3.2
    UserDefinedVMProperties:  version: 3.3
    UserDefinedVMProperties : memory=^[0-9]+$ version: 3.2
  2. Because the memory custom property is already defined in the UserDefinedVMProperties configuration key, the new custom property must be appended to it. The additional custom property, smartcard, is added to the configuration key's value. The new custom property is able to hold a value of true or false.
    # engine-config -s UserDefinedVMProperties='memory=^[0-9]+$;smartcard=^(true|false)$' --cver=3.2
  3. Verify that the custom properties defined by the UserDefinedVMProperties configuration key have been updated correctly.
    # engine-config -g UserDefinedVMProperties
    UserDefinedVMProperties:  version: 3.0
    UserDefinedVMProperties:  version: 3.1
    UserDefinedVMProperties:  version: 3.2
    UserDefinedVMProperties:  version: 3.3
    UserDefinedVMProperties : memory=^[0-9]+$;smartcard=^(true|false)$ version: 3.2
  4. Finally, the ovirt-engine service must be restarted for the configuration change to take effect.
    # service ovirt-engine restart

Example B.2. Device Properties - Defining the interface Custom Property

  1. Check the existing custom properties defined by the CustomDeviceProperties configuration key using the following command:
    # engine-config -g CustomDeviceProperties
    As shown by the output below, no custom properties have yet been defined.
    # engine-config -g CustomDeviceProperties
    CustomDeviceProperties:  version: 3.0
    CustomDeviceProperties:  version: 3.1
    CustomDeviceProperties:  version: 3.2
    CustomDeviceProperties:  version: 3.3
  2. The interface custom property does not already exist, so it can be appended as is. In this example, the value of the speed sub-property is set to a range of 0 to 99999, and the value of the duplex sub-property is set to a selection of either full or half.
    # engine-config -s CustomDeviceProperties="{type=interface;prop={speed=^([0-9]{1,5})$;duplex=^(full|half)$}}" --cver=3.3
  3. Verify that the custom properties defined by the CustomDeviceProperties configuration key have been updated correctly.
    # engine-config -g CustomDeviceProperties
    CustomDeviceProperties:  version: 3.0
    CustomDeviceProperties:  version: 3.1
    CustomDeviceProperties:  version: 3.2
    CustomDeviceProperties : {type=interface;prop={speed=^([0-9]{1,5})$;duplex=^(full|half)$}} version: 3.3
  4. Finally, the ovirt-engine service must be restarted for the configuration change to take effect.
    # service ovirt-engine restart

B.8. Setting Virtual Machine Custom Properties

Once custom properties are defined in the Red Hat Enterprise Virtualization Manager, you can begin setting them on virtual machines. Custom properties are set on the Custom Properties tab of the New Virtual Machine and Edit Virtual Machine windows in the Administration Portal.
You can also set custom properties from the Run Virtual Machine(s) dialog box. Custom properties set from the Run Virtual Machine(s) dialog box only apply to the virtual machine until it is next shut down.
The Custom Properties tab provides a facility for you to select from the list of defined custom properties. Once you select a custom property key, an additional field displays, allowing you to enter a value for that key. Add additional key/value pairs by clicking the + button and remove them by clicking the - button.

B.9. Evaluating Virtual Machine Custom Properties in a VDSM Hook

Each key set in the Custom Properties field for a virtual machine is appended as an environment variable when calling hook scripts. Although the regular expressions used to validate the Custom Properties field provide some protection, you should ensure that your scripts also validate that the inputs provided match their expectations.

Example B.3. Evaluating Custom Properties

This short Python example checks for the existence of the custom property key1. If the custom property is set then the value is printed to standard error. If the custom property is not set then no action is taken.
#!/usr/bin/python

import os
import sys

if 'key1' in os.environ:
    sys.stderr.write('key1 value was : %s\n' % os.environ['key1'])
else:
    sys.exit(0)

B.10. Using the VDSM Hooking Module

VDSM ships with a Python hooking module, providing helper functions for VDSM hook scripts. This module is provided as an example, and is only relevant to VDSM hooks written in Python.
The hooking module supports reading of a virtual machine's libvirt XML into a DOM object. Hook scripts can then use Python's built in xml.dom library (http://docs.python.org/release/2.6/library/xml.dom.html) to manipulate the object.
The modified object can then be saved back to libvirt XML using the hooking module. The hooking module provides the following functions to support hook development:

Table B.2. Hooking module functions

Name Argument Description
tobool string Converts a string "true" or "false" to a Boolean value
read_domxml - Reads the virtual machine's libvirt XML into a DOM object
write_domxml DOM object Writes the virtual machine's libvirt XML from a DOM object
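A minimal sketch of this read-manipulate-write pattern follows, using only the standard xml.dom.minidom library. The hooking.read_domxml() and write_domxml() calls are replaced with parseString() and toxml() so the example is self-contained; in a real hook script the hooking module supplies and saves the DOM object.

```python
import xml.dom.minidom

# In a real hook: domxml = hooking.read_domxml(); here parseString() stands in
# so the manipulation pattern can be demonstrated without a running host.
domxml = xml.dom.minidom.parseString(
    '<domain type="kvm"><name>demo</name></domain>')

domain = domxml.getElementsByTagName('domain')[0]

# Add a <numatune> element, mirroring the NUMA hook example later in
# this appendix.
numatune = domxml.createElement('numatune')
memory = domxml.createElement('memory')
memory.setAttribute('mode', 'strict')
memory.setAttribute('nodeset', '1-4')
numatune.appendChild(memory)
domain.appendChild(numatune)

# In a real hook: hooking.write_domxml(domxml); here serialize for inspection.
modified = domxml.toxml()
```

The same DOM calls (getElementsByTagName, createElement, setAttribute, appendChild) are the ones the shipped hook examples use.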

B.11. VDSM Hook Execution

before_vm_start scripts can edit the domain XML in order to change VDSM's definition of a virtual machine before it reaches libvirt. Caution must be exercised in doing so. Hook scripts have the potential to disrupt the operation of VDSM, and buggy scripts can result in outages to the Red Hat Enterprise Virtualization environment. In particular, ensure you never change the UUID of the domain, and do not attempt to remove a device from the domain without sufficient background knowledge.
Both before_vdsm_start and after_vdsm_stop hook scripts are run as the root user. Other hook scripts that require root access to the system must be written to use the sudo command for privilege escalation. To support this, the /etc/sudoers file must be updated to allow the vdsm user to use sudo without re-entering a password. This is required because hook scripts are executed non-interactively.

Example B.4. Configuring sudo for VDSM Hooks

In this example the sudo command will be configured to allow the vdsm user to run the /bin/chown command as root.
  1. Log into the virtualization host as root.
  2. Open the /etc/sudoers file in a text editor.
  3. Add this line to the file:
    vdsm ALL=(ALL) NOPASSWD: /bin/chown
    This specifies that the vdsm user has the ability to run the /bin/chown command as the root user. The NOPASSWD parameter indicates that the user will not be prompted to enter their password when calling sudo.
Once this configuration change has been made, VDSM hooks are able to use the sudo command to run /bin/chown as root. This Python code uses sudo to execute /bin/chown as root on the file /my_file.
import subprocess

retcode = subprocess.call(["/usr/bin/sudo", "/bin/chown", "root", "/my_file"])
The standard error stream of hook scripts is collected in VDSM's log. This information is used to debug hook scripts.

B.12. VDSM Hook Return Codes

Hook scripts must return one of the return codes shown in Table B.3, “Hook Return Codes”. The return code determines whether further hook scripts are processed by VDSM.

Table B.3. Hook Return Codes

Code Description
0 The hook script ended successfully
1 The hook script failed, other hooks should be processed
2 The hook script failed, no further hooks should be processed
>2 Reserved
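The return-code convention can be sketched as follows. This is an illustrative pattern, not a shipped hook: the required_key custom property name is hypothetical, and the validation logic is an assumption for demonstration.

```python
def hook_exit_code(env):
    """Return 0 on success, or 2 (fail, stop further hooks) on bad input."""
    value = env.get('required_key')   # hypothetical custom property
    if value is None:
        return 0                      # property not set: nothing to do
    if value not in ('true', 'false'):
        return 2                      # invalid input: fail and stop further hooks
    return 0

code_ok = hook_exit_code({'required_key': 'true'})
code_bad = hook_exit_code({'required_key': 'banana'})
# A real hook script would finish with: sys.exit(hook_exit_code(os.environ))
```

Returning 2 from a real hook stops VDSM from processing any later scripts for the event, which is why the shipped NUMA example exits with 2 on error.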

B.13. VDSM Hook Examples

The example hook scripts provided in this section are not supported by Red Hat. You must ensure that any and all hook scripts that you install on your system, regardless of source, are thoroughly tested for your environment.

Example B.5. NUMA Node Tuning

Purpose:
This hook script allows tuning the allocation of memory on a NUMA host based on the numaset custom property. Where the custom property is not set, no action is taken.
Configuration String:
numaset=^(interleave|strict|preferred):[\^]?\d+(-\d+)?(,[\^]?\d+(-\d+)?)*$
The regular expression used allows the numaset custom property for a given virtual machine to specify both the allocation mode (interleave, strict, or preferred) and the nodeset to use. The two values are separated by a colon (:). The regular expression allows the nodeset to be specified as:
  • a specific node (numaset=strict:1 specifies that only node 1 be used), or
  • a range of nodes (numaset=strict:1-4 specifies that nodes 1 through 4 be used), or
  • a specific node not to be used (numaset=strict:^3 specifies that node 3 not be used), or
  • any comma-separated combination of the above (numaset=strict:1-4,6 specifies that nodes 1 to 4, and 6, be used).
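The configuration string above can be checked directly with Python's re module. This quick sketch confirms which example values the regular expression accepts:

```python
import re

# The regular expression portion of the numaset configuration string.
NUMASET_RE = re.compile(
    r'^(interleave|strict|preferred):[\^]?\d+(-\d+)?(,[\^]?\d+(-\d+)?)*$')

accepted = [bool(NUMASET_RE.match(v)) for v in
            ('strict:1', 'strict:1-4', 'strict:^3', 'strict:1-4,^3,6')]
rejected = [bool(NUMASET_RE.match(v)) for v in
            ('strict', 'interleave:', 'bogus:1')]
```

All four accepted values match the forms listed above; the rejected values lack a mode, a nodeset, or use an unknown mode.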
Script:
/usr/libexec/vdsm/hooks/before_vm_start/50_numa
#!/usr/bin/python

import os
import sys
import hooking
import traceback

'''
numa hook
=========
add numa support for domain xml:

<numatune>
    <memory mode="strict" nodeset="1-4,^3" />
</numatune>

memory=interleave|strict|preferred

numaset="1" (use one NUMA node)
numaset="1-4" (use 1-4 NUMA nodes)
numaset="^3" (don't use NUMA node 3)
numaset="1-4,^3,6" (or combinations)

syntax:
    numa=strict:1-4
'''

if 'numa' in os.environ:
    try:
        mode, nodeset = os.environ['numa'].split(':')

        domxml = hooking.read_domxml()

        domain = domxml.getElementsByTagName('domain')[0]
        numas = domxml.getElementsByTagName('numatune')

        if not numas:
            numatune = domxml.createElement('numatune')
            domain.appendChild(numatune)

            memory = domxml.createElement('memory')
            memory.setAttribute('mode', mode)
            memory.setAttribute('nodeset', nodeset)
            numatune.appendChild(memory)

            hooking.write_domxml(domxml)
        else:
            sys.stderr.write('numa: numa already exists in domain xml\n')
            sys.exit(2)
    except:
        sys.stderr.write('numa: [unexpected error]: %s\n' % traceback.format_exc())
        sys.exit(2)

Appendix C. Explanation of Network Bridge Parameters

C.1. Explanation of bridge_opts Parameters

Table C.1. bridge_opts parameters

Parameter Description
forward_delay Sets the time, in deciseconds, a bridge will spend in the listening and learning states. If no switching loop is discovered in this time, the bridge will enter forwarding state. This allows time to inspect the traffic and layout of the network before normal network operation.
gc_timer Sets the garbage collection time, in deciseconds, after which the forwarding database is checked and cleared of timed-out entries.
group_addr Set to zero when sending a general query. Set to the IP multicast address when sending a group-specific query, or group-and-source-specific query.
group_fwd_mask Enables the bridge to forward link-local group addresses. Changing this value from the default allows non-standard bridging behavior.
hash_elasticity The maximum chain length permitted in the hash table. Does not take effect until the next new multicast group is added. If this cannot be satisfied after rehashing, a hash collision occurs and snooping is disabled.
hash_max The maximum number of buckets in the hash table. This takes effect immediately and cannot be set to a value less than the current number of multicast group entries. Value must be a power of two.
hello_time Sets the time interval, in deciseconds, between sending 'hello' messages, announcing bridge position in the network topology. Applies only if this bridge is the Spanning Tree root bridge.
hello_timer Time, in deciseconds, since last 'hello' message was sent.
max_age Sets the maximum time, in deciseconds, to receive a 'hello' message from another root bridge before that bridge is considered dead and takeover begins.
multicast_last_member_count Sets the number of 'last member' queries sent to the multicast group after receiving a 'leave group' message from a host.
multicast_last_member_interval Sets the time, in deciseconds, between 'last member' queries.
multicast_membership_interval Sets the time, in deciseconds, that a bridge will wait to hear from a member of a multicast group before it stops sending multicast traffic to the host.
multicast_querier Sets whether the bridge actively runs a multicast querier or not. When a bridge receives a 'multicast host membership' query from another network host, that host is tracked based on the time that the query was received plus the multicast query interval time. If the bridge later attempts to forward traffic for that multicast membership, or is communicating with a querying multicast router, this timer confirms the validity of the querier. If valid, the multicast traffic is delivered via the bridge's existing multicast membership table; if no longer valid, the traffic is sent via all bridge ports. Broadcast domains with, or expecting, multicast memberships should run at least one multicast querier for improved performance.
multicast_querier_interval Sets the maximum time, in deciseconds, between last 'multicast host membership' query received from a host to ensure it is still valid.
multicast_query_use_ifaddr Boolean. Defaults to '0', in which case the querier uses 0.0.0.0 as source address for IPv4 messages. Changing this sets the bridge IP as the source address.
multicast_query_interval Sets the time, in deciseconds, between query messages sent by the bridge to ensure validity of multicast memberships. At this time, or if the bridge is asked to send a multicast query for that membership, the bridge checks its own multicast querier state based on the time that a check was requested plus multicast_query_interval. If a multicast query for this membership has been sent within the last multicast_query_interval, it is not sent again.
multicast_query_response_interval Length of time, in deciseconds, a host is allowed to respond to a query once it has been sent. Must be less than or equal to the value of the multicast_query_interval.
multicast_router Allows you to enable or disable ports as having multicast routers attached. A port with one or more multicast routers will receive all multicast traffic. A value of 0 disables completely, a value of 1 enables the system to automatically detect the presence of routers based on queries, and a value of 2 enables ports to always receive all multicast traffic.
multicast_snooping Toggles whether snooping is enabled or disabled. Snooping allows the bridge to listen to the network traffic between routers and hosts to maintain a map to filter multicast traffic to the appropriate links. This option allows the user to re-enable snooping if it was automatically disabled due to hash collisions; however, snooping will not be re-enabled if the hash collision has not been resolved.
multicast_startup_query_count Sets the number of queries sent out at startup to determine membership information.
multicast_startup_query_interval Sets the time, in deciseconds, between queries sent out at startup to determine membership information.

Appendix D. Red Hat Enterprise Virtualization User Interface Plugins

D.1. Red Hat Enterprise Virtualization User Interface Plug-ins

Red Hat Enterprise Virtualization supports plug-ins that expose non-standard features. This makes it easier to use the Red Hat Enterprise Virtualization Administration Portal to integrate with other systems. Each User Interface plug-in represents a set of user interface extensions that can be packaged and distributed for use with Red Hat Enterprise Virtualization.
Red Hat Enterprise Virtualization's User Interface plug-ins integrate with the Administration Portal directly on the client using the JavaScript programming language. Plug-ins are invoked by the Administration Portal and executed in the web browser's JavaScript runtime. User Interface plug-ins can use the JavaScript language and its libraries.
At key events during runtime, the Administration Portal invokes individual plug-ins via event handler functions representing Administration-Portal-to-plug-in communication. Although the Administration Portal supports multiple event-handler functions, a plug-in declares only the functions that are of interest to its implementation. Each plug-in must register its relevant event handler functions as part of the plug-in bootstrap sequence before the Administration Portal puts the plug-in to use.
To facilitate the plug-in-to-Administration-Portal communication that drives the User Interface extension, the Administration Portal exposes the plug-in API as a global (top-level) pluginApi JavaScript object that individual plug-ins can consume. Each plug-in obtains a separate pluginApi instance, allowing the Administration Portal to control plug-in API-function invocation for each plug-in with respect to the plug-in's life cycle.

D.2. Red Hat Enterprise Virtualization User Interface Plugin Lifecycle

D.2.1. Red Hat Enterprise Virtualization User Interface Plug-in Life cycle

The basic life cycle of a User Interface Plug-in divides into three stages:
  1. Plug-in discovery.
  2. Plug-in loading.
  3. Plug-in bootstrapping.

D.2.2. Red Hat Enterprise Virtualization User Interface Plug-in Discovery

Creating plug-in descriptors is the first step in the plug-in discovery process. Plug-in descriptors contain important plug-in metadata and optional default plug-in-specific configurations.
As part of handling administration portal HTML page requests (HTTP GET), User Interface plug-in infrastructure attempts to discover and load plug-in descriptors from your local file system. For each plug-in descriptor, the infrastructure also attempts to load corresponding plug-in user configurations used to override default plug-in-specific configurations (if any exist) and tweak plug-in runtime behavior. Plug-in user configuration is optional. After loading descriptors and corresponding user configuration files, oVirt Engine aggregates User Interface plug-in data and embeds it into the administration portal HTML page for runtime evaluation.
By default, plug-in descriptors reside in $ENGINE_USR/ui-plugins, with a default mapping of ENGINE_USR=/usr/share/ovirt-engine as defined by oVirt Engine local configuration. Plug-in descriptors are expected to comply with the JSON format specification, except that Java/C++ style comments (of both /* and // varieties) are also allowed.
By default, plug-in user configuration files reside in $ENGINE_ETC/ui-plugins, with a default mapping of ENGINE_ETC=/etc/ovirt-engine as defined by oVirt Engine local configuration. Plug-in user configuration files are expected to comply with the same content format rules as plug-in descriptors.
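A minimal descriptor illustrating the allowed comment styles might look like the following (a hypothetical example; the name, url, and resourcePath attributes follow the Hello World descriptor shown in Section D.4):

```
// Example plug-in descriptor: JSON content plus Java/C++ style comments.
{
    "name": "MyPlugin",  /* Unique plug-in name, also passed to pluginApi(). */
    "url": "/ovirt-engine/webadmin/plugin/MyPlugin/start.html",
    "resourcePath": "my-plugin-files"
}
```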

Note

Plug-in user configuration files generally follow the <descriptorFileName>-config.json naming convention.

D.2.3. Red Hat Enterprise Virtualization User Interface Plug-in Loading

After a plug-in has been discovered and its data embedded into the Administration Portal HTML page, the Administration Portal tries to load the plug-in as part of application startup (unless you have configured the plug-in not to load at startup).
For each plug-in that has been discovered, the Administration Portal creates an HTML iframe element that is used to load its host page. The plug-in host page is necessary to begin the plug-in bootstrap process, which evaluates the plug-in code in the context of the plug-in's iframe element. User interface plug-in infrastructure supports serving plug-in resource files (such as the plug-in host page) from the local file system. The plug-in host page is loaded into the iframe element and the plug-in code is evaluated. After the plug-in code is evaluated, the plug-in communicates with the Administration Portal by means of the plug-in API.

D.2.4. Red Hat Enterprise Virtualization User Interface Plug-in Bootstrapping

A typical plug-in bootstrap sequence consists of the following steps:

Procedure D.1. Plug-in Bootstrap Sequence

  1. Obtain pluginApi instance for the given plug-in
  2. Obtain runtime plug-in configuration object (optional)
  3. Register relevant event handler functions
  4. Notify UI plug-in infrastructure to proceed with plug-in initialization
The following code snippet illustrates the above mentioned steps in practice:
// Access plug-in API using 'parent' due to this code being evaluated within the context of an iframe element.
// As 'parent.pluginApi' is subject to Same-Origin Policy, this will only work when WebAdmin HTML page and plug-in
// host page are served from same origin. WebAdmin HTML page and plug-in host page will always be on same origin
// when using UI plug-in infrastructure support to serve plug-in resource files.
var api = parent.pluginApi('MyPlugin');

// Runtime configuration object associated with the plug-in (or an empty object).
var config = api.configObject();

// Register event handler function(s) for later invocation by UI plug-in infrastructure.
api.register({
    // UiInit event handler function.
    UiInit: function() {
        // Handle UiInit event.
        window.alert('Favorite music band is ' + config.band);
    }
});

// Notify UI plug-in infrastructure to proceed with plug-in initialization.
api.ready();

D.3. User Interface Plugin-related Files and Their Locations

Table D.1. UI Plugin-related Files and their Locations

File Location Remarks
Plug-in descriptor files (meta-data) /usr/share/ovirt-engine/ui-plugins/my-plugin.json  
Plug-in user configuration files /etc/ovirt-engine/ui-plugins/my-plugin-config.json
Plug-in resource files /usr/share/ovirt-engine/ui-plugins/<resourcePath>/PluginHostPage.html <resourcePath> is defined by the corresponding attribute in the plug-in descriptor.

D.4. Example User Interface Plug-in Deployment

Follow these instructions to create a user interface plug-in that runs a Hello World! program when you sign in to the Red Hat Enterprise Virtualization Manager administration portal.

Procedure D.2. Deploying a Hello World! Plug-in

  1. Create a plug-in descriptor by creating the following file in the Manager at /usr/share/ovirt-engine/ui-plugins/helloWorld.json:
    {
        "name": "HelloWorld",
        "url": "/ovirt-engine/webadmin/plugin/HelloWorld/start.html",
        "resourcePath": "hello-files"
    }
    
  2. Create the plug-in host page by creating the following file in the Manager at /usr/share/ovirt-engine/ui-plugins/hello-files/start.html:
    <!DOCTYPE html><html><head>
    <script>
        var api = parent.pluginApi('HelloWorld');
        api.register({
    	UiInit: function() { window.alert('Hello world'); }
        });
        api.ready();
    </script>
    </head><body></body></html>
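Before signing in to test the plug-in, you can optionally confirm that the descriptor parses cleanly. The following is a sketch, assuming Python is available on the Manager; check_descriptor is a hypothetical helper name. Note that a descriptor containing the optional /* or // comments will not pass a strict JSON parse.

```shell
# Sketch of a descriptor sanity check ("check_descriptor" is a hypothetical
# helper name). Parses the given file as strict JSON and reports success.
check_descriptor() {
    python -c 'import json, sys; json.load(open(sys.argv[1]))' "$1" \
        && echo "descriptor OK"
}

# Example (run on the Manager):
# check_descriptor /usr/share/ovirt-engine/ui-plugins/helloWorld.json
```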
    
If you have successfully implemented the Hello World! plug-in, you will see this screen when you sign in to the administration portal:

Figure D.1. A Successful Implementation of the Hello World! Plug-in

D.5. Installing the Red Hat Support Plug-in

The Red Hat Support Plug-in provides access to Red Hat Access services from the Red Hat Enterprise Virtualization Administration Portal.

Procedure D.3. Installing the Red Hat Support Plug-in

Note

The Red Hat Support Plug-in is installed by default in Red Hat Enterprise Virtualization 3.3, so it is not necessary to run this procedure in that version. The plug-in is not installed by default in Red Hat Enterprise Virtualization 3.2.
  • Use yum to install the redhat-support-plugin-rhev plug-in:
    # yum install redhat-support-plugin-rhev

D.6. Using Red Hat Support Plug-in

The Red Hat Support Plug-in allows you to use Red Hat Access services from the Red Hat Enterprise Virtualization Administration Portal. You must log in using your Red Hat login credentials. The plug-in detects when you are not logged in; if you are not logged in, a login window opens.

Note

Red Hat Enterprise Virtualization Administration Portal credentials are not the same as a user's Red Hat login.

Figure D.2. Red Hat Support Plug-in - Login Window

After logging in, you can access the Red Hat Customer Portal. The Red Hat Support Plug-in is available in the details pane, as well as in several context menus, in the Red Hat Enterprise Virtualization Administration Portal. Search the Red Hat Access database using the Search bar. Search results display in the left-hand navigation list in the details pane.

Figure D.3. Red Hat Support Plug-in - Query Results in the Left-Hand Navigation List

Right-click on context menus in the Red Hat Enterprise Virtualization Administration Portal to access the Red Hat Support Plug-in.

Figure D.4. Right-clicking on a Context Menu to Access Red Hat Support Plug-in

Open a new support case or modify an existing case by clicking the Open New Support Case or Modify Existing Case button.

Figure D.5. Red Hat Support Plug-in - Opening a New Support Case

Select the Red Hat Documentation tab to open the documentation relevant to the part of the Administration Portal currently on the screen.

Figure D.6. Red Hat Support Plug-in - Accessing Documentation

Appendix E. Red Hat Enterprise Virtualization and SSL

E.1. Replacing the Red Hat Enterprise Virtualization Manager SSL Certificate

Warning

Do not change the permissions and ownerships for the /etc/pki directory or any subdirectories. The permissions for the /etc/pki and /etc/pki/ovirt-engine directories must remain at the default of 755.
Summary
You want to use your organization's commercially signed certificate to identify your Red Hat Enterprise Virtualization Manager to users connecting over HTTPS.

Note

Using a commercially issued certificate for HTTPS connections does not affect the certificate used for authentication between your Manager and hosts; they will continue to use the self-signed certificate generated by the Manager.
Prerequisites
This procedure requires a PEM-formatted certificate from your commercial certificate issuing authority, a .nokey file, and a .cer file. The .nokey and .cer files are sometimes distributed as a certificate-key bundle in the P12 format; this procedure assumes that you have such a bundle.

Procedure E.1. Replacing the Red Hat Enterprise Virtualization Manager Apache SSL Certificate

  1. The Manager has been configured to use /etc/pki/ovirt-engine/apache-ca.pem, which is symbolically linked to /etc/pki/ovirt-engine/ca.pem. Remove the symbolic link.
    # rm /etc/pki/ovirt-engine/apache-ca.pem
  2. Save your commercially issued certificate as /etc/pki/ovirt-engine/apache-ca.pem. The certificate chain must be complete up to the root certificate. The chain order is important and should be from the last intermediate certificate to the root certificate.
    # mv YOUR-3RD-PARTY-CERT.pem /etc/pki/ovirt-engine/apache-ca.pem
  3. Move your P12 bundle to /etc/pki/ovirt-engine/keys/apache.p12.
  4. Extract the key from the bundle.
    # openssl pkcs12 -in  /etc/pki/ovirt-engine/keys/apache.p12 -nocerts -nodes > /etc/pki/ovirt-engine/keys/apache.key.nopass
  5. Extract the certificate from the bundle.
    # openssl pkcs12 -in /etc/pki/ovirt-engine/keys/apache.p12 -nokeys > /etc/pki/ovirt-engine/certs/apache.cer
  6. Restart the Apache server.
    # service httpd restart
Result
Your users can now connect to the portals without being warned about the authenticity of the certificate used to encrypt HTTPS traffic.
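As an optional check after completing the procedure (a sketch, assuming standard OpenSSL tooling), you can confirm that the extracted key and certificate belong together by comparing the digests of their public-key moduli:

```shell
# Optional check: the MD5 digest of the certificate's modulus and of the
# extracted key's modulus must be identical for Apache to serve the new
# certificate successfully.
openssl x509 -noout -modulus -in /etc/pki/ovirt-engine/certs/apache.cer | openssl md5
openssl rsa -noout -modulus -in /etc/pki/ovirt-engine/keys/apache.key.nopass | openssl md5
```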

E.2. Setting Up SSL or TLS Connections between the Manager and an LDAP Server

To set up a secure connection between the Red Hat Enterprise Virtualization Manager and an LDAP server, obtain the LDAP server's root CA certificate and import it into a public keystore file on the Manager. The following procedure uses the Java KeyStore (JKS) format, but the keystore type can be any type that Java supports.

Note

For more information on creating a keystore file and importing certificates, see the X.509 CERTIFICATE TRUST STORE section of the README file at /usr/share/doc/ovirt-engine-extension-aaa-ldap-version.
Obtain the LDAP server's root CA certificate and copy it to the Manager's /tmp directory, then use the following procedure to create a public keystore file on the Manager. Update the LDAP property configuration file with the public keystore file details.

Procedure E.2. Creating a Keystore File

  1. On the Red Hat Enterprise Virtualization Manager, import the certificate and create a public keystore file. The following command imports the root CA certificate at /tmp/myrootca.pem, and creates a public keystore file myrootca.jks under /etc/ovirt-engine/aaa/.
    $ keytool -importcert -noprompt -trustcacerts -alias myrootca -file /tmp/myrootca.pem -keystore /etc/ovirt-engine/aaa/myrootca.jks -storepass changeit
  2. Update the /etc/ovirt-engine/aaa/profile1.properties file with the keystore file information.

    Note

    ${local:_basedir} is the directory where the LDAP property configuration file resides and points to the /etc/ovirt-engine/aaa directory. If you created the public keystore file in a different directory, replace ${local:_basedir} with the full path to the public keystore file.
    • To use startTLS (recommended):
      # Create keystore, import certificate chain and uncomment
      pool.default.ssl.startTLS = true
      pool.default.ssl.truststore.file = ${local:_basedir}/myrootca.jks
      pool.default.ssl.truststore.password = changeit
    • To use SSL:
      # Create keystore, import certificate chain and uncomment
      pool.default.serverset.single.port = 636
      pool.default.ssl.enable = true
      pool.default.ssl.truststore.file = ${local:_basedir}/myrootca.jks
      pool.default.ssl.truststore.password = changeit
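After completing the procedure, you can optionally confirm the import succeeded (a sketch, assuming the keytool utility used in step 1 and the alias and paths from that step):

```shell
# Optional check: list the trusted entries in the new keystore; the alias
# used at import time ("myrootca" in the procedure above) should appear.
keytool -list -keystore /etc/ovirt-engine/aaa/myrootca.jks -storepass changeit
```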
To continue configuring a generic LDAP provider, see Section 17.2.2, “Configuring a Generic LDAP Provider”. To continue configuring LDAP and Kerberos for Single Sign-on, see Section 17.2.3.1, “Configuring LDAP and Kerberos for Single Sign-on”.

Appendix F. Using Search, Bookmarks, and Tags

F.1. Searches

F.1.1. Performing Searches in Red Hat Enterprise Virtualization

The Administration Portal enables the management of thousands of resources, such as virtual machines, hosts, users, and more. To perform a search, enter the search query (free-text or syntax-based) in the search bar. Search queries can be saved as bookmarks for future reuse, so you do not have to reenter a search query each time the specific search results are needed.

Note

In versions prior to Red Hat Enterprise Virtualization 3.4, searches in the Administration Portal were case sensitive. The search bar now supports case-insensitive searches.

F.1.2. Search Syntax and Examples

The syntax of the search queries for Red Hat Enterprise Virtualization resources is as follows:
result type: {criteria} [sortby sort_spec]
Syntax Examples
The following examples describe how the search query is used and help you to understand how Red Hat Enterprise Virtualization assists with building search queries.

Table F.1. Example Search Queries

Example Result
Hosts: Vms.status = up Displays a list of all hosts running virtual machines that are up.
Vms: domain = qa.company.com Displays a list of all virtual machines running on the specified domain.
Vms: users.name = Mary Displays a list of all virtual machines belonging to users with the user name Mary.
Events: severity > normal sortby time Displays the list of all Events whose severity is higher than Normal, sorted by time.

F.1.3. Search Auto-Completion

The Administration Portal provides auto-completion to help you create valid and powerful search queries. As you type each part of a search query, a drop-down list of choices for the next part of the search opens below the Search Bar. You can either select from the list and then continue typing/selecting the next part of the search, or ignore the options and continue entering your query manually.
The following table specifies by example how the Administration Portal auto-completion assists in constructing a query:
Hosts: Vms.status = down

Table F.2. Example Search Queries Using Auto-Completion

Input | List Items Displayed | Action
h | Hosts (1 option only) | Select Hosts or type Hosts
Hosts: | All host properties | Type v
Hosts: v | Host properties starting with a v | Select Vms or type Vms
Hosts: Vms | All virtual machine properties | Type s
Hosts: Vms.s | All virtual machine properties beginning with s | Select status or type status
Hosts: Vms.status | = or != | Select or type =
Hosts: Vms.status = | All status values | Select or type down

F.1.4. Search Result Type Options

The result type allows you to search for resources of any of the following types:
  • Vms for a list of virtual machines
  • Host for a list of hosts
  • Pools for a list of pools
  • Template for a list of templates
  • Event for a list of events
  • Users for a list of users
  • Cluster for a list of clusters
  • Datacenter for a list of data centers
  • Storage for a list of storage domains
As each type of resource has a unique set of properties and a set of other resource types that it is associated with, each search type has a set of valid syntax combinations. You can also use the auto-complete feature to create valid queries easily.

F.1.5. Search Criteria

You can specify the search criteria after the colon in the query. The syntax of {criteria} is as follows:
<prop><operator><value>
or
<obj-type><prop><operator><value>
Examples
The following table describes the parts of the syntax:

Table F.3. Example Search Criteria

Part | Description | Example
prop | The property of the searched-for resource. Can also be the property of a resource type (see obj-type) or a tag (custom tag). Limits your search to objects with a certain property; for example, objects with a status property. | Status
obj-type | A resource type that can be associated with the searched-for resource. These are system objects, such as data centers and virtual machines. | Users
operator | A comparison operator: =, != (not equal), >, <, >=, <=. Value options depend on obj-type. | N/A
value | What the expression is being compared to: a String, an Integer, a Ranking, or a Date (formatted according to Regional Settings). | Jones, 256, normal
  • Wildcards can be used within strings.
  • "" (two sets of quotation marks with no space between them) can be used to represent an uninitialized (empty) string.
  • Double quotes should be used around a string or date containing spaces.

F.1.6. Search: Multiple Criteria and Wildcards

Wildcards can be used in the <value> part of the syntax for strings. For example, to find all users beginning with m, enter m*.
You can perform a search having two criteria by using the Boolean operators AND and OR. For example:
Vms: users.name = m* AND status = Up
This query returns all running virtual machines for users whose names begin with "m".
Vms: users.name = m* AND tag = "paris-loc"
This query returns all virtual machines tagged with "paris-loc" for users whose names begin with "m".
When two criteria are specified without AND or OR, AND is implied. AND precedes OR, and OR precedes implied AND.
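For example, because AND is implied between adjacent criteria, the following two queries are equivalent (an illustrative pair based on the example queries above):

```
Vms: users.name = m* status = Up
Vms: users.name = m* AND status = Up
```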

F.1.7. Search: Determining Search Order

You can determine the sort order of the returned information by using sortby. Sort direction (asc for ascending, desc for descending) can be included.
For example:
events: severity > normal sortby time desc
This query returns all Events whose severity is higher than Normal, sorted by time (descending order).

F.1.8. Searching for Data Centers

The following table describes all search options for Data Centers.

Table F.4. Searching for Data Centers

Property (of resource or resource-type) Type Description (Reference)
Clusters.clusters-prop Depends on property type The property of the clusters associated with the data center.
name String The name of the data center.
description String A description of the data center.
type String The type of data center.
status List The availability of the data center.
sortby List Sorts the returned results by one of the resource properties.
page Integer The page number of results to display.
Example
Datacenter: type = nfs and status != up
This example returns a list of data centers with:
  • A storage type of NFS and status other than up

F.1.9. Searching for Clusters

The following table describes all search options for clusters.

Table F.5. Searching Clusters

Property (of resource or resource-type) Type Description (Reference)
Datacenter.datacenter-prop Depends on property type The property of the data center associated with the cluster.
Datacenter String The data center to which the cluster belongs.
name String The unique name that identifies the clusters on the network.
description String The description of the cluster.
initialized String True or False indicating the status of the cluster.
sortby List Sorts the returned results by one of the resource properties.
page Integer The page number of results to display.
Example
Clusters: initialized = true or name = Default
This example returns a list of clusters which are:
  • initialized; or
  • named Default

F.1.10. Searching for Hosts

The following table describes all search options for hosts.

Table F.6. Searching for Hosts

Property (of resource or resource-type) Type Description (Reference)
Vms.Vms-prop Depends on property type The property of the virtual machines associated with the host.
Templates.templates-prop Depends on property type The property of the templates associated with the host.
Events.events-prop Depends on property type The property of the events associated with the host.
Users.users-prop Depends on property type The property of the users associated with the host.
name String The name of the host.
status List The availability of the host.
cluster String The cluster to which the host belongs.
address String The unique name that identifies the host on the network.
cpu_usage Integer The percent of processing power used.
mem_usage Integer The percentage of memory used.
network_usage Integer The percentage of network usage.
load Integer Jobs waiting to be executed in the run-queue per processor, in a given time slice.
version Integer The version number of the operating system.
cpus Integer The number of CPUs on the host.
memory Integer The amount of memory available.
cpu_speed Integer The processing speed of the CPU.
cpu_model String The type of CPU.
active_vms Integer The number of VMs currently running.
migrating_vms Integer The number of VMs currently being migrated.
committed_mem Integer The percentage of committed memory.
tag String The tag assigned to the host.
type String The type of host.
datacenter String The data center to which the host belongs.
sortby List Sorts the returned results by one of the resource properties.
page Integer The page number of results to display.
Example
Hosts: cluster = Default and Vms.os = rhel6
This example returns a list of hosts which:
  • Are part of the Default cluster and host virtual machines running the Red Hat Enterprise Linux 6 operating system.

F.1.11. Searching for Networks

The following table describes all search options for networks.

Table F.7. Searching for Networks

Property (of resource or resource-type) Type Description (Reference)
Cluster_network.clusternetwork-prop Depends on property type The property of the cluster associated with the network.
Host_Network.hostnetwork-prop Depends on property type The property of the host associated with the network.
name String The human readable name that identifies the network.
description String Keywords or text describing the network, optionally used when creating the network.
vlanid Integer The VLAN ID of the network.
stp String Whether Spanning Tree Protocol (STP) is enabled or disabled for the network.
mtu Integer The maximum transmission unit for the logical network.
vmnetwork String Whether the network is only used for virtual machine traffic.
datacenter String The data center to which the network is attached.
sortby List Sorts the returned results by one of the resource properties.
page Integer The page number of results to display.
Example
Network: mtu > 1500 and vmnetwork = true
This example returns a list of networks:
  • with a maximum transmission unit greater than 1500 bytes
  • which are set up for use by only virtual machines.

F.1.12. Searching for Storage

The following table describes all search options for storage.

Table F.8. Searching for Storage

Property (of resource or resource-type) Type Description (Reference)
Hosts.hosts-prop Depends on property type The property of the hosts associated with the storage.
Clusters.clusters-prop Depends on property type The property of the clusters associated with the storage.
name String The unique name that identifies the storage on the network.
status String The status of the storage domain.
datacenter String The data center to which the storage belongs.
type String The type of the storage.
size Integer The size of the storage.
used Integer The amount of the storage that is used.
committed Integer The amount of the storage that is committed.
sortby List Sorts the returned results by one of the resource properties.
page Integer The page number of results to display.
Example
Storage: size > 200 or used < 50
This example returns a list of storage with:
  • total storage space greater than 200 GB; or
  • used storage space less than 50 GB.

F.1.13. Searching for Disks

The following table describes all search options for disks.

Table F.9. Searching for Disks

Property (of resource or resource-type) Type Description (Reference)
Datacenters.datacenters-prop Depends on property type The property of the data centers associated with the disk.
Storages.storages-prop Depends on property type The property of the storage associated with the disk.
alias String The human readable name that identifies the storage on the network.
description String Keywords or text describing the disk, optionally used when creating the disk.
provisioned_size Integer The virtual size of the disk.
size Integer The size of the disk.
actual_size Integer The actual size allocated to the disk.
creation_date Integer The date the disk was created.
bootable String Whether the disk can or cannot be booted. Valid values are one of 0, 1, yes, or no.
shareable String Whether the disk can or cannot be attached to more than one virtual machine at a time. Valid values are one of 0, 1, yes, or no.
format String The format of the disk. Can be one of unused, unassigned, cow, or raw.
status String The status of the disk. Can be one of unassigned, ok, locked, invalid, or illegal.
disk_type String The type of the disk. Can be one of image or lun.
number_of_vms Integer The number of virtual machine(s) to which the disk is attached.
vm_names String The name(s) of the virtual machine(s) to which the disk is attached.
quota String The name of the quota enforced on the virtual disk.
sortby List Sorts the returned results by one of the resource properties.
page Integer The page number of results to display.
Example
Disks: format = cow and provisioned_size > 8
This example returns a list of virtual disks with:
  • the QCOW format, also known as thin provisioning; and
  • an allocated disk size greater than 8 GB.

F.1.14. Searching for Volumes

The following table describes all search options for volumes.

Table F.10. Searching for Volumes

Property (of resource or resource-type) Type Description (Reference)
Volume.cluster-prop Depends on property type The property of the clusters associated with the volume.
Cluster String The name of the cluster associated with the volume.
name String The human readable name that identifies the volume.
type String Can be one of distribute, replicate, distributed_replicate, stripe, or distributed_stripe.
transport_type Integer Can be one of TCP or RDMA.
replica_count Integer Number of replicas.
stripe_count Integer Number of stripes.
status String The status of the volume. Can be one of Up or Down.
sortby List Sorts the returned results by one of the resource properties.
page Integer The page number of results to display.
Example
Volume: transport_type = rdma and stripe_count >= 2
This example returns a list of volumes with:
  • Transport type set to RDMA; and
  • with 2 or more stripes.

F.1.15. Searching for Virtual Machines

The following table describes all search options for virtual machines (VMs). VMs can be either virtual servers or virtual desktops.

Table F.11. Searching for Virtual Machines

Property (of resource or resource-type) Type Description (Reference)
Hosts.hosts-prop Depends on property type The property of the hosts associated with the virtual machine.
Templates.templates-prop Depends on property type The property of the templates associated with the virtual machine.
Events.events-prop Depends on property type The property of the events associated with the virtual machine.
Users.users-prop Depends on property type The property of the users associated with the virtual machine.
Storage.storage-prop Depends on the property type The property of storage devices associated with the virtual machine.
Vnic.mac-prop Depends on the property type The property of the MAC address associated with the virtual machine.
name String The name of the virtual machine.
status List The availability of the virtual machine.
ip Integer The IP address of the virtual machine.
uptime Integer The number of minutes that the virtual machine has been running.
domain String The domain (usually Active Directory domain) that groups these machines.
os String The operating system selected when the virtual machine was created.
creationdate Date The date on which the virtual machine was created.
address String The unique name that identifies the virtual machine on the network.
cpu_usage Integer The percent of processing power used.
mem_usage Integer The percentage of memory used.
network_usage Integer The percentage of network used.
memory Integer The maximum memory defined.
apps String The applications currently installed on the virtual machine.
cluster List The cluster to which the virtual machine belongs.
pool List The virtual machine pool to which the virtual machine belongs.
loggedinuser String The name of the user currently logged in to the virtual machine.
tag List The tags to which the virtual machine belongs.
datacenter String The data center to which the virtual machine belongs.
type List The virtual machine type (server or desktop).
quota String The name of the quota associated with the virtual machine.
description String Keywords or text describing the virtual machine, optionally used when creating the virtual machine.
sortby List Sorts the returned results by one of the resource properties.
page Integer The page number of results to display.
Example
Vms: template.name = Win* and user.name = ""
This example returns a list of VMs, where:
  • The template on which the virtual machine is based begins with Win and the virtual machine is assigned to any user.
Example
Vms: cluster = Default and os = windowsxp
This example returns a list of VMs, where:
  • The cluster to which the virtual machine belongs is named Default and the virtual machine is running the Windows XP operating system.

F.1.16. Searching for Pools

The following table describes all search options for Pools.

Table F.12. Searching for Pools

Property (of resource or resource-type) Type Description (Reference)
name String The name of the pool.
description String The description of the pool.
type List The type of pool.
sortby List Sorts the returned results by one of the resource properties.
page Integer The page number of results to display.
Example
Pools: type = automatic
This example returns a list of pools with:
  • Type of automatic

F.1.17. Searching for Templates

The following table describes all search options for templates.

Table F.13. Searching for Templates

Property (of resource or resource-type) Type Description (Reference)
Vms.Vms-prop String The property of the virtual machines associated with the template.
Hosts.hosts-prop String The property of the hosts associated with the template.
Events.events-prop String The property of the events associated with the template.
Users.users-prop String The property of the users associated with the template.
name String The name of the template.
domain String The domain of the template.
os String The type of operating system.
creationdate Integer The date on which the template was created. Date format is mm/dd/yy.
childcount Integer The number of VMs created from the template.
mem Integer Defined memory.
description String The description of the template.
status String The status of the template.
cluster String The cluster associated with the template.
datacenter String The data center associated with the template.
quota String The quota associated with the template.
sortby List Sorts the returned results by one of the resource properties.
page Integer The page number of results to display.
Example
Template: Events.severity >= normal and Vms.uptime > 0
This example returns a list of templates, where:
  • Events of normal or greater severity have occurred on VMs derived from the template, and the VMs are still running.

F.1.18. Searching for Users

The following table describes all search options for users.

Table F.14. Searching for Users

Property (of resource or resource-type) Type Description (Reference)
Vms.Vms-prop Depends on property type The property of the virtual machines associated with the user.
Hosts.hosts-prop Depends on property type The property of the hosts associated with the user.
Templates.templates-prop Depends on property type The property of the templates associated with the user.
Events.events-prop Depends on property type The property of the events associated with the user.
name String The name of the user.
lastname String The last name of the user.
usrname String The unique name of the user.
department String The department to which the user belongs.
group String The group to which the user belongs.
title String The title of the user.
status String The status of the user.
role String The role of the user.
tag String The tag to which the user belongs.
pool String The pool to which the user belongs.
sortby List Sorts the returned results by one of the resource properties.
page Integer The page number of results to display.
Example
Users: Events.severity > normal and Vms.status = up or Vms.status = pause
This example returns a list of users where:
  • Events of greater than normal severity have occurred on their virtual machines AND the virtual machines are still running; or
  • The users' virtual machines are paused.

F.1.19. Searching for Events

The following table describes all search options you can use to search for events. Auto-completion is offered for many options as appropriate.

Table F.15. Searching for Events

Property (of resource or resource-type) Type Description (Reference)
Vms.Vms-prop Depends on property type The property of the virtual machines associated with the event.
Hosts.hosts-prop Depends on property type The property of the hosts associated with the event.
Templates.templates-prop Depends on property type The property of the templates associated with the event.
Users.users-prop Depends on property type The property of the users associated with the event.
Clusters.clusters-prop Depends on property type The property of the clusters associated with the event.
Volumes.Volumes-prop Depends on property type The property of the volumes associated with the event.
type List The type of the event.
severity List The severity of the event: Warning/Error/Normal.
message String The description of the event type.
time List The day on which the event occurred.
usrname String The user name associated with the event.
event_host String The host associated with the event.
event_vm String The virtual machine associated with the event.
event_template String The template associated with the event.
event_storage String The storage associated with the event.
event_datacenter String The data center associated with the event.
event_volume String The volume associated with the event.
correlation_id Integer The identification number of the event.
sortby List Sorts the returned results by one of the resource properties.
page Integer The page number of results to display.
Example
Events: Vms.name = testdesktop and Hosts.name = gonzo.example.com
This example returns a list of events, where:
  • The event occurred on the virtual machine named testdesktop while it was running on the host gonzo.example.com.
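The same search syntax can also be submitted programmatically: the RHEV REST API accepts it through the search query parameter on collection URLs such as /api/events. The following sketch only builds and URL-encodes such a request; the Manager host name is an example:

```python
from urllib.parse import urlencode

def build_search_url(base_url, collection, query):
    """Embed an Administration Portal-style search query in the
    'search' query parameter of a REST API collection URL."""
    return "{}/{}?{}".format(base_url.rstrip("/"), collection,
                             urlencode({"search": query}))

# Same query as the example above; manager.example.com is illustrative.
url = build_search_url("https://manager.example.com/api", "events",
                       "Vms.name = testdesktop and Hosts.name = gonzo.example.com")
print(url)
# https://manager.example.com/api/events?search=Vms.name+%3D+testdesktop+and+Hosts.name+%3D+gonzo.example.com
```

An HTTP client would then issue a GET request against this URL with appropriate authentication; the query text itself is identical to what you would type in the search bar.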

F.2. Bookmarks

F.2.1. Saving a Query String as a Bookmark

Summary
A bookmark can be used to remember a search query, and can be shared with other users.

Procedure F.1. Saving a Query String as a Bookmark

  1. Enter the desired search query in the search bar and perform the search.
  2. Click the star-shaped Bookmark button to the right of the search bar to open the New Bookmark window.
    Bookmark Icon

    Figure F.1. Bookmark Icon

  3. Enter the Name of the bookmark.
  4. Edit the Search string field (if applicable).
  5. Click OK to save the query as a bookmark and close the window.
  6. The search query is saved and is displayed in the Bookmarks pane.
Result
You have saved a search query as a bookmark for future reuse. Use the Bookmarks pane to find and select the bookmark.

F.2.2. Editing a Bookmark

Summary
You can modify the name and search string of a bookmark.

Procedure F.2. Editing a Bookmark

  1. Click the Bookmarks tab on the far left side of the screen.
  2. Select the bookmark you wish to edit.
  3. Click the Edit button to open the Edit Bookmark window.
  4. Change the Name and Search string fields as necessary.
  5. Click OK to save the edited bookmark.
Result
You have edited a bookmarked search query.

F.2.3. Deleting a Bookmark

Summary
When a bookmark is no longer needed, remove it.

Procedure F.3. Deleting a Bookmark

  1. Click the Bookmarks tab on the far left side of the screen.
  2. Select the bookmark you wish to remove.
  3. Click the Remove button to open the Remove Bookmark window.
  4. Click OK to remove the selected bookmark.
Result
You have removed a bookmarked search query.

F.3. Tags

F.3.1. Using Tags to Customize Interactions with Red Hat Enterprise Virtualization

After your Red Hat Enterprise Virtualization platform is set up and configured to your requirements, you can customize the way you work with it using tags. Tags provide one key advantage to system administrators: they allow system resources to be arranged into groups or categories. This is useful when many objects exist in the virtualization environment and the administrator would like to concentrate on a specific set of them.
This section describes how to create and edit tags, assign them to hosts or virtual machines, and search using tags as criteria. Tags can be arranged in a hierarchy that matches the structure of your organization.
Tags can be created, modified, and removed in the Administration Portal using the Tags pane.

F.3.2. Creating a Tag

Summary
Create a tag.

Procedure F.4. Creating a Tag

  1. Click the Tags tab on the left side of the screen.
  2. Select the node under which you wish to create the tag. For example, to create it at the highest level, click the root node.
  3. Click the New button to open the New Tag window.
  4. Enter the Name and Description of the new tag.
  5. Click OK to create the tag.
Result
The new tag is created and displays on the Tags tab.

F.3.3. Modifying a Tag

Summary
You can edit the name and description of a tag.

Procedure F.5. Modifying a Tag

  1. Click the Tags tab on the left side of the screen.
  2. Select the tag you wish to modify.
  3. Click Edit to open the Edit Tag window.
  4. Change the Name and Description fields as necessary.
  5. Click OK to save the edited tag.
Result
You have modified the properties of a tag.

F.3.4. Deleting a Tag

Summary
When a tag is no longer needed, remove it.

Procedure F.6. Deleting a Tag

  1. Click the Tags tab on the left side of the screen.
  2. Select the tag you wish to delete.
  3. Click Remove to open the Remove Tag(s) window. The message warns you that removing the tag will also remove all descendants of the tag.
  4. Click OK to delete the selected tag.
Result
You have removed the tag and all its descendants. The tag is also removed from all the objects that it was attached to.

F.3.5. Adding and Removing Tags to and from Objects

Summary
You can assign tags to and remove tags from hosts, virtual machines, and users.

Procedure F.7. Adding and Removing Tags to and from Objects

  1. Use the resource tab, tree mode, or the search function to find and select the object(s) you wish to tag or untag.
  2. Click the Assign Tags button to open the Assign Tags window.
  3. Select the check box to assign a tag to the object, or clear the check box to detach the tag from the object.
  4. Click OK.
Result
The specified tag is now added or removed as a custom property of the selected object(s).

F.3.6. Searching for Objects Using Tags

  • Enter a search query using tag as the property and the desired value or set of values as criteria for the search. For example, Hosts: tag = production returns a list of all hosts tagged with production.
    The objects tagged with the specified criteria are listed in the results list.

Appendix G. Branding

G.1. Branding

G.1.1. Re-Branding the Manager

Various aspects of the Red Hat Enterprise Virtualization Manager can be customized, such as the icons used by and text displayed in pop-up windows and the links shown on the Welcome Page. This allows you to re-brand the Manager and gives you fine-grained control over the end look and feel presented to administrators and users.
The files required to customize the Manager are located in the /etc/ovirt-engine/branding/ directory on the system on which the Manager is installed. The files comprise a set of cascading style sheet files that are used to style various aspects of the graphical user interface and a set of properties files that contain messages and links that are incorporated into various components of the Manager.
To customize a component, edit the file for that component and save the changes. The next time you open or refresh that component, the changes will be applied.

G.1.2. Login Screen

The login screen is shared by the Administration Portal and the User Portal. The elements of the login screen that can be customized are as follows:
  • The border
  • The header image on the left
  • The header image on the right
  • The header text
The classes for the login screen are located in common.css.

G.1.3. Administration Portal Screen

The administration portal screen is the main screen that is shown when you log into the Administration Portal. The elements of the administration portal screen that can be customized are as follows:
  • The logo
  • The left background image
  • The center background image
  • The right background image
  • The text to the right of the logo
The classes for the administration portal screen are located in web_admin.css.

G.1.4. User Portal Screen

The user portal screen is the screen that is shown when you log into the User Portal. The elements of the user portal screen that can be customized are as follows:
  • The logo
  • The center background image
  • The right background image
  • The border around the main grid
  • The text above the Logged in user label
The classes for the user portal screen are located in user_portal.css.

G.1.5. Pop-Up Windows

Pop-up windows are all windows in the Manager that allow you to create, edit or update an entity such as a host or virtual machine. The elements of pop-up windows that can be customized are as follows:
  • The border
  • The header image on the left
  • The header center image (repeated)
The classes for pop-up windows are located in common.css.

G.1.6. Tabs

There are two types of tab elements in the User Portal: the main tabs for switching between the Basic view and the Extended view, and the tabs on the left side of the screen when the Extended view is selected. Many pop-up windows in the Administration Portal also include tabs. The elements of these tabs that can be customized are as follows:
  • Active
  • Inactive
The classes for tabs are located in common.css and user_portal.css.

G.1.7. The Welcome Page

The Welcome Page is the page that is initially displayed when you visit the homepage of the Manager. In addition to customizing the overall look and feel, you can also make other changes such as adding links to the page for additional documentation or internal websites by editing a template file. The elements of the Welcome Page that can be customized are as follows:
  • The page title
  • The header (left, center and right)
  • The error message
  • The link to forward and the associated message for that link
The classes for the Welcome Page are located in welcome_style.css.
The Template File
The template file for the Welcome Page is an HTML fragment named welcome_page.template that must not contain HTML, HEAD, or BODY tags. The file is inserted directly into the Welcome Page itself and acts as a container for the content that is displayed there, so you edit this file to add new links or change the content. The template file can also contain placeholder text such as {user_portal}, which is replaced by the corresponding text in the messages.properties file when the Welcome Page is processed.
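Conceptually, the placeholder replacement resembles simple string substitution. The following sketch only illustrates the mechanism; the fragment and message value below are invented examples, not the actual contents of welcome_page.template or messages.properties:

```python
def render_template(template_text, messages):
    """Replace {placeholder} tokens with the corresponding values from a
    messages mapping, mimicking how placeholders in welcome_page.template
    are resolved from messages.properties."""
    for key, value in messages.items():
        template_text = template_text.replace("{" + key + "}", value)
    return template_text

# Hypothetical fragment and message text, for illustration only.
snippet = '<a href="UserPortal">{user_portal}</a>'
print(render_template(snippet, {"user_portal": "User Portal"}))
# <a href="UserPortal">User Portal</a>
```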

G.1.8. The Page Not Found Page

The Page Not Found page is displayed when you open a link to a page that cannot be found in the Red Hat Enterprise Virtualization Manager. The elements of the Page Not Found page that can be customized are as follows:
  • The page title
  • The header (left, center and right)
  • The error message
  • The link to forward and the associated message for that link
The classes for the Page Not Found page are located in welcome_style.css.

Appendix H. Revision History

Revision History
Revision 3.5-95, Wed 27 Jul 2016, Red Hat Enterprise Virtualization Documentation Team
BZ#1359544 - Updated the links to the Storage Administration Guide and DM Multipath Guide.
Revision 3.5-94, Wed 29 Jun 2016, Red Hat Enterprise Virtualization Documentation Team
BZ#1303812 - Added information about sealing virtual machines.
Revision 3.5-93, Tue 31 May 2016, Red Hat Enterprise Virtualization Documentation Team
BZ#1330203 - Added RHEL 7 changes for sealing virtual machines.
BZ#1180603 - Added the template seal procedure for Windows 2012.
Revision 3.5-92, Wed 25 May 2016, Red Hat Enterprise Virtualization Documentation Team
BZ#1339407 - Updated the iSCSI chapter to remove outdated content and make clear that all paths to targets must be logged in.
Revision 3.5-91, Mon 08 Feb 2016, Red Hat Enterprise Virtualization Documentation Team
BZ#1124128 - Added NUMA content to the guide.
Revision 3.5-90, Mon 25 Jan 2016, Red Hat Enterprise Virtualization Documentation Team
BZ#1276715 - Removed references to UserVmManager being able to migrate virtual machines.
BZ#1292309 - Added information on bridge_opts parameters.
Revision 3.5-89, Thu 10 Dec 2015, Red Hat Enterprise Virtualization Documentation Team
BZ#1271237 - Updated the Editing VM Pool content.
Revision 3.5-88, Wed 09 Dec 2015, Red Hat Enterprise Virtualization Documentation Team
BZ#1285598 - Added procedures for 'Migrating the Data Warehouse Database to a Remote Server Database' and 'Migrating the Reports Database to a Remote Server Database'.
Revision 3.5-87, Wed 02 Dec 2015, Red Hat Enterprise Virtualization Documentation Team
BZ#1285598 - Added procedure 'Migrating the Engine Database to a Remote Server Database'.
Revision 3.5-86, Tue 01 Dec 2015, Red Hat Enterprise Virtualization Documentation Team
BZ#1281642 - Added a step to disable all repositories after subscribing to a pool id.
BZ#1040550 - Updated the SPICE log file content.
BZ#1202130 - Added information on creating a public keystore file.
Revision 3.5-85, Mon 26 Oct 2015, Red Hat Enterprise Virtualization Documentation Team
BZ#1156347 - Added note on updating logcollector.conf if the SSL certificate has been replaced.
BZ#1134589 - Added information on changing the IP address and fully qualified domain name of hosts.
BZ#1259804 - Updated browser requirements.
BZ#1092388 - Updated procedures for restoring Data Warehouse and Reports.
BZ#1234784 - Added information on the Sync MoM Policy button.
BZ#1175406 - Clarified the Hypervisor upgrade procedure.
BZ#1247486 - Updated the upgrade instructions.
BZ#1249163 - Added the word optional to the SNMP and CIM port descriptions.
Revision 3.5-84, Mon 07 Sep 2015, Red Hat Enterprise Virtualization Documentation Team
BZ#1259568 - Added a note that if cisco_ucs is used as the power management device on Red Hat Enterprise Linux 7 hosts, ssl_insecure=1 needs to be appended to the options field.
Revision 3.5-83, Fri 04 Sep 2015, Red Hat Enterprise Virtualization Documentation Team
Minor updates for RHEV 3.5.4.
Revision 3.5-82, Tue 01 Sep 2015, Red Hat Enterprise Virtualization Documentation Team
BZ#1228249 - Added information about the orange icon that reminds users of pending changes to the VM.
BZ#1240212 - Updated the RHEV API support statement.
Revision 3.5-81, Fri 21 Aug 2015, Red Hat Enterprise Virtualization Documentation Team
Added a note in the Migrating Virtual Machines Between Hosts section to link to the supported use case of live migrating virtual machines between different clusters.
Revision 3.5-80, Tue 04 Aug 2015, Red Hat Enterprise Virtualization Documentation Team
BZ#1229797 - Updated information on setting host CPU at the cluster level.
BZ#1229797 - Added the virtualized CPU support limits for Red Hat Enterprise Linux 7 guests.
BZ#1213305 - Updated the introduction to storage topic.
Revision 3.5-79, Tue 21 Jul 2015, Red Hat Enterprise Virtualization Documentation Team
BZ#1231025 - Updated considerations for upgrading to 3.5.
BZ#1234190 - Updated information on exporting a virtual machine to correct information about template dependency.
BZ#1076109 - Added details on editing a storage domain.
BZ#1236427 - Added instructions for resetting the admin@internal password.
BZ#1228591 - Updated CPU Pinning topology related sections to match the GUI.
BZ#1229486 - Updated SPICE support information.
BZ#1242738 - Updated the Supported Virtual Machine Operating Systems table.
Revision 3.5-78, Tue 02 Jun 2015, Red Hat Enterprise Virtualization Documentation Team
BZ#1147960 - Updated information regarding user permissions when exporting and importing virtual machines.
BZ#1218773 - Removed 'Connecting to the Data Warehouse Database' from the guide.
BZ#1216289 - Added a link to the RHEV Upgrade Helper lab application.
BZ#1073583 - Added information about the parameters in the Application Settings section of the ovirt-engine-dwhd.conf file.
BZ#1215782 - Added a link to the Gluster/RHEV compatibility matrix.
BZ#1190653 - Updated several cluster settings to match the latest GUI.
Revision 3.5-77, Tue 19 May 2015, Red Hat Enterprise Virtualization Documentation Team
BZ#1208114 - Included information about database-only backups as an alternative to full backups.
BZ#1213215 - Clarified information on extending the size of a virtual disk.
BZ#1214990 - Updated JasperReports System Requirements to include a link to the JasperReports documentation.
BZ#1094002 - Removed the link to a KCS article that is no longer required.
BZ#1213845 - Updated the preallocated disk description to clarify when users should choose preallocated over thin provision.
BZ#1047377 - Added steps for updating the guest agents and drivers.
Revision 3.5-76, Tue 28 Apr 2015, Red Hat Enterprise Virtualization Documentation Team
BZ#1124127 - Added information and procedures for configuring kdump integration.
BZ#1214552 - Added steps for starting and enabling the ovirt-guest-agent service.
BZ#1207311 - Clarified a note that all hosts in a data center must have access to the storage device before the storage domain can be configured.
BZ#1209333 - Checked all repository lists and updated terminology for RHSM.
BZ#1206392 - Changed all instances of 'Red Hat Storage' to 'Red Hat Gluster Storage'.
BZ#1172295 - Clarified reference to Red Hat Enterprise Linux OpenStack Platform when assigning security groups to vNIC profiles.
BZ#1193251 - Updated the browser and client requirements.
Revision 3.5-75, Tue 21 Apr 2015, Red Hat Enterprise Virtualization Documentation Team
BZ#1075466 - Added information related to CPU hot plugging.
BZ#1179653 - Updated explanation of cluster resilience policy settings and virtual machine high availability settings.
BZ#1198488 - Removed the regedit workaround from the sealing a Windows template section.
BZ#1147258 - Added explanation for virtual size and actual size.
BZ#1204579 - Updated yum-config-manager commands with subscription-manager commands for consistency.
BZ#1035366 - Clarified that installation and configuration of QXL drivers is applicable to RHEL 5 systems as of RHEL 5.4, and unnecessary in RHEL 6.0 onwards.
BZ#1200313 - Updated live migration information.
Revision 3.5-74, Wed 18 Mar 2015, Red Hat Enterprise Virtualization Documentation Team
BZ#1201524 - Updated the generic LDAP procedure and the directory services section to clarify the domain management tool and the new generic LDAP provider implementation.
Revision 3.5-73, Wed 11 Mar 2015, Red Hat Enterprise Virtualization Documentation Team
BZ#1100832 - Updated information for the Wipe After Delete action.
BZ#1195756 - Added information pertaining to creation of auto-generated snapshot as a result of live storage migration.
BZ#1180932 - Updated the file names for the LDAP extension configuration files and provided an example of configuring Kerberos for single sign-on.
BZ#1151893 - Updated screen shots for the Administration Guide.
BZ#1126054 - Added procedures for increasing NFS and SAN storage domain sizes.
BZ#1160040 - Added procedure for configuring iSCSI multipathing.
BZ#1194099 - Added the 'Provide custom serial number policy' check box and option description.
BZ#1172331 - Updated upgrade procedures and added the 'Reinstalling Virtualization Hosts' topic.
BZ#1194142 - Updated and reworked the cluster policy procedure.
BZ#1075497 - Created a new topic on 'Configuring the Red Hat Enterprise Virtualization Manager to Send SNMP Traps'.
BZ#1092741 - Added an admonition pointing to the knowledgebase solution for resizing LUNs.
Revision 3.5-72, Mon 09 Feb 2015, Julie Wu
BZ#1075477 - Minor edits on single sign-on.
Revision 3.5-71, Mon 09 Feb 2015, Tahlia Richardson
BZ#1120896 - Updated the procedure for adding a Foreman host provider host.
Revision 3.5-70, Fri 06 Feb 2015, Lucy Bopf
BZ#1189298 - Reinstated notices regarding Technology Preview for OpenStack integration.
Revision 3.5-69, Mon 02 Feb 2015, Julie Wu
BZ#1187363 - Removed section: Manually Backing Up and Restoring the Red Hat Enterprise Virtualization Manager.
Revision 3.5-68, Fri 30 Jan 2015, Lucy Bopf
BZ#1146079 - Updated the options and examples for the engine-backup command.
Revision 3.5-67, Thu 22 Jan 2015, Lucy Bopf
BZ#1075497 - Added a procedure for configuring Simple Network Management Protocol (SNMP) traps on the Manager.
Revision 3.5-66, Mon 19 Jan 2015, David Ryan
BZ#1153351 - Updated the supported management client configurations.
Revision 3.5-65, Sun 18 Jan 2015, Laura Novich
BZ#1156015 - Changed some wording and location of text.
Revision 3.5-64, Tue 13 Jan 2015, Laura Novich
BZ#1156015 - Added reference to Jasper documentation.
Revision 3.5-63, Mon 05 Jan 2015, David Ryan
BZ#1166478 - Updated ovirt-engine configuration section.
Revision 3.5-62, Mon 05 Jan 2015, David Ryan
BZ#1094276 - Updated configuration settings for virtual disks.
Revision 3.5-61, Sun 04 Jan 2015, Laura Novich
BZ#1156015 - Added migration of DWH and Reports.
Revision 3.5-60, Wed 31 Dec 2014, Laura Novich
BZ#1148745 - Edited the DWH section following review.
Revision 3.5-59, Mon 22 Dec 2014, Laura Novich
BZ#1148745 - Edited the DWH section following review.
Revision 3.5-58, Fri 19 Dec 2014, David Ryan
BZ#1156286 - Redesigned the content structure to improve clarity.
Revision 3.5-57, Fri 12 Dec 2014, Julie Wu
BZ#1164246 - Clarified that there needs to be one user created in the directory server with permissions to browse all users and groups.
Revision 3.5-56, Thu 11 Dec 2014, Laura Novich
BZ#1148745 - Edited the DWH section following review.
Revision 3.5-55, Thu 11 Dec 2014, Tahlia Richardson
BZ#1172299 - Updated the command for saving iptables rules persistently.
Revision 3.5-54, Thu 11 Dec 2014, Julie Wu
BZ#1124081 - Created a topic on generic LDAP provider.
Revision 3.5-53, Tue 09 Dec 2014, Laura Novich
BZ#1148745 - Edited the section following review.
Revision 3.5-52, Mon 08 Dec 2014, Julie Wu
BZ#1161209 - Added a warning note on /etc/pki permissions.
BZ#1148298 - Updated the third party CA section with the chain order reminder.
BZ#1136187 - Updated the read-only disk section to clarify the virtual machine and disk relationship.
Revision 3.5-51, Sun 07 Dec 2014, Laura Novich
BZ#1148745 - Added User History and an accompanying table to the chapter in a new section.
Revision 3.5-50, Mon 01 Dec 2014, Laura Novich
BZ#1148745 - Merged and rebased topics in the Reports chapter.
Revision 3.5-49, Sun 30 Nov 2014, Laura Novich
Merged and rebased topics in the Reports chapter.
Revision 3.5-48, Tue 25 Nov 2014, Tahlia Richardson
BZ#1120921 - Updated the Initial Run row of the Run Once settings table.
BZ#1149970 - Adjusted the description of firewall port 6100.
Revision 3.5-47, Mon 24 Nov 2014, Laura Novich
BZ#1121840 - Added a new section to the Proxy chapter covering the installation and configuration of websocket proxies.
Revision 3.5-46, Thu 20 Nov 2014, Andrew Dahms
BZ#1075511 - Added a note explaining why sealing virtual machines before creating a template is required.
Revision 3.5-45, Wed 19 Nov 2014, Julie Wu
BZ#1163517 - Updated the section with the download link for USBFilterEditor.msi.
Revision 3.5-44, Tue 18 Nov 2014, Lucy Bopf
BZ#1157459 - Corrected the procedure for setting a Windows product key.
Revision 3.5-43, Tue 18 Nov 2014, Tahlia Richardson
BZ#1149970 - Added rows to the Manager firewall table for ports 6100 and 7410.
BZ#1157934 - Adjusted the layout of all of the firewall tables.
Revision 3.5-42, Sun 09 Nov 2014, Laura Novich
BZ#1123921 - Added new instructions to include an option for setting a host network bridge.
Revision 3.5-41, Thu 06 Nov 2014, Andrew Burden
BZ#1100832 - Added new topic 'Settings to Wipe Virtual Disks After Deletion'.
Revision 3.5-40, Wed 05 Nov 2014, Andrew Burden
BZ#1100832 - Updated information regarding the 'Wipe After Delete' check box in 'Explanation of Settings in the New Virtual Disk Window' and 'Creating Floating Virtual Disks'.
Revision 3.5-39, Tue 04 Nov 2014, Laura Novich
BZ#1152527 - Fixed content following SME review.
Revision 3.5-38, Tue 04 Nov 2014, Laura Novich
BZ#1140832 - Revised topic following QA review.
BZ#1152527 - Rewrote the Networking chapter with the first round of changes added.
Revision 3.5-37, Tue 04 Nov 2014, Lucy Bopf
BZ#1138480 - Removed information suggesting that the default data center should not be removed.
BZ#1155377 - Revised section on installing and configuring a Squid proxy.
Revision 3.5-36, Thu 23 Oct 2014, David Ryan
BZ#1099214 - Updated supported PostgreSQL versions.
Revision 3.5-35, Wed 22 Oct 2014, Laura Novich
BZ#1140832 - Fixed all associated topics.
BZ#1140805 - Changed topic following QA feedback.
Revision 3.5-34, Wed 22 Oct 2014, Andrew Burden
BZ#1095110 - Added an admonition to 'Optimizing Red Hat Gluster Storage Volumes to Store Virtual Machine Images': in environments that use a Replicate volume across three or more nodes, the volume must be optimized for virtual storage to avoid data inconsistencies (split-brain) arising across the nodes.
Revision 3.5-32, Mon 20 Oct 2014, Laura Novich
BZ#1120970 - Fixed the topic.
Revision 3.5-31, Mon 20 Oct 2014, Julie Wu
BZ#1121487 - Added a note on setting up a custom power management device.
Revision 3.5-30, Sun 19 Oct 2014, Laura Novich
BZ#1120970 - Added entry for Vnic and Storage properties to the VM search table.
Revision 3.5-29, Fri 17 Oct 2014, Tahlia Richardson
BZ#1120916 - Expanded the Random Generator Settings Explained table with further options, and added a row to the New Cluster Window table.
Revision 3.5-28, Fri 10 Oct 2014, Andrew Dahms
BZ#1108245 - Added a description of and procedures outlining how to use the backup and restore API.
Revision 3.5-27, Thu 09 Oct 2014, David Ryan
BZ#1150028 - Updated the display of vNIC entries.
BZ#1147596 - Updated table of permitted storage combinations.
BZ#1140472 - Performed a full QA pass on spelling and grammar.
BZ#1091802 - Improved the explanation of virtual disk allocation.
BZ#1149983 - Updated multiple code example references to improve clarity and localization.
Revision 3.5-26, Wed 08 Oct 2014, Andrew Dahms
BZ#1150334 - Added entries for locale, language, and keyboard to the Initial Run tab for virtual machines.
Revision 3.5-25, Tue 07 Oct 2014, Julie Wu
BZ#1147294 - Removed outdated reference to older RHEL versions.
Removed Beta references and added 3.5 to the compatibility list.
Revision 3.5-24, Fri 03 Oct 2014, Tahlia Richardson
BZ#1120945 - Added a description of the migration progress bar to the section on migrating virtual machines.
BZ#1143768 - Added a note clarifying the difference between exporting virtual machines provisioned or cloned from a template.
BZ#1144260 - Expanded the steps for adding a floating virtual disk.
Revision 3.5-23, Thu 02 Oct 2014, Andrew Dahms
BZ#1123960 - Outlined how to create and work with storage quality of service.
BZ#1123956 - Outlined how to create and work with CPU quality of service.
BZ#1098595 - Created a new section on the Configure window.
Revision 3.5-22, Mon 29 Sep 2014, Laura Novich
BZ#1140805 - Updated the instructions for using the Sessions tab.
Revision 3.5-21, Mon 29 Sep 2014, Laura Novich
BZ#1140832 - Updated the list of requirements for restoring a backup.
Revision 3.5-20, Tue 23 Sep 2014, Andrew Dahms
BZ#1081296 - Updated the syntax of the engine-manage-domains command.
Revision 3.5-19, Fri 19 Sep 2014, Tahlia Richardson
BZ#1143776 - Removed references to RHN classic and changed "Red Hat Network" to "Content Delivery Network".
BZ#1090226 - Updated maximum guest RAM to 4000 GB.
BZ#1094766 - Added a note about squid proxy connection timeout.
Revision 3.5-18, Thu 18 Sep 2014, Lucy Bopf
BZ#1123972 - Updated screen shots for Administration Portal.
Revision 3.5-17, Thu 18 Sep 2014, Andrew Burden
Brewing for 3.5-Beta.
Revision 3.5-16, Thu 11 Sep 2014, Julie Wu
BZ#1124077 - Edited the Editing Instance Types topic to reflect the new patch.
Revision 3.5-15, Tue 09 Sep 2014, Julie Wu
Building for Splash page.
Revision 3.5-14, Fri 29 Aug 2014, Lucy Bopf
BZ#1120950 - Added the new option to enable SPICE clipboard copy and paste to the virtual machine console options.
BZ#1074346 - Added a warning that disks with journaled file systems should not be marked read-only.
Revision 3.5-13, Fri 29 Aug 2014, Tahlia Richardson
BZ#1124077 - Added a set of new topics detailing instance types.
Revision 3.5-12, Mon 25 Aug 2014, Tahlia Richardson
BZ#1123927 - Removed the Linux Bridge option from the procedure for Adding an OpenStack Networking (Neutron) Instance for Network Provisioning.
BZ#1120944 - Amended virtual machine name maximum character limit to 255.
BZ#1124819 - Changed "Ovirt" to "oVirt" where necessary.
BZ#1120916 - Added a new topic detailing the Random Generator tab, and added Random Generator to lists of tabs in other topics.
BZ#1120918 - Added new rows to the table in Virtual Machine System Settings Explained, detailing the new 'custom serial number' option.
Revision 3.5-11, Mon 25 Aug 2014, Andrew Dahms
BZ#1122304 - Added a note on new command line options for installing the guest agents and drivers on Windows.
Revision 3.5-10, Wed 20 Aug 2014, Andrew Dahms
BZ#1131761 - Updated the list of options available in the Initial Run tab for virtual machines.
BZ#1131718 - Added a note about storage space requirements when deleting virtual machine snapshots.
Revision 3.5-9, Wed 20 Aug 2014, Lucy Bopf
BZ#1122338 - Created a new topic for reinstalling Virtualization Hosts.
BZ#1120924 - Added a note that the number of hosts and number of virtual machines in a cluster now appears in the results list.
BZ#1120999 - Added a note that SELinux status is now visible under the General tab of a host.
BZ#1129919 - Removed Technology Preview boxes from external provider topics.
Revision 3.5-8, Tue 19 Aug 2014, Andrew Dahms
BZ#1131301 - Restructured and updated the section on working with the engine-config command.
BZ#1130850 - Updated references to the engine-backup command.
Revision 3.5-7, Mon 18 Aug 2014, Tahlia Richardson
BZ#1120955 - Updated the section on 'editing a virtual machine' to reflect new behaviour.
BZ#1124078 - Added a procedure for the new Clone VM button in the Administration Portal.
Revision 3.5-6, Tue 05 Aug 2014, Andrew Dahms
BZ#1127943 - Updated the procedure for deleting virtual machine snapshots.
BZ#1081195 - Revised the sections on adding external providers.
Revision 3.5-5, Tue 22 Jul 2014, Lucy Bopf
BZ#1112933 - Updated section on required networks to reflect the actual virtual machine migration behavior when a required network becomes non-operational.
BZ#1111230 - Changed instances of rhevm-guest-agent to rhevm-guest-agent-common.
BZ#1114787 - Updated links to access.redhat.com to exclude '/site'.
BZ#1113950 - Updated definitions of User and Administrator roles to reflect their actual permissions in the User Portal.
BZ#1044912 - Updated the Application Provisioning Tool section to include a formal installation procedure, and more information regarding automatic reboot.
BZ#1061668 - Removed reference to vdsm bootstrap script.
Revision 3.5-4, Mon 30 Jun 2014, Lucy Bopf
BZ#1101580 - Updated description of chmod 0755 command to reflect the actual permissions granted.
BZ#1109383 - Added RHEL 7 to the list of compatible and supported virtual machine operating systems.
Revision 3.5-3, Thu 26 Jun 2014, Andrew Dahms
BZ#1096681 - Added a section outlining how to create new cluster policies.
BZ#1096666 - Added a note that the cluster policy must support VM affinity groups for these groups to take effect.
BZ#1093492 - Outlined the difference between the methods for sealing a Linux virtual machine.
BZ#1068759 - Added information on the name of services and location of configuration files for the guest agents.
Revision 3.5-2, Fri 06 Jun 2014, Andrew Dahms
BZ#1101313 - Updated the output from and steps for using the engine-backup command.
BZ#1094069 - Updated the procedure for upgrading to Red Hat Enterprise Virtualization 3.4.
BZ#1093843 - Corrected the name of the log file for MOM.
BZ#1092463 - Added more detail to the description of what the SPICE proxy for virtual machine pools overrides.
BZ#1092438 - Updated the description of the SPICE proxy format for virtual machine pools.
BZ#1092437 - Updated the description of the SPICE proxy format for clusters.
BZ#1092430 - Updated the sections regarding console options.
BZ#1075516 - Updated the description of custom properties for vNIC profiles.
Revision 3.5-1, Thu 05 Jun 2014, Lucy Bopf
Initial creation for the Red Hat Enterprise Virtualization 3.5 release.