Red Hat Enterprise Virtualization 3.3

Administration Guide

Administering Red Hat Enterprise Virtualization Environments.

Andrew Burden

Steve Gordon

Anjana Sriram

Red Hat Engineering Content Services

Legal Notice

Copyright © 2014 Red Hat, Inc.
This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be removed.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.

Abstract

This book contains information and procedures relevant to Red Hat Enterprise Virtualization administrators.
Preface
1. Document Conventions
2. Getting Help and Giving Feedback
1. Using this Guide
1.1. Administration Guide Prerequisites
1.2. Administration Guide Layout
1.3. Example Workflows
2. Basics
2.1. Introduction
2.2. Using the Administration Portal Graphical Interface
I. Administering the Resources
3. Data Centers
3.1. Introduction to Data Centers
3.2. The Storage Pool Manager (SPM)
3.3. SPM Priority
3.4. Using the Events Tab to Identify Problem Objects in Data Centers
3.5. Data Center Tasks
3.6. Data Centers and Storage Domains
3.7. Data Centers and Permissions
4. Clusters
4.1. Introduction to Clusters
4.2. Cluster Tasks
4.3. Clusters and Permissions
4.4. Clusters and Gluster Hooks
5. Logical Networks
5.1. Introduction to Logical Networks
5.2. Port Mirroring
5.3. Required Networks, Optional Networks, and Virtual Machine Networks
5.4. VNIC Profiles and QoS
5.5. Logical Network Tasks
5.6. Logical Networks and Permissions
6. Hosts
6.1. Introduction to Red Hat Enterprise Virtualization Hosts
6.2. Red Hat Enterprise Virtualization Hypervisor Hosts
6.3. Foreman Host Provider Hosts
6.4. Red Hat Enterprise Linux Hosts
6.5. Host Tasks
6.6. Hosts and Networking
6.7. Host Resilience
6.8. Hosts and Permissions
7. Storage Domains
7.1. Understanding Storage Domains
7.2. Storage Metadata Versions in Red Hat Enterprise Virtualization
7.3. Preparing and Adding File-based Storage
7.4. Preparing and Adding Red Hat Storage (GlusterFS) Storage Domains
7.5. Adding POSIX Compliant File System Storage
7.6. Preparing and Adding Block-based Storage
7.7. Storage Tasks
7.8. Storage and Permissions
8. Virtual Machines
8.1. Introduction to Virtual Machines
8.2. Supported Virtual Machine Operating Systems
8.3. Virtual Machine Performance Parameters
8.4. Creating Virtual Machines
8.5. Using Virtual Machines
8.6. Shutting Down or Pausing Virtual Machines
8.7. Managing Virtual Machines
8.8. Virtual Machines and Permissions
8.9. Backing Up and Restoring Virtual Machines with Snapshots
8.10. Importing and Exporting Virtual Machines
8.11. Migrating Virtual Machines Between Hosts
8.12. Improving Uptime with Virtual Machine High Availability
8.13. Other Virtual Machine Tasks
9. Templates
9.1. Introduction to Templates
9.2. Template Tasks
9.3. Sealing Templates in Preparation for Deployment
9.4. Templates and Permissions
10. Pools
10.1. Introduction to Virtual Machine Pools
10.2. Virtual Machine Pool Tasks
10.3. Pools and Permissions
11. Virtual Machine Disks
11.1. Understanding Virtual Machine Storage
11.2. Understanding Virtual Disks
11.3. Shareable Disks in Red Hat Enterprise Virtualization
11.4. Virtual Disk Tasks
11.5. Virtual Disks and Permissions
12. Red Hat Storage (GlusterFS) Volumes
12.1. Introduction to Red Hat Storage (GlusterFS) Volumes
12.2. Introduction to Red Hat Storage (GlusterFS) Bricks
12.3. Optimizing Red Hat Storage Volumes to Store Virtual Machine Images
12.4. Red Hat Storage (GlusterFS) Tasks
13. External Providers
13.1. Introduction to External Providers in Red Hat Enterprise Virtualization
13.2. Enabling the Authentication of OpenStack Providers
13.3. Adding an External Provider
13.4. Removing an External Provider
13.5. Explanation of Settings and Controls in the Add Provider Window
13.6. Explanation of Settings and Controls in the Edit Provider Window
II. Administering the Environment
14. Users and Roles
14.1. Introduction to Users
14.2. Directory Users
14.3. User Authorization
14.4. Red Hat Enterprise Virtualization Manager User Properties and Roles
14.5. Red Hat Enterprise Virtualization Manager User Tasks
14.6. User Role and Authorization Examples
15. Quotas and Service Level Agreement Policy
15.1. Introduction to Quota
15.2. Shared Quota and Individually-defined Quota
15.3. Quota Accounting
15.4. Enabling and Changing a Quota Mode in a Data Center
15.5. Creating a New Quota Policy
15.6. Explanation of Quota Threshold Settings
15.7. Assigning a Quota to an Object
15.8. Using Quota to Limit Resources by User
15.9. Editing Quotas
15.10. Removing Quotas
15.11. Service-level Agreement Policy Enforcement
16. Event Notifications
16.1. Configuring Event Notifications
16.2. Parameters for Event Notifications in notifier.conf
16.3. Canceling Event Notifications
17. Utilities
17.1. Renaming the Manager with the oVirt Engine Rename Tool
17.2. Managing Domains with the Domain Management Tool
17.3. Editing the Configuration of the Red Hat Enterprise Virtualization Manager with the Configuration Tool
17.4. Uploading Virtual Machine Images with the Image Uploader Tool
17.5. Editing USB Filters with the USB Filter Editor
17.6. Collecting Logs with the Log Collector Tool
17.7. Uploading ISO Files with the ISO Uploader Tool
17.8. Guest Drivers and Agents
18. Log Files
18.1. Red Hat Enterprise Virtualization Manager Installation Log Files
18.2. Red Hat Enterprise Virtualization Manager Log Files
18.3. SPICE Log Files
18.4. Red Hat Enterprise Virtualization Host Log Files
18.5. Remotely Logging Host Activities
19. Updating the Red Hat Enterprise Virtualization Environment
19.1. Upgrades between Minor Releases
19.2. Upgrading to Red Hat Enterprise Virtualization 3.3
19.3. Upgrading to Red Hat Enterprise Virtualization Manager 3.2
19.4. Upgrading to Red Hat Enterprise Virtualization Manager 3.1
19.5. Post-upgrade Tasks
20. Backups
20.1. Backing Up and Restoring the Red Hat Enterprise Virtualization Manager
20.2. Manually Backing Up and Restoring the Red Hat Enterprise Virtualization Manager
III. Gathering Information About the Environment
21. Reports, History Database Reports, and Dashboards
21.1. Reports
21.2. History Database Reports
21.3. Dashboards
A. Firewalls
A.1. Red Hat Enterprise Virtualization Manager Firewall Requirements
A.2. Virtualization Host Firewall Requirements
A.3. Directory Server Firewall Requirements
A.4. Database Server Firewall Requirements
B. VDSM and Hooks
B.1. VDSM
B.2. VDSM Hooks
B.3. Extending VDSM with Hooks
B.4. Supported VDSM Events
B.5. The VDSM Hook Environment
B.6. The VDSM Hook Domain XML Object
B.7. Defining Custom Properties
B.8. Setting Custom Device Properties
B.9. Setting Virtual Machine Custom Properties
B.10. Evaluating Virtual Machine Custom Properties in a VDSM Hook
B.11. Using the VDSM Hooking Module
B.12. VDSM Hook Execution
B.13. VDSM Hook Return Codes
B.14. VDSM Hook Examples
C. Red Hat Enterprise Virtualization User Interface Plugins
C.1. Red Hat Enterprise Virtualization User Interface Plugins
C.2. Red Hat Enterprise Virtualization User Interface Plugin Lifecycle
C.3. User Interface Plugin-related Files and Their Locations
C.4. Example User Interface Plugin Deployment
C.5. Using Red Hat Support Plugin
D. Red Hat Enterprise Virtualization and SSL
D.1. Replacing the Red Hat Enterprise Virtualization Manager SSL Certificate
E. Red Hat Storage (GlusterFS) Terminology
F. Using Search, Bookmarks, and Tags to Find Your Way Around
F.1. Search
F.2. Bookmarks
F.3. Tags
G. Branding
G.1. Branding
H. Revision History

Preface

1. Document Conventions

This manual uses several conventions to highlight certain words and phrases and draw attention to specific pieces of information.

1.1. Typographic Conventions

Four typographic conventions are used to call attention to specific words and phrases. These conventions, and the circumstances they apply to, are as follows.
Mono-spaced Bold
Used to highlight system input, including shell commands, file names and paths. Also used to highlight keys and key combinations. For example:
To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.
The above includes a file name, a shell command and a key, all presented in mono-spaced bold and all distinguishable thanks to context.
Key combinations can be distinguished from an individual key by the plus sign that connects each part of a key combination. For example:
Press Enter to execute the command.
Press Ctrl+Alt+F2 to switch to a virtual terminal.
The first example highlights a particular key to press. The second example highlights a key combination: a set of three keys pressed simultaneously.
If source code is discussed, class names, methods, functions, variable names and returned values mentioned within a paragraph will be presented as above, in mono-spaced bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Proportional Bold
This denotes words or phrases encountered on a system, including application names; dialog-box text; labeled buttons; check-box and radio-button labels; menu titles and submenu titles. For example:
Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, select the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).
To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find… from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.
The above text includes application names; system-wide menu names and items; application-specific menu names; and buttons and text found within a graphical user interface, all presented in proportional bold and all distinguishable by context.
Mono-spaced Bold Italic or Proportional Bold Italic
Whether mono-spaced bold or proportional bold, the addition of italics indicates replaceable or variable text. Italics denotes text you do not input literally or displayed text that changes depending on circumstance. For example:
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com.
The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.
To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.
Note the words in bold italics above: username, domain.name, file-system, package, version and release. Each word is a placeholder, either for text you enter when issuing a command or for text displayed by the system.
Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and important term. For example:
Publican is a DocBook publishing system.

1.2. Pull-quote Conventions

Terminal output and source code listings are set off visually from the surrounding text.
Output sent to a terminal is set in mono-spaced roman and presented thus:
books        Desktop   documentation  drafts  mss    photos   stuff  svn
books_tests  Desktop1  downloads      images  notes  scripts  svgs
Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:
static int kvm_vm_ioctl_deassign_device(struct kvm *kvm,
                 struct kvm_assigned_pci_dev *assigned_dev)
{
         int r = 0;
         struct kvm_assigned_dev_kernel *match;

         mutex_lock(&kvm->lock);

         match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head,
                                       assigned_dev->assigned_dev_id);
         if (!match) {
                 printk(KERN_INFO "%s: device hasn't been assigned before, "
                   "so cannot be deassigned\n", __func__);
                 r = -EINVAL;
                 goto out;
         }

         kvm_deassign_device(kvm, match);

         kvm_free_assigned_device(kvm, match);

out:
         mutex_unlock(&kvm->lock);
         return r;
}

1.3. Notes and Warnings

Finally, we use three visual styles to draw attention to information that might otherwise be overlooked.

Note

Notes are tips, shortcuts or alternative approaches to the task at hand. Ignoring a note should have no negative consequences, but you might miss out on a trick that makes your life easier.

Important

Important boxes detail things that are easily missed: configuration changes that only apply to the current session, or services that need restarting before an update will apply. Ignoring a box labeled “Important” will not cause data loss but may cause irritation and frustration.

Warning

Warnings should not be ignored. Ignoring warnings will most likely cause data loss.

2. Getting Help and Giving Feedback

2.1. Do You Need Help?

If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal at http://access.redhat.com. Through the customer portal, you can:
  • search or browse through a knowledgebase of technical support articles about Red Hat products.
  • submit a support case to Red Hat Global Support Services (GSS).
  • access other product documentation.
Red Hat also hosts a large number of electronic mailing lists for discussion of Red Hat software and technology. You can find a list of publicly available mailing lists at https://www.redhat.com/mailman/listinfo. Click on the name of any mailing list to subscribe to that list or to access the list archives.

2.2. We Need Feedback!

If you find a typographical error in this manual, or if you have thought of a way to make this manual better, we would love to hear from you! Please submit a report in Bugzilla (http://bugzilla.redhat.com/) against the product Red Hat Enterprise Virtualization Manager.
When submitting a bug report, be sure to mention the manual's identifier: Guides-Admin
If you have a suggestion for improving the documentation, try to be as specific as possible when describing it. If you have found an error, please include the section number and some of the surrounding text so we can find it easily.

Chapter 1. Using this Guide

1.1. Administration Guide Prerequisites

You need a functioning Red Hat Enterprise Virtualization environment to use this guide. You can use the Red Hat Enterprise Virtualization Installation Guide or the Red Hat Enterprise Virtualization Quick Start Guide to install your environment and complete the initial configuration tasks.
A basic Red Hat Enterprise Virtualization environment has:
  • at least one data center,
  • at least one cluster,
  • at least one host,
  • at least one data storage domain,
  • at least one logical network: the rhevm management network,
  • and at least one user: the internal admin user.
The Red Hat Enterprise Virtualization Administration Guide contains information about managing existing Red Hat Enterprise Virtualization environments. If your environment is missing one of the listed elements, please find the topic in this guide or in the Installation Guide or Quick Start Guide that describes how to add what your environment is missing.

1.2. Administration Guide Layout

In the Red Hat Enterprise Virtualization Administration Guide it is assumed that administrators want to perform actions on objects or with objects. For example, you want to add a new logical network to a cluster. "Add a new logical network" is an action, and "cluster" is an object.
The Red Hat Enterprise Virtualization Administration Guide uses objects to group content. The objects are ordered according to their likely order of usage by administrators. The objects are:
  • Data Centers;
  • Clusters;
  • Networks;
  • Hosts;
  • Storage;
  • Virtual Machines, Templates, and Pools;
  • Users and Roles;
  • Quotas;
  • Monitoring, Reports, and Dashboards;
  • Firewalls;
  • VDSM and Hooks;
  • Utilities; and
  • Backups.
To use this guide, find the object you are interested in affecting, then find the action or task you want to perform.

1.3. Example Workflows

1.3.1. Administration Guide Example Workflows Overview

Example workflows can help you become comfortable with using the Red Hat Enterprise Virtualization Administration Guide. They are common tasks performed by administrators of Red Hat Enterprise Virtualization environments. Each workflow begins with a scenario, and then gives links to the tasks for each scenario in the order that they should be performed.

1.3.2. Administration Guide Example Workflow: New iSCSI Data Center

Your employer has purchased some new hypervisors and storage to add to your environment. All the hardware has been configured by your IT department. The storage is deployed as iSCSI storage. The hypervisors run Red Hat Enterprise Linux. The storage traffic is carried over a storage network separate from management traffic. Control over this hardware is delegated to one of your colleagues.

1.3.3. Administration Guide Example Workflow: Newly Virtualized Workload

You have recently virtualized an important workload. You need to maximize the uptime of the virtual machine it runs on. You clone the virtual machine to a template so that it is easy to re-provision if necessary. You hand control of the virtual machine and the cluster it runs on to another administrator.

1.3.4. Administration Guide Example Workflow: Template for Group Use

You have a group of users who want to provision virtual machines running Red Hat Enterprise Linux 6. You add an ISO storage domain and upload an ISO image to it. You install Red Hat Enterprise Linux 6 on a virtual machine, and make a template from it. You then grant the group permission to use the template.

Chapter 2. Basics

2.1. Introduction

2.1.1. Red Hat Enterprise Virtualization Architecture

A Red Hat Enterprise Virtualization environment consists of:
  • Virtual machine hosts using the Kernel-based Virtual Machine (KVM).
  • Agents and tools running on hosts including VDSM, QEMU, and libvirt. These tools provide local management for virtual machines, networks and storage.
  • The Red Hat Enterprise Virtualization Manager; a centralized management platform for the Red Hat Enterprise Virtualization environment. It provides a graphical interface where you can view, provision and manage resources.
  • Storage domains to hold virtual resources such as virtual machines, templates, and ISO images.
  • A database to track the state of and changes to the environment.
  • Access to an external Directory Server to provide users and authentication.
  • Networking to link the environment together. This includes physical network links, and logical networks.

Figure 2.1. Red Hat Enterprise Virtualization Platform Overview

2.1.2. Red Hat Enterprise Virtualization System Components

The Red Hat Enterprise Virtualization version 3.3 environment consists of one or more hosts (either Red Hat Enterprise Linux 6.5 or later hosts or Red Hat Enterprise Virtualization Hypervisor 6.5 or later hosts) and at least one Red Hat Enterprise Virtualization Manager.
Hosts run virtual machines using KVM (Kernel-based Virtual Machine) virtualization technology.
The Red Hat Enterprise Virtualization Manager runs on a Red Hat Enterprise Linux 6 server and provides interfaces for controlling the Red Hat Enterprise Virtualization environment. It manages virtual machine and storage provisioning, connection protocols, user sessions, virtual machine images, and high availability virtual machines.
The Red Hat Enterprise Virtualization Manager is accessed through the Administration Portal using a web browser.

2.1.3. Red Hat Enterprise Virtualization Resources

The components of the Red Hat Enterprise Virtualization environment fall into two categories: physical resources, and logical resources. Physical resources are physical objects, such as host and storage servers. Logical resources are nonphysical groupings and processes, such as logical networks and virtual machine templates.
  • Data Center - A data center is the highest level container for all physical and logical resources within a managed virtual environment. It is a collection of clusters, virtual machines, storage, and networks.
  • Clusters - A cluster is a set of physical hosts that are treated as a resource pool for virtual machines. Hosts in a cluster share the same network infrastructure and storage. They form a migration domain within which virtual machines can be moved from host to host.
  • Logical Networks - A logical network is a logical representation of a physical network. Logical networks group network traffic and communication between the Manager, hosts, storage, and virtual machines.
  • Hosts - A host is a physical server that runs one or more virtual machines. Hosts are grouped into clusters. Virtual machines can be migrated from one host to another within a cluster.
  • Storage Pool - The storage pool is a logical entity that contains a standalone image repository of a certain type: iSCSI, Fibre Channel, NFS, or POSIX compliant file system. Each storage pool can contain several domains, for storing virtual machine disk images and ISO images, and for the import and export of virtual machine images.
  • Virtual Machines - A virtual machine is a virtual desktop or virtual server containing an operating system and a set of applications. Multiple identical virtual machines can be created in a Pool. Virtual machines are created, managed, or deleted by power users and accessed by users.
  • Template - A template is a model virtual machine with predefined settings. A virtual machine that is based on a particular template acquires the settings of the template. Using templates is the quickest way of creating a large number of virtual machines in a single step.
  • Virtual Machine Pool - A virtual machine pool is a group of identical virtual machines that are available on demand by each group member. Virtual machine pools can be set up for different purposes. For example, one pool can be for the Marketing department, another for Research and Development, and so on.
  • Snapshot - A snapshot is a view of a virtual machine's operating system and all its applications at a point in time. It can be used to save the settings of a virtual machine before an upgrade or installing new applications. In case of problems, a snapshot can be used to restore the virtual machine to its original state.
  • User Types - Red Hat Enterprise Virtualization supports multiple levels of administrators and users with distinct levels of permissions. System administrators can manage objects of the physical infrastructure, such as data centers, hosts, and storage. Users access virtual machines available from a virtual machine pool or standalone virtual machines made accessible by an administrator.
  • Events and Monitors - Alerts, warnings, and other notices about activities help the administrator to monitor the performance and status of resources.
  • Reports - A range of reports either from the reports module based on JasperReports, or from the data warehouse. Preconfigured or ad hoc reports can be generated from the reports module. Users can also generate reports using any query tool that supports SQL from a data warehouse that collects monitoring data for hosts, virtual machines, and storage.

2.1.4. Red Hat Enterprise Virtualization API Support Statement

Red Hat Enterprise Virtualization exposes a number of interfaces for interacting with the components of the virtualization environment. These interfaces are in addition to the user interfaces provided by the Red Hat Enterprise Virtualization Manager Administration, User, and Reports Portals. Many of these interfaces are fully supported. Some, however, are supported only for read access, or only when your use of them has been explicitly requested by Red Hat Support.

Supported Interfaces for Read and Write Access

Direct interaction with these interfaces is supported and encouraged for both read and write access:
Representational State Transfer (REST) API
The REST API exposed by the Red Hat Enterprise Virtualization Manager is a fully supported interface for interacting with Red Hat Enterprise Virtualization Manager.
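As a minimal sketch, the REST API is rooted at https://<manager>/api and uses HTTP Basic authentication. The Python snippet below composes, but does not send, a request to list virtual machines; the host name manager.example.com, the admin@internal user, and the password are placeholder values you would substitute for your own Manager.

```python
import base64
import urllib.request

# Placeholder values; substitute your Manager's FQDN and credentials.
MANAGER = "manager.example.com"
USER = "admin@internal"
PASSWORD = "password"

# The REST API entry point is /api; the /api/vms collection lists virtual machines.
url = "https://{0}/api/vms".format(MANAGER)

# HTTP Basic authentication: base64-encode "user:password".
token = base64.b64encode("{0}:{1}".format(USER, PASSWORD).encode()).decode()

request = urllib.request.Request(url)
request.add_header("Authorization", "Basic " + token)
request.add_header("Accept", "application/xml")  # the API returns XML representations

# Against a live Manager, urllib.request.urlopen(request) would send this request
# and return an XML listing of virtual machines.
print(request.full_url)
```

The same request can be made from any HTTP client; the API is language-agnostic, which is why both the SDK and the command line shell described below are built on top of it.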
Software Development Kit (SDK)
The SDK provided by the rhevm-sdk package is a fully supported interface for interacting with Red Hat Enterprise Virtualization Manager.
Command Line Shell
The command line shell provided by the rhevm-cli package is a fully supported interface for interacting with the Red Hat Enterprise Virtualization Manager.
VDSM Hooks
The creation and use of VDSM hooks to trigger modification of virtual machines based on custom properties specified in the Administration Portal is supported on Red Hat Enterprise Linux virtualization hosts. The use of VDSM Hooks on virtualization hosts running Red Hat Enterprise Virtualization Hypervisor is not currently supported.

Supported Interfaces for Read Access

Direct interaction with these interfaces is supported and encouraged only for read access. Use of these interfaces for write access is not supported unless explicitly requested by Red Hat Support:
Red Hat Enterprise Virtualization Manager History Database
Read access to the Red Hat Enterprise Virtualization Manager history database using the database views specified in the Administration Guide is supported. Write access is not supported.
Libvirt on Virtualization Hosts
Read access to libvirt using the virsh -r command is a supported method of interacting with virtualization hosts. Write access is not supported.

Unsupported Interfaces

Direct interaction with these interfaces is not supported unless your use of them is explicitly requested by Red Hat Support:
The vdsClient Command
Use of the vdsClient command to interact with virtualization hosts is not supported unless explicitly requested by Red Hat Support.
Red Hat Enterprise Virtualization Hypervisor Console
Console access to Red Hat Enterprise Virtualization Hypervisor outside of the provided text user interface for configuration is not supported unless explicitly requested by Red Hat Support.
Red Hat Enterprise Virtualization Manager Database
Direct access to and manipulation of the Red Hat Enterprise Virtualization Manager database is not supported unless explicitly requested by Red Hat Support.

Important

Red Hat Support will not debug user-created scripts or hooks except where it can be demonstrated that there is an issue with the interface being used rather than with the user-created script itself. For more general information about Red Hat support policies, see https://access.redhat.com/support/offerings/production/soc.html.

2.1.5. SPICE

The SPICE (Simple Protocol for Independent Computing Environments) protocol facilitates graphical connections to virtual machines. The SPICE protocol allows:
  • video at more than 30 frames per second
  • bidirectional audio (for softphones/IP phones)
  • bidirectional video (for video telephony/video conferencing)
  • connection to multiple monitors with a single virtual machine
  • USB redirection from the client's USB port into the virtual machine
  • connection to a proxy from outside of the network the hypervisor is attached to

2.1.6. Administering and Maintaining the Red Hat Enterprise Virtualization Environment

The Red Hat Enterprise Virtualization environment requires an administrator to keep it running. As an administrator, your tasks include:
  • Managing physical and virtual resources such as hosts and virtual machines. This includes upgrading and adding hosts, importing domains, converting virtual machines created on foreign hypervisors, and managing virtual machine pools.
  • Monitoring the overall system resources for potential problems such as extreme load on one of the hosts, insufficient memory or disk space, and taking any necessary actions (such as migrating virtual machines to other hosts to lessen the load or freeing resources by shutting down machines).
  • Responding to the new requirements of virtual machines (for example, upgrading the operating system or allocating more memory).
  • Managing customized object properties using tags.
  • Managing searches saved as public bookmarks.
  • Managing user setup and setting permission levels.
  • Troubleshooting for specific users or virtual machines for overall system functionality.
  • Generating general and specific reports.

2.2. Using the Administration Portal Graphical Interface

2.2.1. Red Hat Enterprise Virtualization Manager Client Requirements

Use a client with a supported web browser to access the Administration Portal and the User Portal. The portals support the following clients and browsers:
  • Mozilla Firefox 17 or later on Red Hat Enterprise Linux is required to access both portals.
  • Internet Explorer 8 or later on Microsoft Windows is required to access the User Portal. Use the desktop version, not the touchscreen version, of Internet Explorer 10.
  • Internet Explorer 9 or later on Microsoft Windows is required to access the Administration Portal. Use the desktop version, not the touchscreen version, of Internet Explorer 10.
Install a supported SPICE client to access virtual machine consoles. Supported SPICE clients are available on the following operating systems:
  • Red Hat Enterprise Linux 5.8+ (i386, AMD64 and Intel 64)
  • Red Hat Enterprise Linux 6.2+ (i386, AMD64 and Intel 64)
  • Red Hat Enterprise Linux 6.5+ (i386, AMD64 and Intel 64)
  • Windows XP
  • Windows XP Embedded (XPe)
  • Windows 7 (x86, AMD64 and Intel 64)
  • Windows 8 (x86, AMD64 and Intel 64)
  • Windows Embedded Standard 7
  • Windows 2008/R2 (x86, AMD64 and Intel 64)
  • Windows Embedded Standard 2009
  • Red Hat Enterprise Virtualization Certified Linux-based thin clients

Note

Check the Red Hat Enterprise Virtualization Manager Release Notes to see which SPICE features your client supports.
When you access the portal(s) using Mozilla Firefox the SPICE client is provided by the spice-xpi package, which you must manually install using yum.
When you access the portal(s) using Internet Explorer the SPICE ActiveX control will automatically be downloaded and installed.

2.2.2. Graphical User Interface Elements

The Red Hat Enterprise Virtualization Administration Portal consists of contextual panes and menus and can be used in two modes, tree mode and flat mode. Tree mode allows you to browse the object hierarchy of a data center and is the recommended manner of operation. The elements of the GUI are shown in the diagram below.
User Interface Elements of the Administration Portal

Figure 2.2. User Interface Elements of the Administration Portal

User Interface Elements

  • Header
    The Header bar contains the name of the currently logged-in user and the Sign Out button. The About button shows version information. The Configure button allows you to configure user roles. The Guide button provides a shortcut to the book you are reading now.
  • Search Bar
    The Search bar allows you to build queries to find the resources that you need. Queries can be as simple as a list of all the hosts in the system, or much more complex. As you type each part of the search query, you are offered choices to assist you in building the search. The star icon can be used to save the search as a bookmark.
  • Resource Tabs
    All resources, such as hosts and clusters, can be managed using the appropriate tab. Additionally, the Events tab allows you to view events for each resource.
    The Administration Portal provides the following tabs: Data Centers, Clusters, Hosts, Storage, Disks, Virtual Machines, Pools, Templates, Users, and Events, and a Dashboard tab if you have installed the Data Warehouse and Reporting services.
  • Results List
    Perform a task on an individual item, multiple items, or all the items in the results list, by selecting the item(s) and then clicking the relevant action button. Information on a selected item is displayed in the details pane.
  • Details Pane
    The Details pane shows detailed information about a selected item in the results list. If multiple items are selected, the details pane displays information on the first selected item only.
  • Tree/Bookmarks/Tags Pane
    The Tree pane displays a navigable hierarchy of the resources in the virtualized environment.
    Bookmarks are used to save frequently used or complicated searches for repeated use. Bookmarks can be added, edited, or removed.
    Tags are applied to groups of resources and are used to search for all resources associated with that tag.
  • Alerts/Events Pane
    The Alerts tab lists all high severity events such as errors or warnings. The Events tab shows an audit of events for all resources. The Tasks tab lists the currently running tasks. You can view this panel by clicking the maximize/minimize button.

Important

The minimum supported resolution for viewing the Administration Portal in a web browser is 1024x768. The Administration Portal will not render correctly when viewed at a lower resolution.

2.2.3. Tree Mode and Flat Mode

The Administration Portal provides two different modes for managing your resources: tree mode and flat mode. Tree mode displays resources in a hierarchical view per data center, from the highest level of the data center down to the individual virtual machine. Working in tree mode is highly recommended for most operations.
Tree Mode

Figure 2.3. Tree Mode

Flat mode allows you to search across data centers, or storage domains. It does not limit you to viewing the resources of a single hierarchy. For example, you may need to find all virtual machines that are using more than 80% CPU across clusters and data centers, or locate all hosts that have the highest utilization. Flat mode makes this possible. In addition, certain objects, such as Pools and Users, are not in the data center hierarchy and can be accessed only in flat mode.
To access flat mode, click on the System item in the Tree pane on the left side of the screen. You are in flat mode if the Pools and Users resource tabs appear.
Flat Mode

Figure 2.4. Flat Mode

2.2.4. Using the Guide Me Facility

When setting up resources such as data centers and clusters, a number of tasks must be completed in sequence. The context-sensitive Guide Me window prompts for actions that are appropriate to the resource being configured. The Guide Me window can be accessed at any time by clicking the Guide Me button on the resource toolbar.
New Data Center Guide Me Window

Figure 2.5. New Data Center Guide Me Window

2.2.5. Performing Searches in Red Hat Enterprise Virtualization

The Administration Portal enables the management of thousands of resources, such as virtual machines, hosts, users, and more. To perform a search, enter the search query (free-text or syntax-based) in the search bar. Search queries can be saved as bookmarks for future reuse, so you do not have to reenter a search query each time the specific search results are needed.
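For example, the following syntax-based queries are typical of what the search bar accepts; the object names used here, such as the Default cluster, are placeholders for names in your own environment:

```
Hosts: cluster = Default and status = up
Vms: status = down sortby name asc
Events: severity = error
```

Each query begins with an object type prefix, followed by property criteria, and optionally a sortby clause.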

2.2.6. Saving a Query String as a Bookmark

Summary
A bookmark can be used to remember a search query, and can be shared with other users.

Procedure 2.1. Saving a Query String as a Bookmark

  1. Enter the desired search query in the search bar and perform the search.
  2. Click the star-shaped Bookmark button to the right of the search bar to open the New Bookmark window.
    Bookmark Icon

    Figure 2.6. Bookmark Icon

  3. Enter the Name of the bookmark.
  4. Edit the Search string field (if applicable).
  5. Click OK to save the query as a bookmark and close the window.
  6. The search query is saved and displays in the Bookmarks pane.
Result
You have saved a search query as a bookmark for future reuse. Use the Bookmark pane to find and select the bookmark.

Part I. Administering the Resources

Table of Contents

3. Data Centers
3.1. Introduction to Data Centers
3.2. The Storage Pool Manager (SPM)
3.3. SPM Priority
3.4. Using the Events Tab to Identify Problem Objects in Data Centers
3.5. Data Center Tasks
3.6. Data Centers and Storage Domains
3.7. Data Centers and Permissions
4. Clusters
4.1. Introduction to Clusters
4.2. Cluster Tasks
4.3. Clusters and Permissions
4.4. Clusters and Gluster Hooks
5. Logical Networks
5.1. Introduction to Logical Networks
5.2. Port Mirroring
5.3. Required Networks, Optional Networks, and Virtual Machine Networks
5.4. VNIC Profiles and QoS
5.5. Logical Network Tasks
5.6. Logical Networks and Permissions
6. Hosts
6.1. Introduction to Red Hat Enterprise Virtualization Hosts
6.2. Red Hat Enterprise Virtualization Hypervisor Hosts
6.3. Foreman Host Provider Hosts
6.4. Red Hat Enterprise Linux Hosts
6.5. Host Tasks
6.6. Hosts and Networking
6.7. Host Resilience
6.8. Hosts and Permissions
7. Storage Domains
7.1. Understanding Storage Domains
7.2. Storage Metadata Versions in Red Hat Enterprise Virtualization
7.3. Preparing and Adding File-based Storage
7.4. Preparing and Adding Red Hat Storage (GlusterFS) Storage Domains
7.5. Adding POSIX Compliant File System Storage
7.6. Preparing and Adding Block-based Storage
7.7. Storage Tasks
7.8. Storage and Permissions
8. Virtual Machines
8.1. Introduction to Virtual Machines
8.2. Supported Virtual Machine Operating Systems
8.3. Virtual Machine Performance Parameters
8.4. Creating Virtual Machines
8.5. Using Virtual Machines
8.6. Shutting Down or Pausing Virtual Machines
8.7. Managing Virtual Machines
8.8. Virtual Machines and Permissions
8.9. Backing Up and Restoring Virtual Machines with Snapshots
8.10. Importing and Exporting Virtual Machines
8.11. Migrating Virtual Machines Between Hosts
8.12. Improving Uptime with Virtual Machine High Availability
8.13. Other Virtual Machine Tasks
9. Templates
9.1. Introduction to Templates
9.2. Template Tasks
9.3. Sealing Templates in Preparation for Deployment
9.4. Templates and Permissions
10. Pools
10.1. Introduction to Virtual Machine Pools
10.2. Virtual Machine Pool Tasks
10.3. Pools and Permissions
11. Virtual Machine Disks
11.1. Understanding Virtual Machine Storage
11.2. Understanding Virtual Disks
11.3. Shareable Disks in Red Hat Enterprise Virtualization
11.4. Virtual Disk Tasks
11.5. Virtual Disks and Permissions
12. Red Hat Storage (GlusterFS) Volumes
12.1. Introduction to Red Hat Storage (GlusterFS) Volumes
12.2. Introduction to Red Hat Storage (GlusterFS) Bricks
12.3. Optimizing Red Hat Storage Volumes to Store Virtual Machine Images
12.4. Red Hat Storage (GlusterFS) Tasks
13. External Providers
13.1. Introduction to External Providers in Red Hat Enterprise Virtualization
13.2. Enabling the Authentication of OpenStack Providers
13.3. Adding an External Provider
13.4. Removing an External Provider
13.5. Explanation of Settings and Controls in the Add Provider Window
13.6. Explanation of Settings and Controls in the Edit Provider Window

Chapter 3. Data Centers

3.1. Introduction to Data Centers

A data center is a logical entity that defines the set of resources used in a specific environment. A data center is considered a container resource, in that it comprises logical resources, in the form of clusters and hosts; network resources, in the form of logical networks and physical NICs; and storage resources, in the form of storage domains.
A data center can contain multiple clusters, which can contain multiple hosts; it can have multiple storage domains associated with it; and it can support multiple virtual machines on each of its hosts. A Red Hat Enterprise Virtualization environment can contain multiple data centers; the data center infrastructure allows you to keep these centers separate.
All data centers are managed from the single Administration Portal.
Data Centers

Figure 3.1. Data Centers

Red Hat Enterprise Virtualization creates a default data center during installation. It is recommended that you do not remove the default data center; instead, set up new appropriately named data centers.
Data Center Objects

Figure 3.2. Data Center Objects

3.2. The Storage Pool Manager (SPM)

The Storage Pool Manager (SPM) is a role given to one of the hosts in the data center enabling it to manage the storage domains of the data center. The SPM entity can be run on any host in the data center; the Red Hat Enterprise Virtualization Manager grants the role to one of the hosts. The SPM does not preclude the host from its standard operation; a host running as SPM can still host virtual resources.
The SPM entity controls access to storage by coordinating the metadata across the storage domains. This includes creating, deleting, and manipulating virtual disks (images), snapshots, and templates, and allocating storage for sparse block devices (on SAN). This is an exclusive responsibility: only one host can be the SPM in the data center at one time to ensure metadata integrity.
The Red Hat Enterprise Virtualization Manager ensures that the SPM is always available. The Manager moves the SPM role to a different host if the SPM host encounters problems accessing the storage. When the SPM starts, it ensures that it is the only host granted the role by acquiring a storage-centric lease. This process can take some time.

3.3. SPM Priority

The SPM role uses some of a host's available resources. The SPM priority setting of a host alters the likelihood of the host being assigned the SPM role: a host with high SPM priority will be assigned the SPM role before a host with low SPM priority. Critical virtual machines on hosts with low SPM priority will not have to contend with SPM operations for host resources.
You can change a host's SPM priority by editing the host.

3.4. Using the Events Tab to Identify Problem Objects in Data Centers

The Events tab for a data center displays all events associated with that data center; events include audits, warnings, and errors. The information displayed in the results list will enable you to identify problem objects in your Red Hat Enterprise Virtualization environment.
The Events results list has two views: Basic and Advanced. Basic view displays the event icon, the time of the event, and the description of the event. Advanced view displays these as well, and includes, where applicable, the event ID; the associated user, host, virtual machine, template, data center, storage, and cluster; the Gluster volume; and the correlation ID.

3.5. Data Center Tasks

3.5.1. Creating a New Data Center

Summary
This procedure creates a data center in your virtualization environment. The data center requires a functioning cluster, host, and storage domain to operate.

Note

The storage Type can be edited until the first storage domain is added to the data center. Once a storage domain has been added, the storage Type cannot be changed.
If you set the Compatibility Version to 3.1, it cannot be changed to 3.0 at a later time; version regression is not allowed.

Procedure 3.1. Creating a New Data Center

  1. Select the Data Centers resource tab to list all data centers in the results list.
  2. Click New to open the New Data Center window.
  3. Enter the Name and Description of the data center.
  4. Select the storage Type, Compatibility Version, and Quota Mode of the data center from the drop-down menus.
  5. Click OK to create the data center and open the New Data Center - Guide Me window.
  6. The Guide Me window lists the entities that need to be configured for the data center. Configure these entities or postpone configuration by clicking the Configure Later button; configuration can be resumed by selecting the data center and clicking the Guide Me button.
Result
The new data center is added to the virtualization environment. It will remain Uninitialized until a cluster, host, and storage domain is configured for it; use Guide Me to configure these entities.
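The same operation can also be performed programmatically through the Manager's REST API by POSTing an XML representation to the /api/datacenters collection. The following Python sketch builds such a request body; the data center name, storage type, and the Manager address and credentials in the comment are illustrative placeholders, not values from this guide:

```python
# A minimal sketch of the request body for POST /api/datacenters.
# The field values below are hypothetical examples.
import xml.etree.ElementTree as ET

def data_center_payload(name, storage_type, major, minor, description=""):
    """Build the XML body for creating a data center via the REST API."""
    dc = ET.Element("data_center")
    ET.SubElement(dc, "name").text = name
    if description:
        ET.SubElement(dc, "description").text = description
    ET.SubElement(dc, "storage_type").text = storage_type
    # Compatibility Version, e.g. major=3, minor=3 for version 3.3.
    ET.SubElement(dc, "version", major=str(major), minor=str(minor))
    return ET.tostring(dc, encoding="unicode")

payload = data_center_payload("production_dc", "nfs", 3, 3)
# The payload could then be sent with, for example:
#   curl -X POST -H "Content-Type: application/xml" \
#        -u admin@internal:password -d "$payload" \
#        https://manager.example.com/api/datacenters
```

As in the Administration Portal, the storage type cannot be changed once a storage domain has been added, so choose it carefully when constructing the request.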

3.5.2. Explanation of Settings in the New Data Center and Edit Data Center Windows

The table below describes the settings of a data center as displayed in the New Data Center and Edit Data Center windows. Invalid entries are outlined in orange when you click OK, preventing the changes from being accepted. In addition, field prompts indicate the expected values or range of values.

Table 3.1. Data Center Properties

Field
Description/Action
Name
The name of the data center. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
Description
The description of the data center. This field is recommended but not mandatory.
Type
The storage type. Choose one of:
  • NFS
  • iSCSI
  • Fibre Channel
  • Local on Host
  • POSIX compliant FS
  • GlusterFS
The type of data domain dictates the type of the data center and cannot be changed after creation without significant disruption. All storage in a data center must be of one type only. For example, if iSCSI is selected as the type, only iSCSI data domains can be attached to the data center.
Compatibility Version
The version of Red Hat Enterprise Virtualization. Choose one of:
  • 3.0
  • 3.1
  • 3.2
  • 3.3
After upgrading the Red Hat Enterprise Virtualization Manager, the hosts, clusters, and data centers may still be in the earlier version. Ensure that you have upgraded all the hosts, then the clusters, before you upgrade the Compatibility Level of the data center.
Quota Mode
Quota is a resource limitation tool provided with Red Hat Enterprise Virtualization. Choose one of:
  • Disabled - Select if you do not want to implement Quota
  • Audit - Select if you want to edit the Quota settings
  • Enforced - Select to implement Quota

3.5.3. Editing a Resource

Summary
Edit the properties of a resource.

Procedure 3.2. Editing a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click Edit to open the Edit window.
  3. Change the necessary properties and click OK.
Result
The new properties are saved to the resource. The Edit window will not close if a property field is invalid.

3.5.4. Creating a New Logical Network in a Data Center or Cluster

Summary
Create a logical network and define its use in a data center, or in clusters in a data center.

Procedure 3.3. Creating a New Logical Network in a Data Center or Cluster

  1. Use the Data Centers or Clusters resource tabs, tree mode, or the search function to find and select a data center or cluster in the results list.
  2. Click the Logical Networks tab of the details pane to list the existing logical networks.
  3. From the Data Centers details pane, click New to open the New Logical Network window.
    From the Clusters details pane, click Add Network to open the New Logical Network window.
  4. Enter a Name, Description and Comment for the logical network.
  5. In the Export section, select the Create on external provider check box to create the logical network on an external provider. Select the external provider from the External Provider drop-down list and enter a Network Label for the logical network.
  6. Select the Enable VLAN tagging, VM network and Override MTU to enable these options.
  7. From the Cluster tab, select the clusters to which the network will be assigned. You can also specify whether the logical network will be a required network.
  8. From the Profiles tab, add vNIC profiles to the logical network as required.
  9. Click OK.
Result
You have defined this logical network as a resource required by a cluster or clusters in the data center. You can now add this resource to the hosts in the cluster.

Note

When creating a new logical network or making changes to an existing logical network that is used as a display network, any running virtual machines that use that network must be rebooted before the network becomes available or the changes are applied.
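Logical networks can likewise be created through the REST API by POSTing an XML representation to the /api/networks collection. A sketch of the request body follows; the network name, VLAN ID, and data center ID shown are placeholders:

```
<network>
    <name>vm_network_01</name>
    <description>Logical network for virtual machine traffic</description>
    <data_center id="00000000-0000-0000-0000-000000000000"/>
    <vlan id="100"/>
</network>
```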

3.5.5. Re-Initializing a Data Center: Recovery Procedure

Summary
This recovery procedure replaces the master data domain of your data center with a new master data domain; this is necessary if the data in your master data domain becomes corrupted. Re-initializing a data center allows you to restore all other resources associated with the data center, including clusters, hosts, and non-problematic storage domains.
You can import any backup or exported virtual machines or templates into your new master data domain.

Procedure 3.4. Re-Initializing a Data Center

  1. Click the Data Centers resource tab and select the data center to re-initialize.
  2. Ensure that any storage domains attached to the data center are in maintenance mode.
  3. Right-click the data center and select Re-Initialize Data Center from the drop-down menu to open the Data Center Re-Initialize window.
  4. The Data Center Re-Initialize window lists all available (detached; in maintenance mode) storage domains. Click the radio button for the storage domain you are adding to the data center.
  5. Select the Approve operation check box.
  6. Click OK to close the window and re-initialize the data center.
Result
The storage domain is attached to the data center as the master data domain and activated. You can now import any backup or exported virtual machines or templates into your new master data domain.

3.5.6. Removing a Data Center

Summary
An active host is required to remove a data center. Removing a data center will not remove the associated resources.

Procedure 3.5. Removing a Data Center

  1. Ensure the storage domains attached to the data center are in maintenance mode.
  2. Click the Data Centers resource tab and select the data center to remove.
  3. Click Remove to open the Remove Data Center(s) confirmation window.
  4. Click OK.
Result
The data center has been removed.

3.5.7. Force Removing a Data Center

Summary
A data center becomes Non Responsive if the attached storage domain is corrupt or if the host becomes Non Responsive. You cannot Remove the data center under either circumstance.
Force Remove does not require an active host. It also permanently removes the attached storage domain.
It may be necessary to Destroy a corrupted storage domain before you can Force Remove the data center.

Procedure 3.6. Force Removing a Data Center

  1. Click the Data Centers resource tab and select the data center to remove.
  2. Click Force Remove to open the Force Remove Data Center confirmation window.
  3. Select the Approve operation check box.
  4. Click OK.
Result
The data center and attached storage domain are permanently removed from the Red Hat Enterprise Virtualization environment.

3.6. Data Centers and Storage Domains

3.6.1. Attaching an Existing Data Domain to a Data Center

Summary
Data domains that are Unattached can be attached to a data center. The data domain must be of the same Storage Type as the data center.

Procedure 3.7. Attaching an Existing Data Domain to a Data Center

  1. Click the Data Centers resource tab and select the appropriate data center.
  2. Select the Storage tab in the details pane to list the storage domains already attached to the data center.
  3. Click Attach Data to open the Attach Storage window.
  4. Select the check box for the data domain to attach to the data center. You can select multiple check boxes to attach multiple data domains.
  5. Click OK.
Result
The data domain is attached to the data center and is automatically activated.
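Attaching a data domain can also be scripted against the REST API by POSTing to the data center's storagedomains sub-collection (for example, /api/datacenters/{datacenter:id}/storagedomains). The request body identifies the unattached domain; the ID below is a placeholder:

```
<storage_domain id="00000000-0000-0000-0000-000000000000"/>
```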

3.6.2. Attaching an Existing ISO Domain to a Data Center

Summary
An ISO domain that is Unattached can be attached to a data center. The ISO domain must be of the same Storage Type as the data center.
Only one ISO domain can be attached to a data center.

Procedure 3.8. Attaching an Existing ISO Domain to a Data Center

  1. Click the Data Centers resource tab and select the appropriate data center.
  2. Select the Storage tab in the details pane to list the storage domains already attached to the data center.
  3. Click Attach ISO to open the Attach ISO Library window.
  4. Click the radio button for the appropriate ISO domain.
  5. Click OK.
Result
The ISO domain is attached to the data center and is automatically activated.

3.6.3. Attaching an Existing Export Domain to a Data Center

Summary
An export domain that is Unattached can be attached to a data center.
Only one export domain can be attached to a data center.

Procedure 3.9. Attaching an Existing Export Domain to a Data Center

  1. Click the Data Centers resource tab and select the appropriate data center.
  2. Select the Storage tab in the details pane to list the storage domains already attached to the data center.
  3. Click Attach Export to open the Attach Export Domain window.
  4. Click the radio button for the appropriate Export domain.
  5. Click OK.
Result
The Export domain is attached to the data center and is automatically activated.

3.6.4. Detaching a Storage Domain from a Data Center

Summary
Detaching a storage domain from a data center will stop the data center from associating with that storage domain. The storage domain is not removed from the Red Hat Enterprise Virtualization environment; it can be attached to another data center.
Data, such as virtual machines and templates, remains attached to the storage domain.

Note

The master storage, if it is the last available storage domain, cannot be removed.

Procedure 3.10. Detaching a Storage Domain from a Data Center

  1. Click the Data Centers resource tab and select the appropriate data center.
  2. Select the Storage tab in the details pane to list the storage domains attached to the data center.
  3. Select the storage domain to detach. If the storage domain is Active, click Maintenance to move the domain into maintenance mode.
  4. Click Detach to open the Detach Storage confirmation window.
  5. Click OK.
Result
You have detached the storage domain from the data center. It can take several minutes for the storage domain to disappear from the details pane.

3.6.5. Activating a Storage Domain from Maintenance Mode

Summary
Storage domains in maintenance mode must be activated to be used.

Procedure 3.11. Activating a Data Domain from Maintenance Mode

  1. Click the Data Centers resource tab and select the appropriate data center.
  2. Select the Storage tab in the details pane to list the storage domains attached to the data center.
  3. Select the appropriate storage domain and click Activate.
Result
The storage domain is activated and can be used in the data center.

3.7. Data Centers and Permissions

3.7.1. Managing System Permissions for a Data Center

The system administrator, as the SuperUser, manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for empowering a user with certain administrative privileges that limit them to a specific resource: a DataCenterAdmin role has administrator privileges only for the assigned data center, a ClusterAdmin has administrator privileges only for the assigned cluster, a StorageAdmin has administrator privileges only for the assigned storage domain, and so forth.
A data center administrator is a system administration role for a specific data center only. This is useful in virtualized environments with multiple data centers, where each data center requires a system administrator. The DataCenterAdmin role is a hierarchical model: a user assigned the data center administrator role for a data center can manage all objects in the data center. Use the Configure button in the header bar to assign a data center administrator for all data centers in the environment.
The data center administrator role permits the following actions:
  • Create and remove clusters associated with the data center;
  • Add and remove hosts, virtual machines, and pools associated with the data center; and
  • Edit user permissions for virtual machines associated with the data center.

Note

You can only assign roles and permissions to existing users.
You can change the system administrator of a data center by removing the existing system administrator and adding the new system administrator.

3.7.2. Data Center Administrator Roles Explained

Data Center Permission Roles
The table below describes the administrator roles and privileges applicable to data center administration.

Table 3.2. Red Hat Enterprise Virtualization System Administrator Roles

Role
Privileges
Notes
DataCenterAdmin
Data Center Administrator
Can use, create, delete, and manage all physical and virtual resources within a specific data center, including clusters, hosts, templates, and virtual machines.
NetworkAdmin
Network Administrator
Can configure and manage the network of a particular data center. A network administrator of a data center inherits network permissions for virtual machines within the data center as well.

3.7.3. Assigning an Administrator or User Role to a Resource

Summary
Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 3.12. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add to open the Add Permission to User window.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down menu.
  6. Click OK to assign the role and close the window.
Result
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.
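Role assignment is also exposed through the REST API: POST a permission representation to the resource's permissions sub-collection (for example, /api/datacenters/{datacenter:id}/permissions). In the sketch below the user ID is a placeholder, and the role is referenced by name:

```
<permission>
    <role>
        <name>DataCenterAdmin</name>
    </role>
    <user id="00000000-0000-0000-0000-000000000000"/>
</permission>
```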

3.7.4. Removing an Administrator or User Role from a Resource

Summary
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 3.13. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK to remove the user role.
Result
You have removed the user's role, and the associated permissions, from the resource.

Chapter 4. Clusters

4.1. Introduction to Clusters

A cluster is a logical grouping of hosts that share the same storage domains and have the same type of CPU (either Intel or AMD). If the hosts have different generations of CPU models, they use only the features present in all models.
Each cluster in the system must belong to a data center, and each host in the system must belong to a cluster. Virtual machines are dynamically allocated to any host in a cluster and can be migrated between them, according to policies defined on the Clusters tab and in the Configuration tool during runtime. The cluster is the highest level at which power and load-sharing policies can be defined.
Clusters run virtual machines or Red Hat Storage Servers. These two purposes are mutually exclusive: A single cluster cannot support virtualization and storage hosts together.
The Red Hat Enterprise Virtualization platform creates a default cluster in the default data center during installation.
Cluster

Figure 4.1. Cluster

4.2. Cluster Tasks

4.2.1. Creating a New Cluster

Summary
A data center can contain multiple clusters, and a cluster can contain multiple hosts. All hosts in a cluster must be of the same CPU type (Intel or AMD). It is recommended that you create your hosts before you create your cluster to ensure CPU type optimization. However, you can configure the hosts at a later time using the Guide Me button.

Procedure 4.1. Creating a New Cluster

  1. Select the Clusters resource tab.
  2. Click New to open the New Cluster window.
  3. Select the Data Center the cluster will belong to from the drop-down list.
  4. Enter the Name and Description of the cluster.
  5. Select the CPU Name and Compatibility Version from the drop-down lists. It is important to match the CPU processor family with the minimum CPU processor type of the hosts you intend to attach to the cluster, otherwise the host will be non-operational.
  6. Select either the Enable Virt Service or Enable Gluster Service radio button depending on whether the cluster should be populated with virtual machine hosts or Gluster-enabled nodes. Note that you cannot add Red Hat Enterprise Virtualization Hypervisor hosts to a Gluster-enabled cluster.
  7. Click the Optimization tab to select the memory page sharing threshold for the cluster, and optionally enable CPU thread handling and memory ballooning on the hosts in the cluster.
  8. Click the Cluster Policy tab to optionally configure a cluster policy, scheduler optimization settings, and enable trusted service for hosts in the cluster.
  9. Click the Resilience Policy tab to select the virtual machine migration policy.
  10. Click OK to create the cluster and open the New Cluster - Guide Me window.
  11. The Guide Me window lists the entities that need to be configured for the cluster. Configure these entities or postpone configuration by clicking the Configure Later button; configuration can be resumed by selecting the cluster and clicking the Guide Me button.
Result
The new cluster is added to the virtualization environment.
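A cluster can also be created through the REST API by POSTing to the /api/clusters collection. In the sketch below the cluster name and data center ID are placeholders; the cpu id value must match one of the CPU Name entries listed in Table 4.1:

```
<cluster>
    <name>production_cluster</name>
    <cpu id="Intel Penryn Family"/>
    <data_center id="00000000-0000-0000-0000-000000000000"/>
    <version major="3" minor="3"/>
</cluster>
```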

4.2.2. Explanation of Settings and Controls in the New Cluster and Edit Cluster Windows

4.2.2.1. General Cluster Settings Explained

New Cluster window

Figure 4.2. New Cluster window

The table below describes the settings for the General tab in the New Cluster and Edit Cluster windows. Invalid entries are outlined in orange when you click OK, preventing the changes from being accepted. In addition, field prompts indicate the expected values or range of values.

Table 4.1. General Cluster Settings

Field
Description/Action
Data Center
The data center that will contain the cluster.
Name
The name of the cluster. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
Description
The description of the cluster. This field is recommended but not mandatory.
CPU Name
The CPU type of the cluster. Choose one of:
  • Intel Conroe Family
  • Intel Penryn Family
  • Intel Nehalem Family
  • Intel Westmere Family
  • Intel SandyBridge Family
  • Intel Haswell
  • AMD Opteron G1
  • AMD Opteron G2
  • AMD Opteron G3
  • AMD Opteron G4
  • AMD Opteron G5
All hosts in a cluster must run the same CPU type (Intel or AMD); this cannot be changed after creation without significant disruption. The CPU type should be set for the least powerful host. For example: an Intel SandyBridge host can attach to an Intel Penryn cluster; an Intel Conroe host cannot. Hosts with different CPU models will only use features present in all models.
Compatibility Version
The version of Red Hat Enterprise Virtualization. Choose one of:
  • 3.1
  • 3.2
  • 3.3
You will not be able to select a version older than the version specified for the data center.
Enable Virt Service
If this radio button is selected, hosts in this cluster will be used to run virtual machines.
Enable Gluster Service
If this radio button is selected, hosts in this cluster will be used as Red Hat Storage Server nodes, and not for running virtual machines. You cannot add a Red Hat Enterprise Virtualization Hypervisor host to a cluster with this option enabled.
Import existing gluster configuration
This check box is only available if the Enable Gluster Service radio button is selected. This option allows you to import an existing Gluster-enabled cluster and all its attached hosts to Red Hat Enterprise Virtualization Manager.
The following options are required for each host in the cluster that is being imported:
  • Address: Enter the IP or fully qualified domain name of the Gluster host server.
  • Fingerprint: Red Hat Enterprise Virtualization Manager fetches the host's fingerprint, to ensure you are connecting with the correct host.
  • Root Password: Enter the root password required for communicating with the host.
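The Name constraint from Table 4.1 (40-character limit; any combination of letters, numbers, hyphens, and underscores) can be checked locally before submitting a new cluster; a minimal sketch, not part of the product:

```python
# Sketch: local check of the cluster Name constraint from Table 4.1
# (up to 40 characters; letters, numbers, hyphens, and underscores).
import re

NAME_RE = re.compile(r'^[A-Za-z0-9_-]{1,40}$')

def is_valid_cluster_name(name):
    return bool(NAME_RE.match(name))
```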

4.2.2.2. Optimization Settings Explained

Memory page sharing allows virtual machines to use up to 200% of their allocated memory by utilizing unused memory in other virtual machines. This process is based on the assumption that the virtual machines in your Red Hat Enterprise Virtualization environment will not all be running at full capacity at the same time, allowing unused memory to be temporarily allocated to a particular virtual machine.
CPU Thread Handling allows hosts to run virtual machines with a total number of processor cores greater than the number of cores in the host. This is useful for non-CPU-intensive workloads, where allowing a greater number of virtual machines to run can reduce hardware requirements. It also allows virtual machines to run with CPU topologies that would otherwise not be possible, specifically when the number of guest cores is between the number of host cores and the number of host threads.
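A minimal sketch of the scheduling limit that CPU Thread Handling relaxes (illustrative only, not the Manager's actual algorithm):

```python
# Sketch: without "Count Threads As Cores", a guest is limited by the
# host's physical cores; with it, each host thread counts as a core.
def max_guest_cores(host_cores, threads_per_core, count_threads_as_cores):
    return host_cores * threads_per_core if count_threads_as_cores else host_cores

# A 24-core host with 2 threads per core (48 threads total):
assert max_guest_cores(24, 2, False) == 24
assert max_guest_cores(24, 2, True) == 48
```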
The table below describes the settings for the Optimization tab in the New Cluster and Edit Cluster windows.

Table 4.2. Optimization Settings

Field
Description/Action
Memory Optimization
  • None - Disable memory page sharing: Disables memory page sharing.
  • For Server Load - Enable memory page sharing to 150%: Sets the memory page sharing threshold to 150% of the system memory on each host.
  • For Desktop Load - Enable memory page sharing to 200%: Sets the memory page sharing threshold to 200% of the system memory on each host.
CPU Threads
Selecting the Count Threads As Cores check box allows hosts to run virtual machines with a total number of processor cores greater than the number of cores in the host.
The exposed host threads would be treated as cores which can be utilized by virtual machines. For example, a 24-core system with 2 threads per core (48 threads total) can run virtual machines with up to 48 cores each, and the algorithms to calculate host CPU load would compare load against twice as many potential utilized cores.
Memory Balloon
Selecting the Enable Memory Balloon Optimization check box enables memory overcommitment on virtual machines running on the hosts in this cluster. When this option is set, the Memory Overcommit Manager (MoM) will start ballooning where and when possible, with a limitation of the guaranteed memory size of every virtual machine.
To have a balloon running, the virtual machine needs to have a balloon device with relevant drivers. Each virtual machine in cluster level 3.2 and higher includes a balloon device, unless specifically removed. Each host in this cluster receives a balloon policy update when its status changes to Up.
It is important to understand that in some scenarios ballooning may collide with KSM. In such cases MoM will try to adjust the balloon size to minimize collisions. Additionally, in some scenarios ballooning may cause sub-optimal performance for a virtual machine. Administrators are advised to use ballooning optimization with caution.
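The Memory Optimization thresholds in Table 4.2 can be expressed numerically; the sketch below is illustrative (the mode names are placeholders, not Manager identifiers):

```python
# Sketch: memory page sharing thresholds from Table 4.2 expressed as a
# ceiling on total virtual machine memory per host (values in MB).
THRESHOLDS = {
    "none": 1.0,      # None - disable memory page sharing
    "server": 1.5,    # For Server Load - 150%
    "desktop": 2.0,   # For Desktop Load - 200%
}

def max_committed_memory(host_memory_mb, mode):
    return int(host_memory_mb * THRESHOLDS[mode])

# A host with 64 GB (65536 MB) of memory:
assert max_committed_memory(65536, "server") == 98304
assert max_committed_memory(65536, "desktop") == 131072
```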

4.2.2.3. Resilience Policy Settings Explained

The resilience policy sets the virtual machine migration policy in the event of host failure. Virtual machines running on a host that unexpectedly shuts down or is put into maintenance mode are migrated to other hosts in the cluster; this migration is dependent upon your cluster policy.

Note

Virtual machine migration is a network-intensive operation. For instance, on a setup where a host is running ten or more virtual machines, migrating all of them can be a long and resource-consuming process. Therefore, select the policy action to best suit your setup. If you prefer a conservative approach, disable all migration of virtual machines. Alternatively, if you have many virtual machines, but only several which are running critical workloads, select the option to migrate only highly available virtual machines.
The table below describes the settings for the Resilience Policy tab in the New Cluster and Edit Cluster windows.

Table 4.3. Resilience Policy Settings

Field
Description/Action
Migrate Virtual Machines
Migrates all virtual machines in order of their defined priority.
Migrate only Highly Available Virtual Machines
Migrates only highly available virtual machines to prevent overloading other hosts.
Do Not Migrate Virtual Machines
Prevents virtual machines from being migrated.
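The three resilience policies in Table 4.3 amount to a simple selection rule over the virtual machines on a failed host; the Python sketch below is illustrative (the policy strings are placeholders, not Manager identifiers):

```python
# Sketch of the three resilience policies in Table 4.3: given the virtual
# machines on a failed host, return the ones that would be migrated.
def vms_to_migrate(policy, vms):
    """vms is a list of (name, highly_available) tuples."""
    if policy == "migrate_all":
        return [name for name, _ in vms]
    if policy == "migrate_ha_only":
        return [name for name, ha in vms if ha]
    if policy == "do_not_migrate":
        return []
    raise ValueError("unknown policy: %s" % policy)

vms = [("web01", True), ("test01", False), ("db01", True)]
assert vms_to_migrate("migrate_ha_only", vms) == ["web01", "db01"]
```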

4.2.2.4. Cluster Policy Settings Explained

Cluster policies allow you to specify the usage and distribution of virtual machines between available hosts. Define the cluster policy to enable automatic load balancing across the hosts in a cluster.

Figure 4.3. Cluster Policy Settings

The table below describes the settings for the Edit Policy window.

Table 4.4. Cluster Policy Tab Properties

Field/Tab
Description/Action
None
Set the policy value to None to have no load or power sharing between hosts. This is the default mode.
Evenly_Distributed
Distributes the CPU processing load evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined Maximum Service Level.
Power_Saving
Distributes the CPU processing load across a subset of available hosts to reduce power consumption on underutilized hosts. A host with a CPU load below the low utilization value for longer than the defined time interval migrates all of its virtual machines to other hosts so that it can be powered down. Additional virtual machines attached to a host will not start if that host has reached the defined high utilization value.
CpuOverCommitDurationMinutes
Sets the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the cluster policy takes action. The defined time interval protects against temporary spikes in CPU load activating cluster policies and instigating unnecessary virtual machine migration. The value has a maximum of two characters.
HighUtilization
Expressed as a percentage. If the host runs with CPU usage at or above the high utilization value for the defined time interval, the Red Hat Enterprise Virtualization Manager migrates virtual machines to other hosts in the cluster until the host's CPU load is below the maximum service threshold.
LowUtilization
Expressed as a percentage. If the host runs below the low utilization value for the defined time interval, the Red Hat Enterprise Virtualization Manager will migrate virtual machines to other hosts in the cluster. The Manager will not power down the original host machine, which may negate any potential power saving. The original host must be powered down manually.
Enable Trusted Service
Enable integration with an OpenAttestation server. Before this can be enabled, use the engine-config tool to enter the OpenAttestation server's details.
When a host's free memory drops below 20%, ballooning commands like mom.Controllers.Balloon - INFO Ballooning guest:half1 from 1096400 to 1991580 are logged to /var/log/vdsm/mom.log, the Memory Overcommit Manager log file. The Memory Overcommit Manager is configured in /etc/vdsm/mom.conf.
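The interaction between HighUtilization, LowUtilization, and CpuOverCommitDurationMinutes described in Table 4.4 can be sketched as follows. This is an illustrative model only (per-minute samples and the threshold values are assumptions), not the Manager's actual scheduler:

```python
# Sketch: a host is acted on only if its CPU load stays outside the
# configured bounds for the whole CpuOverCommitDurationMinutes interval.
def policy_action(samples, high=80, low=20, duration=2):
    """samples: CPU utilization percentages, one per minute, most recent last."""
    window = samples[-duration:]
    if len(window) < duration:
        return "none"
    if all(s >= high for s in window):
        return "migrate_vms_away"          # overutilized: shed load
    if all(s <= low for s in window):
        return "evacuate_for_power_down"   # underutilized (Power_Saving only)
    return "none"

assert policy_action([50, 95, 97]) == "migrate_vms_away"
assert policy_action([50, 95, 40]) == "none"   # spike shorter than the interval
assert policy_action([30, 10, 5]) == "evacuate_for_power_down"
```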

4.2.3. Editing a Resource

Summary
Edit the properties of a resource.

Procedure 4.2. Editing a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click Edit to open the Edit window.
  3. Change the necessary properties and click OK.
Result
The new properties are saved to the resource. The Edit window will not close if a property field is invalid.

4.2.4. Importing an Existing Red Hat Storage Cluster

Summary
You can import a Red Hat Storage cluster and all hosts belonging to the cluster into Red Hat Enterprise Virtualization Manager.
When you provide details such as the IP address or host name and password of any host in the cluster, the gluster peer status command is executed on that host through SSH, and a list of hosts that are part of the cluster is displayed. You must manually verify the fingerprint of each host and provide passwords for them. You will not be able to import the cluster if one of the hosts in the cluster is down or unreachable. As the newly imported hosts do not have VDSM installed, the bootstrap script installs all the necessary VDSM packages on the hosts after they have been imported, and reboots them.
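As a sketch of what is done with the gluster peer status output, the following Python extracts the peer hostnames that would be presented in the Add Hosts window. The sample output, hostnames, and UUIDs are illustrative:

```python
# Sketch: parsing `gluster peer status` output (run over SSH on the host
# you supplied) into a list of peer hostnames.
def parse_peer_hostnames(output):
    return [line.split(":", 1)[1].strip()
            for line in output.splitlines()
            if line.startswith("Hostname:")]

sample = """Number of Peers: 2

Hostname: gluster2.example.com
Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5
State: Peer in Cluster (Connected)

Hostname: gluster3.example.com
Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7
State: Peer in Cluster (Connected)
"""
assert parse_peer_hostnames(sample) == ["gluster2.example.com", "gluster3.example.com"]
```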

Important

Currently, a Red Hat Storage node can only be added to a cluster which has its compatibility level set to 3.1, 3.2, or 3.3.

Procedure 4.3. Importing an Existing Red Hat Storage Cluster to Red Hat Enterprise Virtualization Manager

  1. Select the Clusters resource tab to list all clusters in the results list.
  2. Click New to open the New Cluster window.
  3. Select the Data Center the cluster will belong to from the drop-down menu.
  4. Enter the Name and Description of the cluster.
  5. Select the Enable Gluster Service radio button and the Import existing gluster configuration check box.
    The Import existing gluster configuration field is displayed only if you select the Enable Gluster Service radio button.

    Figure 4.4. Import Existing Cluster Configuration

  6. In the Address field, enter the hostname or IP address of any server in the cluster.
    The host Fingerprint is displayed to ensure you are connecting with the correct host. If the host is unreachable, or if there is a network error, the error Error in fetching fingerprint is displayed in the Fingerprint field.
  7. Enter the Root Password for the server, and click OK.
  8. The Add Hosts window opens, and a list of hosts that are a part of the cluster displays.

    Figure 4.5. Add Hosts Window

  9. For each host, enter the Name and the Root Password.
  10. If you wish to use the same password for all hosts, select the Use a Common Password check box to enter the password in the provided text field.
    Click Apply to set the entered password for all hosts.
    Make sure the fingerprints are valid and submit your changes by clicking OK.
Result
The bootstrap script installs all the necessary VDSM packages on the hosts after they have been imported, and reboots them. You have now successfully imported an existing Red Hat Storage cluster into Red Hat Enterprise Virtualization Manager.

4.2.5. Explanation of Settings in the Add Hosts Window

The Add Hosts window allows you to specify the details of the hosts imported as part of a Gluster-enabled cluster. This window appears after you have selected the Enable Gluster Service check box in the New Cluster window and provided the necessary host details.

Figure 4.6. Add Hosts Window

Table 4.5. Add Gluster Hosts Settings

Field Description
Use a common password Tick this check box to use the same password for all hosts belonging to the cluster. Enter the password in the Password field, then click the Apply button to set the password on all hosts.
Name Enter the name of the host.
Hostname/IP This field is automatically populated with the fully qualified domain name or IP of the host you provided in the New Cluster window.
Root Password Enter a password in this field to use a different root password for each host. This field overrides the common password provided for all hosts in the cluster.
Fingerprint The host fingerprint is displayed to ensure you are connecting with the correct host. This field is automatically populated with the fingerprint of the host you provided in the New Cluster window.

4.2.6. Setting Load and Power Management Policies for Hosts in a Cluster

Summary
Cluster policies allow you to specify acceptable CPU usage values, both high and low, and what happens when those levels are reached. Define the cluster policy to enable automatic load balancing across the hosts in a cluster.
A host with CPU usage that exceeds the HighUtilization value will reduce its CPU processor load by migrating virtual machines to other hosts.
A host with CPU usage below its LowUtilization value will migrate all of its virtual machines to other hosts so it can be powered down until such time as it is required again.

Procedure 4.4. Setting Load and Power Management Policies for Hosts

  1. Use the resource tabs, tree mode, or the search function to find and select the cluster in the results list.
  2. Click the Edit button to open the Edit Cluster window.

    Figure 4.7. Edit Cluster Policy

  3. Select one of the following policies:
    • None
    • Evenly_Distributed - Enter CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization text field.
    • Power_Saving - Enter the CPU utilization percentage below which the host will be considered under-utilized in the LowUtilization text field. Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization text field.
  4. Specify the time interval in minutes at which the selected policy will be triggered in the CpuOverCommitDurationMinutes text field.
  5. If you are using an OpenAttestation server to verify your hosts, and have set up the server's details using the engine-config tool, select the Enable Trusted Service check box.
  6. Click OK.
Result
You have updated the cluster policy for the cluster.

4.2.7. Creating a New Logical Network in a Data Center or Cluster

Summary
Create a logical network and define its use in a data center, or in clusters in a data center.

Procedure 4.5. Creating a New Logical Network in a Data Center or Cluster

  1. Use the Data Centers or Clusters resource tabs, tree mode, or the search function to find and select a data center or cluster in the results list.
  2. Click the Logical Networks tab of the details pane to list the existing logical networks.
  3. From the Data Centers details pane, click New to open the New Logical Network window.
    From the Clusters details pane, click Add Network to open the New Logical Network window.
  4. Enter a Name, Description and Comment for the logical network.
  5. In the Export section, select the Create on external provider check box to create the logical network on an external provider. Select the external provider from the External Provider drop-down list and enter a Network Label for the logical network.
  6. Select the Enable VLAN tagging, VM network, and Override MTU check boxes to enable these options.
  7. From the Cluster tab, select the clusters to which the network will be assigned. You can also specify whether the logical network will be a required network.
  8. From the Profiles tab, add vNIC profiles to the logical network as required.
  9. Click OK.
Result
You have defined this logical network as a resource required by a cluster or clusters in the data center. You can now add this resource to the hosts in the cluster.

Note

When creating a new logical network or making changes to an existing logical network that is used as a display network, any running virtual machines that use that network must be rebooted before the network becomes available or the changes are applied.

4.2.8. Removing a Cluster

Summary
Move all hosts out of a cluster before removing it.

Note

You cannot remove the Default cluster, as it holds the Blank template. You can, however, rename the Default cluster and add it to a new data center.

Procedure 4.6. Removing a Cluster

  1. Use the resource tabs, tree mode, or the search function to find and select the cluster in the results list.
  2. Ensure there are no hosts in the cluster.
  3. Click Remove to open the Remove Cluster(s) confirmation window.
  4. Click OK.
Result
The cluster is removed.

4.2.9. Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window

Summary
Specify the traffic type for the logical network to optimize the network traffic flow.

Procedure 4.7. Assigning or Unassigning a Logical Network to a Cluster

  1. Use the Clusters resource tab, tree mode, or the search function to find and select the cluster in the results list.
  2. Select the Logical Networks tab in the details pane to list the logical networks assigned to the cluster.
  3. Click Manage Networks to open the Manage Networks window.

    Figure 4.8. Manage Networks

  4. Select appropriate check boxes.
  5. Click OK to save the changes and close the window.
Result
You have optimized the network traffic flow by assigning a specific type of traffic to be carried on a specific logical network.

Note

Networks offered by external providers cannot be used as display networks.

4.2.10. Explanation of Settings in the Manage Networks Window

The table below describes the settings for the Manage Networks window.

Table 4.6. Manage Networks Settings

Field
Description/Action
Assign
Assigns the logical network to all hosts in the cluster.
Required
A required logical network must be attached to an active NIC on all hosts in the cluster before it becomes operational.
VM Network
The logical network carries the virtual machine network traffic.
Display Network
The logical network carries the virtual machine SPICE and VNC console traffic.
Migration Network
The logical network carries virtual machine and storage migration traffic.

4.3. Clusters and Permissions

4.3.1. Managing System Permissions for a Cluster

The system administrator, as the SuperUser, manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for empowering a user with certain administrative privileges that limit them to a specific resource: a DataCenterAdmin role has administrator privileges only for the assigned data center, a ClusterAdmin has administrator privileges only for the assigned cluster, and so forth.
A cluster administrator is a system administration role for a specific cluster only. This is useful in data centers with multiple clusters, where each cluster requires a system administrator. The ClusterAdmin role is a hierarchical model: a user assigned the cluster administrator role for a cluster can manage all objects in that cluster. Use the Configure button in the header bar to assign a cluster administrator for all clusters in the environment.
The cluster administrator role permits the following actions:
  • Create and remove associated clusters;
  • Add and remove hosts, virtual machines, and pools associated with the cluster; and
  • Edit user permissions for virtual machines associated with the cluster.

Note

You can only assign roles and permissions to existing users.
You can also change the system administrator of a cluster by removing the existing system administrator and adding the new system administrator.

4.3.2. Cluster Administrator Roles Explained

Cluster Permission Roles
The table below describes the administrator roles and privileges applicable to cluster administration.

Table 4.7. Red Hat Enterprise Virtualization System Administrator Roles

Role Privileges Notes
ClusterAdmin Cluster Administrator
Can use, create, delete, manage all physical and virtual resources in a specific cluster, including hosts, templates and virtual machines. Can configure network properties within the cluster such as designating display networks, or marking a network as required or non-required.
However, a ClusterAdmin does not have permissions to attach or detach networks from a cluster; NetworkAdmin permissions are required to do so.
NetworkAdmin Network Administrator Can configure and manage the network of a particular cluster. A network administrator of a cluster inherits network permissions for virtual machines within the cluster as well.

4.3.3. Assigning an Administrator or User Role to a Resource

Summary
Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 4.8. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add to open the Add Permission to User window.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down menu.
  6. Click OK to assign the role and close the window.
Result
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

4.3.4. Removing an Administrator or User Role from a Resource

Summary
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 4.9. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK to remove the user role.
Result
You have removed the user's role, and the associated permissions, from the resource.

4.4. Clusters and Gluster Hooks

4.4.1. Managing Gluster Hooks

Gluster hooks are volume lifecycle extensions. You can manage Gluster hooks from the Manager. The content of the hook can be viewed if the hook content type is Text.
Through the Manager, you can perform the following:
  • View a list of hooks available in the hosts.
  • View the content and status of hooks.
  • Enable or disable hooks.
  • Resolve hook conflicts.

4.4.2. Listing Hooks

Procedure 4.10. Listing a Hook

  1. Click the Cluster tab and select a cluster.
  2. Click the Gluster Hooks sub-tab. The sub-tab lists the hooks in the cluster.

4.4.3. Viewing the Content of Hooks

Procedure 4.11. Viewing the Content of a Hook

  1. Click the Cluster tab and select a cluster. A Gluster Hooks sub-tab displays, listing the hooks in the cluster.
  2. Select a hook with content type Text and click View Content. The Hook Content window displays with the content of the hook.

4.4.4. Enabling or Disabling Hooks

Procedure 4.12. Enabling or Disabling a Hook

  1. Click the Cluster tab and select a cluster. A Gluster Hooks sub-tab displays, listing the hooks in the cluster.
  2. Select a hook and click Enable or Disable. The hook is enabled or disabled on all nodes of the cluster.
    The status of hooks as either enabled or disabled updates and displays in the Gluster Hooks sub-tab.

4.4.5. Refreshing Hooks

By default, the Manager checks the status of installed hooks on all servers in the cluster and detects new hooks by running a periodic job every hour. You can refresh hooks manually by clicking the Sync button.

Procedure 4.13. Refreshing a Hook

  1. Click the Cluster tab and select a cluster. A Gluster Hooks sub-tab displays, listing the hooks in the cluster.
  2. Click Sync. The hooks are synchronized and displayed.

4.4.6. Resolving Conflicts

The hooks are displayed in the Gluster Hooks sub-tab of the Cluster tab. Hooks causing a conflict are displayed with an exclamation mark. This denotes either that there is a conflict in the content or the status of the hook across the servers in the cluster, or that the hook script is missing in one or more servers. These conflicts can be resolved via the Manager. The hooks in the servers are periodically synchronized with the engine database, and the following conflicts can occur for the hooks:
  • Content Conflict - the content of the hook is different across servers.
  • Status Conflict - the status of the hook is different across servers.
  • Missing Conflict - one or more servers of the cluster do not have the hook.
  • Content + Status Conflict - both the content and status of the hook are different across servers.
  • Content + Status + Missing Conflict - both the content and status of the hook are different across servers, and one or more servers of the cluster do not have the hook.
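The conflict categories above can be modelled as a simple classification over the per-server hook state; the sketch below is illustrative and not the Manager's implementation:

```python
# Sketch: classifying Gluster hook conflicts, given the engine's copy and
# the (content, status) reported by each server; None means the hook is
# missing on that server.
def classify_conflict(engine_hook, server_hooks):
    content, status = engine_hook
    missing = any(h is None for h in server_hooks.values())
    present = [h for h in server_hooks.values() if h is not None]
    content_conflict = any(c != content for c, _ in present)
    status_conflict = any(s != status for _, s in present)
    parts = []
    if content_conflict:
        parts.append("Content")
    if status_conflict:
        parts.append("Status")
    if missing:
        parts.append("Missing")
    return " + ".join(parts) + " Conflict" if parts else "No Conflict"

assert classify_conflict(("v1", "enabled"),
                         {"s1": ("v1", "enabled"), "s2": ("v2", "enabled")}) == "Content Conflict"
assert classify_conflict(("v1", "enabled"),
                         {"s1": ("v1", "disabled"), "s2": None}) == "Status + Missing Conflict"
```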

4.4.7. Resolving Missing Hook Conflicts

Procedure 4.14. Resolving a Missing Hook Conflict

  1. Click the Cluster tab and select a cluster. A Gluster Hooks sub-tab displays, listing the hooks in the cluster.
  2. Select a hook causing a conflict and click Resolve Conflicts. The Resolve Conflicts window displays.
  3. Select one of the following options:
    • Copy the hook to all the servers - copies the missing hook to every server in the cluster.
    • Remove the missing hook - removes the hook from all servers and from the engine.
  4. Click OK. The conflict is resolved.

4.4.8. Resolving Content Conflicts

Procedure 4.15. Resolving a Content Conflict

  1. Click the Cluster tab and select a cluster. A Gluster Hooks sub-tab displays, listing the hooks in the cluster.
  2. Select the conflicted hook and click Resolve Conflicts. The Resolve Conflicts window displays.
  3. Select an option from the Use Content from drop-down list:
    • Select a server to copy the content of the hook from the selected server.
      Or
    • Select Engine (Master) to copy the content of the hook from the engine copy.

    Note

    The content of the hook will be overwritten in all servers and in the engine.
  4. Click OK. The conflict is resolved.

4.4.9. Resolving Status Conflicts

Procedure 4.16. Resolving a Status Conflict

  1. Click the Cluster tab and select a cluster. A Gluster Hooks sub-tab displays, listing the hooks in the cluster.
  2. Select the conflicted hook and click Resolve Conflicts. The Resolve Conflicts window displays.
  3. Set Hook Status to Enable or Disable.
  4. Click OK. The conflict is resolved.

4.4.10. Resolving Content and Status Conflicts

Procedure 4.17. Resolving a Content and Status Conflict

  1. Click the Cluster tab and select a cluster. A Gluster Hooks sub-tab displays, listing the hooks in the cluster.
  2. Select a hook causing a conflict and click Resolve Conflicts. The Resolve Conflicts window displays.
  3. Select an option from the Use Content from drop-down list to resolve the content conflict:
    • Select a server to copy the content of the hook from the selected server.
      Or
    • Select Engine (Master) to copy the content of the hook from the engine copy.

    Note

    The content of the hook will be overwritten in all the servers and in the engine.
  4. Set hook Status to Enable or Disable to resolve the status conflict.
  5. Click OK. The conflict is resolved.

4.4.11. Resolving Content, Status, and Missing Conflicts

Procedure 4.18. Resolving a Content, Status and Missing Conflict

  1. Click the Cluster tab and select a cluster. A Gluster Hooks sub-tab displays, listing the hooks in the cluster.
  2. Select the conflicted hook and click Resolve Conflicts. The Resolve Conflicts window displays.
  3. Select an option from the Use Content from drop-down list to resolve the content conflict:
    • Select a server to copy the content of the hook from the selected server.
      Or
    • Select Engine (Master) to copy the content of the hook from the engine copy.

    Note

    The content of the hook will be overwritten in all the servers and in the engine.
  4. Set Hook Status to Enable or Disable to resolve the status conflict.
  5. Select one of the options given below to resolve the missing hook conflict:
    • Copy the hook to all the servers.
    • Remove the missing hook.
  6. Click OK. The conflict is resolved.

4.4.12. Managing Gluster Sync

The Gluster Sync feature periodically fetches the latest cluster configuration from GlusterFS and synchronizes it with the engine database. This process can be performed through the Manager. When a cluster is selected, the user is provided with the option to import hosts or detach existing hosts from the selected cluster. Gluster Sync can be performed only if there is at least one host in the cluster.

Note

The Manager continuously monitors whether hosts are added to or removed from the storage cluster. If the addition or removal of a host is detected, an action item is shown in the General tab for the cluster, where you can choose either to Import the host into or Detach the host from the cluster.

4.4.13. Importing Hosts to Clusters

Procedure 4.19. Importing a Host to a Cluster

  1. Click the Cluster tab and select a cluster. A General sub-tab is shown with the details of the cluster.
  2. In Action Items, click Import. The Add Servers window displays.
  3. Enter the Name and Root Password.

    Note

    Select Use a common password, enter the root password in the Root Password field and click Apply to use a common password for all the hosts.
  4. Click OK. The host is added to the cluster.

4.4.14. Detaching Hosts from Clusters

Procedure 4.20. Detaching a Host from a Cluster

  1. Click the Cluster tab and select a cluster. A General sub-tab is shown with the details of the cluster.
  2. In Action Items, click Detach. The Detach Gluster Hosts window displays.
  3. Select the host to be detached. To force detach, select Force Detach.
  4. Click OK. The host is detached from the cluster.

Chapter 5. Logical Networks

5.1. Introduction to Logical Networks

A logical network is a named set of global network connectivity properties in your data center. When a logical network is added to a host, it may be further configured with host-specific network parameters. Logical networks optimize network flow by grouping network traffic by usage, type, and requirements.
Logical networks allow both connectivity and segregation. You can create a logical network for storage communication to optimize network traffic between hosts and storage domains, a logical network specifically for all virtual machine traffic, or multiple logical networks to carry the traffic of groups of virtual machines.
The default logical network in all data centers is the management network, called rhevm. The rhevm network carries all traffic until another logical network is created. It is intended for management communication between the Red Hat Enterprise Virtualization Manager and hosts.
A logical network is a data center level resource; creating one in a data center makes it available to the clusters in that data center. A logical network that has been designated as Required must be configured on all of a cluster's hosts before it is operational. Optional networks can be used by any host they have been added to.
Data Center Objects

Figure 5.1. Data Center Objects

Warning

Do not change networking in a data center or a cluster if any hosts are running as this risks making the host unreachable.

Important

If you plan to use Red Hat Enterprise Virtualization nodes to provide any services, remember that the services will stop if the Red Hat Enterprise Virtualization environment stops operating.
This applies to all services, but you should be especially aware of the hazards of running the following on Red Hat Enterprise Virtualization:
  • Directory Services
  • DNS
  • Storage

5.2. Port Mirroring

Port mirroring copies layer 3 network traffic on a given logical network and host to a virtual interface on a virtual machine. This virtual machine can be used for network debugging and tuning, intrusion detection, and monitoring the behavior of other virtual machines on the same host and logical network.
The only traffic copied is internal to one logical network on one host. There is no increase in traffic on the network external to the host; however, a virtual machine with port mirroring enabled uses more host CPU and RAM than other virtual machines.
Enable and disable port mirroring by editing network interfaces on virtual machines.
Port mirroring requires an IPv4 address.

Important

You should be aware that enabling port mirroring reduces the privacy of any other network users.

5.3. Required Networks, Optional Networks, and Virtual Machine Networks

Red Hat Enterprise Virtualization 3.1 and higher distinguishes between required networks and optional networks.
Required networks must be applied to all hosts in a cluster for the cluster and network to be Operational. Logical networks are added to clusters as Required networks by default.
When a host's required network becomes non-operational, virtual machines running on that host are migrated to another host; the extent of this migration is dependent upon the chosen cluster policy. This is beneficial if you have machines running mission critical workloads.
When a non-required network becomes non-operational, the virtual machines running on the network are not migrated to another host. This prevents unnecessary I/O overload caused by mass migrations.
Optional networks are those logical networks that have not been explicitly declared Required networks. Optional networks can be implemented on only the hosts that use them. The presence or absence of these networks does not affect the Operational status of a host.
Use the Manage Networks button to change a network's Required designation.
Virtual machine networks (called a VM network in the user interface) are logical networks designated to carry only virtual machine network traffic. Virtual machine networks can be required or optional.

Note

A virtual machine with a network interface on an optional virtual machine network will not start on a host without the network.

5.4. VNIC Profiles and QoS

5.4.1. VNIC Profile Overview

A Virtual Network Interface Card (VNIC) profile is a collection of settings that can be applied to individual virtual network interface cards in the Manager. VNIC profiles allow you to apply Network QoS profiles to a VNIC, enable or disable port mirroring, and add or remove custom properties. VNIC profiles offer an added layer of administrative flexibility in that permission to use (consume) these profiles can be granted to specific users. In this way, you can control the quality of service that different users receive from a given network.

Note

Starting with Red Hat Enterprise Virtualization 3.3, virtual machines now access logical networks only through VNIC profiles and cannot access a logical network if no VNIC profiles exist for that logical network. When you create a new logical network in the Manager, a VNIC profile of the same name as the logical network is automatically created under that logical network.

5.4.2. Creating a VNIC Profile

Summary
Create a Virtual Network Interface Controller (VNIC) profile to regulate network bandwidth for users and groups.
  1. Use the Networks resource tab, tree mode, or the search function to select a logical network in the results pane.
  2. Select the Profiles tab in the details pane to display available VNIC profiles. If you selected the logical network in tree mode, you can select the VNIC Profiles tab in the results pane.
  3. Click New to open the VM Interface Profile window.
    The VM Interface Profile window

    Figure 5.2. The VM Interface Profile window

  4. Enter the Name and Description of the profile.
  5. Use the QoS drop-down menu to select the relevant Quality of Service policy to apply to the VNIC profile.
  6. Use the Port Mirroring and Allow all users to use this Profile check boxes to toggle these options.
  7. The custom device properties drop-down menu, which displays Please select a key... by default, will be active only if custom properties have been defined on the Manager. Use the drop-down menu to select the custom property, and the + and - buttons to add or remove custom properties.
  8. Click OK to save the profile and close the window.
Result
You have created a VNIC profile. Apply this profile to users and groups to regulate their network bandwidth.
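The same profile can also be created programmatically through the REST API by POSTing to the vnicprofiles collection. The following Python sketch only builds the XML request body; the element names follow RHEV 3.3 REST API conventions and the network ID is a hypothetical placeholder, so verify both against your Manager's API description (/api?rsdl) before use.

```python
import xml.etree.ElementTree as ET

def build_vnic_profile_body(name, network_id, port_mirroring=False, qos_id=None):
    # Request body for POST /api/vnicprofiles (element names assumed from
    # RHEV 3.3 REST conventions; IDs below are hypothetical placeholders).
    profile = ET.Element("vnic_profile")
    ET.SubElement(profile, "name").text = name
    ET.SubElement(profile, "network", id=network_id)
    ET.SubElement(profile, "port_mirroring").text = str(port_mirroring).lower()
    if qos_id is not None:
        ET.SubElement(profile, "qos", id=qos_id)
    return ET.tostring(profile, encoding="unicode")

# Example: a profile with port mirroring enabled on a placeholder network.
body = build_vnic_profile_body(
    "mirrored", "00000000-0000-0000-0000-000000000000", port_mirroring=True)
```

The returned string would be sent with Content-Type: application/xml and your Manager credentials.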

5.4.3. Explanation of Settings in the VM Interface Profile Window

Table 5.1. VM Interface Profile Window

Field Name
Description
Network
A drop-down menu of the available networks to apply the VNIC profile.
Name
The name of the VNIC profile. This must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores between 1 and 50 characters.
Description
The description of the VNIC profile. This field is recommended but not mandatory.
QoS
A drop-down menu of the available Network Quality of Service policies to apply to the VNIC profile. QoS policies regulate inbound and outbound network traffic of the VNIC.
Port Mirroring
A check box to toggle port mirroring. Port mirroring copies layer 3 network traffic on the logical network to a virtual interface on a virtual machine. It is not selected by default.
Device Custom Properties
A drop-down menu to select available custom properties to apply to the VNIC profile. Use the + and - buttons to add and remove properties respectively.
Allow all users to use this Profile
A check box to toggle the availability of the profile to all users in the environment. It is selected by default.

5.4.4. Removing a VNIC Profile

Summary
Remove a VNIC profile to delete it from your virtualized environment.
  1. Use the Networks resource tab, tree mode, or the search function to select a logical network in the results pane.
  2. Select the Profiles tab in the details pane to display available VNIC profiles. If you selected the logical network in tree mode, you can select the vNIC Profiles tab in the results pane.
  3. Select one or more profiles and click Remove to open the Remove VM Interface Profile(s) window.
  4. Click OK to remove the profile and close the window.
Result
You have removed the VNIC profile.

5.4.5. User Permissions for VNIC Profiles

Summary
Configure user permissions to assign users to certain VNIC profiles. Assign the VnicProfileUser role to a user to enable them to use the profile. Restrict users from certain profiles by removing their permission for that profile.
  1. Use tree mode to select a logical network.
  2. Select the vNIC Profiles resource tab to display the VNIC profiles.
  3. Select the Permissions tab in the details pane to show the current user permissions for the profile.
  4. Use the Add button to open the Add Permission to User window, and the Remove button to open the Remove Permission window, to affect user permissions for the VNIC profile.
Result
You have configured user permissions for a VNIC profile.
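Permissions can likewise be granted through the REST API by POSTing a permission to the profile's permissions sub-collection. The sketch below only builds the request body; both IDs are hypothetical placeholders, and the ID of the VnicProfileUser role would be looked up under /api/roles (element names are assumed from RHEV 3.3 REST conventions).

```python
import xml.etree.ElementTree as ET

def build_permission_body(role_id, user_id):
    # Request body for POST /api/vnicprofiles/{profile_id}/permissions.
    # role_id: ID of the VnicProfileUser role (look up under /api/roles).
    # user_id: ID of the user to grant the profile to.
    permission = ET.Element("permission")
    ET.SubElement(permission, "role", id=role_id)
    ET.SubElement(permission, "user", id=user_id)
    return ET.tostring(permission, encoding="unicode")

# Both IDs below are placeholders for illustration only.
body = build_permission_body("11111111-1111-1111-1111-111111111111",
                             "22222222-2222-2222-2222-222222222222")
```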

5.4.6. QoS Overview

Network QoS is a feature that allows you to create profiles limiting both the inbound and outbound traffic of individual virtual NICs. With this feature, you can limit bandwidth at a number of layers, controlling the consumption of network resources.

Important

Network QoS is only supported on cluster compatibility version 3.3 and higher.

5.4.7. Adding QoS

Summary
Create a QoS profile to regulate network traffic when applied to a VNIC (Virtual Network Interface Controller) profile, also known as a VM (Virtual Machine) Interface profile.

Procedure 5.1. Creating a QoS profile

  1. Use the Data Centers resource tab, tree mode, or the search function to display and select a data center in the results list.
  2. Select the Network QoS tab in the details pane to display the available QoS profiles.
  3. Click New to open the New Network QoS window.
  4. Enter the Name of the profile.
  5. Enter the limits for the Inbound and Outbound network traffic.
  6. Click OK to save the changes and close the window.
Result
You have created a QoS profile that can be used in a VNIC (Virtual Network Interface Controller) profile, also known as a VM (Virtual Machine) Interface profile.

5.4.8. Settings in the New Network QoS and Edit Network QoS Windows Explained

Network QoS settings allow you to configure bandwidth limits for both inbound and outbound traffic on three distinct levels.

Table 5.2. Network QoS Settings

Field Name
Description
Data Center
The data center to which the Network QoS policy is to be added. This field is configured automatically according to the selected data center.
Name
A name to represent the network QoS policy within the Manager.
Inbound
The settings to be applied to inbound traffic. Select or clear the Inbound check box to enable or disable these settings.
  • Average: The average speed of inbound traffic.
  • Peak: The speed of inbound traffic during peak times.
  • Burst: The speed of inbound traffic during bursts.
Outbound
The settings to be applied to outbound traffic. Select or clear the Outbound check box to enable or disable these settings.
  • Average: The average speed of outbound traffic.
  • Peak: The speed of outbound traffic during peak times.
  • Burst: The speed of outbound traffic during bursts.
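On the host, limits of this kind are ultimately enforced through libvirt's bandwidth element on the virtual machine's network interface. The sketch below is illustrative only: it assumes the Manager's Mbps values are converted to the kilobytes-per-second units that libvirt's average and peak attributes expect (1 Mbps = 125 KB/s), and it applies the same simplified conversion to burst, which libvirt actually expresses in kilobytes rather than kilobytes per second.

```python
import xml.etree.ElementTree as ET

def mbps_to_kbps(mbps):
    # Illustrative conversion: 1 Mbps = 1000 kilobits/s = 125 kilobytes/s.
    return mbps * 125

def build_bandwidth_element(inbound, outbound):
    # Sketch of libvirt's <bandwidth> element as attached to a domain's
    # <interface>; burst handling is simplified for illustration.
    bw = ET.Element("bandwidth")
    for tag, limits in (("inbound", inbound), ("outbound", outbound)):
        ET.SubElement(bw, tag,
                      average=str(mbps_to_kbps(limits["average"])),
                      peak=str(mbps_to_kbps(limits["peak"])),
                      burst=str(mbps_to_kbps(limits["burst"])))
    return ET.tostring(bw, encoding="unicode")

# Example: 10 Mbps average, 20 Mbps peak, 5 Mbps burst in both directions.
bw_xml = build_bandwidth_element({"average": 10, "peak": 20, "burst": 5},
                                 {"average": 10, "peak": 20, "burst": 5})
```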

5.4.9. Removing QoS

Summary
Remove a QoS profile from your virtualized environment.

Procedure 5.2. Removing a QoS profile

  1. Use the Data Centers resource tab, tree mode, or the search function to display and select a data center in the results list.
  2. Select the Network QoS tab in the details pane to display the available QoS profiles.
  3. Select the QoS profile to remove and click Remove to open the Remove Network QoS window. This window will list what, if any, VNIC profiles are using the selected QoS profile.
  4. Click OK to save the changes and close the window.
Result
You have removed the QoS profile.

5.5. Logical Network Tasks

5.5.1. Creating a New Logical Network in a Data Center or Cluster

Summary
Create a logical network and define its use in a data center, or in clusters in a data center.

Procedure 5.3. Creating a New Logical Network in a Data Center or Cluster

  1. Use the Data Centers or Clusters resource tabs, tree mode, or the search function to find and select a data center or cluster in the results list.
  2. Click the Logical Networks tab of the details pane to list the existing logical networks.
  3. From the Data Centers details pane, click New to open the New Logical Network window.
    From the Clusters details pane, click Add Network to open the New Logical Network window.
  4. Enter a Name, Description and Comment for the logical network.
  5. In the Export section, select the Create on external provider check box to create the logical network on an external provider. Select the external provider from the External Provider drop-down list and enter a Network Label for the logical network.
  6. Select the Enable VLAN tagging, VM network, and Override MTU check boxes to enable these options.
  7. From the Cluster tab, select the clusters to which the network will be assigned. You can also specify whether the logical network will be a required network.
  8. From the Profiles tab, add vNIC profiles to the logical network as required.
  9. Click OK.
Result
You have defined this logical network as a resource required by a cluster or clusters in the data center. You can now add this resource to the hosts in the cluster.
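Logical networks can also be created programmatically by POSTing to the networks collection of the REST API. A minimal Python sketch that builds the request body follows; the data center ID is a hypothetical placeholder (look yours up under /api/datacenters), and the vlan and mtu elements correspond to the VLAN tagging and MTU override options described above.

```python
import xml.etree.ElementTree as ET

def build_network_body(name, data_center_id, vlan_id=None, mtu=None):
    # Request body for POST /api/networks (element names assumed from
    # RHEV 3.3 REST conventions; the data center ID is a placeholder).
    network = ET.Element("network")
    ET.SubElement(network, "name").text = name
    ET.SubElement(network, "data_center", id=data_center_id)
    if vlan_id is not None:
        ET.SubElement(network, "vlan", id=str(vlan_id))
    if mtu is not None:
        ET.SubElement(network, "mtu").text = str(mtu)
    return ET.tostring(network, encoding="unicode")

# Example: a VLAN-tagged storage network with jumbo frames.
body = build_network_body("storage_net",
                          "00000000-0000-0000-0000-000000000000",
                          vlan_id=100, mtu=9000)
```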

Note

When creating a new logical network or making changes to an existing logical network that is used as a display network, any running virtual machines that use that network must be rebooted before the network becomes available or the changes are applied.

5.5.2. Explanation of Settings and Controls in the General Tab of the New Logical Network and Edit Logical Network Windows

The table below describes the settings for the General tab of the New Logical Network and Edit Logical Network window.

Table 5.3. New Logical Network and Edit Logical Network Settings

Field Name
Description
Name
The name of the logical network. This text field has a 15-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
Description
The description of the logical network. This field is recommended but not mandatory.
Comment
A field for adding plain text, human-readable comments regarding the logical network.
Export
Allows you to export the logical network to an OpenStack Network Service that has been added to the Manager as an external provider.
External Provider - Allows you to select the external provider on which the logical network will be created.
Network Label - Allows you to specify the label of the logical network, such as eth0.
Enable VLAN tagging
VLAN tagging is a security feature that gives all network traffic carried on the logical network a special characteristic. VLAN-tagged traffic cannot be read by interfaces that do not also have that characteristic. Use of VLANs on logical networks also allows a single network interface to be associated with multiple, differently VLAN-tagged logical networks. Enter a numeric value in the text entry field if VLAN tagging is enabled.
VM Network
Select this option if only virtual machines use this network. If the network is used for traffic that does not involve virtual machines, such as storage communications, do not select this check box.
Override MTU
Set a custom maximum transmission unit for the logical network. You can use this to match the MTU supported by your new logical network to the MTU supported by the hardware it interfaces with. Enter a numeric value in the text entry field if MTU override is enabled.

5.5.3. Explanation of Settings and Controls in the Cluster Tab of the New Logical Network and Edit Logical Network Windows

The table below describes the settings for the Cluster tab of the New Logical Network and Edit Logical Network window.

Table 5.4. New Logical Network and Edit Logical Network Settings

Field Name
Description
Attach/Detach Network to/from Cluster(s)
Allows you to attach or detach the logical network from clusters in the data center and specify whether the logical network will be a required network for individual clusters.
Name - the name of the cluster to which the settings will apply. This value cannot be edited.
Attach All - Allows you to attach or detach the logical network to or from all clusters in the data center. Alternatively, select or clear the Attach check box next to the name of each cluster to attach or detach the logical network to or from a given cluster.
Required All - Allows you to specify whether the logical network is a required network on all clusters. Alternatively, select or clear the Required check box next to the name of each cluster to specify whether the logical network is a required network for a given cluster.

5.5.4. Explanation of Settings and Controls in the Profiles Tab of the New Logical Network and Edit Logical Network Windows

The table below describes the settings for the Profiles tab of the New Logical Network and Edit Logical Network window.

Table 5.5. New Logical Network and Edit Logical Network Settings

Field Name
Description
vNIC Profiles
Allows you to specify one or more vNIC profiles for the logical network. You can add or remove a vNIC profile to or from the logical network by clicking the plus or minus button next to the vNIC profile. The first field is for entering a name for the vNIC profile.
Public - Allows you to specify whether the profile is available to all users.
QoS - Allows you to apply a network quality of service (QoS) profile to the vNIC profile.

5.5.5. Editing a Logical Network

Summary
Edit the settings of a logical network.

Procedure 5.4. Editing a Logical Network

  1. Use the Data Centers resource tab, tree mode, or the search function to find and select the data center of the logical network in the results list.
  2. Click the Logical Networks tab in the details pane to list the logical networks in the data center.
  3. Select a logical network and click Edit to open the Edit Logical Network window.
  4. Edit the necessary settings.
  5. Click OK to save the changes.
Result
You have updated the settings of your logical network.

5.5.6. Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window

Summary
Specify the traffic type for the logical network to optimize the network traffic flow.

Procedure 5.5. Assigning or Unassigning a Logical Network to a Cluster

  1. Use the Clusters resource tab, tree mode, or the search function to find and select the cluster in the results list.
  2. Select the Logical Networks tab in the details pane to list the logical networks assigned to the cluster.
  3. Click Manage Networks to open the Manage Networks window.
    The Manage Networks window

    Figure 5.3. Manage Networks

  4. Select appropriate check boxes.
  5. Click OK to save the changes and close the window.
Result
You have optimized the network traffic flow by assigning a specific type of traffic to be carried on a specific logical network.

Note

Networks offered by external providers cannot be used as display networks.

5.5.7. Explanation of Settings in the Manage Networks Window

The table below describes the settings for the Manage Networks window.

Table 5.6. Manage Networks Settings

Field
Description/Action
Assign
Assigns the logical network to all hosts in the cluster.
Required
A logical network becomes operational when it is attached to an active NIC on all hosts in the cluster.
VM Network
The logical network carries the virtual machine network traffic.
Display Network
The logical network carries the virtual machine display traffic (SPICE and VNC).
Migration Network
The logical network carries virtual machine and storage migration traffic.

5.5.8. Adding Multiple VLANs to a Single Network Interface Using Logical Networks

Summary
Multiple VLANs can be added to a single network interface to separate traffic on one host.

Important

You must have created more than one logical network, all with the Enable VLAN tagging check box selected in the New Logical Network or Edit Logical Network windows.

Procedure 5.6. Adding Multiple VLANs to a Network Interface using Logical Networks

  1. Use the Hosts resource tab, tree mode, or the search function to find and select in the results list a host associated with the cluster to which your VLAN-tagged logical networks are assigned.
  2. Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the data center.
  3. Click Setup Host Networks to open the Setup Host Networks window.
  4. Drag your VLAN-tagged logical networks into the Assigned Logical Networks area next to the physical network interface. The physical network interface can have multiple logical networks assigned due to the VLAN tagging.
    Setup Host Networks

    Figure 5.4. Setup Host Networks

  5. Edit the logical networks by hovering your cursor over an assigned logical network and clicking the pencil icon to open the Edit Network window.
    If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box.
    Select a Boot Protocol:
    • None,
    • DHCP, or
    • Static. If you select Static, provide the IP and Subnet Mask.
    Click OK.
  6. Select the Verify connectivity between Host and Engine check box to run a network check; this will only work if the host is in maintenance mode.
  7. Select the Save network configuration check box.
  8. Click OK.
Add the logical network to each host in the cluster by editing a NIC on each host. After this is done, the network becomes operational.
Result
You have added multiple VLAN-tagged logical networks to a single interface. This process can be repeated multiple times, selecting and editing the same network interface each time on each host to add logical networks with different VLAN tags to a single network interface.
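On a Red Hat Enterprise Linux host, the VLAN configuration that results from this procedure resembles standard VLAN sub-interface files under /etc/sysconfig/network-scripts, with each VLAN device enslaved to a bridge for its VM network. The fragment below is a simplified illustration only; the device name, VLAN IDs, and bridge names are hypothetical examples, and the files actually written by the host agent contain additional settings.

```
# /etc/sysconfig/network-scripts/ifcfg-eth0.100  (illustrative)
DEVICE=eth0.100
VLAN=yes
BRIDGE=vmdata100
ONBOOT=yes
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-eth0.200  (illustrative)
DEVICE=eth0.200
VLAN=yes
BRIDGE=vmdata200
ONBOOT=yes
NM_CONTROLLED=no
```

Both sub-interfaces ride on the same physical eth0, which is how a single NIC carries multiple differently tagged logical networks.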

5.5.9. Using the Networks Tab

The Networks resource tab provides a central location for users to perform network-related operations and search for networks based on each network's property or association with other resources.
All networks in the Red Hat Enterprise Virtualization environment display in the results list of the Networks tab. The New, Edit and Remove buttons allow you to create, change the properties of, and delete logical networks within data centers.
Click on each network name and use the Clusters, Hosts, Virtual Machines, Templates, and Permissions tabs in the details pane to perform functions including:
  • Attaching or detaching the networks to clusters and hosts
  • Removing network interfaces from virtual machines and templates
  • Adding and removing permissions for users to access and manage networks
These functions are also accessible through each individual resource tab.

5.5.9.1. Importing Networks from External Providers

Summary
If an external provider offering networking services has been registered in the Manager, the networks provided by that provider can be imported into the Manager and used by virtual machines.

Procedure 5.7. Importing a Network

  1. Click on the Networks tab.
  2. Click the Import button. The Import Networks window appears.
  3. From the Network Provider drop-down list, select a provider. The networks offered by that provider are automatically discovered and display in the Provider Networks list.
  4. Select the network to import in the Provider Networks list and click the down arrow to move the network into the Networks to Import list.
  5. Click the Import button.
Result
The selected networks are imported and can now be used within the Manager.

Important

External provider discovery and importing are Technology Preview features. Technology Preview features are not fully supported under Red Hat Subscription Service Level Agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.

5.5.9.2. Limitations to Importing Networks from External Providers

While networks offered by external providers can be imported into the Manager, the following limitations apply to their usage:
  • Networks offered by external providers must be used as virtual machine networks.
  • Networks offered by external providers cannot be used as display networks.
  • The same network can be imported more than once, but only to different data centers.
  • Networks offered by external providers cannot be edited in the Manager. This is because the management of such networks is the responsibility of the external providers.
  • Port mirroring is not available for virtual NICs connected to networks offered by external providers.
  • If a virtual machine uses a network offered by an external provider, that provider cannot be deleted from the Manager while the network is still in use by the virtual machine.
  • Networks offered by external providers are non-required. As such, scheduling for clusters in which such networks have been imported will not take those networks into account during host selection. Moreover, it is the responsibility of the user to ensure the availability of the network on hosts in clusters in which such networks have been imported.

Important

External provider discovery and importing are Technology Preview features. Technology Preview features are not fully supported under Red Hat Subscription Service Level Agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.

5.6. Logical Networks and Permissions

5.6.1. Managing System Permissions for a Network

The system administrator, as the SuperUser, manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for empowering a user with certain administrative privileges that limit them to a specific resource: a DataCenterAdmin role has administrator privileges only for the assigned data center, a HostAdmin has administrator privileges only for the assigned host, and so forth.
A network administrator is a system administration role that can be applied for a specific network, or for all networks on a data center, cluster, host, virtual machine, or template. A network user can perform limited administration roles, such as viewing and attaching networks on a specific virtual machine or template. You can use the Configure button in the header bar to assign a network administrator for all networks in the environment.
The network administrator role permits the following actions:
  • Create, edit and remove networks;
  • Edit the configuration of the network, including configuring port mirroring;
  • Attach and detach networks from resources including clusters and virtual machines.
The user who creates a network is automatically assigned NetworkAdmin permissions on the created network. You can also change the administrator of a network by removing the existing administrator and adding the new administrator.

5.6.2. Network Administrator and User Roles Explained

Network Permission Roles
The table below describes the administrator and user roles and privileges applicable to network administration.

Table 5.7. Red Hat Enterprise Virtualization Network Administrator and User Roles

Role Privileges Notes
NetworkAdmin Network Administrator for data center, cluster, host, virtual machine, or template. The user who creates a network is automatically assigned NetworkAdmin permissions on the created network. Can configure and manage the network of a particular data center, cluster, host, virtual machine, or template. A network administrator of a data center or cluster inherits network permissions for virtual pools within the cluster. To configure port mirroring on a virtual machine network, apply the NetworkAdmin role on the network and the UserVmManager role on the virtual machine.
NetworkUser Logical network and network interface user for virtual machine and template. Can attach or detach network interfaces from specific logical networks.

5.6.3. Assigning an Administrator or User Role to a Resource

Summary
Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 5.8. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add to open the Add Permission to User window.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down menu.
  6. Click OK to assign the role and close the window.
Result
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

5.6.4. Removing an Administrator or User Role from a Resource

Summary
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 5.9. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK to remove the user role.
Result
You have removed the user's role, and the associated permissions, from the resource.

Chapter 6. Hosts

6.1. Introduction to Red Hat Enterprise Virtualization Hosts

Hosts, also known as hypervisors, are the physical servers on which virtual machines run. Full virtualization is provided by using a loadable Linux kernel module called Kernel-based Virtual Machine (KVM).
KVM can concurrently host multiple virtual machines running either Windows or Linux operating systems. Virtual machines run as individual Linux processes and threads on the host machine and are managed remotely by the Red Hat Enterprise Virtualization Manager. A Red Hat Enterprise Virtualization environment has one or more hosts attached to it.
Red Hat Enterprise Virtualization supports two methods of installing hosts. You can use the Red Hat Enterprise Virtualization Hypervisor installation media, or install hypervisor packages on a standard Red Hat Enterprise Linux installation.
Red Hat Enterprise Virtualization hosts take advantage of tuned profiles, which provide virtualization optimizations. For more information on tuned, please refer to the Red Hat Enterprise Linux 6.0 Performance Tuning Guide.
The Red Hat Enterprise Virtualization Hypervisor has security features enabled. Security Enhanced Linux (SELinux) and the iptables firewall are fully configured and on by default. The Manager can open required ports on Red Hat Enterprise Linux hosts when it adds them to the environment. For a full list of ports, see Virtualization Host Firewall Requirements.
A host is a physical 64-bit server with the Intel VT or AMD-V extensions, running the AMD64/Intel 64 version of Red Hat Enterprise Linux 6.1 or later.

Important

Red Hat Enterprise Linux 5.4 and Red Hat Enterprise Linux 5.5 machines that belong to existing clusters are supported. Red Hat Enterprise Virtualization Guest Agent is now included in the virtio serial channel. Any Guest Agents installed on Windows guests on Red Hat Enterprise Linux hosts will lose their connection to the Manager when the Red Hat Enterprise Linux hosts are upgraded from version 5 to version 6.
A physical host on the Red Hat Enterprise Virtualization platform:
  • Must belong to only one cluster in the system.
  • Must have CPUs that support the AMD-V or Intel VT hardware virtualization extensions.
  • Must have CPUs that support all functionality exposed by the virtual CPU type selected upon cluster creation.
  • Has a minimum of 2 GB RAM.
  • Can have an assigned system administrator with system permissions.
Administrators can receive the latest security advisories for Red Hat Enterprise Virtualization products by email by subscribing to the Red Hat Enterprise Virtualization watch list.

6.2. Red Hat Enterprise Virtualization Hypervisor Hosts

Red Hat Enterprise Virtualization Hypervisor hosts are installed using a special build of Red Hat Enterprise Linux with only the packages required to host virtual machines. They run stateless, not writing any changes to disk unless explicitly required to.
Red Hat Enterprise Virtualization Hypervisor hosts can be added directly to, and configured by, the Red Hat Enterprise Virtualization Manager. Alternatively, a host can be configured locally to connect to the Manager; the Manager is then used only to approve the host for use in the environment.
Unlike Red Hat Enterprise Linux hosts, Red Hat Enterprise Virtualization Hypervisor hosts cannot be added to clusters that have been enabled for Gluster service for use as Red Hat Storage nodes.

Important

The Red Hat Enterprise Virtualization Hypervisor is a closed system. Use a Red Hat Enterprise Linux host if additional rpm packages are required for your environment.

6.3. Foreman Host Provider Hosts

Hosts provided by a Foreman host provider can also be used as virtualization hosts by the Red Hat Enterprise Virtualization Manager. After a Foreman host provider has been added to the Manager as an external provider, any hosts that it provides can be added to and used in Red Hat Enterprise Virtualization in the same way as Red Hat Enterprise Virtualization Hypervisor hosts and Red Hat Enterprise Linux hosts.

Important

Foreman host provider hosts are a Technology Preview feature. Technology Preview features are not fully supported under Red Hat Subscription Service Level Agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.

6.4. Red Hat Enterprise Linux Hosts

You can use a standard Red Hat Enterprise Linux 6 installation on capable hardware as a host. Red Hat Enterprise Virtualization supports hosts running Red Hat Enterprise Linux 6 Server AMD64/Intel 64 version.
Adding a host can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, creation of a bridge, and a reboot of the host. Use the Details pane to monitor the handshake process as the host and management system establish a connection.

6.5. Host Tasks

6.5.1. Adding a Red Hat Enterprise Linux Host

Summary
A Red Hat Enterprise Linux host is based on a standard "basic" installation of Red Hat Enterprise Linux. The physical host must be set up before you can add it to the Red Hat Enterprise Virtualization environment.
The Red Hat Enterprise Virtualization Manager logs into the host to perform virtualization capability checks, install packages, create a network bridge, and reboot the host. The process of adding a new host can take up to 10 minutes.

Procedure 6.1. Adding a Red Hat Enterprise Linux Host

  1. Click the Hosts resource tab to list the hosts in the results list.
  2. Click New to open the New Host window.
  3. Use the drop-down menus to select the Data Center and Host Cluster for the new host.
  4. Enter the Name, Address, and SSH Port of the new host.
  5. Select an authentication method to use with the host.
    • Enter the root user's password to use password authentication.
    • Copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication.
  6. You have now completed the mandatory steps to add a Red Hat Enterprise Linux host. Click the Advanced Parameters button to expand the advanced host settings.
    1. Optionally disable automatic firewall configuration.
    2. Optionally add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically.
  7. You can configure the Power Management and SPM using the applicable tabs now; however, as these are not fundamental to adding a Red Hat Enterprise Linux host, they are not covered in this procedure.
  8. Click OK to add the host and close the window.
Result
The new host displays in the list of hosts with a status of Installing. Once installation is complete, the status will update to Reboot. The host must be activated for the status to change to Up.

Note

You can view the progress of the installation in the details pane.
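Hosts can also be added programmatically through the Manager's REST API. The following is a minimal sketch of the equivalent request; the Manager URL, credentials, host name, address, root password, and cluster name are all placeholders to be replaced with your own values.

```shell
# All values are placeholders: Manager URL, credentials, host details,
# and cluster name must be replaced with your own.
MANAGER="https://manager.example.com"

# Request body for POST /api/hosts (RHEV 3.3 REST API).
BODY='<host>
  <name>host1</name>
  <address>host1.example.com</address>
  <root_password>p@ssw0rd</root_password>
  <cluster><name>Default</name></cluster>
</host>'

# Uncomment to submit; --cacert points at the Manager CA certificate:
# curl --cacert ca.crt -u admin@internal:password \
#      -H 'Content-Type: application/xml' \
#      -X POST -d "$BODY" "$MANAGER/api/hosts"
echo "$BODY"
```

The Manager then performs the same installation steps as when the host is added through the Administration Portal.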

6.5.2. Adding a Foreman Host Provider Host

Summary
The process for adding a Foreman host provider host is almost identical to that of adding a Red Hat Enterprise Linux host except for the method by which the host is identified in the Manager. The following procedure outlines how to add a host provided by a Foreman host provider.

Procedure 6.2. Adding a Foreman Host Provider Host

  1. Click the Hosts resource tab to list the hosts in the results list.
  2. Click New to open the New Host window.
  3. Use the drop-down menus to select the Data Center and Host Cluster for the new host.
  4. Select the Use External Providers check box to display the options for adding a Foreman host provider host and select the external provider from which the host is to be added.
  5. Select the host to be added from the External Hosts drop-down list. Any details regarding the host that can be retrieved from the external provider are automatically set.
  6. Enter the Name, Address, and SSH Port of the new host.
  7. Select an authentication method to use with the host.
    • Enter the root user's password to use password authentication.
    • Copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication.
  8. You have now completed the mandatory steps to add a Foreman host provider host. Click the Advanced Parameters drop-down button to show the advanced host settings.
    1. Optionally disable automatic firewall configuration.
    2. Optionally add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically.
  9. You can configure the Power Management and SPM using the applicable tabs now; however, as these are not fundamental to adding a Foreman host provider host, they are not covered in this procedure.
  10. Click OK to add the host and close the window.
Result
The new host displays in the list of hosts with a status of Installing. Once installation is complete, the status will update to Reboot. The host must be activated for the status to change to Up.

Note

You can view the progress of the installation in the details pane.

6.5.3. Approving a Hypervisor

Summary
It is not possible to run virtual machines on a Hypervisor until it has been approved by the Red Hat Enterprise Virtualization Manager.

Procedure 6.3. Approving a Hypervisor

  1. Log in to the Red Hat Enterprise Virtualization Manager Administration Portal.
  2. From the Hosts tab, click on the host to be approved. The host should currently be listed with the status of Pending Approval.
  3. Click the Approve button. The Edit and Approve Hosts dialog displays. You can use the dialog to set a name for the host, fetch its SSH fingerprint before approving it, and configure power management, where the host has a supported power management card. For information on power management configuration, see the Power Management chapter of the Red Hat Enterprise Virtualization Administration Guide.
  4. Click OK. If you have not configured power management, you will be prompted to confirm that you wish to proceed without doing so; click OK.
Result
The status in the Hosts tab changes to Installing; after a brief delay, the host status changes to Up.

6.5.4. Explanation of Settings and Controls in the New Host and Edit Host Windows

6.5.4.1. Host General Settings Explained

These settings apply when editing the details of a host or adding new Red Hat Enterprise Linux hosts and Foreman host provider hosts.
The General settings table contains the information required on the General tab of the New Host or Edit Host window.

Table 6.1. General settings

Field Name
Description
Data Center
The data center to which the host belongs. Red Hat Enterprise Virtualization Hypervisor hosts cannot be added to Gluster-enabled clusters.
Host Cluster
The cluster to which the host belongs.
Use External Providers
Select or clear this check box to view or hide options for adding hosts provided by external providers. Upon selection, a drop-down list of external providers that have been added to the Manager displays. The following options are also available:
  • Provider search filter - A text field that allows you to search for hosts provided by the selected external provider. This option is provider-specific; see provider documentation for details on forming search queries for specific providers. Leave this field blank to view all available hosts.
  • External Hosts - A drop-down list that is populated with the names of hosts provided by the selected external provider. The entries in this list are filtered in accordance with any search queries that have been input in the Provider search filter.
Name
The name of the host. This text field has a 40-character limit and must contain a unique name comprising any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
Comment
A field for adding plain text, human-readable comments regarding the host.
Address
The IP address or resolvable hostname of the host.
Password
The password of the host's root user. This can only be given when you add the host; it cannot be edited afterwards.
SSH PublicKey
Copy the contents of the text box to the /root/.ssh/authorized_keys file on the host to use the Manager's SSH key instead of a password to authenticate with the host.
Automatically configure host firewall
When adding a new host, the Manager can open the required ports on the host's firewall. This is enabled by default. This is an Advanced Parameter.
SSH Fingerprint
You can fetch the host's SSH fingerprint, and compare it with the fingerprint you expect the host to return, ensuring that they match. This is an Advanced Parameter.
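On the host side, the SSH PublicKey step amounts to appending the Manager's key to root's authorized_keys file with the permissions sshd requires. A minimal sketch; the key file name shown in the comments is a placeholder for wherever you saved the contents of the SSH PublicKey field.

```shell
# Append the Manager's public key (copied from the SSH PublicKey field)
# to authorized_keys with the permissions sshd requires.
#   $1: target .ssh directory (normally /root/.ssh)
#   $2: file containing the key (manager_key.pub is a placeholder name)
install_manager_key() {
    install -d -m 700 "$1"
    cat "$2" >> "$1/authorized_keys"
    chmod 600 "$1/authorized_keys"
}

# Typical invocation on the host, as root:
#   install_manager_key /root/.ssh manager_key.pub
# On SELinux-enabled hosts, follow with: restorecon -R /root/.ssh
```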

6.5.4.2. Host Power Management Settings Explained

The Power Management settings table contains the information required on the Power Management tab of the New Host or Edit Host windows.

Table 6.2. Power Management Settings

Field Name
Description
Primary/ Secondary
Prior to Red Hat Enterprise Virtualization 3.2, a host with power management configured recognized only one fencing agent. Fencing agents configured on version 3.1 and earlier, and single agents, are treated as primary agents. The Secondary option is valid when a second agent is defined.
Concurrent
Valid when there are two fencing agents, for example, for dual-power hosts in which each power switch has two agents connected to the same power switch.
  • If this check box is selected, both fencing agents are used concurrently when a host is fenced. This means that both fencing agents have to respond to the Stop command for the host to be stopped; if one agent responds to the Start command, the host will go up.
  • If this check box is not selected, the fencing agents are used sequentially. This means that to stop or start a host, the primary agent is used first, and if it fails, the secondary agent is used.
Address
The address to access your host's power management device. Either a resolvable hostname or an IP address.
User Name
The user account with which to access the power management device. You may have to set up a user on the device, or use the default user.
Password
Password for the user accessing the power management device.
Type
The type of power management device in your host.
Choose one of the following:
  • apc - APC MasterSwitch network power switch. Not for use with APC 5.x power switch devices.
  • apc_snmp - Use with APC 5.x power switch devices.
  • bladecenter - IBM BladeCenter Remote Supervisor Adapter
  • cisco_ucs - Cisco Unified Computing System
  • drac5 - Dell Remote Access Controller for Dell computers
  • eps - ePowerSwitch 8M+ network power switch
  • ilo, ilo2, ilo3, ilo4 - HP Integrated Lights-Out
  • ipmilan - Intelligent Platform Management Interface and Sun Integrated Lights Out Management devices.
  • rsa - IBM Remote Supervisor Adapter
  • rsb - Fujitsu-Siemens RSB management interface
  • wti - WTI Network PowerSwitch
Port
The port number used by the power management device to communicate with the host.
Options
Power management device specific options. Enter these as a comma-separated list of 'key=value' or 'key' entries; refer to the documentation of your host's power management device for the available options.
Secure
Tick this check box to allow the power management device to connect securely to the host via SSH, SSL, or another authentication protocol supported by the power management agent.
Source
Specifies whether the host will search within its cluster or data center for a fencing proxy. Use the Up and Down buttons to change the sequence in which the resources are used.

6.5.4.3. SPM Priority Settings Explained

The SPM settings table details the information required on the SPM tab of the New Host or Edit Host window.

Table 6.3. SPM settings

Field Name
Description
SPM Priority
Defines the likelihood that the host will be given the role of Storage Pool Manager (SPM). The options are Low, Normal, and High priority, where Low priority means a reduced likelihood of the host being assigned the role of SPM, and High priority increases the likelihood. The default setting is Normal.

6.5.4.4. Host Console Settings Explained

The Console settings table details the information required on the Console tab of the New Host or Edit Host window.

Table 6.4. Console settings

Field Name
Description
Override display address
Select this check box to enable overriding the display addresses of the host. This feature is useful in a case where the hosts are defined by internal IP and are behind a NAT firewall. When a user connects to a virtual machine from outside of the internal network, instead of returning the private address of the host on which the virtual machine is running, a public IP or FQDN (which is resolved in the external network to the public IP) is returned.
Display address
The display address specified here will be used for all virtual machines running on this host. The address must be in the format of a fully qualified domain name or IP.

6.5.5. Configuring Host Power Management Settings

Summary
Configure your host power management device settings to perform host life-cycle operations (stop, start, restart) from the Administration Portal.
It is necessary to configure host power management in order to utilize host high availability and virtual machine high availability.

Important

Ensure that your host is in maintenance mode before configuring power management settings. Otherwise, all running virtual machines on that host will be stopped ungracefully upon restarting the host, which can cause disruptions in production environments. A warning dialog will appear if you have not correctly set your host to maintenance mode.

Procedure 6.4. Configuring Power Management Settings

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Edit to open the Edit Host window.
  3. Click the Power Management tab to display the Power Management settings.
  4. Select the Enable Power Management check box to enable the fields.
  5. The Primary option is selected by default when you configure the first power management device. If you are adding an additional device, select Secondary instead.
  6. Select the Concurrent check box to enable multiple fence agents to be used concurrently.
  7. Enter the Address, User Name, and Password of the power management device into the appropriate fields.
  8. Use the drop-down menu to select the Type of power management device.
  9. Enter the Port number used by the power management device to communicate with the host.
  10. Enter the Options for the power management device. Use a comma-separated list of 'key=value' or 'key'.
  11. Select the Secure check box to enable the power management device to connect securely to the host.
  12. Click Test to ensure the settings are correct.
  13. Click OK to save your settings and close the window.
Result
You have configured the power management settings for the host. The Power Management drop-down menu is now enabled in the Administration Portal.
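The Test button can be approximated from a shell on any machine with the fence agents installed. A sketch assuming an ipmilan-type device; the address, credentials, and option keys below are placeholders, and valid option keys vary by agent (consult the agent's man page).

```shell
# Placeholder address and credentials; -o status queries power state.
# fence_ipmilan -a 10.0.0.50 -l admin -p secret -P -o status

# The Options field takes a comma-separated list of 'key=value' or
# 'key' entries; fence agents accept the same options one per line
# on standard input, so the mapping is a simple split:
opts_to_stdin() { tr ',' '\n' <<< "$1"; }

opts_to_stdin 'lanplus,power_wait=4'
```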

6.5.6. Configuring Host Storage Pool Manager (SPM) Settings

Summary
The Storage Pool Manager (SPM) is a management role given to one of the hosts in a data center to maintain access control over the storage domains. The SPM must always be available, and the SPM role will be assigned to another host if the SPM host becomes unavailable. As the SPM role uses some of the host's available resources, it is important to prioritize hosts that can afford the resources.
The Storage Pool Manager (SPM) priority setting of a host alters the likelihood of the host being assigned the SPM role: a host with high SPM priority will be assigned the SPM role before a host with low SPM priority.

Procedure 6.5. Configuring SPM settings

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Edit to open the Edit Host window.
  3. Click the SPM tab to display the SPM Priority settings.
  4. Use the radio buttons to select the appropriate SPM priority for the host.
  5. Click OK to save the settings and close the window.
Result
You have configured the SPM priority of the host.

6.5.7. Manually Selecting the SPM

Summary
The Storage Pool Manager (SPM) can be manually assigned to a host. This option is only available to valid hosts. Manually selecting the SPM will result in immediate preference for that host to be given the SPM role. If that host goes down, the SPM role will be passed to the next most preferred host.
Manually selecting the SPM is not permanent. If the host encounters problems, the Red Hat Enterprise Virtualization Manager will assign the role to another host, taking into account the SPM priority of the available hosts. Your chosen host will have no lingering precedence for the SPM role beyond its SPM priority.

Procedure 6.6. Manually Assigning SPM to a Host

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Right-click the host and, if applicable, select Select as SPM.
Result
The host will be given immediate preference for the SPM role.

6.5.8. Editing a Resource

Summary
Edit the properties of a resource.

Procedure 6.7. Editing a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click Edit to open the Edit window.
  3. Change the necessary properties and click OK.
Result
The new properties are saved to the resource. The Edit window will not close if a property field is invalid.

6.5.9. Moving a Host to Maintenance Mode

Summary
Many common maintenance tasks, including network configuration and deployment of software updates, require that hosts be placed into maintenance mode. When a host is placed into maintenance mode the Red Hat Enterprise Virtualization Manager attempts to migrate all running virtual machines to alternative hosts.
The normal prerequisites for live migration apply; in particular, there must be at least one active host in the cluster with the capacity to run the migrated virtual machines.

Procedure 6.8. Moving a Host to Maintenance Mode

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Maintenance to open the Maintenance Host(s) confirmation window.
  3. Click OK to initiate maintenance mode.
Result
All running virtual machines are migrated to alternative hosts. The Status field of the host changes to Preparing for Maintenance, and finally Maintenance when the operation completes successfully.

6.5.10. Activating a Host from Maintenance Mode

Summary
A host that has been placed into maintenance mode, or recently added to the environment, must be activated before it can be used.

Procedure 6.9. Activating a Host from Maintenance Mode

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Activate.
Result
The host status changes to Unassigned, and finally Up when the operation is complete. Virtual machines can now run on the host.
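Maintenance mode and activation can also be driven from the Manager's REST API; the host is moved to maintenance with a deactivate action and brought back with an activate action. A sketch only; the Manager URL, host ID, and credentials are placeholders.

```shell
# Placeholder Manager URL and host ID; credentials below are
# placeholders too.
HOST_URL="https://manager.example.com/api/hosts/00000000-0000-0000-0000-000000000000"

# deactivate = move to maintenance, activate = bring back up.
for action in deactivate activate; do
    echo "POST $HOST_URL/$action"
    # curl --cacert ca.crt -u admin@internal:password \
    #      -H 'Content-Type: application/xml' \
    #      -X POST -d '<action/>' "$HOST_URL/$action"
done
```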

6.5.11. Removing a Host

Summary
Remove a host from your virtualized environment.

Procedure 6.10. Removing a host

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Place the host into maintenance mode.
  3. Click Remove to open the Remove Host(s) confirmation window.
  4. Select the Force Remove check box if the host is part of a Red Hat Storage cluster and has volume bricks on it, or if the host is non-responsive.
  5. Click OK.
Result
Your host has been removed from the environment and is no longer visible in the Hosts tab.

6.5.12. Customizing Hosts with Tags

Summary
You can use tags to store information about your hosts. You can then search for hosts based on tags.

Procedure 6.11. Customizing hosts with tags

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Assign Tags to open the Assign Tags window.
    Assign Tags Window

    Figure 6.1. Assign Tags Window

  3. The Assign Tags window lists all available tags. Select the check boxes of applicable tags.
  4. Click OK to assign the tags and close the window.
Result
You have added extra, searchable information about your host as tags.

6.6. Hosts and Networking

6.6.1. Refreshing Host Capabilities

Summary
When a network interface card is added to a host, the capabilities of the host must be refreshed to display that network interface card in the Manager.

Procedure 6.12. To Refresh Host Capabilities

  1. Use the resource tabs, tree mode, or the search function to find and select a host in the results list.
  2. Click the Refresh Capabilities button.
Result
The list of network interface cards in the Network Interfaces tab of the details pane for the selected host is updated. Any new network interface cards can now be used in the Manager.

6.6.2. Editing Host Network Interfaces and Adding Logical Networks to Hosts

Summary
You can change the settings of host network interfaces. Moving the rhevm management logical network between interfaces, and adding a newly created logical network to a network interface are common reasons to edit host networking.

Procedure 6.13. Editing Host Network Interfaces and Adding Logical Networks to Hosts

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results.
  2. Click the Network Interfaces tab in the details pane to list the network interfaces attached to the host and their configurations.
  3. Click the Setup Host Networks button to open the Setup Host Networks window.
    The Setup Host Networks window

    Figure 6.2. The Setup Host Networks window

  4. Attach a logical network to a network interface by selecting and dragging a logical network into the Assigned Logical Networks area next to the network interface.
    Alternatively, right-click the logical network and select a network interface from the drop-down menu.
  5. Edit the logical networks by hovering your cursor over an assigned logical network and clicking the pencil icon to open the Edit Management Network window.
    If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box.
    Select a Boot Protocol from:
    • None,
    • DHCP, or
    • Static.
      If you have chosen Static, provide the IP, Subnet Mask, and the Gateway.
    Click OK.
  6. Select the Verify connectivity between Host and Engine check box to run a network check; this will only work if the host is in maintenance mode.
  7. Select the Save network configuration check box if you want these network changes to be made persistent when the environment is rebooted.
  8. Click OK to implement the changes and close the window.
Result
You have assigned logical networks to network interfaces and configured the host network.

Note

If not all network interface cards for the host are displayed, click the Refresh Capabilities button to update the list of network interface cards available for that host.
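When a Static boot protocol is chosen and the configuration is saved, the persisted result on the host resembles a standard Red Hat Enterprise Linux ifcfg file. This is an illustrative sketch only; the file name, device name, and addresses are placeholders, and the exact contents are written by VDSM.

```shell
# Illustrative only: file name, device, and addresses are placeholders.
cat > ifcfg-rhevm.example <<'EOF'
DEVICE=rhevm
TYPE=Bridge
BOOTPROTO=none
IPADDR=192.168.0.10
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
ONBOOT=yes
EOF
# BOOTPROTO=none is the RHEL convention for a static address.
grep '^BOOTPROTO' ifcfg-rhevm.example
```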

6.6.3. Bonding Logic in Red Hat Enterprise Virtualization

The Red Hat Enterprise Virtualization Manager Administration Portal allows you to create bond devices using a graphical interface. There are several distinct bond creation scenarios, each with its own logic.
Two factors that affect bonding logic are:
  • Are either of the devices already carrying logical networks?
  • Are the devices carrying compatible logical networks? A single device cannot carry both VLAN tagged and non-VLAN tagged logical networks.

Table 6.5. Bonding Scenarios and Their Results

Bonding Scenario Result
NIC + NIC
The Create New Bond window is displayed, and you can configure a new bond device.
If the network interfaces carry incompatible logical networks, the bonding operation fails until you detach incompatible logical networks from the devices forming your new bond.
NIC + Bond
The NIC is added to the bond device. Logical networks carried by the NIC and the bond are all added to the resultant bond device if they are compatible.
If the devices carry incompatible logical networks, the bonding operation fails until you detach the incompatible logical networks from the devices forming your new bond.
Bond + Bond
If the bond devices are not attached to logical networks, or are attached to compatible logical networks, a new bond device is created. It contains all of the network interfaces, and carries all logical networks, of the component bond devices. The Create New Bond window is displayed, allowing you to configure your new bond.
If the bond devices carry incompatible logical networks, the bonding operation fails until you detach incompatible logical networks from the devices forming your new bond.

6.6.4. Bonding Modes

Red Hat Enterprise Virtualization supports the following common bonding modes:
  • Mode 1 (active-backup policy) sets all interfaces to the backup state while one remains active. Upon failure on the active interface, a backup interface replaces it as the only active interface in the bond. The MAC address of the bond in mode 1 is visible on only one port (the network adapter), to prevent confusion for the switch. Mode 1 provides fault tolerance and is supported in Red Hat Enterprise Virtualization.
  • Mode 2 (XOR policy) selects the interface used to transmit packets based on an XOR of the source and destination MAC addresses, modulo the number of slave interfaces. This calculation ensures that the same interface is selected for each destination MAC address. Mode 2 provides fault tolerance and load balancing and is supported in Red Hat Enterprise Virtualization.
  • Mode 4 (IEEE 802.3ad policy) creates aggregation groups for which included interfaces share the speed and duplex settings. Mode 4 uses all interfaces in the active aggregation group in accordance with the IEEE 802.3ad specification and is supported in Red Hat Enterprise Virtualization.
  • Mode 5 (adaptive transmit load balancing policy) ensures the outgoing traffic distribution is according to the load on each interface and that the current interface receives all incoming traffic. If the interface assigned to receive traffic fails, another interface is assigned the receiving role instead. Mode 5 is supported in Red Hat Enterprise Virtualization.
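Once a bond is up, the kernel reports its active mode under /proc/net/bonding. A quick check is shown here against illustrative sample contents (mode 1 reports itself as "fault-tolerance (active-backup)"):

```shell
# On a live host: grep 'Bonding Mode' /proc/net/bonding/bond0
# Sample (illustrative) contents of that file for a mode 1 bond:
sample='Ethernet Channel Bonding Driver: v3.6.0
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up'

grep 'Bonding Mode' <<< "$sample"
```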

6.6.5. Creating a Bond Device Using the Administration Portal

Summary
You can bond compatible network devices together. This type of configuration can increase available bandwidth and reliability. You can bond multiple network interfaces, pre-existing bond devices, and combinations of the two.
A bond cannot carry both VLAN-tagged and non-VLAN traffic.

Procedure 6.14. Creating a Bond Device using the Administration Portal

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click the Network Interfaces tab in the details pane to list the physical network interfaces attached to the host.
  3. Click Setup Host Networks to open the Setup Host Networks window.
  4. Select and drag one of the devices over the top of another device and drop it to open the Create New Bond window. Alternatively, right-click the device and select another device from the drop-down menu.
    If the devices are incompatible, for example, if one is VLAN-tagged and the other is not, the bond operation fails with a suggestion on how to correct the compatibility issue.
    Bond Devices Window

    Figure 6.3. Bond Devices Window

  5. Select the Bond Name and Bonding Mode from the drop-down menus.
    Bonding modes 1, 2, 4, and 5 can be selected. Any other mode can be configured using the Custom option.
  6. Click OK to create the bond and close the Create New Bond window.
  7. Assign a logical network to the newly created bond device.
  8. Optionally choose to Verify connectivity between Host and Engine and Save network configuration.
  9. Click OK to accept the changes and close the Setup Host Networks window.
Result
Your network devices are linked into a bond device and can be edited as a single interface. The bond device is listed in the Network Interfaces tab of the details pane for the selected host.
Bonding must be enabled for the ports of the switch used by the host. The process by which bonding is enabled is slightly different for each switch; consult the manual provided by your switch vendor for detailed information on how to enable bonding.

6.6.6. Example Uses of Custom Bonding Options with Host Interfaces

You can create customized bond devices by selecting Custom from the Bonding Mode of the Create New Bond window. The following examples should be adapted for your needs. For a comprehensive list of bonding options and their descriptions, see the Linux Ethernet Bonding Driver HOWTO on Kernel.org.

Example 6.1. xmit_hash_policy

This option defines the transmit load balancing policy for bonding modes 2 and 4. For example, if the majority of your traffic is between many different IP addresses, you may want to set a policy to balance by IP address. You can set this load-balancing policy by selecting a Custom bonding mode, and entering the following into the text field:
mode=4, xmit_hash_policy=layer2+3

Example 6.2. ARP Monitoring

ARP monitoring is useful for systems that cannot or do not report link state properly via ethtool. Set an arp_interval on the bond device of the host by selecting a Custom bonding mode, and entering the following into the text field:
mode=1, arp_interval=1, arp_ip_target=192.168.0.2

Example 6.3. Primary

You may want to designate a NIC with higher throughput as the primary interface in a bond device. Designate which NIC is primary by selecting a Custom bonding mode, and entering the following into the text field:
mode=1, primary=eth0

6.6.7. Saving a Host Network Configuration

Summary
Changes made to the host network configuration are temporary unless you select the Save network configuration check box in the Setup Host Networks window. Save the host network configuration to make your changes persistent across reboots.

Procedure 6.15. Saving a host network configuration

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click the Network Interfaces tab on the Details pane to list the NICs on the host, their address, and other specifications.
  3. Click the Save Network Configuration button.
  4. The host network configuration is saved and the following message is displayed on the task bar: "Network changes were saved on host [Hostname]."
Result
The host's network configuration is saved persistently and will survive reboots.

Note

Saving the host network configuration also updates the list of available network interfaces for the host. This behavior is similar to that of the Refresh Capabilities button.

6.6.8. Multiple Gateways

Summary
Users can define the gateway, along with the IP address and subnet mask, for a logical network. This is necessary when multiple networks exist on a host and traffic should be routed through the specified network, rather than the default gateway.
If multiple networks exist on a host and the gateways are not defined, return traffic will be routed through the default gateway, which may not reach the intended destination. This would result in users being unable to ping the host.
Red Hat Enterprise Virtualization 3.3 handles multiple gateways automatically whenever an interface goes up or down.

Procedure 6.16. Viewing or Editing the Gateway for a Logical Network

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click the Network Interfaces tab in the details pane to list the network interfaces attached to the host and their configurations.
  3. Click the Setup Host Networks button to open the Setup Host Networks window.
  4. Hover your cursor over an assigned logical network and click the pencil icon to open the Edit Management Network window.
Result
The Edit Management Network window displays the network name, the boot protocol, and the IP, subnet mask, and gateway addresses. The address information can be manually edited by selecting a Static boot protocol.

6.7. Host Resilience

6.7.1. Host High Availability

The Red Hat Enterprise Virtualization Manager uses fencing to keep the hosts in a cluster responsive. A Non Responsive host is different from a Non Operational host: the Manager can communicate with a Non Operational host, but the host has an incorrect configuration, for example a missing logical network. The Manager cannot communicate with a Non Responsive host at all.
If a host with a power management device loses communication with the Manager, it can be fenced (rebooted) from the Administration Portal. All the virtual machines running on that host are stopped, and highly available virtual machines are started on a different host.
All power management operations are done using a proxy host, as opposed to directly by the Red Hat Enterprise Virtualization Manager. At least two hosts are required for power management operations.
Fencing allows a cluster to react to unexpected host failures, and to enforce power saving, load balancing, and virtual machine availability policies. You should configure the fencing parameters for your host's power management device and periodically test that they are correct.
Hosts can be fenced automatically using the power management parameters, or manually by right-clicking on a host and using the options on the menu. In a fencing operation, an unresponsive host is rebooted, and if the host does not return to an active status within a prescribed time, it remains unresponsive pending manual intervention and troubleshooting.
If the host is required to run virtual machines that are highly available, power management must be enabled and configured.

6.7.2. Power Management by Proxy in Red Hat Enterprise Virtualization

The Red Hat Enterprise Virtualization Manager does not communicate directly with fence agents. Instead, the Manager uses a proxy to send power management commands to a host power management device. The Manager uses VDSM to execute power management device actions, so another host in the environment is used as a fencing proxy.
You can select between:
  • Any host in the same cluster as the host requiring fencing.
  • Any host in the same data center as the host requiring fencing.
A viable fencing proxy host has a status of either UP or Maintenance.

6.7.3. Setting Fencing Parameters on a Host

The parameters for host fencing are set using the Power Management fields on the New Host or Edit Host windows. Power management enables the system to fence a troublesome host using an additional interface such as a Remote Access Card (RAC).
All power management operations are done using a proxy host, as opposed to directly by the Red Hat Enterprise Virtualization Manager. At least two hosts are required for power management operations.

Procedure 6.17. Setting fencing parameters on a host

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Edit to open the Edit Host window.
  3. Click the Power Management tab.
    Power Management Settings

    Figure 6.4. Power Management Settings

  4. Select the Enable Power Management check box to enable the fields.
  5. The Primary option is selected by default when you configure the first power management device for a host. If you are adding an additional device, set it to Secondary.
  6. Select the Concurrent check box to enable multiple fence agents to be used concurrently.
  7. Enter the Address, User Name, and Password of the power management device.
  8. Select the power management device Type from the drop-down menu.
  9. Enter the Port number used by the power management device to communicate with the host.
  10. Enter the specific Options of the power management device. Use a comma-separated list of 'key=value' or 'key' entries.
  11. Click the Test button to test the power management device. Test Succeeded, Host Status is: on will display upon successful verification.

    Warning

    Power management parameters (user name, password, options, and so on) are tested by the Red Hat Enterprise Virtualization Manager only during setup, and manually after that. If you choose to ignore alerts about incorrect parameters, or if the parameters are changed on the power management hardware without the corresponding change in the Red Hat Enterprise Virtualization Manager, fencing is likely to fail when it is most needed.
  12. Click OK to save the changes and close the window.
Result
You are returned to the list of hosts. Note that the exclamation mark next to the host's name has now disappeared, signifying that power management has been successfully configured.

6.7.4. Soft-Fencing Hosts

Sometimes a host becomes non-responsive due to an unexpected problem, and though VDSM is unable to respond to requests, the virtual machines that depend upon VDSM remain alive and accessible. In these situations, restarting VDSM returns VDSM to a responsive state and resolves this issue.
Red Hat Enterprise Virtualization 3.3 introduces soft-fencing over SSH. In earlier releases, non-responsive hosts could be fenced only by external fencing devices. In version 3.3, the fencing process has been expanded to include SSH Soft Fencing, whereby the Manager attempts to restart VDSM via SSH on a non-responsive host. If the Manager fails to restart VDSM via SSH, responsibility for fencing falls to the external fencing agent, if one has been configured.
Soft-fencing over SSH works as follows. Fencing must be configured and enabled on the host, and a valid proxy host (a second host, in an UP state, in the data center) must exist. When the connection between the Manager and the host times out, the following happens:
  1. On the first network failure, the status of the host changes to "connecting".
  2. The Manager then makes three attempts to ask VDSM for its status, or it waits for an interval determined by the load on the host. The length of the interval is calculated from the following configuration values: TimeoutToResetVdsInSeconds (default: 60 seconds), plus DelayResetPerVmInSeconds (default: 0.5 seconds) multiplied by the number of virtual machines running on the host, plus DelayResetForSpmInSeconds (default: 20 seconds) if the host is running as the Storage Pool Manager (SPM). To give VDSM the maximum amount of time to respond, the Manager chooses the longer of the two options: three attempts to retrieve the status of VDSM, or the interval determined by the formula.
  3. If the host does not respond when that interval has elapsed, vdsm restart is executed via SSH.
  4. If vdsm restart does not succeed in re-establishing the connection between the host and the Manager, the status of the host changes to Non Responsive and, if power management is configured, fencing is handed off to the external fencing agent.
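The interval formula in step 2 can be checked with simple arithmetic. A worked example for a hypothetical host running 10 virtual machines and acting as the SPM, using the default configuration values:

```shell
# Interval = TimeoutToResetVdsInSeconds
#          + DelayResetPerVmInSeconds * (running VMs on the host)
#          + DelayResetForSpmInSeconds * (1 if the host is the SPM, else 0)
# With the defaults (60, 0.5, 20), 10 running VMs, and the host acting as SPM:
awk 'BEGIN { print 60 + 0.5 * 10 + 20 * 1 }'
# prints 85 (seconds)
```

For the same host not acting as SPM, the interval would be 60 + 5 = 65 seconds.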

Note

Soft-fencing over SSH can be executed on hosts that have no power management configured. This is distinct from "fencing": fencing can be executed only on hosts that have power management configured.

6.7.5. Using Host Power Management Functions

Summary
When power management has been configured for a host, you can access a number of options from the Administration Portal interface. While each power management device has its own customizable options, they all support the basic options to start, stop, and restart a host.

Procedure 6.18. Using Host Power Management Functions

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click the Power Management drop-down menu.
    Restart

    Figure 6.5. Restart

  3. Select one of the following options:
    • Restart: This option stops the host and waits until the host's status changes to Down. When the agent has verified that the host is down, the highly available virtual machines are restarted on another host in the cluster. The agent then restarts this host. When the host is ready for use its status displays as Up.
    • Start: This option starts the host and lets it join a cluster. When it is ready for use its status displays as Up.
    • Stop: This option powers off the host. Before using this option, ensure that the virtual machines running on the host have been migrated to other hosts in the cluster. Otherwise the virtual machines will crash and only the highly available virtual machines will be restarted on another host. When the host has been stopped its status displays as Non-Operational.

    Important

    When two fencing agents are defined on a host, they can be used concurrently or sequentially. For concurrent agents, both agents have to respond to the Stop command for the host to be stopped, and when either agent responds to the Start command, the host starts. For sequential agents, the primary agent is used first to start or stop a host; if it fails, the secondary agent is used.
  4. Selecting one of the above options opens a confirmation window. Click OK to confirm and proceed.
Result
The selected action is performed.

6.7.6. Manually Fencing or Isolating a Non Responsive Host

Summary
If a host unpredictably goes into a non-responsive state, for example due to a hardware failure, it can significantly affect the performance of the environment. If you do not have a power management device, or if it is incorrectly configured, you can reboot the host manually.

Warning

Do not use the Confirm host has been rebooted option unless you have manually rebooted the host. Using this option while the host is still running can lead to virtual machine image corruption.

Procedure 6.19. Manually fencing or isolating a non-responsive host

  1. On the Hosts tab, select the host. The status must display as non-responsive.
  2. Manually reboot the host. This could mean physically entering the lab and rebooting the host.
  3. On the Administration Portal, right-click the host entry and select the Confirm Host has been rebooted button.
    The Host Right-click menu

    Figure 6.6. The Host Right-click menu

  4. A message displays prompting you to ensure that the host has been shut down or rebooted. Select the Approve Operation check box and click OK.
Result
You have manually rebooted your host, allowing highly available virtual machines to be started on active hosts. You confirmed your manual fencing action in the Administration Portal, and the host is back online.

6.8. Hosts and Permissions

6.8.1. Managing System Permissions for a Host

The system administrator, as the SuperUser, manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for empowering a user with certain administrative privileges that limit them to a specific resource: a DataCenterAdmin role has administrator privileges only for the assigned data center, a HostAdmin has administrator privileges only for the assigned host, and so forth.
A host administrator is a system administration role for a specific host only. This is useful in clusters with multiple hosts, where each host requires a system administrator. You can use the Configure button in the header bar to assign a host administrator for all hosts in the environment.
The host administrator role permits the following actions:
  • Edit the configuration of the host;
  • Set up the logical networks; and
  • Remove the host.
You can also change the system administrator of a host by removing the existing system administrator and adding the new system administrator.

6.8.2. Host Administrator Roles Explained

Host Permission Roles
The table below describes the administrator roles and privileges applicable to host administration.

Table 6.6. Red Hat Enterprise Virtualization System Administrator Roles

Role: HostAdmin
Privileges: Host Administrator
Notes: Can configure, manage, and remove a specific host. Can also perform network-related operations on a specific host.

6.8.3. Assigning an Administrator or User Role to a Resource

Summary
Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 6.20. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add to open the Add Permission to User window.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down menu.
  6. Click OK to assign the role and close the window.
Result
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

6.8.4. Removing an Administrator or User Role from a Resource

Summary
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 6.21. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK to remove the user role.
Result
You have removed the user's role, and the associated permissions, from the resource.

Chapter 7. Storage Domains

Red Hat Enterprise Virtualization uses a centralized storage system for virtual machine disk images, ISO files and snapshots. Storage networking can be implemented using:
  • Network File System (NFS)
  • GlusterFS exports
  • Other POSIX compliant file systems
  • Internet Small Computer System Interface (iSCSI)
  • Local storage attached directly to the virtualization hosts
  • Fibre Channel Protocol (FCP)
  • Parallel NFS (pNFS)
Setting up storage is a prerequisite for a new data center because a data center cannot be initialized unless storage domains are attached and activated.
As a Red Hat Enterprise Virtualization system administrator, you need to create, configure, attach and maintain storage for the virtualized enterprise. You should be familiar with the storage types and their use. Read your storage array vendor's guides, and refer to the Red Hat Enterprise Linux Storage Administration Guide for more information on the concepts, protocols, requirements or general usage of storage.
The Red Hat Enterprise Virtualization platform enables you to assign and manage storage using the Administration Portal's Storage tab. The Storage results list displays all the storage domains, and the details pane shows general information about the domain.
Red Hat Enterprise Virtualization platform has three types of storage domains:
  • Data Domain: A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center. In addition, snapshots of the virtual machines are also stored in the data domain.
    The data domain cannot be shared across data centers, and the data domain must be of the same type as the data center. For example, a data center of the iSCSI type must have an iSCSI data domain.
    You must attach a data domain to a data center before you can attach domains of other types to it.
  • ISO Domain: ISO domains store ISO files (or logical CDs) used to install and boot operating systems and applications for the virtual machines. An ISO domain removes the data center's need for physical media. An ISO domain can be shared across different data centers.
  • Export Domain: Export domains are temporary storage repositories that are used to copy and move images between data centers and Red Hat Enterprise Virtualization environments. Export domains can be used to back up virtual machines. An export domain can be moved between data centers; however, it can only be active in one data center at a time.

    Important

    Support for export storage domains backed by storage on anything other than NFS is being deprecated. While existing export storage domains imported from Red Hat Enterprise Virtualization 2.2 environments remain supported, new export storage domains must be created on NFS storage.
Only commence configuring and attaching storage for your Red Hat Enterprise Virtualization environment once you have determined the storage needs of your data center(s).

Important

To add storage domains you must be able to successfully access the Administration Portal, and there must be at least one host connected with a status of Up.

7.1. Understanding Storage Domains

A storage domain is a collection of images that have a common storage interface. A storage domain contains complete images of templates and virtual machines (including snapshots), or ISO files. A storage domain can be made of either block devices (SAN - iSCSI or FCP) or a file system (NAS - NFS, GlusterFS, or other POSIX compliant file systems).
On NFS, all virtual disks, templates, and snapshots are files.
On SAN (iSCSI/FCP), each virtual disk, template or snapshot is a logical volume. Block devices are aggregated into a logical entity called a volume group, and then divided by LVM (Logical Volume Manager) into logical volumes for use as virtual hard disks. See Red Hat Enterprise Linux Logical Volume Manager Administration Guide for more information on LVM.
Virtual disks can have one of two formats, either Qcow2 or RAW, and one of two allocation policies, either Sparse or Preallocated. Snapshots are always sparse, but can be taken of disks created as either RAW or sparse.
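The difference between sparse and preallocated storage can be illustrated outside of Red Hat Enterprise Virtualization with an ordinary sparse file. This is a conceptual sketch, not a Manager command:

```shell
# Create a 1 GB sparse file: the apparent size is 1 GB, but almost no
# disk blocks are allocated until data is actually written.
truncate -s 1G sparse.img
stat -c '%s' sparse.img     # apparent size in bytes: 1073741824
du -B1 sparse.img           # actual allocated bytes (near zero)
rm -f sparse.img
```

A preallocated disk, by contrast, consumes its full size on the storage domain at creation time, trading space for more predictable write performance.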
Virtual machines that share the same storage domain can be migrated between hosts that belong to the same cluster.

7.2. Storage Metadata Versions in Red Hat Enterprise Virtualization

Red Hat Enterprise Virtualization stores information about storage domains as metadata on the storage domains themselves. Each major release of Red Hat Enterprise Virtualization has seen improved implementations of storage metadata.
  • V1 metadata (Red Hat Enterprise Virtualization 2.x series)
    Each storage domain contains metadata describing its own structure, and all of the names of physical volumes that are used to back virtual machine disk images.
    Master domains additionally contain metadata for all the domains and physical volume names in the storage pool. The total size of this metadata is limited to 2 KB, which limits the number of storage domains that can be in a pool.
    Template and virtual machine base images are read only.
    V1 metadata is applicable to NFS, iSCSI, and FC storage domains.
  • V2 metadata (Red Hat Enterprise Virtualization 3.0)
    All storage domain and pool metadata is stored as logical volume tags rather than written to a logical volume. Metadata about virtual machine disk volumes is still stored in a logical volume on the domains.
    Physical volume names are no longer included in the metadata.
    Template and virtual machine base images are read only.
    V2 metadata is applicable to iSCSI, and FC storage domains.
  • V3 metadata (Red Hat Enterprise Virtualization 3.1+)
    All storage domain and pool metadata is stored as logical volume tags rather than written to a logical volume. Metadata about virtual machine disk volumes is still stored in a logical volume on the domains.
    Virtual machine and template base images are no longer read only. This change enables live snapshots, live storage migration, and clone from snapshot.
    Support for unicode metadata is added, for non-English volume names.
    V3 metadata is applicable to NFS, GlusterFS, POSIX, iSCSI, and FC storage domains.

Note

Upgrades between metadata versions are automatic. If you've upgraded from Red Hat Enterprise Virtualization 3.0 to 3.1, your existing data centers are initially in 3.0 compatibility mode. When you upgrade your hosts and change your data center compatibility from 3.0 to 3.1, the storage metadata for your storage domains is automatically upgraded to version 3.

7.3. Preparing and Adding File-based Storage

7.3.1. Preparing NFS Storage

Summary
These steps must be taken to prepare an NFS file share on a server running Red Hat Enterprise Linux 6 for use with Red Hat Enterprise Virtualization.

Procedure 7.1. Preparing NFS Storage

  1. Install nfs-utils

    NFS functionality is provided by the nfs-utils package. Before file shares can be created, check that the package is installed by querying the RPM database for the system:
    $ rpm -qi nfs-utils
    If the nfs-utils package is installed then the package information will be displayed. If no output is displayed then the package is not currently installed. Install it using yum while logged in as the root user:
    # yum install nfs-utils
  2. Configure Boot Scripts

    To ensure that NFS shares are always available when the system is operational both the nfs and rpcbind services must start at boot time. Use the chkconfig command while logged in as root to modify the boot scripts.
    # chkconfig --add rpcbind
    # chkconfig --add nfs
    # chkconfig rpcbind on
    # chkconfig nfs on
    Once the boot script configuration has been done, start the services for the first time.
    # service rpcbind start
    # service nfs start
  3. Create Directory

    Create the directory you wish to share using NFS.
    # mkdir /exports/iso
    Replace /exports/iso with the name and path of the directory you wish to use.
  4. Export Directory

    To be accessible over the network using NFS, the directory must be exported. NFS exports are controlled using the /etc/exports configuration file. Each export path appears on a separate line, followed by a tab character and any additional NFS options. Exports to be attached to the Red Hat Enterprise Virtualization Manager must have the read and write options set.
    For example, to grant read and write access to /exports/iso using NFS, add the following line to the /etc/exports file:
    /exports/iso       *(rw)
    Again, replace /exports/iso with the name and path of the directory you wish to use.
  5. Reload NFS Configuration

    For the changes to the /etc/exports file to take effect the service must be told to reload the configuration. To force the service to reload the configuration run the following command as root:
    # service nfs reload
  6. Set Permissions

    The NFS export directory must be configured for read and write access, and must be owned by vdsm:kvm. If the vdsm user (UID 36) and kvm group (GID 36) do not exist on your external NFS server, use the following command, assuming that /exports/iso is the directory to be used as an NFS share.
    # chown -R 36:36 /exports/iso
    The permissions on the directory must be set to allow read and write access to both the owner and the group. The owner should also have execute access to the directory. The permissions are set using the chmod command. The following command arguments set the required permissions on the /exports/iso directory.
    # chmod 0755 /exports/iso
Result
The NFS file share has been created, and is ready to be attached by the Red Hat Enterprise Virtualization Manager.
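The ownership and permission scheme from step 6 can be rehearsed on a scratch directory before touching the real export. A minimal sketch using a temporary path (the chown to 36:36 requires root, so it is shown as a comment here):

```shell
# Create a scratch directory standing in for /exports/iso:
dir=$(mktemp -d)
# chown -R 36:36 "$dir"    # vdsm user (UID 36) and kvm group (GID 36); needs root
chmod 0755 "$dir"          # owner: rwx, group and others: r-x
stat -c '%a' "$dir"        # prints 755
rmdir "$dir"
```

A directory that does not show mode 755 and ownership 36:36 is a common cause of the Manager failing to attach the share.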

7.3.2. Attaching NFS Storage

Summary
An NFS type Storage Domain is a mounted NFS share that is attached to a data center. It is used to provide storage for virtualized guest images and ISO boot media. Once NFS storage has been exported, it must be attached to the Red Hat Enterprise Virtualization Manager using the Administration Portal.
NFS data domains can be added to NFS data centers. You can add NFS, ISO, and export storage domains to data centers of any type.

Procedure 7.2. Attaching NFS Storage

  1. Click the Storage resource tab to list the existing storage domains.
  2. Click New Domain to open the New Domain window.
    NFS Storage

    Figure 7.1. NFS Storage

  3. Enter the Name of the storage domain.
  4. Select the Data Center, Domain Function / Storage Type, and Use Host from the drop-down menus.
    If applicable, select the Format from the drop-down menu.
  5. Enter the Export Path to be used for the storage domain.
    The export path should be in the format of 192.168.0.10:/data or domain.example.com:/data.
  6. Click Advanced Parameters to enable further configurable settings. It is recommended that the values of these parameters not be modified.

    Important

    All communication to the storage domain is from the selected host and not directly from the Red Hat Enterprise Virtualization Manager. At least one active host must be attached to the chosen Data Center before the storage is configured.
  7. Click OK to create the storage domain and close the window.
Result
The new NFS data domain is displayed in the Storage tab with a status of Locked while the domain is prepared. It is automatically attached to the data center upon completion.

7.3.3. Preparing Local Storage

Summary
A local storage domain can be set up on a host. When you set up a host to use local storage, the host is automatically added to a new data center and cluster that no other hosts can be added to. Multiple-host clusters require that all hosts have access to all storage domains, which is not possible with local storage. Virtual machines created in a single-host cluster cannot be migrated, fenced, or scheduled.

Important

On Red Hat Enterprise Virtualization Hypervisors the only path permitted for use as local storage is /data/images. This directory already exists with the correct permissions on Hypervisor installations. The steps in this procedure are only required when preparing local storage on Red Hat Enterprise Linux virtualization hosts.

Procedure 7.3. Preparing Local Storage

  1. On the virtualization host, create the directory to be used for the local storage.
    # mkdir -p /data/images
  2. Ensure that the directory has permissions allowing read/write access to the vdsm user (UID 36) and kvm group (GID 36).
    # chown 36:36 /data /data/images
    # chmod 0755 /data /data/images
Result
Your local storage is ready to be added to the Red Hat Enterprise Virtualization environment.

7.3.4. Adding Local Storage

Summary
Storage local to your host has been prepared. Now use the Manager to add it to the host.
Adding local storage to a host in this manner causes the host to be put in a new data center and cluster. The local storage configuration window combines the creation of a data center, a cluster, and storage into a single process.

Procedure 7.4. Adding Local Storage

  1. Use the Hosts resource tab, tree mode, or the search function to find and select the host in the results list.
  2. Click Maintenance to place the host into maintenance mode.
  3. Click Configure Local Storage to open the Configure Local Storage window.
    Configure Local Storage Window

    Figure 7.2. Configure Local Storage Window

  4. Click the Edit buttons next to the Data Center, Cluster, and Storage fields to configure and name the local storage domain.
  5. Set the path to your local storage in the text entry field.
  6. If applicable, select the Memory Optimization tab to configure the memory optimization policy for the new local storage cluster.
  7. Click OK to save the settings and close the window.
Result
Your host comes online in a data center of its own.

7.4. Preparing and Adding Red Hat Storage (GlusterFS) Storage Domains

7.4.1. Introduction to Red Hat Storage (GlusterFS) Volumes

Red Hat Storage volumes combine storage from more than one Red Hat Storage server into a single global namespace. A volume is a collection of bricks, where each brick is a mountpoint or directory on a Red Hat Storage Server in the trusted storage pool.
Most of the management operations of Red Hat Storage happen on the volume.
You can use the Administration Portal to create and start new volumes. You can monitor volumes in your Red Hat Storage cluster from the Volumes tab.
While volumes can be created and managed from the Administration Portal, bricks must be created on the individual Red Hat Storage nodes before they can be added to volumes using the Administration Portal.

7.4.2. Introduction to Red Hat Storage (GlusterFS) Bricks

Bricks are the basic unit of storage in Red Hat Storage, and they are combined into volumes. Each brick is a directory or mountpoint. XFS is the recommended brick filesystem.
When adding a brick to a volume, the brick is expressed by combining a server address with export directory path in the following format: SERVER:EXPORT
For example: myhostname:/exports/myexportdir/
While volumes can be created using the Administration Portal, bricks must be manually created on Red Hat Storage nodes. Only after the directory structure is in place can bricks be added to volumes.
If your brick consists of a mounted device with an XFS filesystem as opposed to a directory in an XFS filesystem, consider making the mount permanent across reboots by adding it to the /etc/fstab file.
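A hypothetical /etc/fstab entry for such a brick mount follows. The device path /dev/vg_bricks/brick1 and the mountpoint /exports/myexportdir are assumptions for illustration; substitute the actual device and directory used on your Red Hat Storage node:

```
/dev/vg_bricks/brick1   /exports/myexportdir   xfs   defaults   0 0
```

With this line in place, the brick filesystem is remounted automatically at boot, so the volume's bricks remain available after the node restarts.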

7.4.3. Optimizing Red Hat Storage Volumes to Store Virtual Machine Images

Optimize a Red Hat Storage volume to store virtual machine images using the Administration Portal.
To optimize a volume for storing virtual machines, the Manager sets a number of virtualization-specific parameters for the volume.

Important

Red Hat Storage currently supports Red Hat Enterprise Virtualization 3.1 and above. All Gluster clusters and hosts must be attached to data centers with a compatibility version higher than 3.0.
Volumes can be optimized to store virtual machines during creation by selecting the Optimize for Virt Store check box, or after creation using the Optimize for Virt Store button from the Volumes tab.

7.4.4. Creating a Storage Volume

Summary
You can create new volumes using the Administration Portal. When creating a new volume, you must specify the bricks that comprise the volume and specify whether the volume is to be distributed, replicated, or striped.
You must create brick directories or mountpoints before you can add them to volumes.

Important

It is recommended that you use replicated volumes, where bricks exported from different hosts are combined into a volume. Replicated volumes create copies of files across multiple bricks in the volume, preventing data loss when a host is fenced.

Procedure 7.5. Creating A Storage Volume

  1. Click the Volumes resource tab to list existing volumes in the results list.
  2. Click New to open the New Volume window.
  3. Use the drop-down menus to select the Data Center and Volume Cluster.
  4. Enter the Name of the volume.
  5. Use the drop-down menu to select the Type of the volume.
  6. If active, select the appropriate Transport Type check box.
  7. Click the Add Bricks button to select bricks to add to the volume. Bricks must be created externally on the Red Hat Storage nodes.
  8. If active, use the Gluster, NFS, and CIFS check boxes to select the appropriate access protocols used for the volume.
  9. Enter the volume access control as a comma-separated list of IP addresses or hostnames in the Allow Access From field.
    You can use the * wildcard to specify ranges of IP addresses or hostnames, for example, 192.168.1.*.
  10. Select the Optimize for Virt Store option to set the parameters to optimize your volume for virtual machine storage. Select this if you intend to use this volume as a storage domain.
  11. Click OK to create the volume. The new volume is added and displays on the Volumes tab.
Result
You've added a Red Hat Storage volume. You can now use it for storage.
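The Administration Portal drives the same operations that the gluster command line exposes. As a point of reference only, creating and starting a comparable two-way replicated volume directly on a storage node might look like the following (volume name, hostnames, and brick paths are placeholders):

```shell
# Create a replicated volume from one brick on each of two servers.
gluster volume create myvolume replica 2 \
    server1.example.com:/exports/brick1 \
    server2.example.com:/exports/brick1

# Start the volume so clients can mount it.
gluster volume start myvolume
```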

7.4.5. Explanation of Settings in the New Volume Window

This table contains descriptions of the settings and options in the New Volume window.

Table 7.1. New Volume Window Settings

Setting Name Description
Data Center
The data center that the Red Hat Storage nodes hosting your new volume are a part of.
Volume Cluster
The cluster that the Red Hat Storage nodes hosting your new volume are a part of.
Name
The name of your new volume.
Type
The type of volume you are creating. Select one of:
  • Distribute: Distributed volumes distribute files throughout the cluster. You can use distributed volumes to scale storage in archival environments where brief periods of downtime during disk swaps are acceptable.
  • Replicate: Replicated volumes replicate files throughout the bricks in the volume. You can use replicated volumes in environments where high availability and high reliability are critical.
  • Distributed Replicate: Distributes files across replicated bricks in the volume. You can use distributed replicated volumes in environments where the requirement is to scale storage and high reliability is critical. Distributed replicated volumes also offer improved read performance in most environments.
  • Stripe: Striped volumes stripe data across bricks in the volume. For best results, you should use striped volumes only in high concurrency environments accessing very large files.
  • Distributed Stripe: Distributed striped volumes stripe data across two or more nodes in the cluster. For best results, you should use distributed striped volumes where the requirement is to scale storage and where high concurrency access to very large files is critical.
  • Striped Replicate: Striped replicated volumes stripe data across replicated bricks in the trusted storage pool. For best results, you should use striped replicated volumes in highly concurrent environments where there is parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for MapReduce workloads.
  • Distributed Striped Replicate: Distributed striped replicated volumes distribute striped data across replicated bricks in the trusted storage pool. For best results, you should use distributed striped replicated volumes in highly concurrent environments where parallel access of very large files and performance is critical. In this release, configuration of this volume type is supported only for MapReduce workloads.
Replica count
Number of replicas to keep of each stored item.
Stripe count
Number of bricks to stripe each file across.
Transport Type
The transport protocol used to communicate between Red Hat Storage nodes.
Bricks
Directories or mount points that will be combined into the new volume.
Access Protocols
Choose from:
  • Gluster: the native GlusterFS access protocol.
  • NFS: a customized implementation of Network File System protocol, allowing access to volumes by clients that do not support the native GlusterFS protocol.
  • CIFS: a customized implementation of the Common Internet File System protocol, allowing access to volumes by clients that do not support the native GlusterFS protocol.
Allow Access From
Specify the hosts that are allowed to access the volume as a comma-separated list of IP addresses or hostnames, using * as a wildcard to specify ranges.
Optimize for Virt Store
Set parameters on the volume to provide enhanced performance when used as virtual machine storage.

7.4.6. Adding Bricks to a Volume

Summary
You can expand your volumes by adding new bricks. When expanding your storage space, you must add at least one brick to a distributed volume, multiples of two bricks to replicated volumes, and multiples of four bricks to striped volumes.

Procedure 7.6. Adding Bricks to a Volume

  1. On the Volumes tab in the navigation pane, select the volume to which you want to add bricks.
  2. Click the Bricks tab in the details pane.
  3. Click Add Bricks to open the Add Bricks window.
  4. Use the Server drop-down menu to select the server on which the brick resides.
  5. Enter the path of the Brick Directory. The directory must already exist.
  6. Click Add. The brick appears in the list of bricks in the volume, with server addresses and brick directory names.
  7. Click OK.
Result
The new bricks are added to the volume and the bricks display in the volume's Bricks tab.
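As with volume creation, the Portal performs the same operation that the gluster command line provides. For reference, manually expanding an existing replica-2 volume by one replica set might look like this sketch (volume name, hostnames, and brick paths are placeholders; bricks must be added in multiples of the replica count):

```shell
# Add one new replica pair to an existing replicated volume.
gluster volume add-brick myvolume \
    server3.example.com:/exports/brick1 \
    server4.example.com:/exports/brick1
```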

7.4.7. Explanation of Settings in the Add Bricks Window

Table 7.2. Add Bricks Tab Properties

Field Name
Description
Volume Type
Displays the type of volume. This field cannot be changed; it was set when you created the volume.
Server
The server where the bricks are hosted.
Brick Directory
The brick directory or mountpoint.

7.5. Adding POSIX Compliant File System Storage

Red Hat Enterprise Virtualization 3.1 and higher supports the use of POSIX (native) file systems for storage. POSIX file system support allows you to mount file systems using the same mount options that you would normally use when mounting them manually from the command line. This functionality is intended to allow access to storage not exposed using NFS, iSCSI, or FCP.
Any POSIX compliant filesystem used as a storage domain in Red Hat Enterprise Virtualization MUST support sparse files and direct I/O. The Common Internet File System (CIFS), for example, does not support direct I/O, making it incompatible with Red Hat Enterprise Virtualization.

Important

Do not mount NFS storage by creating a POSIX compliant file system Storage Domain. Always create an NFS Storage Domain instead.

7.5.1. Attaching POSIX Compliant File System Storage

Summary
You want to use a POSIX compliant file system that is not exposed using NFS, iSCSI, or FCP as a storage domain.

Procedure 7.7. Attaching POSIX Compliant File System Storage

  1. Click the Storage resource tab to list the existing storage domains in the results list.
  2. Click New Domain to open the New Domain window.
    POSIX Storage

    Figure 7.3. POSIX Storage

  3. Enter the Name for the storage domain.
  4. Select the Data Center to be associated with the storage domain. The Data Center selected must be of type POSIX (POSIX compliant FS). Alternatively, select (none).
  5. Select Data / POSIX compliant FS from the Domain Function / Storage Type drop-down menu.
    If applicable, select the Format from the drop-down menu.
  6. Select a host from the Use Host drop-down menu. Only hosts within the selected data center will be listed. The host that you select will be used to connect the storage domain.
  7. Enter the Path to the POSIX file system, as you would normally provide it to the mount command.
  8. Enter the VFS Type, as you would normally provide it to the mount command using the -t argument. See man mount for a list of valid VFS types.
  9. Enter additional Mount Options, as you would normally provide them to the mount command using the -o argument. The mount options should be provided in a comma-separated list. See man mount for a list of valid mount options.
  10. Click OK to attach the new Storage Domain and close the window.
Result
You have used a supported mechanism to attach an unsupported file system as a storage domain.
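The Path, VFS Type, and Mount Options fields map directly onto a manual mount invocation. As an illustration only (the device path, filesystem type, options, and mountpoint below are hypothetical), the values Path /dev/vg_storage/lv_data, VFS Type xfs, and Mount Options noatime correspond to:

```shell
# What the selected host effectively runs for the storage domain:
#   mount -t <VFS Type> -o <Mount Options> <Path> <mountpoint>
mount -t xfs -o noatime /dev/vg_storage/lv_data /mnt/posix_domain
```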

7.5.2. Preparing pNFS Storage

Support for Parallel NFS (pNFS) as part of the NFS v4.1 standard is available as of Red Hat Enterprise Linux 6.4. The pNFS architecture improves the scalability of NFS, with possible improvements to performance. That is, when a server implements pNFS as well, a client is able to access data through multiple servers concurrently. The pNFS protocol supports three storage protocols or layouts: files, objects, and blocks. Red Hat Enterprise Linux 6.4 supports only the "files" layout type.
To enable support for pNFS functionality, use one of the following mount options on mounts from a pNFS-enabled server:
-o minorversion=1
or
-o v4.1
Set the permissions of the pNFS path so that Red Hat Enterprise Virtualization can access them:
# chown 36:36 [path to pNFS resource]
After the server is pNFS-enabled, the nfs_layout_nfsv41_files kernel module is automatically loaded on the first mount. Verify that the module was loaded:
$ lsmod | grep nfs_layout_nfsv41_files
Another way to verify a successful NFSv4.1 mount is with the mount command. The mount entry in the output should contain minorversion=1.
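Putting the steps above together, a complete transcript for mounting and verifying a pNFS export might look like the following (the server name, export path, and mountpoint are placeholders; run as root on a Red Hat Enterprise Linux 6.4 or later client):

```shell
# Mount the pNFS export as NFSv4.1.
mkdir -p /mnt/pnfs
mount -t nfs4 -o minorversion=1 server.example.com:/exports/data /mnt/pnfs

# Set ownership so Red Hat Enterprise Virtualization (vdsm:kvm) can access it.
chown 36:36 /mnt/pnfs

# Verify the pNFS files-layout module loaded and the mount used NFSv4.1.
lsmod | grep nfs_layout_nfsv41_files
mount | grep minorversion=1
```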

7.5.3. Attaching pNFS Storage

Summary
A pNFS type Storage Domain is a mounted pNFS share attached to a data center. It provides storage for virtualized guest images and ISO boot media. After you have exported pNFS storage, it must be attached to the Red Hat Enterprise Virtualization Manager using the Administration Portal.

Procedure 7.8. Attaching pNFS Storage

  1. Click the Storage resource tab to list the existing storage domains.
  2. Click New Domain to open the New Domain window.
    NFS Storage

    Figure 7.4. NFS Storage

  3. Enter the Name of the storage domain.
  4. Select the Data Center, Domain Function / Storage Type, and Use Host from the drop-down menus.
    If applicable, select the Format from the drop-down menu.
  5. Enter the Export Path to be used for the storage domain.
    The export path should be in the format of 192.168.0.10:/data or domain.example.com:/data
  6. In the VFS Type field, enter nfs4.
  7. In the Mount Options field, enter minorversion=1.

    Important

    All communication to the storage domain comes from the selected host and not from the Red Hat Enterprise Virtualization Manager. At least one active host must be attached to the chosen Data Center before the storage is configured.
  8. Click OK to create the storage domain and close the window.
Result
The new pNFS data domain is displayed on the Storage tab with a status of Locked while the disk prepares. It is automatically attached to the data center upon completion.

7.6. Preparing and Adding Block-based Storage

7.6.1. Preparing iSCSI Storage

Summary
These steps must be taken to export an iSCSI storage device from a server running Red Hat Enterprise Linux 6 for use as a storage domain with Red Hat Enterprise Virtualization.

Procedure 7.9. Preparing iSCSI Storage

  1. Install the scsi-target-utils package using the yum command as root on your storage server.
    # yum install -y scsi-target-utils
  2. Add the devices or files you want to export to the /etc/tgt/targets.conf file. Here is a generic example of a basic addition to the targets.conf file:
    <target iqn.YEAR-MONTH.com.EXAMPLE:SERVER.targetX>
              backing-store /PATH/TO/DEVICE1 # Becomes LUN 1
              backing-store /PATH/TO/DEVICE2 # Becomes LUN 2
              backing-store /PATH/TO/DEVICE3 # Becomes LUN 3
    </target>
    Targets are conventionally defined using the year and month they are created, the reversed fully qualified domain that the server is in, the server name, and a target number.
  3. Start the tgtd service.
    # service tgtd start
  4. Make the tgtd start persistently across reboots.
    # chkconfig tgtd on
  5. Open an iptables firewall port to allow clients to access your iSCSI export. By default, iSCSI uses port 3260. This example inserts a firewall rule at position 6 of the INPUT chain.
    # iptables -I INPUT 6 -p tcp --dport 3260 -j ACCEPT
  6. Save the iptables rule you just created.
    # service iptables save
Result
You have created a basic iSCSI export. You can use it as an iSCSI data domain.
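Before adding the export as a storage domain, you may want to confirm it is reachable. The following commands are a sketch (the client-side check assumes the iscsi-initiator-utils package is installed, and the address is a placeholder):

```shell
# On the storage server, list the configured targets and their LUNs.
tgt-admin --show

# From a client, confirm the target is discoverable on port 3260.
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
```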

7.6.2. Adding iSCSI Storage

Summary
Red Hat Enterprise Virtualization platform supports iSCSI storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time.
For information regarding the setup and configuration of iSCSI on Red Hat Enterprise Linux, see the Red Hat Enterprise Linux Storage Administration Guide.

Note

You can only add an iSCSI storage domain to a data center that is set up for iSCSI storage type.

Procedure 7.10. Adding iSCSI Storage

  1. Click the Storage resource tab to list the existing storage domains in the results list.
  2. Click the New Domain button to open the New Domain window.
  3. Enter the Name of the new storage domain.
    New iSCSI Domain

    Figure 7.5. New iSCSI Domain

  4. Use the Data Center drop-down menu to select an iSCSI data center.
    If you do not yet have an appropriate iSCSI data center, select (none).
  5. Use the drop-down menus to select the Domain Function / Storage Type and the Format. The storage domain types that are not compatible with the chosen data center are not available.
  6. Select an active host in the Use Host field. If this is not the first data domain in a data center, you must select the data center's SPM host.

    Important

    All communication to the storage domain is via the selected host and not directly from the Red Hat Enterprise Virtualization Manager. At least one active host must exist in the system, and be attached to the chosen data center, before the storage is configured.
  7. The Red Hat Enterprise Virtualization Manager is able to map either iSCSI targets to LUNs, or LUNs to iSCSI targets. The New Domain window automatically displays known targets with unused LUNs when iSCSI is selected as the storage type. If the target that you are adding storage from is not listed, use target discovery to find it; otherwise, proceed to the next step.

    iSCSI Target Discovery

    1. Click Discover Targets to enable target discovery options. When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment.

      Note

      LUNs used externally to the environment are also displayed.
      You can use the Discover Targets options to add LUNs on many targets, or multiple paths to the same LUNs.
    2. Enter the fully qualified domain name or IP address of the iSCSI host in the Address field.
    3. Enter the port to connect to the host on when browsing for targets in the Port field. The default is 3260.
    4. If the Challenge Handshake Authentication Protocol (CHAP) is being used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password.
    5. Click the Discover button.
    6. Select the target to use from the discovery results and click the Login button.
      Alternatively, click Login All to log in to all of the discovered targets.
  8. Click the + button next to the desired target. This will expand the entry and display all unused LUNs attached to the target.
  9. Select the check box for each LUN that you are using to create the storage domain.
  10. Click OK to create the storage domain and close the window.
Result
The new iSCSI storage domain displays on the Storage tab. This can take up to 5 minutes.

7.6.3. Adding FCP Storage

Summary
Red Hat Enterprise Virtualization platform supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time.
Red Hat Enterprise Virtualization system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage.
For information regarding the setup and configuration of FCP or multipathing on Red Hat Enterprise Linux, please refer to the Storage Administration Guide and DM Multipath Guide.

Note

You can only add an FCP storage domain to a data center that is set up for FCP storage type.

Procedure 7.11. Adding FCP Storage

  1. Click the Storage resource tab to list all storage domains in the virtualized environment.
  2. Click New Domain to open the New Domain window.
  3. Enter the Name of the storage domain
    Adding FCP Storage

    Figure 7.6. Adding FCP Storage

  4. Use the Data Center drop-down menu to select an FCP data center.
    If you do not yet have an appropriate FCP data center, select (none).
  5. Use the drop-down menus to select the Domain Function / Storage Type and the Format. The storage domain types that are not compatible with the chosen data center are not available.
  6. Select an active host in the Use Host field. If this is not the first data domain in a data center, you must select the data center's SPM host.

    Important

    All communication to the storage domain is via the selected host and not directly from the Red Hat Enterprise Virtualization Manager. At least one active host must exist in the system, and be attached to the chosen data center, before the storage is configured.
  7. The New Domain window automatically displays known targets with unused LUNs when Data / Fibre Channel is selected as the storage type. Select the LUN ID check box to select all of the available LUNs.
  8. Click OK to create the storage domain and close the window.
Result
The new FCP data domain displays on the Storage tab. It will remain with a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center.

7.6.4. Unusable LUNs in Red Hat Enterprise Virtualization

In certain circumstances, the Red Hat Enterprise Virtualization Manager will not allow you to use a LUN to create a storage domain or virtual machine hard disk.
  • LUNs that are already part of the current Red Hat Enterprise Virtualization environment are automatically prevented from being used.
    Unusable LUNs in the Red Hat Enterprise Virtualization Administration Portal

    Figure 7.7. Unusable LUNs in the Red Hat Enterprise Virtualization Administration Portal

  • LUNs that are already being used by the SPM host will also display as in use. You can choose to forcefully override the contents of these LUNs, but the operation is not guaranteed to succeed.

7.7. Storage Tasks

7.7.1. Importing Existing ISO or Export Storage Domains

Summary
You have an ISO or export domain that you have been using with a different data center. You want to attach it to the data center you are using, and import virtual machines or use ISOs.

Procedure 7.12. Importing an Existing ISO or Export Storage Domain

  1. Click the Storage resource tab to list all the available storage domains in the results list.
  2. Click Import Domain to open the Import Pre-Configured Domain window.
    Import Domain

    Figure 7.8. Import Domain

  3. Select the appropriate Domain Function / Storage Type from the following:
    • ISO
    • Export
    The Domain Function / Storage Type determines the availability of the Format field.
  4. Select the SPM host from the Use host drop-down menu.

    Important

    All communication to the storage domain is via the selected host and not from the Red Hat Enterprise Virtualization Manager. At least one host must be active and have access to the storage before the storage can be configured.
  5. Enter the Export path of the storage. The export path can be either a static IP address or a resolvable hostname. For example, 192.168.0.10:/Images/ISO or storage.demo.redhat.com:/exports/iso.
  6. Click OK to import the domain and close the window.
  7. The storage domain is imported and displays on the Storage tab. The next step is to attach it to a data center. This is described later in this chapter.
Result
You have imported your export or ISO domain into your environment. Attach it to a data center to use it.

7.7.2. Populating the ISO Storage Domain

Summary
After an ISO storage domain is attached to a data center, ISO images must be uploaded to it. Red Hat Enterprise Virtualization provides an ISO uploader tool that ensures that the images are uploaded into the correct directory path, with the correct user permissions.
The creation of ISO images from physical media is not described in this document. It is assumed that you have access to the images required for your environment.

Procedure 7.13. Populating the ISO Storage Domain

  1. Copy the required ISO image to a temporary directory on the system running Red Hat Enterprise Virtualization Manager.
  2. Log in to the system running Red Hat Enterprise Virtualization Manager as the root user.
  3. Use the engine-iso-uploader command to upload the ISO image. This action will take some time; the amount of time varies depending on the size of the image being uploaded and the available network bandwidth.

    Example 7.1. ISO Uploader Usage

    In this example, the ISO image RHEL6.iso is uploaded to the ISO domain called ISODomain using NFS. The command will prompt for an administrative user name and password. The user name must be provided in the form username@domain.
    # engine-iso-uploader --iso-domain=ISODomain upload RHEL6.iso
Result
The ISO image is uploaded and appears in the ISO storage domain specified. It is also available in the list of available boot media when creating virtual machines in the data center which the storage domain is attached to.

7.7.3. Moving Storage Domains to Maintenance Mode

Summary
Detaching and removing storage domains requires that they be in maintenance mode. This is also required to redesignate another data domain as the master data domain.
Editing domains and expanding iSCSI domains by adding more LUNs can only be done when the domain is active.

Important

Put any active ISO and export domains in maintenance mode using this procedure.

Procedure 7.14. Moving storage domains to maintenance mode

  1. Use the Storage resource tab, tree mode, or the search function to find and select the storage domain in the results list.
  2. Shut down and move all the virtual machines running on the storage domain.
  3. Click the Data Centers tab in the details pane.
  4. Click Maintenance. The storage domain is deactivated and has an Inactive status in the results list.
Result
You can now edit, detach, remove, or reactivate the inactive storage domains from the data center.

Note

You can also activate, detach, and place domains into maintenance mode using the Storage tab in the details pane of the data center with which the domain is associated.

7.7.4. Editing a Resource

Summary
Edit the properties of a resource.

Procedure 7.15. Editing a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click Edit to open the Edit window.
  3. Change the necessary properties and click OK.
Result
The new properties are saved to the resource. The Edit window will not close if a property field is invalid.

7.7.5. Activating Storage Domains

Summary
If you have been making changes to a data center's storage, you have to put storage domains into maintenance mode. Activate a storage domain to resume using it.
  1. Use the Storage resource tab, tree mode, or the search function to find and select the inactive storage domain in the results list.
  2. Click the Data Centers tab in the details pane.
  3. Select the appropriate data center and click Activate.

    Important

    If you attempt to activate the ISO domain before activating the data domain, an error message displays and the domain is not activated.
Result
Your storage domain is active and ready for use.

7.7.6. Removing a Storage Domain

Summary
You have a storage domain in your data center that you want to remove from the virtualized environment.

Procedure 7.16. Removing a Storage Domain

  1. Use the Storage resource tab, tree mode, or the search function to find and select the appropriate storage domain in the results list.
  2. Move the domain into maintenance mode to deactivate it.
  3. Detach the domain from the data center.
  4. Click Remove to open the Remove Storage confirmation window.
  5. Select a host from the list.
  6. Click OK to remove the storage domain and close the window.
Result
The storage domain is permanently removed from the environment.

7.7.7. Destroying a Storage Domain

Summary
A storage domain that has encountered errors might not be removable through the normal procedure. Destroying a storage domain forcibly removes it from the virtualized environment without reference to the export directory.
When the storage domain is destroyed, you are required to manually fix the export directory of the storage domain before it can be used again.

Procedure 7.17. Destroying a Storage Domain

  1. Use the Storage resource tab, tree mode, or the search function to find and select the appropriate storage domain in the results list.
  2. Right-click the storage domain and select Destroy to open the Destroy Storage Domain confirmation window.
  3. Select the Approve operation check box and click OK to destroy the storage domain and close the window.
Result
The storage domain has been destroyed. Manually clean the export directory for the storage domain to recycle it.

7.7.8. Detaching the Export Domain

Summary
Detach the export domain from the data center to import the templates to another data center.

Procedure 7.18. Detaching an Export Domain from the Data Center

  1. Use the Storage resource tab, tree mode, or the search function to find and select the export domain in the results list.
  2. Click the Data Centers tab in the details pane and select the export domain.
  3. Click Maintenance to put the export domain into maintenance mode.
  4. Click Detach to open the Detach Storage confirmation window.
  5. Click OK to detach the export domain.
Result
The export domain has been detached from the data center, ready to be attached to another data center.

7.7.9. Attaching an Export Domain to a Data Center

Summary
Attach the export domain to a data center.

Procedure 7.19. Attaching an Export Domain to a Data Center

  1. Use the Storage resource tab, tree mode, or the search function to find and select the export domain in the results list.
  2. Click the Data Centers tab in the details pane.
  3. Click Attach to open the Attach to Data Center window.
  4. Select the radio button of the appropriate data center.
  5. Click OK to attach the export domain.
Result
The export domain is attached to the data center and is automatically activated.

7.8. Storage and Permissions

7.8.1. Managing System Permissions for a Storage Domain

The system administrator, as the SuperUser, manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for empowering a user with certain administrative privileges that limit them to a specific resource: a DataCenterAdmin role has administrator privileges only for the assigned data center, a StorageAdmin has administrator privileges only for the assigned storage domain, and so forth.
A storage administrator is a system administration role for a specific storage domain only. This is useful in data centers with multiple storage domains, where each storage domain requires a system administrator. Use the Configure button in the header bar to assign a storage administrator for all storage domains in the environment.
The storage domain administrator role permits the following actions:
  • Edit the configuration of the storage domain;
  • Move the storage domain into maintenance mode; and
  • Remove the storage domain.

Note

You can only assign roles and permissions to existing users.
You can also change the system administrator of a storage domain by removing the existing system administrator and adding the new system administrator.

7.8.2. Storage Administrator Roles Explained

Storage Domain Permission Roles
The table below describes the administrator roles and privileges applicable to storage domain administration.

Table 7.3. Red Hat Enterprise Virtualization System Administrator Roles

Role Privileges Notes
StorageAdmin Storage Administrator Can create, delete, configure and manage a specific storage domain.
GlusterAdmin Gluster Storage Administrator Can create, delete, configure and manage Gluster storage volumes.

7.8.3. Assigning an Administrator or User Role to a Resource

Summary
Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 7.20. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add to open the Add Permission to User window.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down menu.
  6. Click OK to assign the role and close the window.
Result
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

7.8.4. Removing an Administrator or User Role from a Resource

Summary
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 7.21. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK to remove the user role.
Result
You have removed the user's role, and the associated permissions, from the resource.

Chapter 8. Virtual Machines

8.1. Introduction to Virtual Machines

A virtual machine is a software implementation of a computer. The Red Hat Enterprise Virtualization environment enables you to create virtual desktops and virtual servers.
Virtual machines consolidate computing tasks and workloads. In traditional computing environments, workloads usually run on individually administered and upgraded servers. Virtual machines reduce the amount of hardware and administration required to run the same computing tasks and workloads.

8.2. Supported Virtual Machine Operating Systems

The operating systems that can be virtualized as guest operating systems in Red Hat Enterprise Virtualization are as follows:

Table 8.1. Operating systems that can be used as guest operating systems

Operating System Architecture SPICE support
Red Hat Enterprise Linux 3
32-bit, 64-bit
Yes
Red Hat Enterprise Linux 4
32-bit, 64-bit
Yes
Red Hat Enterprise Linux 5
32-bit, 64-bit
Yes
Red Hat Enterprise Linux 6
32-bit, 64-bit
Yes
SUSE Linux Enterprise Server 10 (select Other Linux for the guest type in the user interface)
32-bit, 64-bit
No
SUSE Linux Enterprise Server 11 (SPICE drivers (QXL) are not supplied by Red Hat. However, the distribution's vendor may provide SPICE drivers as part of their distribution.)
32-bit, 64-bit
No
Ubuntu 12.04 (Precise Pangolin LTS)
32-bit, 64-bit
Yes
Ubuntu 12.10 (Quantal Quetzal)
32-bit, 64-bit
Yes
Ubuntu 13.04 (Raring Ringtail)
32-bit, 64-bit
No
Ubuntu 13.10 (Saucy Salamander)
32-bit, 64-bit
Yes
Windows XP Service Pack 3 and newer
32-bit
Yes
Windows 7
32-bit, 64-bit
Yes
Windows 8
32-bit, 64-bit
No
Windows Server 2003 Service Pack 2 and newer
32-bit, 64-bit
Yes
Windows Server 2003 R2
32-bit, 64-bit
Yes
Windows Server 2008
32-bit, 64-bit
Yes
Windows Server 2008 R2
64-bit
Yes
Windows Server 2012
64-bit
No
Of the operating systems that can be virtualized as guest operating systems in Red Hat Enterprise Virtualization, the operating systems that are supported by Global Support Services are as follows:

Table 8.2. Guest operating systems that are supported by Global Support Services

Operating System Architecture
Red Hat Enterprise Linux 3
32-bit, 64-bit
Red Hat Enterprise Linux 4
32-bit, 64-bit
Red Hat Enterprise Linux 5
32-bit, 64-bit
Red Hat Enterprise Linux 6
32-bit, 64-bit
SUSE Linux Enterprise Server 10 (select Other Linux for the guest type in the user interface)
32-bit, 64-bit
SUSE Linux Enterprise Server 11 (SPICE drivers (QXL) are not supplied by Red Hat. However, the distribution's vendor may provide SPICE drivers as part of their distribution.)
32-bit, 64-bit
Windows XP Service Pack 3 and newer
32-bit
Windows 7
32-bit, 64-bit
Windows 8
32-bit, 64-bit
Windows Server 2003 Service Pack 2 and newer
32-bit, 64-bit
Windows Server 2003 R2
32-bit, 64-bit
Windows Server 2008
32-bit, 64-bit
Windows Server 2008 R2
64-bit
Windows Server 2012
64-bit
Remote Desktop Protocol (RDP) is the default connection protocol for accessing Windows 8 and Windows Server 2012 guests from the user portal because Microsoft introduced changes to the Windows Display Driver Model that prevent SPICE from performing optimally.

Note

While Red Hat Enterprise Linux 3 and Red Hat Enterprise Linux 4 are supported, virtual machines running the 32-bit version of these operating systems cannot be shut down gracefully from the administration portal because there is no ACPI support in the 32-bit x86 kernel. To terminate virtual machines running the 32-bit version of Red Hat Enterprise Linux 3 or Red Hat Enterprise Linux 4, right-click the virtual machine and select the Power Off option.

8.3. Virtual Machine Performance Parameters

Red Hat Enterprise Virtualization virtual machines can support the following parameters:

Table 8.3. Supported virtual machine parameters

Parameter Number Note
Virtualized CPUs 160 per virtual machine
Virtualized RAM 2TB For a 64-bit virtual machine
Virtualized RAM 4GB For a 32-bit virtual machine. Note that the virtual machine may not register the entire 4GB; the amount of RAM that the virtual machine recognizes is limited by its operating system.
Virtualized storage devices 8 per virtual machine
Virtualized network interface controllers 8 per virtual machine
Virtualized PCI devices 32 per virtual machine

8.4. Creating Virtual Machines

8.4.1. Creating a New Virtual Machine from an Existing Template

Summary
You can use a template to create a virtual machine which has already been configured with virtual disks, network interfaces, an operating system, and applications.
A virtual machine created from a template depends on the template. You cannot remove a template from the environment if there are still virtual machines that were created from it. Cloning a virtual machine from a template removes the dependency on the template.

Procedure 8.1. Creating a New Virtual Machine from an Existing Template

  1. Click the Virtual Machines resource tab to list all the virtual machines in the results list.
    The icon to the left of the virtual machine name indicates whether it is a virtual server, a virtual machine, or a part of a virtual machine pool.
  2. Click the New VM button to open the New Virtual Machine window.
  3. Select the Data Center and Host Cluster on which the virtual machine is to run. Select an existing template from the Based on Template drop-down menu.
    New Virtual Machine Window

    Figure 8.1. New Virtual Machine Window

  4. Enter a suitable Name and Description, and accept the default values inherited from the template. You can change the rest of the fields if needed.
  5. Click OK.
Result
The virtual machine is created and displayed in the Virtual Machines list. You can now log on to your virtual machine and begin using it, or assign users to it.
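Virtual machines can also be created from a template through the Manager's REST API by sending a POST request to /api/vms. The Python sketch below only builds the XML request body for such a call; the virtual machine, cluster, and template names used here are illustrative, not part of the procedure above.

```python
import xml.etree.ElementTree as ET

def vm_from_template_body(name, cluster, template):
    """Build the XML body for POST /api/vms when creating a
    virtual machine from an existing template (illustrative)."""
    vm = ET.Element("vm")
    ET.SubElement(vm, "name").text = name
    ET.SubElement(ET.SubElement(vm, "cluster"), "name").text = cluster
    ET.SubElement(ET.SubElement(vm, "template"), "name").text = template
    return ET.tostring(vm, encoding="unicode")

# Hypothetical names for illustration only.
body = vm_from_template_body("my-desktop", "Default", "rhel6-template")
print(body)
```

The same body, with the template name set to Blank, corresponds to creating a virtual machine from the blank template as described in the next section.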

8.4.2. Creating a New Virtual Machine from a Blank Template

Summary
You can create a virtual machine using a blank template and configure all of its settings.

Procedure 8.2. Creating a New Virtual Machine from a Blank Template

  1. Click the Virtual Machines resource tab to list all the virtual machines in the results list.
    The icon to the left of the virtual machine name indicates whether it is a virtual server, a virtual machine, or a part of a virtual machine pool.
  2. Click the New VM button to open the New Virtual Machine window.
  3. On the General tab, you only need to fill in the Name and Operating System fields. You can accept the default settings for other fields, or change them if required.
  4. Optionally, click the Initial Run, Console, Host, Resource Allocation, Boot Options, and Custom Properties tabs in turn to define further options for your virtual machine.
  5. Click OK to create the virtual machine and close the window.
  6. The New Virtual Machine - Guide Me window opens. Use the Guide Me buttons to complete configuration or click Configure Later to close the window.
Result
The new virtual machine is created and is displayed in the list of virtual machines with a status of Down. Before you can use this virtual machine, add at least one network interface and one virtual disk, and install an operating system.

8.4.3. Explanation of Settings and Controls in the New Virtual Machine and Edit Virtual Machine Windows

8.4.3.1. Virtual Machine General Settings Explained

These settings apply to adding new or editing existing virtual machines.
The Virtual Machine: General settings table details the information required on the General tab of the New Virtual Machine and Edit Virtual Machine windows.

Table 8.4. Virtual Machine: General Settings

Field Name
Description
Cluster
The name of the host cluster to which the virtual machine is attached. It can be hosted on any physical machine in the cluster depending on the policy rules.
Based on Template
Templates can be used to create virtual machines from existing models. This field is set to Blank by default, which enables you to create a virtual machine from scratch.
Operating System
The operating system. Valid values include a range of Red Hat Enterprise Linux and Windows variants.
Optimized for
The type of system for which the virtual machine is to be optimized. There are two options: Server and Desktop; the field is set to Server by default. Virtual machines optimized to act as servers have no sound card, use a cloned disk image, and are not stateless. In contrast, virtual machines optimized to act as desktop machines have a sound card, use an image (thin allocation), and are stateless.
Name
The name of the virtual machine. Names must not contain any spaces, and must contain at least one character from A-Z or 0-9. The maximum length of a virtual machine name is 64 characters.
Description
A meaningful description of the new virtual machine.
Comment
A field for adding plain text, human-readable comments regarding the virtual machine.
Stateless
Select this check box if the virtual machine is to run in stateless mode. The stateless mode is used primarily for desktop virtual machines. Running a stateless desktop or server creates a new COW layer on the virtual machine hard disk image where new and changed data is stored. Shutting down the stateless virtual machine deletes the new COW layer, returning the virtual machine to its original state. This type of virtual machine is useful when creating virtual machines that need to be used for a short time, or by temporary staff.
Start in Pause Mode
Select this check box to always start the VM in pause mode. This option is suitable for virtual machines which require a long time to establish a SPICE connection, for example virtual machines in remote locations.
Delete Protection
Select this check box to prevent the virtual machine from being deleted. The virtual machine can be deleted only when this check box is cleared.
At the bottom of the General tab is a drop-down box that allows you to assign network interfaces to the new virtual machine. Use the plus and minus buttons to add or remove additional network interfaces.
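The naming rules given for the Name field above can be expressed as a small validation sketch. This is not part of the Manager; it simply restates the rules, assuming letters of either case count as valid characters:

```python
import re

def valid_vm_name(name):
    """Check a candidate virtual machine name against the rules in
    Table 8.4: no spaces, at least one letter or digit, and no more
    than 64 characters."""
    return (0 < len(name) <= 64
            and " " not in name
            and re.search(r"[A-Za-z0-9]", name) is not None)

assert valid_vm_name("web-server-01")
assert not valid_vm_name("bad name")   # contains a space
assert not valid_vm_name("x" * 65)     # longer than 64 characters
```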

8.4.3.2. Virtual Machine System Settings Explained

These settings apply to adding new or editing existing virtual machines.
The Virtual Machine: System settings table details the information required on the System tab of the New Virtual Machine and Edit Virtual Machine windows.

Table 8.5. Virtual Machine: System Settings

Field Name
Description
Memory Size
The amount of memory assigned to the virtual machine. When allocating memory, consider the processing and storage needs of the applications that are intended to run on the virtual machine.
Maximum guest memory is constrained by the selected guest architecture and the cluster compatibility level.
Total Virtual CPUs
The processing power allocated to the virtual machine as CPU Cores. Do not assign more cores to a virtual machine than are present on the physical host.
Cores per Virtual Socket
The number of cores assigned to each virtual socket.
Virtual Sockets
The number of CPU sockets for the virtual machine. Do not assign more sockets to a virtual machine than are present on the physical host.

8.4.3.3. Virtual Machine Initial Run Settings Explained

These settings apply to adding new or editing existing virtual machines.
The Virtual Machine: Initial Run settings table details the information required on the Initial Run tab of the New Virtual Machine and Edit Virtual Machine windows.

Table 8.6. Virtual Machine: Initial Run Settings

Field Name
Description
General - Time Zone
The time zone in which the virtual machine is to run. It is not necessarily the time zone for the physical host on which the virtual machine is running.
Windows - Domain
The domain in which the virtual machine is to run. This option is only available when Windows is selected as the operating system on the Virtual Machine - General tab.

8.4.3.4. Virtual Machine Console Settings Explained

These settings apply to adding new or editing existing virtual machines.
The Virtual Machine: Console settings table details the information required on the Console tab of the New Virtual Machine and Edit Virtual Machine windows.

Table 8.7. Virtual Machine: Console Settings

Field Name
Description
Protocol
Defines the display protocol to be used. SPICE is the recommended protocol for Linux and Windows virtual machines, except Windows 8 and Windows Server 2012. Optionally, select VNC for Linux virtual machines. A VNC client is required to connect to a virtual machine using the VNC protocol.
VNC Keyboard Layout
Defines the keyboard layout for the virtual machine. This option is only available when using the VNC protocol.
USB Support
Defines whether USB devices can be used on the virtual machine. This option is only available for virtual machines using the SPICE protocol. Select either:
  • Disabled - Does not allow USB redirection from the client machine to the virtual machine.
  • Legacy - Enables the SPICE USB redirection policy used in Red Hat Enterprise Virtualization 3.0. This option can only be used on Windows virtual machines, and will not be supported in future versions of Red Hat Enterprise Virtualization.
  • Native - Enables native KVM/SPICE USB redirection for Linux and Windows virtual machines. Virtual machines do not require any in-guest agents or drivers for native USB. This option can only be used if the virtual machine's cluster compatibility version is set to 3.1 or higher.
Monitors
The number of monitors for the virtual machine. This option is only available for virtual desktops using the SPICE display protocol. You can choose 1, 2 or 4. Since Windows 8 and Windows Server 2012 virtual machines do not support the SPICE protocol, they do not support multiple monitors.
Smartcard Enabled
Smartcards are an external hardware security feature, most commonly seen in credit cards, but also used by many businesses as authentication tokens. Smartcards can be used to protect Red Hat Enterprise Virtualization virtual machines. Tick or untick the check box to activate and deactivate Smartcard authentication for individual virtual machines.
Disable strict user checking
Click the Advanced Parameters arrow and select the check box to use this option. With this option selected, the virtual machine does not need to be rebooted when a different user connects to it.
By default, strict checking is enabled so that only one user can connect to the console of a virtual machine. No other user is able to open a console to the same virtual machine until it has been rebooted. The exception is that a SuperUser can connect at any time and replace an existing connection. When a SuperUser has connected, no normal user can connect again until the virtual machine is rebooted.
Disable strict checking with caution, because you can expose the previous user's session to the new user.
Soundcard Enabled
A soundcard device is not required for all virtual machine use cases. If it is required for yours, enable a soundcard here.
VirtIO Console Device Enabled
The VirtIO console device is a console over VirtIO transport for communication between the host userspace and guest userspace. It has two parts: device emulation in QEMU that presents a virtio-pci device to the guest, and a guest driver that presents a character device interface to userspace applications. Tick the check box to attach a VirtIO console device to your virtual machine.

8.4.3.5. Virtual Machine Host Settings Explained

These settings apply to adding new or editing existing virtual machines.
The Virtual Machine: Host settings table details the information required on the Host tab of the New Virtual Machine and Edit Virtual Machine windows.

Table 8.8. Virtual Machine: Host Settings

Field Name
Description
Start Running On
Defines the preferred host on which the virtual machine is to run. Select either:
  • Any Host in Cluster - The virtual machine can start and run on any available host in the cluster.
  • Specific - The virtual machine will start running on a particular host in the cluster. However, the Manager or an administrator can migrate the virtual machine to a different host in the cluster depending on the migration and high-availability settings of the virtual machine. Select the specific host from the drop-down list of available hosts.
Migration Options
Defines options to run and migrate the virtual machine. If the options here are not used, the virtual machine will run or migrate according to its cluster's policy.
  • Allow manual and automatic migration - The virtual machine can be automatically migrated from one host to another in accordance with the status of the environment, or manually by an administrator.
  • Allow manual migration only - The virtual machine can only be migrated from one host to another manually by an administrator.
  • Do not allow migration - The virtual machine cannot be migrated, either automatically or manually.
The Use Host CPU check box allows virtual machines to take advantage of the features of the physical CPU of the host on which they are situated. This option can only be enabled when Allow manual migration only or Do not allow migration are selected.

8.4.3.6. Virtual Machine High Availability Settings Explained

These settings apply to adding new or editing existing server virtual machines.
The Virtual Machine: High Availability settings table details the information required on the High Availability tab of the New Virtual Machine and Edit Virtual Machine windows.

Table 8.9. Virtual Machine: High Availability Settings

Field Name
Description
Highly Available
Select this check box if the virtual machine is to be highly available. For example, in cases of host maintenance or failure, the virtual machine will be automatically moved to or re-launched on another host. If the host is manually shut down by the system administrator, the virtual machine is not automatically moved to another host.
Note that this option is unavailable if the Migration Options setting in the Hosts tab is set to either Allow manual migration only or Do not allow migration. For a virtual machine to be highly available, it must be possible for the Manager to migrate the virtual machine to other available hosts as necessary.
Priority for Run/Migration queue
Sets the priority level for the virtual machine to be migrated or restarted on another host.
Watchdog
Allows users to attach a watchdog card to a virtual machine. A watchdog is a timer that is used to automatically detect and recover from failures. Once set, a watchdog timer continually counts down to zero while the system is in operation, and is periodically restarted by the system to prevent it from reaching zero. If the timer reaches zero, it signifies that the system has been unable to reset the timer and is therefore experiencing a failure. Corrective actions are then taken to address the failure. This functionality is especially useful for servers that demand high availability.
Watchdog Model: The model of watchdog card to assign to the virtual machine. Currently, the only supported model is i6300esb.
Watchdog Action: The action to take if the watchdog timer reaches zero. The following actions are available:
  • none - No action is taken. However, the watchdog event is recorded in the audit log.
  • reset - The virtual machine is reset and the Manager is notified of the reset action.
  • poweroff - The virtual machine is immediately shut down.
  • dump - A dump is performed and the virtual machine is paused.
  • pause - The virtual machine is paused, and can be resumed by users.
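On the hosts, KVM watchdog cards are ultimately defined as libvirt guest devices. The fragment below is only an illustration of what such a definition looks like for the i6300esb model with the reset action; the Manager generates the actual configuration, so this is shown purely to clarify what the setting controls.

```xml
<!-- Illustrative libvirt guest XML: an i6300esb watchdog that
     resets the virtual machine when the timer expires -->
<watchdog model='i6300esb' action='reset'/>
```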

8.4.3.7. Virtual Machine Resource Allocation Settings Explained

These settings apply to adding new or editing existing virtual machines.
The Virtual Machine: Resource Allocation settings table details the information required on the Resource Allocation tab of the New Virtual Machine and Edit Virtual Machine windows.

Table 8.10. Virtual Machine: Resource Allocation Settings

Field Name
Sub-element
Description
CPU Allocation
CPU Shares
Allows users to set the level of CPU resources a virtual machine can demand relative to other virtual machines.
  • Low - 512
  • Medium - 1024
  • High - 2048
  • Custom - A custom level of CPU shares defined by the user.
 
CPU Pinning topology
Enables the virtual machine's virtual CPU (vCPU) to run on a specific physical CPU (pCPU) in a specific host. This option is not supported if the virtual machine's cluster compatibility version is set to 3.0. The syntax of CPU pinning is v#p[_v#p], for example:
  • 0#0 - Pins vCPU 0 to pCPU 0.
  • 0#0_1#3 - Pins vCPU 0 to pCPU 0, and pins vCPU 1 to pCPU 3.
  • 1#1-4,^2 - Pins vCPU 1 to one of the pCPUs in the range of 1 to 4, excluding pCPU 2.
In order to pin a virtual machine to a host, you must select Do not allow migration under Migration Options, and select the Use Host CPU check box.
Memory Allocation
The amount of physical memory guaranteed for this virtual machine.
Storage Allocation
The Template Provisioning option is only available when the virtual machine is created from a template.
Thin
Provides optimized usage of storage capacity. Disk space is allocated only as it is required.
Clone
Optimized for the speed of guest read and write operations. All disk space requested in the template is allocated at the time of the clone operation.
VirtIO-SCSI Enabled
Allows users to enable or disable the use of VirtIO-SCSI on the virtual machine.
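The CPU pinning syntax described in the table above can be illustrated with a small parser sketch. This is not Manager code; the range and exclusion handling below simply follows the v#p[_v#p] examples given in the table:

```python
def parse_cpu_pinning(spec):
    """Parse a CPU pinning string of the form v#p[_v#p] (Table 8.10).

    Each pCPU part may list comma-separated values, ranges (a-b), and
    exclusions (^n). Returns a dict of {vCPU: set of allowed pCPUs}.
    """
    pinning = {}
    for pair in spec.split("_"):
        vcpu, pcpus = pair.split("#")
        allowed, excluded = set(), set()
        for token in pcpus.split(","):
            if token.startswith("^"):
                excluded.add(int(token[1:]))       # ^2 excludes pCPU 2
            elif "-" in token:
                lo, hi = map(int, token.split("-"))
                allowed.update(range(lo, hi + 1))  # 1-4 allows pCPUs 1..4
            else:
                allowed.add(int(token))
        pinning[int(vcpu)] = allowed - excluded
    return pinning

assert parse_cpu_pinning("0#0") == {0: {0}}
assert parse_cpu_pinning("0#0_1#3") == {0: {0}, 1: {3}}
assert parse_cpu_pinning("1#1-4,^2") == {1: {1, 3, 4}}
```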

8.4.3.8. Virtual Machine Boot Options Settings Explained

These settings apply to adding new or editing existing virtual machines.
The Virtual Machine: Boot Options settings table details the information required on the Boot Options tab of the New Virtual Machine and Edit Virtual Machine windows

Table 8.11. Virtual Machine: Boot Options Settings

Field Name
Description
First Device
Select the first device from which the virtual machine attempts to boot:
  • Hard Disk
  • CD-ROM
  • Network (PXE)
Second Device
Select the second device for the virtual machine to use to boot if the first device is not available. The first device selected in the previous option does not appear in the options.
Attach CD
If you have selected CD-ROM as a boot device, tick this check box and select a CD-ROM image from the drop-down menu. The images must be available in the ISO domain.

8.4.3.9. Virtual Machine Custom Properties Settings Explained

These settings apply to adding new or editing existing virtual machines.
The table below details the information required on the Custom Properties tab of the New Virtual Machine and Edit Virtual Machine windows.

Table 8.12. Virtual Machine: Custom Properties Settings

Field Name
Description
Recommendations and Limitations
sap_agent
Enables SAP monitoring on the virtual machine. Set to true or false.
-
sndbuf
Enter the size of the buffer for sending the virtual machine's outgoing data over the socket. Default value is 0.
-
vhost
Disables vhost-net, which is the kernel-based virtio network driver on virtual network interface cards attached to the virtual machine. To disable vhost, the format for this property is:
LogicalNetworkName: false
This will explicitly start the virtual machine without the vhost-net setting on the virtual NIC attached to LogicalNetworkName.
vhost-net provides better performance than virtio-net, and if it is present, it is enabled on all virtual machine NICs by default. Disabling this property makes it easier to isolate and diagnose performance issues, or to debug vhost-net errors, for example if migration fails for virtual machines on which vhost does not exist.
viodiskcache
Caching mode for the virtio disk. writethrough writes data to the cache and the disk in parallel, writeback does not copy modifications from the cache to the disk, and none disables caching.
For Red Hat Enterprise Virtualization 3.1, if viodiskcache is enabled, the virtual machine cannot be live migrated.

Warning

Increasing the value of the sndbuf custom property results in increased occurrences of communication failure between hosts and unresponsive virtual machines.
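Outside the Administration Portal, tooling that accepts custom properties commonly takes them as a single semicolon-separated key=value string. The helper below is an illustrative sketch of that assumed format; the logical network name ovirtmgmt used for the vhost property is hypothetical:

```python
def custom_properties(props):
    """Serialize custom properties as a semicolon-separated
    key=value string (assumed format; in the Administration
    Portal each property has its own field)."""
    return ";".join("%s=%s" % (key, value) for key, value in props)

# Hypothetical values combining properties from Table 8.12.
s = custom_properties([("sap_agent", "true"),
                       ("vhost", "ovirtmgmt:false"),
                       ("viodiskcache", "writethrough")])
print(s)
```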

8.4.4. Creating a Cloned Virtual Machine from an Existing Template

Summary
Cloning a virtual machine from a template is similar to creating a virtual machine from a template. A cloned virtual machine inherits all the settings of the template it is based on. However, once created, a clone does not depend on the template it was created from.

Procedure 8.3. Creating a Cloned Virtual Machine from an Existing Template

  1. Click the Virtual Machines resource tab to list all the virtual machines in the results list.
  2. Click the New VM button to open the New Virtual Machine window.
  3. Select an existing template from the Based on Template drop-down menu.
  4. Enter a Name and appropriate Description, and accept the default values inherited from the template in the rest of the fields. You can change them if needed.
  5. Click the Resource Allocation tab. The template you selected is displayed on the Template Provisioning field. Select Clone.
    Provisioning - Clone

    Figure 8.2. Provisioning - Clone

    Select the disk provisioning mode in the Allocation field. This selection impacts both the speed of the clone operation and the amount of disk space it requires.
    • Selecting Thin Provision results in a faster clone operation and provides optimized usage of storage capacity. Disk space is allocated only as it is required. This is the default selection.
    • Selecting Preallocated results in a slower clone operation and is optimized for the speed of guest read and write operations. All disk space requested in the template is allocated at the time of the clone operation.
  6. Select the Target storage domain for the virtual machine.
  7. Click OK.

    Note

    Cloning a virtual machine may take some time. A new copy of the template's disk must be created. During this time, the virtual machine's status is first Image Locked, then Down.
Result
The virtual machine is created and displayed in the Virtual Machines list. You can now log on to your virtual machine and begin using it, or assign users to it.

8.4.5. Completing the Configuration of a Virtual Machine by Defining Network Interfaces and Hard Disks

Summary
Before you can use your newly created virtual machine, the Guide Me window prompts you to configure at least one network interface and one virtual disk for the virtual machine.

Procedure 8.4. Completing the Configuration of a Virtual Machine by Defining Network Interfaces and Hard Disks

  1. On the New Virtual Machine - Guide Me window, click the Configure Network Interfaces button to open the New Network Interface window. You can accept the default values or change them as necessary.
    New Network Interface window

    Figure 8.3. New Network Interface window

    Enter the Name of the network interface.
  2. Use the drop-down menus to select the Network and the Type of network interface for the new virtual machine. The Link State is set to Up by default when the NIC is defined on the virtual machine and connected to the network.

    Note

    The options on the Network and Type fields are populated by the networks available to the cluster, and the NICs available to the virtual machine.
  3. If applicable, select the Specify custom MAC address check box and enter the network interface's MAC address.
  4. Click the arrow next to Advanced Parameters to configure the Port Mirroring and Card Status fields, if necessary.
  5. Click OK to close the New Network Interface window and open the New Virtual Machine - Guide Me window.
  6. Click the Configure Virtual Disk button to open the New Virtual Disk window.
  7. Add either an Internal virtual disk or an External LUN to the virtual machine.
    New Virtual Disk Window

    Figure 8.4. New Virtual Disk Window

  8. Click OK to close the New Virtual Disk window. The New Virtual Machine - Guide Me window opens with changed context. There is no further mandatory configuration.
  9. Click Configure Later to close the window.
Result
You have added a network interface and a virtual disk to your virtual machine.

8.4.6. Installing a Guest Operating System onto a Virtual Machine

Summary
An operating system has to be installed onto a virtual machine that is created from a blank template. You can install a new operating system on any virtual machine.

Procedure 8.5. Installing an operating system onto a virtual machine

  1. Select the created virtual machine. It has a status of Down.
  2. Click the Run Once button to open the Run Virtual Machine window.
    Run Virtual Machine Window

    Figure 8.5. Run Virtual Machine Window

  3. Click the Boot Options tab to define the boot sequence and source images for installing the operating system.
  4. Click the Linux Boot Options tab to define additional boot options specific to Linux virtual machines.
  5. Click the Initial Run tab to join the virtual machine to a domain on the initial run.
  6. Click the Display Protocol tab and select a suitable protocol to connect to the virtual machine. SPICE is the recommended protocol.
  7. Click the Custom Properties tab to enter additional running options for virtual machines.
  8. Click OK.
Result
You have installed an operating system onto your virtual machine. You can now log in and begin using your virtual machine, or assign users to it.

8.4.7. Installing Windows on VirtIO-optimized Hardware

Summary
The virtio-win.vfd floppy image contains Windows drivers for VirtIO-optimized disk and network devices. These drivers provide a performance improvement over emulated device drivers.
The virtio-win.vfd image is placed automatically on ISO storage domains that are hosted on the Manager server. It must be manually uploaded to other ISO storage domains using the engine-iso-uploader tool.
You can install the VirtIO-optimized device drivers during your Windows installation by attaching a floppy disk device to your virtual machine.
This procedure presumes that you added a Red Hat VirtIO network interface and a disk that uses the VirtIO interface to your virtual machine.

Procedure 8.6. Installing VirtIO Drivers during Windows Installation

  1. Select your virtual machine from the list in the Virtual Machines tab.
  2. Click the Run Once button, and the Run Once window displays.
  3. Click Boot Options to expand the Boot Options configuration options.
  4. Select the Attach Floppy check box, and select virtio-win.vfd from the drop-down selection box.
  5. Select the Attach CD check box, and select the ISO containing the version of Windows you want to install from the drop-down selection box.
  6. Move CD-ROM up in the Boot Sequence field.
  7. Configure the rest of your Run Once options as required, and click OK to start your virtual machine. Then click the Console button to open a graphical console to it.
Result
Windows installations include an option to load additional drivers early in the installation process. Use this option to load drivers from the virtio-win.vfd floppy disk, which was attached to your virtual machine as A:.
For each supported virtual machine architecture and Windows version, there is a folder on the disk containing optimized hardware device drivers.

8.4.8. Virtual Machine Run Once Settings Explained

The Run Once window defines one-off boot options for a virtual machine. For persistent boot options, use the Boot Options tab in the New Virtual Machine window. The following table details the information required for the Run Once window.

Table 8.13. Virtual Machine: Run Once Settings

Field Name
Description
Boot Options
Defines the virtual machine's boot sequence, running options, and source images for installing the operating system and required drivers.
  • Attach Floppy - Attaches a floppy disk image to the virtual machine. Use this option to install Windows drivers. The floppy disk image must reside in the ISO domain.
  • Attach CD - Attaches an ISO image to the virtual machine. Use this option to install the virtual machine's operating system and applications. The CD image must reside in the ISO domain.
  • Boot Sequence - Determines the order in which the boot devices are used to boot the virtual machine. Select either Hard Disk, CD-ROM or Network, and use the arrow keys to move the option up or down.
  • Run Stateless - Deletes all changes to the virtual machine upon shutdown.
  • Start in Pause Mode - Starts then pauses the virtual machine to enable connection to the console, suitable for virtual machines in remote locations.
Linux Boot Options
The following options boot a Linux kernel directly instead of through the BIOS bootloader.
  • kernel path - A fully-qualified path to a kernel image to boot the virtual machine. The kernel image must be stored on either the ISO domain (path name in the format of iso://path-to-image) or on the host's local storage domain (path name in the format of /data/images).
  • initrd path - A fully-qualified path to a ramdisk image to be used with the previously specified kernel. The ramdisk image must be stored on the ISO domain (path name in the format of iso://path-to-image) or on the host's local storage domain (path name in the format of /data/images).
  • kernel params - Kernel command line parameter strings to be used with the defined kernel on boot.
Initial Run
Allows the user to enable cloud-init and define the settings this function applies to the virtual machine.
  • Hostname - Allows the user to specify a host name.
  • Network - Allows the user to configure network interfaces for the virtual machine, including network settings and DNS settings. Additional network interfaces can be added or removed using the Add new and Remove selected buttons.
  • SSH Authorized Keys - Allows the user to specify SSH keys to be added to the authorized keys file.
  • Regenerate System SSH Keys - Allows the user to regenerate SSH keys.
  • Time Zone - Allows the user to specify the time zone in which the virtual machine is to run.
  • Root Password - Allows the user to set a root password.
  • File Attachment - Allows the user to create files to be injected into the virtual machine. Files can be either Plain Text or Base64 in content, and are injected into the path specified via the drop-down box. Additional files can be added or removed using the Add new and Remove selected buttons.
Host
Defines the virtual machine's host.
  • Any host in cluster - Allocates the virtual machine to any available host in the cluster.
  • Specific - Allows the user to define a specific host for the virtual machine.
Display Protocol
Defines the protocol to connect to virtual machines.
  • VNC - Can be used for Linux virtual machines. Requires a VNC client to connect to a virtual machine using VNC.
  • SPICE - Recommended protocol for Linux and Windows virtual machines, with the exception of Windows 8 and Windows Server 2012 virtual machines.
Custom Properties
Additional VDSM options for running virtual machines.
  • sap_agent - Enables SAP monitoring on the virtual machine. Set to true or false.
  • sndbuf - Enter the size of the buffer for sending the virtual machine's outgoing data over the socket.
  • vhost - Enter the name of the virtual host on which this virtual machine should run. The name can contain any combination of letters and numbers.
  • viodiskcache - Caching mode for the virtio disk. writethrough writes data to the cache and the disk in parallel, writeback does not copy modifications from the cache to the disk, and none disables caching.
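
The Initial Run fields above correspond to directives in the cloud-init configuration that is ultimately passed to the guest. As a rough sketch of the kind of user-data cloud-init consumes (the host name, password, key, and file path below are illustrative placeholders, and the exact directives the Manager generates may differ):

```yaml
#cloud-config
# Hostname field
hostname: vm01.example.com
# Time Zone field
timezone: Europe/London
# Root Password field, applied inside the guest by cloud-init
chpasswd:
  list: |
    root:changeme
  expire: false
# SSH Authorized Keys field
ssh_authorized_keys:
  - ssh-rsa AAAA... admin@example.com
# File Attachment field: Plain Text content injected at the given path
write_files:
  - path: /etc/motd
    content: |
      Provisioned by cloud-init
```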

8.5. Using Virtual Machines

8.5.1. SPICE

The SPICE (Simple Protocol for Independent Computing Environments) protocol facilitates graphical connections to virtual machines. The SPICE protocol allows:
  • video at more than 30 frames per second
  • bidirectional audio (for softphones/IP phones)
  • bidirectional video (for video telephony/video conferencing)
  • connection to multiple monitors with a single virtual machine
  • USB redirection from the client's USB port into the virtual machine
  • connection to a proxy from outside of the network the hypervisor is attached to

8.5.2. Powering on a Virtual Machine

Summary
You can start a virtual machine from the Administration Portal.

Procedure 8.7. Powering on a Virtual Machine

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a virtual machine in the results list with a status of Down.
  2. Click the Run ( ) button.
    Alternatively, right-click and select Run.
Result
The Status of the virtual machine changes to Up. The display protocol of the selected virtual machine is displayed. If the virtual machine has the rhevm-guest-agent installed, its IP address is also displayed.
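Virtual machines can also be started programmatically through the Manager's REST API. The following Python sketch builds the XML body for the start action; the Manager URL and virtual machine ID are placeholders, and the request itself is not sent. A one-off boot device can be nested inside the action, mirroring the Run Once options:

```python
import xml.etree.ElementTree as ET

# Placeholders: substitute your Manager's address and the VM's ID.
vm_id = "vm-id-placeholder"
url = f"https://rhevm.example.com/api/vms/{vm_id}/start"

# The start action takes an <action/> body; Run Once-style overrides,
# such as a one-off boot device, are nested inside it.
action = ET.Element("action")
vm = ET.SubElement(action, "vm")
os_elem = ET.SubElement(vm, "os")
ET.SubElement(os_elem, "boot", dev="cdrom")

body = ET.tostring(action, encoding="unicode")
print(body)
```

The body would be sent as an authenticated HTTPS POST with a Content-Type of application/xml.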

8.5.3. Installing SPICE Plugins in Windows and Linux

The SPICE protocol is the default graphical protocol used to connect to virtual machines. Plugins for the Internet Explorer and Firefox web browsers allow you to launch graphical virtual machine connection sessions from the Red Hat Enterprise Virtualization web portals.
Summary
This procedure describes the installation of the SPICE Plugin for Mozilla Firefox on Linux clients.

Procedure 8.8. Installing the SPICE plugin for Mozilla Firefox on Red Hat Enterprise Linux

  • Open a terminal and run the following command as root:
    # yum install spice-xpi
    The plugin will be installed the next time Firefox is started.
Result
The SPICE plugin is installed on your Red Hat Enterprise Linux client.
Summary
This procedure describes the installation of the SPICE ActiveX component for Internet Explorer on Windows clients.

Procedure 8.9. Installing the SPICE ActiveX component for Internet Explorer on Windows

  1. The first time you attempt to connect to a virtual machine, an add-on notification bar displays in the browser, prompting you to install the SPICE ActiveX component. You need administrative privileges on your client machine to install the component. Contact your systems administrator if you do not have the necessary permissions.
  2. When you accept the prompt to install the SPICE ActiveX component, Internet Explorer may issue a security warning. Confirm that you wish to proceed, and the component will be installed.
Result
The SPICE ActiveX component for Internet Explorer for Windows is installed on your client machine.

Important

If you installed the SPICE ActiveX component without administrative permissions, you will receive a message stating that the usbclerk package was not installed. This means that you will be able to connect to a virtual machine using SPICE, however you will not be able to use USB devices on your virtual machine. Contact your systems administrator to install usbclerk if required.

8.5.4. Logging in to a Virtual Machine

Summary
The default protocol for graphical connections to virtual machines is SPICE. You can log in to virtual machines using the SPICE protocol from the Administration Portal. An external VNC client is required to log in to virtual machines using the VNC protocol.

Procedure 8.10. Logging in to a virtual machine

  1. On the Virtual Machines resource tab, select a running virtual machine.
  2. Click the Console button or right-click the virtual machine and select Console from the menu.
    Connection Icon on the Virtual Machine Menu

    Figure 8.6. Connection Icon on the Virtual Machine Menu

    • If the virtual machine's display protocol is set to SPICE, a console window to the virtual machine opens. Log in to the virtual machine's guest operating system.
    • If the virtual machine's display protocol is set to VNC, you will be prompted to download a file called "console.vv". This file contains information needed by the VNC client to log into a virtual machine. Use a text editor to open the file and retrieve the information. Run the VNC client and log into the virtual machine using the information provided in the downloaded file.
Result
You have connected to a virtual machine from the Administration Portal using SPICE or a VNC client.
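The downloaded console.vv file is a small INI-style text file. The following Python sketch parses a representative file with the standard library's configparser; the host, port, and password values are made-up placeholders in the typical [virt-viewer] layout:

```python
import configparser
import io

# A made-up console.vv in the typical [virt-viewer] layout; real files
# contain the connection details for the selected virtual machine.
sample = """\
[virt-viewer]
type=vnc
host=hypervisor.example.com
port=5900
password=ticket123
"""

cfg = configparser.ConfigParser()
cfg.read_file(io.StringIO(sample))

viewer = cfg["virt-viewer"]
print(viewer["type"], viewer["host"], viewer["port"])
# A VNC client would then be pointed at host:port, for example:
#   vncviewer hypervisor.example.com:5900
```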

8.6. Shutting Down or Pausing Virtual Machines

8.6.1. Shutting Down or Pausing Virtual Machines

Virtual machines should be shut down from within the guest operating system. However, occasionally there is a need to shut down a virtual machine from the Administration Portal.
The Red Hat Enterprise Virtualization platform provides for an orderly shutdown if the guest tools are installed on the virtual machine. Plan shutdowns of virtual machines with due consideration, preferably at times that least impact users.
All users should be logged off a Windows virtual machine before it is shut down. If any users are still logged in, the virtual machine remains on with a Powering Off status in the Administration Portal. The virtual machine requires manual intervention to shut it down completely because the following Windows message is displayed on the virtual machine:
Other people are logged on to this computer. Shutting down Windows might cause them to lose data. Do you want to continue shutting down?
If a virtual machine cannot be properly shut down because, for example, its operating system is not responsive, you might need to force a shutdown, which is equivalent to pulling the power cord on a physical machine.

Warning

Exercise extreme caution when forcing shutdown of a virtual machine, as data loss may occur.
Pausing a virtual machine puts it into Hibernate mode, in which the virtual machine state is preserved: the contents of RAM are written to disk, and CPU usage drops to zero.

8.6.2. Shutting Down a Virtual Machine

Summary
If your virtual machine has the rhevm-guest-agent installed, or has Advanced Configuration and Power Interface (ACPI) support, you can shut it down from the Administration Portal.

Procedure 8.11. Shutting Down a Virtual Machine

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a running virtual machine in the results list.
  2. Click the Shut down ( ) button.
    Alternatively, right-click the virtual machine and select Shut down.
Result
The Status of the virtual machine changes to Down.

8.6.3. Pausing a Virtual Machine

Summary
If your virtual machine has the rhevm-guest-agent installed, or has Advanced Configuration and Power Interface (ACPI) support, you can pause it from the Administration Portal. This is equivalent to placing it in Hibernate mode.

Procedure 8.12. Pausing a virtual machine

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a running virtual machine in the results list.
  2. Click the Suspend ( ) button.
    Alternatively, right-click the virtual machine and select Suspend.
Result
The Status of the virtual machine changes to Paused.

8.7. Managing Virtual Machines

8.7.1. Editing a Resource

Summary
Edit the properties of a resource.

Procedure 8.13. Editing a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click Edit to open the Edit window.
  3. Change the necessary properties and click OK.
Result
The new properties are saved to the resource. The Edit window will not close if a property field is invalid.

8.7.2. Removing a Virtual Machine

Summary
When you no longer require a virtual machine, remove it from the data center. Shut down the virtual machine before removing it.

Procedure 8.14. Removing a virtual machine

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a virtual machine in the results list.
  2. Shut down the virtual machine. The Remove button is only enabled for a virtual machine that has a status of Down.
  3. Click Remove. On the Remove Virtual Machine(s) confirmation window, the Remove Disk(s) check box is automatically selected, which will remove the attached virtual disks together with the virtual machine. If the check box is cleared, the virtual disks will remain in the environment as floating disks.
  4. Click OK to remove the virtual machine (and associated virtual disks) and close the window.
Result
The virtual machine is removed from the environment and no longer displays on the Virtual Machines resource tab.

8.7.3. Adding and Editing Virtual Machine Disks

Summary
It is possible to add disks to virtual machines. You can add new disks, or attach previously created floating disks, to a virtual machine. This allows you to provide additional space to virtual machines and to share disks between them. You can also edit disks to change some of their details.
An Internal disk is the default type of disk. You can also add an External (Direct Lun) disk. Internal disk creation is managed entirely by the Manager; external disks require externally prepared targets that already exist. Existing disks are either floating disks or shareable disks attached to virtual machines.

Procedure 8.15. Adding Disks to Virtual Machines

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a virtual machine in the results list.
  2. Click the Disks tab in the details pane to display a list of virtual disks currently associated with the virtual machine.
  3. Click Add to open the Add Virtual Disk window.
    Add Virtual Disk Window

    Figure 8.7. Add Virtual Disk Window

  4. Use the appropriate radio buttons to switch between Internal and External (Direct Lun) disks.
  5. Select the Attach Disk check box to choose an existing disk from the list and select the Activate check box.
    Alternatively, enter the Size, Alias, and Description of a new disk and use the drop-down menus and check boxes to configure the disk.
  6. Click OK to add the disk and close the window.
Result
Your new disk is listed in the Virtual Disks tab in the details pane of the virtual machine.

8.7.4. Adding and Editing Virtual Machine Network Interfaces

Summary
You can add network interfaces to virtual machines. Doing so allows you to put your virtual machine on multiple logical networks. You can also edit a virtual machine's network interface card to change the details of that network interface card. This procedure can be performed on virtual machines that are running, but some actions can be performed only on virtual machines that are not running.

Procedure 8.16. Adding Network Interfaces to Virtual Machines

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a virtual machine in the results list.
  2. Select the Network Interfaces tab in the details pane to display a list of network interfaces that are currently associated with the virtual machine.
  3. Click New to open the New Network Interface window.
    New Network Interface window

    Figure 8.8. New Network Interface window

  4. Enter the Name of the network interface.
  5. Use the drop-down menus to select the Profile and the Type of network interface for the new network interface. The Link State is set to Up by default when the network interface card is defined on the virtual machine and connected to the network.

    Note

    The Profile and Type fields are populated in accordance with the profiles and network types available to the cluster and the network interface cards available to the virtual machine.
  6. Select the Custom MAC address check box and enter a MAC address for the network interface card as required.
  7. Click OK to close the New Network Interface window.
Result
Your new network interface is listed in the Network Interfaces tab in the details pane of the virtual machine.

8.7.5. Explanation of Settings in the Virtual Machine Network Interface Window

These settings apply when you are adding or editing a virtual machine network interface. If you have more than one network interface attached to a virtual machine, you can put the virtual machine on more than one logical network.

Table 8.14. Add a network interface to a virtual machine entries

Field Name
Description
Name
The name of the network interface. This text field has a 21-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
Network
Logical network that the network interface is placed on. By default, all network interfaces are put on the rhevm management network.
Link State
Whether or not the network interface is connected to the logical network.
  • Up: The network interface is connected to the logical network.
    • When the Card Status is Plugged, the network interface behaves as though a network cable is connected, and is active.
    • When the Card Status is Unplugged, the network interface will automatically connect to the network and become active once it is plugged in.
  • Down: The network interface is present in its slot, but it is not connected to any network, as though a network cable were disconnected. The virtual machine has no network connectivity through the interface in this state.
Type
The virtual interface the network interface presents to virtual machines. VirtIO is faster but requires VirtIO drivers. Red Hat Enterprise Linux 5 and higher includes VirtIO drivers. Windows does not include VirtIO drivers, but they can be installed from the guest tools ISO or virtual floppy disk. rtl8139 and e1000 device drivers are included in most operating systems.
Specify custom MAC address
Choose this option to set a custom MAC address. The Red Hat Enterprise Virtualization Manager automatically generates a MAC address that is unique to the environment to identify the network interface. Having two devices with the same MAC address online in the same network causes networking conflicts.
Port Mirroring
A security feature that allows all network traffic going to or leaving from virtual machines on a given logical network and host to be copied (mirrored) to the network interface. If the host also uses the network, then traffic going to or leaving from the host is also copied.
Port mirroring only works on network interfaces with IPv4 IP addresses.
Card Status
Whether or not the network interface is defined on the virtual machine.
  • Plugged: The network interface has been defined on the virtual machine.
    • If its Link State is Up, it means the network interface is connected to a network cable, and is active.
    • If its Link State is Down, the network interface is not connected to a network cable.
  • Unplugged: The network interface is only defined on the Manager, and is not associated with a virtual machine.
    • If its Link State is Up, when the network interface is plugged it will automatically be connected to a network and become active.
    • If its Link State is Down, the network interface is not connected to any network until it is defined on a virtual machine.
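When entering a custom MAC address (the Specify custom MAC address option above), one way to avoid clashing with vendor-assigned addresses is to pick one from the locally administered, unicast range. The following Python sketch is illustrative only; it does not check for addresses already in use in your environment:

```python
import random
import re

def random_unicast_laa_mac() -> str:
    """Generate a random locally administered, unicast MAC address.

    Setting bit 1 of the first octet marks the address as locally
    administered; clearing bit 0 keeps it unicast (not multicast).
    """
    first = (random.randint(0, 255) | 0x02) & 0xFE
    rest = [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{octet:02x}" for octet in [first] + rest)

mac = random_unicast_laa_mac()
print(mac)
assert re.fullmatch(r"([0-9a-f]{2}:){5}[0-9a-f]{2}", mac)
```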

8.7.6. Hot Plugging Virtual Machine Disks

Summary
You can hot plug virtual machine disks. Hot plugging means enabling or disabling devices while a virtual machine is running.

Procedure 8.17. Hot plugging virtual machine disks

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a running virtual machine in the results list.
  2. Select the Disks tab from the details pane of the virtual machine.
  3. Select the virtual machine disk you would like to hot plug.
  4. Click the Activate or Deactivate button.
Result
You have enabled or disabled a virtual machine disk.

8.7.7. Hot Plugging Network Interfaces

Summary
You can hot plug network interfaces. Hot plugging means enabling and disabling network interfaces while a virtual machine is running.

Procedure 8.18. Hot plugging network interfaces

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a running virtual machine in the results list.
  2. Select the Network Interfaces tab from the details pane of the virtual machine.
  3. Select the network interface you would like to hot plug and click Edit to open the Edit Network Interface window.
  4. Click the Advanced Parameters arrow to access the Card Status option. Set the Card Status to Plugged if you want to enable the network interface, or set it to Unplugged if you want to disable the network interface.
Result
You have enabled or disabled a virtual network interface.

8.7.8. Removing Disks and Network Interfaces from Virtual Machines

Summary
You can remove network interfaces and virtual hard disks from virtual machines. If you remove a disk from a virtual machine, the contents of the disk are permanently lost.
This procedure is not the same as hot plugging. You can only remove virtual hardware that is Deactivated.

Procedure 8.19. Removing disks and network interfaces from virtual machines

  1. Select the virtual machine with the virtual hardware you want to remove.
  2. Select the relevant tab, either Network Interfaces or Disks, from the virtual machine details pane.
  3. Select the disk or network interface you want to remove. It must first be deactivated.
  4. Click the Remove button, then click OK in the confirmation window. If you are removing a disk, select the Remove Permanently option to completely remove it from the environment. If you do not select this option, for example because the disk is a shared disk, the disk will remain in the Disks resource tab.
Result
The disk or network interface is no longer attached to the virtual machine.

8.8. Virtual Machines and Permissions

8.8.1. Managing System Permissions for a Virtual Machine

The system administrator, as the SuperUser, manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for empowering a user with certain administrative privileges that limit them to a specific resource: a DataCenterAdmin role has administrator privileges only for the assigned data center, a ClusterAdmin has administrator privileges only for the assigned cluster, and so forth.
A UserVmManager is a system administration role for virtual machines in a data center. This role can be applied to specific virtual machines, to a data center, or to the whole virtualized environment; this is useful to allow different users to manage certain virtual resources.
The user virtual machine administrator role permits the following actions:
  • Create, edit, and remove virtual machines; and
  • Run, suspend, shut down, and stop virtual machines.

Note

You can only assign roles and permissions to existing users.
Many end-users are concerned solely with the virtual machine resources of the virtualized environment. As a result, Red Hat Enterprise Virtualization provides several user roles which enable the user to manage virtual machines specifically, but not other resources in the data center.

8.8.2. Virtual Machines Administrator Roles Explained

Virtual Machine Administrator Permission Roles
The table below describes the administrator roles and privileges applicable to virtual machine administration.

Table 8.15. Red Hat Enterprise Virtualization System Administrator Roles

Role Privileges Notes
DataCenterAdmin Data Center Administrator Can use, create, delete, manage all virtual machines within a specific data center.
ClusterAdmin Cluster Administrator Can use, create, delete, manage all virtual machines within a specific cluster.
NetworkAdmin Network Administrator Can configure and manage networks attached to virtual machines. To configure port mirroring on a virtual machine network, apply the NetworkAdmin role on the network and the UserVmManager role on the virtual machine.

8.8.3. Virtual Machine User Roles Explained

Virtual Machine User Permission Roles
The table below describes the user roles and privileges applicable to virtual machine users. These roles allow access to the User Portal for managing and accessing virtual machines, but they do not confer any permissions for the Administration Portal.

Table 8.16. Red Hat Enterprise Virtualization System User Roles

Role Privileges Notes
UserRole Can access and use virtual machines and pools. Can log in to the User Portal and use virtual machines and pools.
PowerUserRole Can create and manage virtual machines and templates. Apply this role to a user for the whole environment with the Configure window, or for specific data centers or clusters. For example, if a PowerUserRole is applied on a data center level, the PowerUser can create virtual machines and templates in the data center.
UserVmManager System administrator of a virtual machine. Can manage virtual machines, create and use snapshots, and migrate virtual machines. A user who creates a virtual machine in the User Portal is automatically assigned the UserVmManager role on the machine.
UserTemplateBasedVm Limited privileges to only use Templates. Level of privilege to create a virtual machine by means of a template.
VmCreator Can create virtual machines in the User Portal. This role is not applied to a specific virtual machine; apply this role to a user for the whole environment with the Configure window. When applying this role to a cluster, you must also apply the DiskCreator role on an entire data center, or on specific storage domains.
NetworkUser Logical network and network interface user for virtual machines. If the Allow all users to use this Network option was selected when a logical network is created, NetworkUser permissions are assigned to all users for the logical network. Users can then attach or detach virtual machine network interfaces to or from the logical network.

Note

In Red Hat Enterprise Virtualization 3.0, the PowerUserRole only granted permissions for virtual machines which are directly assigned to the PowerUser, or virtual machines created by the PowerUser. Now, the VmCreator role provides privileges previously conferred by the PowerUserRole. The PowerUserRole can now be applied on a system-wide level, or on specific data centers or clusters, and grants permissions to all virtual machines and templates within the system or specific resource. Having a PowerUserRole is equivalent to having the VmCreator, DiskCreator, and TemplateCreator roles.

8.8.4. Assigning an Administrator or User Role to a Resource

Summary
Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 8.20. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add to open the Add Permission to User window.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down menu.
  6. Click OK to assign the role and close the window.
Result
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

8.8.5. Removing an Administrator or User Role from a Resource

Summary
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 8.21. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK to remove the user role.
Result
You have removed the user's role, and the associated permissions, from the resource.

8.9. Backing Up and Restoring Virtual Machines with Snapshots

8.9.1. Creating a Snapshot of a Virtual Machine

Summary
A snapshot is a view of a virtual machine's operating system and applications at a given point in time. Take a snapshot of a virtual machine before you make a change to it that may have unintended consequences. You can use a snapshot to return a virtual machine to a previous state.

Note

Live snapshots can only be created for virtual machines running on 3.1-or-higher-compatible data centers. Virtual machines in 3.0-or-lower-compatible data centers must be shut down before a snapshot can be created.

Procedure 8.22. Creating a snapshot of a virtual machine

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a virtual machine in the results list.
  2. Click Create Snapshot to open the Create Snapshot window.
  3. Enter a description for the snapshot.
  4. Click OK to create the snapshot and close the window.
Result
The virtual machine's operating system and applications are stored in a snapshot that can be previewed or restored. The snapshot is created with a status of Locked, which changes to Ok. When you click on the snapshot, its details are shown on the General, Disks, Network Interfaces, and Installed Applications tabs in the right side-pane of the details pane.
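Snapshots can also be created through the Manager's REST API by POSTing a snapshot description to the virtual machine's snapshots sub-collection. The following Python sketch builds the XML body for such a request; the Manager URL and virtual machine ID are placeholders, and the request itself is not sent:

```python
import xml.etree.ElementTree as ET

# Placeholders: substitute your Manager's address and the VM's ID.
vm_id = "vm-id-placeholder"
url = f"https://rhevm.example.com/api/vms/{vm_id}/snapshots"

# The request body is a <snapshot> element carrying the description
# that will appear in the Snapshots tab.
snapshot = ET.Element("snapshot")
ET.SubElement(snapshot, "description").text = "Before OS upgrade"

body = ET.tostring(snapshot, encoding="unicode")
print(body)
```

As with other API calls, the body would be sent as an authenticated HTTPS POST with a Content-Type of application/xml.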

8.9.2. Using a Snapshot to Restore a Virtual Machine

Summary
A snapshot can be used to restore a virtual machine to its previous state.

Procedure 8.23. Using a snapshot to restore a virtual machine

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select the virtual machine in the results list. Ensure the virtual machine's status is Down.
  2. Click the Snapshots tab in the details pane to list the available snapshots.
  3. Select a snapshot to restore in the left side-pane. The snapshot details display in the right side-pane.
  4. Click Preview to preview the snapshot. The status of the virtual machine briefly changes to Image Locked before returning to Down.
    Preview snapshot

    Figure 8.9. Preview snapshot

  5. Start the virtual machine; it runs with the disk image of the snapshot.
  6. Click Commit to permanently restore the virtual machine to the condition of the snapshot. Any subsequent snapshots are erased.
    Alternatively, click the Undo button to deactivate the snapshot and return the virtual machine to its previous state.
Result
The virtual machine is restored to its state at the time of the snapshot, or returned to its state before the preview of the snapshot.

8.9.3. Creating a Virtual Machine from a Snapshot

Summary
You have created a snapshot from a virtual machine. Now you can use that snapshot to create another virtual machine.

Procedure 8.24. Creating a virtual machine from a snapshot

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select the virtual machine in the results list.
  2. Click the Snapshots tab in the details pane to list the available snapshots for the virtual machines.
  3. Select a snapshot in the list displayed and click Clone to open the Clone VM from Snapshot window.
  4. Enter the Name and Description of the virtual machine to be created.
    Clone a Virtual Machine from a Snapshot

    Figure 8.10. Clone a Virtual Machine from a Snapshot

  5. Click OK to create the virtual machine and close the window.
Result
After a short time, the cloned virtual machine appears in the Virtual Machines tab in the navigation pane with a status of Image Locked. The virtual machine remains in this state until Red Hat Enterprise Virtualization completes its creation. A virtual machine with a preallocated 20 GB hard drive takes about fifteen minutes to create; sparsely allocated virtual disks take less time to create than preallocated virtual disks.
When the virtual machine is ready to use, its status changes from Image Locked to Down in the Virtual Machines tab in the navigation pane.

8.9.4. Deleting a Snapshot

Summary
Delete a snapshot and permanently remove it from the virtualized environment.

Procedure 8.25. Deleting a Snapshot

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a virtual machine in the results list.
  2. Click the Snapshots tab in the details pane to list available snapshots for the virtual machine.
    Snapshot List

    Figure 8.11. Snapshot List

  3. Select the snapshot to delete.
  4. In the Navigation pane, shut down the running virtual machine associated with the snapshot to be deleted.
  5. Click Delete to open the Delete Snapshot confirmation window.
  6. Click OK to delete the snapshot and close the window.
Result
You have removed a virtual machine snapshot. Removing a snapshot does not affect the virtual machine.

8.10. Importing and Exporting Virtual Machines

8.10.1. Exporting and Importing Virtual Machines

A virtual machine or a template can be moved between data centers in the same environment, or to a different Red Hat Enterprise Virtualization environment. The Red Hat Enterprise Virtualization Manager allows you to import and export virtual machines (and templates) stored in Open Virtual Machine Format (OVF). This feature can be used in multiple ways:
  • Moving virtual machines and templates between Red Hat Enterprise Virtualization environments.
  • Moving virtual machines and templates between data centers in a single Red Hat Enterprise Virtualization environment.
  • Backing up virtual machines and templates.
There are three stages of exporting and importing virtual resources:
  • First you export your virtual machines and templates to an export domain.
  • Second, you detach the export domain from one data center, and attach it to another. You can attach it to a different data center in the same Red Hat Enterprise Virtualization environment, or attach it to a data center in a separate Red Hat Enterprise Virtualization environment that is managed by another installation of the Red Hat Enterprise Virtualization Manager.
  • Third, you import your virtual machines and templates into the data center to which you attached the export domain.
A virtual machine must be stopped before it can be moved across data centers. If the virtual machine was created using a template, the template must exist in the destination data center for the virtual machine to work, or the virtual machine must be exported with the Collapse Snapshots option selected.

8.10.2. Overview of the Export-Import Process

The export domain allows you to move virtual machines and templates between Red Hat Enterprise Virtualization environments.
Exporting and importing resources requires that an active export domain be attached to the data center. An export domain is a temporary storage area containing two directories per exported virtual resource. One directory consists of all the OVF (Open Virtualization Format) files pertaining to the virtual machine. The other holds the virtual resource's disk image, or images.
You can also import virtual machines from other virtualization providers, for example, Xen, VMware or Windows virtual machines, using the V2V feature. V2V converts virtual machines and places them in the export domain.
For more information on V2V, see the Red Hat Enterprise Linux V2V Guide.

Note

An export domain can be active in only one data center. This means that the export domain can be attached to either the source data center or the destination data center.
Exporting virtual resources across data centers requires some preparation. Make sure that:
  • an export domain exists, and is attached to the source data center.
  • the virtual machine is shut down.
  • if the virtual machine was created from a template, the template resides on the destination data center, or is exported alongside the virtual machine.
When the virtual machine, or machines, have been exported to the export domain, you can import them into the destination data center. If the destination data center is within the same Red Hat Enterprise Virtualization environment, delete the originals from the source data center after exporting them to the export domain.

8.10.3. Performing an Export-Import of Virtual Resources

Summary
This procedure provides a graphical overview of the steps required to export a virtual resource and import it into its destination data center.

Procedure 8.26. Performing an export-import of virtual resources

  1. Attach the export domain to the source data center.

    Figure 8.12. Attach Export Domain

  2. Export the virtual resource to the export domain.

    Figure 8.13. Export the Virtual Resource

  3. Detach the export domain from the source data center.

    Figure 8.14. Detach Export Domain

  4. Attach the export domain to the destination data center.

    Figure 8.15. Attach the Export Domain

  5. Import the virtual resource into the destination data center.

    Figure 8.16. Import the virtual resource

Result
Your virtual resource is imported into the destination data center.

8.10.4. Exporting a Virtual Machine to the Export Domain

Summary
Export a virtual machine to the export domain so that it can be imported into a different data center. Before you begin, the export domain must be attached to the data center that contains the virtual machine to be exported.

Procedure 8.27. Exporting a Virtual Machine to the Export Domain

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a virtual machine in the results list. Ensure the virtual machine has a status of Down.
  2. Click Export to open the Export Virtual Machine window.
  3. Select the Force Override check box to override existing images of the virtual machine on the export domain.
    Select the Collapse Snapshots check box to create a single export volume per disk. Selecting this option will remove snapshot restore points and include the template in a template-based virtual machine. This removes any dependencies a virtual machine has on a template.
  4. Click OK to export the virtual machine and close the window.
Result
The export of the virtual machine begins. The virtual machine displays in the Virtual Machines list with an Image Locked status as it is exported. Depending on the size of your virtual machine hard disk images, and your storage hardware, this can take up to an hour. Use the Events tab to view the progress.
When complete, the virtual machine has been exported to the export domain and displays on the VM Import tab of the export domain's details pane.
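The same export can also be scripted against the Manager's REST API rather than performed through the Administration Portal. The following sketch is illustrative only: the Manager host name, password, VM ID, certificate path, and export domain name (export1) are placeholders, and the action elements should be verified against the REST API Guide for this release. Here exclusive corresponds to the Force Override check box and discard_snapshots to Collapse Snapshots.

```shell
# Hypothetical example: export a stopped virtual machine to the export
# domain "export1" using the RHEV 3.3 REST API. All names are placeholders.
curl --request POST \
     --cacert /etc/pki/ovirt-engine/ca.pem \
     --user admin@internal:password \
     --header "Content-Type: application/xml" \
     --data '<action>
               <storage_domain><name>export1</name></storage_domain>
               <exclusive>true</exclusive>
               <discard_snapshots>true</discard_snapshots>
             </action>' \
     "https://manager.example.com/api/vms/VM_ID/export"
```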

8.10.5. Importing a Virtual Machine into the Destination Data Center

Summary
You have a virtual machine on an export domain. Before the virtual machine can be imported to a new data center, the export domain must be attached to the destination data center.

Procedure 8.28. Importing a Virtual Machine into the Destination Data Center

  1. Use the Storage resource tab, tree mode, or the search function to find and select the export domain in the results list. The export domain must have a status of Active.
  2. Select the VM Import tab in the details pane to list the available virtual machines to import.
  3. Select one or more virtual machines to import and click Import to open the Import Virtual Machine(s) window.

    Figure 8.17. Import Virtual Machine

  4. Use the drop-down menus to select the Default Storage Domain and Cluster.
  5. Select the Collapse Snapshots check box to remove snapshot restore points and include templates in template-based virtual machines.
  6. Click the virtual machine to be imported and click on the Disks sub-tab. From this tab, you can use the Allocation Policy and Storage Domain drop-down lists to select whether the disk used by the virtual machine will be thinly provisioned or preallocated, and can also select the storage domain on which the disk will be stored.
  7. Click OK to import the virtual machines.
    The Import Conflict window opens if the virtual machine already exists in the virtualized environment.

    Figure 8.18. Import Conflict Window

  8. Choose one of the following radio buttons:
    • Don't import
    • Clone and enter a unique name for the virtual machine in the New Name field.
    Or select the Apply to all check box to import all duplicated virtual machines with the same suffix.
  9. Click OK to import the virtual machines and close the window.
Result
You have imported the virtual machine to the destination data center. This may take some time to complete.

8.11. Migrating Virtual Machines Between Hosts

8.11.1. What is Live Migration?

Live migration provides the ability to move a running virtual machine between physical hosts with no interruption to service.
Live migration is transparent to the end user: the virtual machine remains powered on and user applications continue to run while the virtual machine is relocated to a new physical host.

8.11.2. Live Migration Prerequisites

Live migration is used to seamlessly move virtual machines to support a number of common maintenance tasks. Ensure that your Red Hat Enterprise Virtualization environment is correctly configured to support live migration well in advance of using it.
At a minimum, for successful live migration of virtual machines to be possible:
  • The source and destination host must both be members of the same cluster, ensuring CPU compatibility between them.
  • The source and destination host must have a status of Up.
  • The source and destination host must have access to the same virtual networks and VLANs.
  • The source and destination host must have access to the data storage domain on which the virtual machine resides.
  • There must be enough CPU capacity on the destination host to support the virtual machine's requirements.
  • There must be enough RAM on the destination host that is not in use to support the virtual machine's requirements.
  • The migrating virtual machine must not have the cache!=none custom property set.
In addition, for best performance, the storage and management networks should be split to avoid network saturation. Virtual machine migration involves transferring large amounts of data between hosts.
Live migration is performed using the management network. Each live migration event is limited to a maximum transfer speed of 30 MBps, and the number of concurrent migrations supported is also limited by default. Despite these measures, concurrent migrations have the potential to saturate the management network. It is recommended that separate logical networks are created for storage, display, and virtual machine data to minimize the risk of network saturation.
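On the hosts themselves, these limits are defaults in VDSM's configuration. The fragment below is a sketch, assuming key names used by VDSM in this release; verify them against the /etc/vdsm/vdsm.conf shipped on your hosts before changing anything, and restart the vdsmd service afterwards.

```
# /etc/vdsm/vdsm.conf on a hypervisor host (key names assumed; verify locally)
[vars]
# Maximum bandwidth used by each migration, in MiB per second
migration_max_bandwidth = 30
# Maximum number of concurrent outgoing migrations from this host
max_outgoing_migrations = 2
```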

8.11.3. Automatic Virtual Machine Migration

Red Hat Enterprise Virtualization Manager automatically initiates live migration of all virtual machines running on a host when the host is moved into maintenance mode. The destination host for each virtual machine is assessed as the virtual machine is migrated, in order to spread the load across the cluster.
The Manager automatically initiates live migration of virtual machines in order to maintain load balancing or power saving levels in line with cluster policy. While no cluster policy is defined by default, it is recommended that you specify the cluster policy which best suits the needs of your environment. You can also disable automatic, or even manual, live migration of specific virtual machines where required.

8.11.4. Preventing Automatic Migration of a Virtual Machine

Summary
Red Hat Enterprise Virtualization Manager allows you to disable automatic migration of virtual machines. You can also disable manual migration of virtual machines by setting the virtual machine to run only on a specific host.
The ability to disable automatic migration and require a virtual machine to run on a particular host is useful when using application high availability products, such as Red Hat High Availability or Cluster Suite.

Procedure 8.29. Preventing automatic migration of a virtual machine

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a virtual machine or virtual server in the results list.
  2. Click Edit to open the Edit Virtual Machine window.

    Figure 8.19. Edit Virtual Machine Window

  3. Click the Host tab.
  4. Use the Run On radio buttons to designate the virtual machine to run on Any Host in Cluster or a Specific host. If applicable, select a specific host from the drop-down menu.

    Warning

    Explicitly assigning a virtual machine to a specific host and disabling migration is mutually exclusive with Red Hat Enterprise Virtualization high availability. Virtual machines that are assigned to a specific host can only be made highly available using third party high availability products like Red Hat High Availability.
  5. Use the Migration Options drop-down menu to set the migration behavior for the virtual machine. Select Do not allow migration to enable the Use Host CPU check box.
  6. If applicable, enter relevant CPU Pinning topology commands in the text field.
  7. Click OK to save the changes and close the window.
Result
You have changed the migration settings for the virtual machine.

8.11.5. Manually Migrating Virtual Machines

Summary
A running virtual machine can be migrated to any host within its designated host cluster. This is especially useful if the load on a particular host is too high. When a host is brought down for maintenance, migration is triggered automatically, so manual migration is not required. Migration of virtual machines does not cause any service interruption.
The migrating virtual machine must not have the cache!=none custom property set.

Procedure 8.30. Manually migrating virtual machines

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a running virtual machine in the results list.
    Click Migrate to open the Migrate Virtual Machine(s) window.
  2. Use the radio buttons to select whether to Select Host Automatically or to Select Destination Host, specifying the host using the drop-down menu.

    Note

    Virtual machines migrate within their designated host cluster. When the Select Host Automatically option is selected, the system determines the host to which the virtual machine is migrated according to the load balancing and power management rules set up in the cluster policy.
  3. Click OK to commence migration and close the window.
Result
The virtual machine is migrated. Once migration is complete the Host column will update to display the host the virtual machine has been migrated to.

8.11.6. Setting Migration Priority

Summary
Red Hat Enterprise Virtualization Manager queues concurrent requests to migrate virtual machines off a given host. The load balancing process runs every minute. Hosts already involved in a migration event are not included in the migration cycle until their migration event has completed. When there is a migration request in the queue and available hosts in the cluster to action it, a migration event is triggered in line with the load balancing policy for the cluster.
It is possible to influence the ordering of the migration queue, for example, by setting mission-critical virtual machines to migrate before others. The Red Hat Enterprise Virtualization Manager allows you to set the priority of each virtual machine to facilitate this. Virtual machine migrations are ordered by priority; virtual machines with the highest priority are migrated first.

Note

In previous versions of Red Hat Enterprise Virtualization, you could only set migration policy for virtual servers. You can now set migration policy for both virtual machine types.

Procedure 8.31. Setting Migration Priority

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a virtual server in the results list.
  2. Click Edit to open the Edit Virtual Machine window.
  3. Select the High Availability tab.
  4. Use the radio buttons to set the Priority for Run/Migrate Queue of the virtual machine to one of Low, Medium, or High.
  5. Click OK to save changes and close the window.
Result
The virtual machine's migration priority has been modified.

8.11.7. Canceling ongoing virtual machine migrations

Summary
A virtual machine migration is taking longer than you expected. You would like to be sure where all virtual machines are running before you make any changes to your environment.

Procedure 8.32. Canceling ongoing virtual machine migrations

  1. Select the migrating virtual machine. It is displayed in the Virtual Machines resource tab with a status of Migrating from.
  2. Click the Cancel Migration button at the top of the results list. Alternatively, right-click on the virtual machine and select Cancel Migration from the context menu.
Result
The status of the virtual machine returns from Migrating from to Up.

8.11.8. Event and Log Notification upon Automatic Migration of Highly Available Virtual Servers

When a virtual server is automatically migrated because of the high availability function, the details of an automatic migration are documented in the Events tab and in the engine log to aid in troubleshooting, as illustrated in the following examples:

Example 8.1. Notification in the Events Tab of the Web Admin Portal

Highly Available Virtual_Machine_Name failed. It will be restarted automatically.
Virtual_Machine_Name was restarted on Host Host_Name

Example 8.2. Notification in the Manager engine.log

This log can be found on the Red Hat Enterprise Virtualization Manager at /var/log/ovirt-engine/engine.log:
Failed to start Highly Available VM. Attempting to restart. VM Name: Virtual_Machine_Name, VM Id:Virtual_Machine_ID_Number

8.12. Improving Uptime with Virtual Machine High Availability

8.12.1. Why Use High Availability?

High availability is recommended for virtual machines running critical workloads.
High availability can ensure that virtual machines are restarted in the following scenarios:
  • When a host becomes non-operational due to hardware failure.
  • When a host is put into maintenance mode for scheduled downtime.
  • When a host becomes unavailable because it has lost communication with an external storage resource.
A high availability virtual machine is automatically restarted, either on its original host or another host in the cluster.

8.12.2. What is High Availability?

High availability means that a virtual machine will be automatically restarted if its process is interrupted. This happens if the virtual machine is terminated by methods other than powering off from within the guest or sending the shutdown command from the Manager. When these events occur, the highly available virtual machine is automatically restarted, either on its original host or another host in the cluster.
High availability is possible because the Red Hat Enterprise Virtualization Manager constantly monitors the hosts and storage, and automatically detects hardware failure. If host failure is detected, any virtual machine configured to be highly available is automatically restarted on another host in the cluster.
With high availability, interruption to service is minimal because virtual machines are restarted within seconds with no user intervention required. High availability keeps your resources balanced by restarting guests on a host with low current resource utilization, or based on any workload balancing or power saving policies that you configure. This ensures that there is sufficient capacity to restart virtual machines at all times.

8.12.3. High Availability Considerations

A highly available host requires a power management device with its fencing parameters configured. In addition, for a virtual machine to be highly available when its host becomes non-operational, it needs to be started on another available host in the cluster. To enable the migration of highly available virtual machines:
  • Power management must be configured for the hosts running the highly available virtual machines.
  • The host running the highly available virtual machine must be part of a cluster which has other available hosts.
  • The destination host must be running.
  • The source and destination host must have access to the data domain on which the virtual machine resides.
  • The source and destination host must have access to the same virtual networks and VLANs.
  • There must be enough CPUs on the destination host that are not in use to support the virtual machine's requirements.
  • There must be enough RAM on the destination host that is not in use to support the virtual machine's requirements.

8.12.4. Configuring a Highly Available Virtual Machine

Summary
High availability must be configured individually for each virtual server.

Note

You can only set high availability for virtual servers. You can not set high availability for virtual desktops.

Procedure 8.33. Configuring a Highly Available Virtual Machine

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a virtual server in the results list.
  2. Click Edit to open the Edit Virtual Machine window.
  3. Click the High Availability tab.

    Figure 8.20. Set virtual machine high availability

  4. Select the Highly Available check box to enable high availability for the virtual server.
  5. Use the radio buttons to set the Priority for Run/Migrate Queue of the virtual machine to one of Low, Medium, or High. When migration is triggered, a queue is created in which the high priority virtual machines are migrated first. If a cluster is running low on resources, only the high priority virtual machines are migrated.
  6. Click OK to save changes and close the window.
Result
You have configured high availability for a virtual machine. You can check if a virtual machine is highly available when you select it and click on its General tab in the details pane.

8.13. Other Virtual Machine Tasks

8.13.1. Enabling SAP monitoring for a virtual machine from the Administration Portal

Summary
Enable SAP monitoring on a virtual machine so that it can be recognized by SAP monitoring systems.

Procedure 8.34. Enabling SAP monitoring for a Virtual Machine from the Administration Portal

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a virtual machine with a status of Down in the results list.
  2. Click Edit to open the Edit Virtual Machine window.
  3. Select the Custom Properties tab.

    Figure 8.21. Enable SAP

  4. Use the drop-down menu to select sap_agent. Ensure the secondary drop-down menu is set to True.
    If previous properties have been set, select the plus sign to add a new property rule and select sap_agent.
  5. Click OK to save changes and close the window.
Result
You have enabled SAP monitoring for your virtual machine.

8.13.2. Configuring Red Hat Enterprise Linux 5.4 or Higher Virtual Machines to use SPICE

8.13.2.1. Using SPICE on virtual machines running versions of Red Hat Enterprise Linux released prior to 5.4

SPICE is a remote display protocol designed for virtual environments, which enables you to view a virtualized desktop or server. SPICE delivers a high quality user experience, keeps CPU consumption low, and supports high quality video streaming.
Using SPICE on a Linux machine significantly improves the movement of the mouse cursor on the console of the virtual machine. To use SPICE, the X-Windows system requires additional qxl drivers. The qxl drivers are provided with Red Hat Enterprise Linux 5.4 and newer. Older versions are not supported. Installing SPICE on a virtual machine running Red Hat Enterprise Linux significantly improves the performance of the graphical user interface.

Note

Typically, this is most useful for virtual machines where the user requires the use of the graphical user interface. System administrators who are creating virtual servers may prefer not to configure SPICE if their use of the graphical user interface is minimal.

8.13.2.2. Installing qxl drivers on virtual machines

Summary
This procedure installs qxl drivers on virtual machines running Red Hat Enterprise Linux 5.4 or higher.

Procedure 8.35. Installing qxl drivers on a virtual machine

  1. Log in to a Red Hat Enterprise Linux virtual machine.
  2. Open a terminal.
  3. Run the following command as root:
    # yum install xorg-x11-drv-qxl
Result
The qxl drivers have been installed and must now be configured.

8.13.2.3. Configuring qxl drivers on virtual machines

Summary
You can configure qxl drivers using either a graphical interface or the command line. Perform only one of the following procedures.

Procedure 8.36. Configuring qxl drivers in GNOME

  1. Click System.
  2. Click Administration.
  3. Click Display.
  4. Click the Hardware tab.
  5. Click Video Cards Configure.
  6. Select qxl and click OK.
  7. Restart X-Windows by logging out of the virtual machine and logging back in.

Procedure 8.37. Configuring qxl drivers on the command line:

  1. Back up /etc/X11/xorg.conf:
    # cp /etc/X11/xorg.conf /etc/X11/xorg.conf.$$.backup
  2. Make the following change to the Device section of /etc/X11/xorg.conf:
    Section "Device"
        Identifier  "Videocard0"
        Driver      "qxl"
    EndSection
    
Result
You have configured qxl drivers to enable your virtual machine to use SPICE.

8.13.2.4. Configuring a virtual machine's tablet and mouse to use SPICE

Summary
Edit the /etc/X11/xorg.conf file to enable SPICE for your virtual machine's tablet devices.

Procedure 8.38. Configuring a virtual machine's tablet and mouse to use SPICE

  1. Verify that the tablet device is available on your guest:
    # /sbin/lsusb -v | grep 'QEMU USB Tablet'
    If there is no output from the command, do not continue configuring the tablet.
  2. Back up /etc/X11/xorg.conf by running this command:
    # cp /etc/X11/xorg.conf /etc/X11/xorg.conf.$$.backup
  3. Make the following changes to /etc/X11/xorg.conf:
    Section "ServerLayout"
        Identifier     "single head configuration"
        Screen      0  "Screen0" 0 0
        InputDevice    "Keyboard0" "CoreKeyboard"
        InputDevice    "Tablet" "SendCoreEvents"
        InputDevice    "Mouse" "CorePointer"
    EndSection

    Section "InputDevice"
        Identifier  "Mouse"
        Driver      "void"
        #Option      "Device" "/dev/input/mice"
        #Option      "Emulate3Buttons" "yes"
    EndSection

    Section "InputDevice"
        Identifier  "Tablet"
        Driver      "evdev"
        Option      "Device" "/dev/input/event2"
        Option      "CorePointer" "true"
    EndSection
    
  4. Log out and log back into the virtual machine to restart X-Windows.
Result
You have enabled a tablet and a mouse device on your virtual machine to use SPICE.

8.13.3. KVM virtual machine timing management

Virtualization poses various challenges for virtual machine timekeeping. Virtual machines which use the Time Stamp Counter (TSC) as a clock source may suffer timing issues, as some CPUs do not have a constant Time Stamp Counter. Virtual machines running without accurate timekeeping can seriously affect some networked applications, because the virtual machine will run faster or slower than the actual time.
KVM works around this issue by providing virtual machines with a paravirtualized clock. The KVM pvclock provides a stable source of timing for KVM guests that support it.
Presently, only Red Hat Enterprise Linux 5.4 and higher virtual machines fully support the paravirtualized clock.
Virtual machines can have several problems caused by inaccurate clocks and counters:
  • Clocks can fall out of synchronization with the actual time which invalidates sessions and affects networks.
  • Virtual machines with slower clocks may have issues migrating.
These problems exist on other virtualization platforms and timing should always be tested.

Important

The Network Time Protocol (NTP) daemon should be running on the host and the virtual machines. Enable the ntpd service:
# service ntpd start
Add the ntpd service to the default startup sequence:
# chkconfig ntpd on
Using the ntpd service should minimize the effects of clock skew in all cases.
The NTP servers you are trying to use must be operational and accessible to your hosts and virtual machines.
Determining if your CPU has the constant Time Stamp Counter
Your CPU has a constant Time Stamp Counter if the constant_tsc flag is present. To determine if your CPU has the constant_tsc flag run the following command:
$ grep constant_tsc /proc/cpuinfo
If any output is given, your CPU has the constant_tsc bit. If no output is given, follow the instructions below.
Configuring hosts without a constant Time Stamp Counter
Systems without constant time stamp counters require additional configuration. Power management features interfere with accurate time keeping and must be disabled for virtual machines to accurately keep time with KVM.

Important

These instructions are for AMD revision F CPUs only.
If the CPU lacks the constant_tsc bit, disable all power management features (BZ#513138). Each system has several timers it uses to keep time. The TSC is not stable on the host; this is sometimes caused by cpufreq changes, deep C states, or migration to a host with a faster TSC. Deep C sleep states can stop the TSC. To prevent the kernel from using deep C states, append "processor.max_cstate=1" to the kernel boot options in the grub.conf file on the host:
title Red Hat Enterprise Linux Server (2.6.18-159.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-159.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet processor.max_cstate=1
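The kernel-line edit above can be scripted. The following sketch operates on a local demonstration copy; on a real host the file is /boot/grub/grub.conf and should be backed up before editing.

```shell
# Demonstration copy; on a host this would be /boot/grub/grub.conf
GRUB_CONF=./grub.conf.demo
cat > "$GRUB_CONF" <<'EOF'
title Red Hat Enterprise Linux Server (2.6.18-159.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-159.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
EOF

# Append processor.max_cstate=1 to every kernel boot line that lacks it
sed -i '/^[[:space:]]*kernel /{/processor\.max_cstate=1/!s/$/ processor.max_cstate=1/;}' "$GRUB_CONF"
```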
Disable cpufreq (only necessary on hosts without the constant_tsc flag) by editing the /etc/sysconfig/cpuspeed configuration file and changing the MIN_SPEED and MAX_SPEED variables to the highest frequency available. Valid limits can be found in the /sys/devices/system/cpu/cpu*/cpufreq/scaling_available_frequencies files.
Using the engine-config tool to receive alerts when hosts drift out of sync
You can use the engine-config tool to configure alerts when your hosts drift out of sync.
There are two relevant parameters for time drift on hosts: EnableHostTimeDrift and HostTimeDriftInSec. EnableHostTimeDrift, with a default value of false, can be enabled to receive alert notifications of host time drift. The HostTimeDriftInSec parameter sets the maximum allowable drift before alerts start being sent.
Alerts are sent once per hour per host.
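As a sketch, the commands below show how these parameters might be set on the Manager machine. The 300-second threshold is an arbitrary example value, and configuration changes only take effect after the engine is restarted.

```shell
# Run on the Red Hat Enterprise Virtualization Manager machine
engine-config -s EnableHostTimeDrift=true   # turn on host time-drift alerting
engine-config -s HostTimeDriftInSec=300     # example threshold: alert beyond 300 seconds of drift
engine-config -g EnableHostTimeDrift        # confirm the stored value
service ovirt-engine restart                # apply the change
```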
Using the paravirtualized clock with Red Hat Enterprise Linux virtual machines
For certain Red Hat Enterprise Linux virtual machines, additional kernel parameters are required. These parameters can be set by appending them to the end of the kernel line in the /boot/grub/grub.conf file of the virtual machine.

Note

The process of configuring kernel parameters can be automated using the ktune package.
The ktune package provides an interactive Bourne shell script, fix_clock_drift.sh. When run as the superuser, this script inspects various system parameters to determine if the virtual machine on which it is run is susceptible to clock drift under load. If so, it then creates a new grub.conf.kvm file in the /boot/grub/ directory. This file contains a kernel boot line with additional kernel parameters that allow the kernel to account for and prevent significant clock drift on the KVM virtual machine. After running fix_clock_drift.sh as the superuser, and once the script has created the grub.conf.kvm file, the system administrator should manually back up the virtual machine's current grub.conf file, inspect the new grub.conf.kvm file to ensure that it is identical to grub.conf except for the additional boot line parameters, rename grub.conf.kvm to grub.conf, and reboot the virtual machine.
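The manual steps described above can be sketched as shell commands. The example below uses local stand-in files so it can be run anywhere; on a virtual machine the real files live in /boot/grub/, and the extra parameters shown (divider=10 notsc) are only illustrative of what the script might add.

```shell
# Stand-ins for /boot/grub/grub.conf and the grub.conf.kvm created by
# fix_clock_drift.sh; file contents here are illustrative only
printf 'kernel /vmlinuz-2.6.18 ro root=/dev/VolGroup00/LogVol00\n' > grub.conf
printf 'kernel /vmlinuz-2.6.18 ro root=/dev/VolGroup00/LogVol00 divider=10 notsc\n' > grub.conf.kvm

cp grub.conf grub.conf.backup          # 1. back up the current grub.conf
diff grub.conf grub.conf.kvm || true   # 2. inspect: only kernel parameters should differ
mv grub.conf.kvm grub.conf             # 3. put the new file in place
# 4. reboot the virtual machine so the new kernel parameters take effect
```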
The table below lists versions of Red Hat Enterprise Linux and the parameters required for virtual machines on systems without a constant Time Stamp Counter.
Red Hat Enterprise Linux                               Additional virtual machine kernel parameters
5.4 AMD64/Intel 64 with the paravirtualized clock      Additional parameters are not required
5.4 AMD64/Intel 64 without the paravirtualized clock   notsc lpj=n
5.4 x86 with the paravirtualized clock                 Additional parameters are not required
5.4 x86 without the paravirtualized clock              clocksource=acpi_pm lpj=n
5.3 AMD64/Intel 64                                     notsc
5.3 x86                                                clocksource=acpi_pm
4.8 AMD64/Intel 64                                     notsc
4.8 x86                                                clock=pmtmr
3.9 AMD64/Intel 64                                     Additional parameters are not required
3.9 x86                                                Additional parameters are not required
Using the Real-Time Clock with Windows virtual machines
Windows uses the both the Real-Time Clock (RTC) and the Time Stamp Counter (TSC). For Windows virtual machines the Real-Time Clock can be used instead of the TSC for all time sources which resolves virtual machine timing issues.
To enable the Real-Time Clock for the PMTIMER clocksource (the PMTIMER usually uses the TSC) add the following line to the Windows boot settings. Windows boot settings are stored in the boot.ini file. Add the following line to the boot.ini file:
/usepmtimer
For more information on Windows boot settings and the pmtimer option, refer to Available switch options for the Windows XP and the Windows Server 2003 Boot.ini files.
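For reference, a hypothetical boot.ini with the switch applied might look as follows; the disk, partition, and timeout values are examples only, and the switch is simply appended to the operating system entry:

```
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /usepmtimer
```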

Chapter 9. Templates

9.1. Introduction to Templates

A template is a copy of a preconfigured virtual machine, used to simplify the subsequent, repeated creation of similar virtual machines. Templates capture installed software and software configurations, as well as the hardware configuration, of the original virtual machine.
When you create a template from a virtual machine, a read-only copy of the virtual machine's disk is taken. The read-only disk becomes the base disk image of the new template, and of any virtual machines created from the template. As such, the template cannot be deleted whilst virtual machines created from the template exist in the environment.
Virtual machines created from a template use the same NIC type and driver as the original virtual machine, but utilize separate and unique MAC addresses.

Note

A virtual machine may need to be sealed before being used to create a template.

9.2. Template Tasks

9.2.1. Creating a Template from an Existing Virtual Machine

Summary
Create a template from an existing virtual machine to use as a blueprint for creating additional virtual machines.

Procedure 9.1. Creating a Template from an Existing Virtual Machine

  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a virtual machine in the results list.
  2. Ensure the virtual machine is powered down and has a status of Down.
  3. Click Make Template to open the New Template window.
    New Template Window

    Figure 9.1. New Template Window

  4. Enter a Name, Description and Comment for the template.
  5. From the Cluster drop-down menu, select the cluster with which the template will be associated. By default, this will be the same as that of the source virtual machine.
  6. In the Disks Allocation section, enter an alias for the disk in the Alias text field and select the storage domain on which the disk will be stored from the Target drop-down list. By default, these will be the same as those of the source virtual machine.
  7. The Allow all users to access this Template check box is selected by default. This makes the template public.
  8. The Copy VM permissions check box is not selected by default. Select this check box to copy the permissions of the source virtual machine to the template.
  9. Click OK.
Result
The virtual machine displays a status of Image Locked while the template is being created. The process of creating a template may take up to an hour depending on the size of the virtual machine disk and your storage hardware. When complete, the template is added to the Templates tab. You can now create new virtual machines based on the template.

Note

When a template is made, the virtual machine is copied so that both the existing virtual machine and its template are usable after template creation.

9.2.2. Explanation of Settings and Controls in the New Template Window

The following table details the settings for the New Template window.

Table 9.1. New Template and Edit Template Settings

Field
Description/Action
Name
The name of the template. This is the name by which the template is listed in the Templates tab in the Administration Portal and is accessed via the REST API. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.
Description
A description of the template. This field is recommended but not mandatory.
Comment
A field for adding plain text, human-readable comments regarding the template.
Cluster
The cluster with which the template will be associated. By default, this is the cluster of the original virtual machine. You can select any cluster in the data center.
Create as a Sub Template version
Allows you to specify whether the template will be created as a new version of an existing template. Select this check box to access the settings for configuring this option.
  • Root Template: The template under which the sub template will be added.
  • Sub Version Name: The name of the template. This is the name by which the template is accessed when creating a new virtual machine based on the template.
Disks Allocation
Alias - An alias for the virtual machine disk used by the template. By default, the alias is set to the same value as that of the source virtual machine.
Virtual Size - The current actual size of the virtual disk used by the template. This value cannot be edited, and is provided for reference only.
Target - The storage domain on which the virtual disk used by the template will be stored. By default, the storage domain is set to the same value as that of the source virtual machine. You can select any storage domain in the cluster.
Allow all users to access this Template
Allows you to specify whether a template is public or private. A public template can be accessed by all users, whereas a private template can only be accessed by users with the TemplateAdmin or SuperUser roles.
Copy VM permissions
Allows you to copy explicit permissions that have been set on the source virtual machine to the template.

9.2.3. Editing a Template

Summary
Once a template has been created, its properties can be edited. Because a template is a copy of a virtual machine, the options available when editing a template are identical to those in the Edit Virtual Machine window.

Procedure 9.2. Editing a Template

  1. Use the Templates resource tab, tree mode, or the search function to find and select the template in the results list.
  2. Click Edit to open the Edit Template window.
  3. Change the necessary properties and click OK.
Result
The properties of the template are updated. The Edit Template window will not close if a property field is invalid.

9.2.4. Deleting a Template

Summary
Delete a template from your Red Hat Enterprise Virtualization environment.

Warning

If you have used a template to create a virtual machine, make sure that you do not delete the template as the virtual machine needs it to continue running.

Procedure 9.3. Deleting a Template

  1. Use the resource tabs, tree mode, or the search function to find and select the template in the results list.
  2. Click Remove to open the Remove Template(s) window.
  3. Click OK to remove the template.
Result
You have removed the template.

9.2.5. Exporting Templates

9.2.5.1. Migrating Templates to the Export Domain

Summary
Export templates into the export domain to move them to another data domain, either in the same Red Hat Enterprise Virtualization environment, or another one.

Procedure 9.4. Exporting Individual Templates to the Export Domain

  1. Use the Templates resource tab, tree mode, or the search function to find and select the template in the results list.
  2. Click Export to open the Export Template window.

    Note

    Select the Force Override check box to replace any earlier version of the template on the export domain.
  3. Click OK to begin exporting the template; this may take up to an hour, depending on the virtual machine disk image size and your storage hardware.
  4. Repeat these steps until the export domain contains all the templates to migrate before you start the import process.
    Use the Storage resource tab, tree mode, or the search function to find and select the export domain in the results list and click the Template Import tab in the details pane to view all exported templates in the export domain.
Result
The templates have been exported to the export domain.

9.2.5.2. Copying a Template's Virtual Hard Disk

Summary
If you are moving a virtual machine that was created from a template with the thin provisioning storage allocation option selected, the template's disks must be copied to the same storage domain as that of the virtual machine disk.

Procedure 9.5. Copying a Virtual Hard Disk

  1. Select the Disks tab.
  2. Select the template disk or disks to copy.
  3. Click the Copy button to display the Copy Disk window.
  4. Use the drop-down menu or menus to select the Target data domain.
Result
A copy of the template's virtual hard disk has been created, either on the same, or a different, storage domain. If you were copying a template disk in preparation for moving a virtual hard disk, you can now move the virtual hard disk.

9.2.6. Importing Templates

9.2.6.1. Importing a Template into a Data Center

Summary
Import templates from a newly attached export domain.

Procedure 9.6. Importing a Template into a Data Center

  1. Use the resource tabs, tree mode, or the search function to find and select the newly attached export domain in the results list.
  2. Select the Template Import tab of the details pane to display the templates that were migrated with the export domain.
  3. Select a template and click Import to open the Import Template(s) window.
  4. Select the templates to import.
  5. Use the drop-down menus to select the Destination Cluster and Storage domain. Alter the Suffix if applicable.
    Alternatively, clear the Clone All Templates check box.
  6. Click OK to import templates and open a notification window. Click Close to close the notification window.
Result
The template is imported into the destination data center. This can take up to an hour, depending on your storage hardware. You can view the import progress in the Events tab.
Once the import process is complete, the templates are visible in the Templates resource tab. You can use these templates to create new virtual machines, or run existing imported virtual machines that are based on them.

9.3. Sealing Templates in Preparation for Deployment

9.3.1. Sealing a Linux Virtual Machine Manually for Deployment as a Template

Summary
Generalize (seal) a Linux virtual machine before making it into a template. This prevents conflicts between virtual machines deployed from the template.

Procedure 9.7. Sealing a Linux Virtual Machine

  1. Log in to the virtual machine. Flag the system for re-configuration by running the following command as root:
    # touch /.unconfigured
  2. Remove ssh host keys. Run:
    # rm -rf /etc/ssh/ssh_host_*
  3. Set HOSTNAME=localhost.localdomain in /etc/sysconfig/network
  4. Remove /etc/udev/rules.d/70-*. Run:
    # rm -rf /etc/udev/rules.d/70-*
  5. Remove the HWADDR= line from /etc/sysconfig/network-scripts/ifcfg-eth*.
  6. Optionally delete all the logs from /var/log and build logs from /root.
  7. Shut down the virtual machine. Run:
    # poweroff
Result
The virtual machine is sealed and can be made into a template. You can deploy Linux virtual machines from this template without experiencing configuration file conflicts.
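The steps above can be collected into one script. The following is a minimal sketch, not part of this guide's procedure: the seal_vm function and its root-directory argument are names invented here, and the demonstration runs against a scratch directory created with mktemp rather than the live filesystem. On a real virtual machine, apply the same commands to / as root and then run poweroff.

```shell
#!/bin/sh
# Non-authoritative sketch of Procedure 9.7. seal_vm and its root-directory
# argument are illustrative; on a real virtual machine the same commands are
# run against / as root, followed by poweroff.
seal_vm() {
    root="$1"
    touch "$root/.unconfigured"              # flag the system for re-configuration
    rm -rf "$root"/etc/ssh/ssh_host_*        # remove ssh host keys
    rm -rf "$root"/etc/udev/rules.d/70-*     # remove persistent udev rules
    # Reset the hostname for the next boot.
    sed -i 's/^HOSTNAME=.*/HOSTNAME=localhost.localdomain/' \
        "$root/etc/sysconfig/network"
    # Strip hardware MAC addresses so cloned NICs are detected afresh.
    for ifcfg in "$root"/etc/sysconfig/network-scripts/ifcfg-eth*; do
        [ -f "$ifcfg" ] && sed -i '/^HWADDR=/d' "$ifcfg"
    done
}

# Rehearse against a scratch tree instead of the live filesystem.
demo=$(mktemp -d)
mkdir -p "$demo/etc/ssh" "$demo/etc/udev/rules.d" \
         "$demo/etc/sysconfig/network-scripts"
echo "HOSTNAME=vm01.example.com" > "$demo/etc/sysconfig/network"
printf 'DEVICE=eth0\nHWADDR=00:11:22:33:44:55\n' \
    > "$demo/etc/sysconfig/network-scripts/ifcfg-eth0"
touch "$demo/etc/ssh/ssh_host_rsa_key" \
      "$demo/etc/udev/rules.d/70-persistent-net.rules"
seal_vm "$demo"
```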

9.3.2. Sealing a Linux Virtual Machine for Deployment as a Template using sys-unconfig

Summary
Generalize (seal) a Linux virtual machine using the sys-unconfig command before making it into a template. This prevents conflicts between virtual machines deployed from the template.

Procedure 9.8. Sealing a Linux Virtual Machine using sys-unconfig

  1. Log in to the virtual machine.
  2. Remove ssh host keys. Run:
    # rm -rf /etc/ssh/ssh_host_*
  3. Set HOSTNAME=localhost.localdomain in /etc/sysconfig/network
  4. Remove the HWADDR= line from /etc/sysconfig/network-scripts/ifcfg-eth*.
  5. Optionally delete all the logs from /var/log and build logs from /root.
  6. Run the following command:
    # sys-unconfig
Result
The virtual machine shuts down; it is now sealed and can be made into a template. You can deploy Linux virtual machines from this template without experiencing configuration file conflicts.

9.3.3. Sealing a Windows Template

9.3.3.1. Considerations when Sealing a Windows Template with Sysprep

A template created for Windows virtual machines must be generalized (sealed) before being used to deploy virtual machines. This ensures that machine-specific settings are not reproduced in the template.
The Sysprep tool is used to seal Windows templates before use.

Important

Do not reboot the virtual machine during this process.
Before starting the Sysprep process, verify the following settings are configured:
  • The Windows Sysprep parameters have been correctly defined.
    If not, click Edit and enter the required information in the Operating System and Domain fields.
  • The correct product key has been entered in the engine-config configuration tool.
    If not, run the configuration tool on the Manager as the root user, and enter the required information. The configuration keys that you need to set are ProductKey and SysPrepPath. For example, for Windows 7 the configuration keys are ProductKeyWindow7 and SysPrepWindows7Path. Set the product key with this command:
    # engine-config --set ProductKeyWindow7=<validproductkey> --cver=general
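The Sysprep path key follows the same pattern, and current values can be checked with the tool's --get option. The path shown below is a placeholder, not a documented default:

```
# engine-config --set SysPrepWindows7Path=<path_to_sysprep_file> --cver=general
# engine-config --get ProductKeyWindow7 --cver=general
```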

9.3.3.2. Sealing a Windows XP Template

Summary
Seal a Windows XP template using the Sysprep tool before using the template to deploy virtual machines.

Note

You can also use this procedure to seal a Windows 2003 template. The Windows 2003 Sysprep tool is available at http://www.microsoft.com/download/en/details.aspx?id=14830.

Procedure 9.9. Sealing a Windows XP Template

  1. Download sysprep to the virtual machine to be used as a template.
    The Windows XP Sysprep tool is available at http://www.microsoft.com/download/en/details.aspx?id=11282
  2. Create a new directory: c:\sysprep.
  3. Open the deploy.cab file and add its contents to c:\sysprep.
  4. Execute sysprep.exe from within the folder and click OK on the welcome message to display the Sysprep tool.
  5. Select the following check boxes:
    • Don't reset grace period for activation
    • Use Mini-Setup
  6. Ensure that the shutdown mode is set to Shut down and click Reseal.
  7. Acknowledge the pop-up window to complete the sealing process; the virtual machine shuts down automatically upon completion.
Result
The Windows XP template is sealed and ready for deploying virtual machines.

9.3.3.3. Sealing a Windows 7 or Windows 2008 Template

Summary
Seal a Windows 7 or Windows 2008 template before using the template to deploy virtual machines.

Procedure 9.10. Sealing a Windows 7 or Windows 2008 Template

  1. In the virtual machine to be used as a template, open a command line terminal and type regedit.
  2. The Registry Editor window opens. In the left pane, expand HKEY_LOCAL_MACHINE\SYSTEM\SETUP.
  3. In the main window, right-click and select New > String Value to add a new string value.
  4. Right-click the new value and select Modify to open the Edit String window.
  5. Enter the following information in the provided fields:
    • Value name: UnattendFile
    • Value data: a:\sysprep.inf
  6. Launch Sysprep from C:\Windows\System32\sysprep\sysprep.exe.
  7. Enter the following information into the Sysprep tool:
    • Under System Cleanup Action, select Enter System Out-of-Box Experience (OOBE).
    • Select the Generalize check box if you need to change the computer's system identification number (SID).
    • Under Shutdown Options, select Shutdown.
    Click OK to complete the sealing process; the virtual machine shuts down automatically upon completion.
Result
The Windows 7 or Windows 2008 template is sealed and ready for deploying virtual machines.

9.3.4. Using Cloud-Init to Automate the Configuration of Virtual Machines

9.3.4.1. Cloud-Init Overview

Cloud-Init is a tool for automating the initial setup of virtual machines such as configuring the hostname, network interfaces, authorized keys and more. It can be used when provisioning virtual machines that have been deployed based on a template to avoid conflicts on the network.
To use this tool, the cloud-init package must first be installed on the virtual machine. Once installed, the Cloud-Init service starts during the boot process to search for instructions on what to configure. Options in the Run Once function of the Red Hat Enterprise Virtualization Manager can then be used to provide these instructions.

9.3.4.2. Installing Cloud-Init

Install cloud-init on a virtual machine.
  1. Enable the following channel:
    Red Hat Common (for RHEL 6 Server x86_64)
  2. Log on to the virtual machine.
  3. Open a terminal.
  4. Install the cloud-init package and dependencies:
    # yum install cloud-init

9.3.4.3. Using Cloud-Init

Use Cloud-Init to automate the initial configuration of a Linux virtual machine that has been provisioned based on a template.
  1. Use the Virtual Machines resource tab, tree mode, or the search function to find and select a virtual machine in the results list.
  2. Click Run Once to open the Run Virtual Machine(s) window.
  3. Expand the Initial Run section and select the Cloud-Init check box.
  4. Select the check box to enter a Hostname.
  5. Select the check box to enable Network and use the Add new and Remove selected buttons to add or remove network interfaces.
  6. Select the check box to enable SSH Authorized Keys.
  7. Select the check box to enable Regenerate System SSH Keys.
  8. Select the check box to specify a Time Zone.
  9. Select the check box to specify a Root Password and verify the password in Verify Root Password.
  10. Select the check box to add a File Attachment and use the Add new and Remove selected buttons to add or remove files.
  11. Click OK.

Important

Cloud-Init is only supported on cluster compatibility version 3.3 and higher.
Result
The virtual machine boots and the settings specified in Cloud-Init are applied.
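The options in the Initial Run section correspond to standard cloud-init configuration. For illustration only, a hand-written cloud-config file covering similar settings might look like the following; the hostname, time zone, and key are example values, and the Run Once window generates the equivalent data for you:

```
#cloud-config
hostname: vm01.example.com
timezone: Europe/Paris
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2E...example admin@example.com
ssh_deletekeys: true
```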

9.4. Templates and Permissions

9.4.1. Managing System Permissions for a Template

The system administrator, as the SuperUser, manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for empowering a user with certain administrative privileges that limit them to a specific resource: a DataCenterAdmin role has administrator privileges only for the assigned data center, a ClusterAdmin has administrator privileges only for the assigned cluster, and so forth.
A template administrator is a system administration role for templates in a data center. This role can be applied to specific virtual machines, to a data center, or to the whole virtualized environment; this is useful to allow different users to manage certain virtual resources.
The template administrator role permits the following actions:
  • Create, edit, and remove associated templates; and
  • Import and export templates.

Note

You can only assign roles and permissions to existing users.

9.4.2. Template Administrator Roles Explained

Template Administrator Permission Roles
The table below describes the administrator roles and privileges applicable to template administration.

Table 9.2. Red Hat Enterprise Virtualization System Administrator Roles

Role Privileges Notes
TemplateAdmin Can perform all operations on templates. Has privileges to create, delete and configure the templates' storage domains and network details, and to move templates between domains.
NetworkAdmin Network Administrator Can configure and manage networks attached to templates.

9.4.3. Template User Roles Explained

Template User Permission Roles
The table below describes the user roles and privileges applicable to using and administrating templates in the User Portal.

Table 9.3. Red Hat Enterprise Virtualization Template User Roles

Role Privileges Notes
TemplateCreator Can create, edit, manage and remove virtual machine templates within assigned resources. The TemplateCreator role is not applied to a specific template; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers, clusters, or storage domains.
TemplateOwner Can edit and delete the template, assign and manage user permissions for the template. The TemplateOwner role is automatically assigned to the user who creates a template. Other users who do not have TemplateOwner permissions on a template cannot view or use the template.
UserTemplateBasedVm Can use the template to create virtual machines. Cannot edit template properties.
NetworkUser Logical network and network interface user for templates. If the Allow all users to use this Network option was selected when a logical network is created, NetworkUser permissions are assigned to all users for the logical network. Users can then attach or detach template network interfaces to or from the logical network.

9.4.4. Assigning an Administrator or User Role to a Resource

Summary
Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 9.11. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add to open the Add Permission to User window.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down menu.
  6. Click OK to assign the role and close the window.
Result
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

9.4.5. Removing an Administrator or User Role from a Resource

Summary
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 9.12. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK to remove the user role.
Result
You have removed the user's role, and the associated permissions, from the resource.

Chapter 10. Pools

10.1. Introduction to Virtual Machine Pools

A virtual machine pool is a group of virtual machines that are all clones of the same template and that can be used on demand by any user in a given group. Virtual machine pools allow administrators to rapidly configure a set of generalized virtual machines for users.
Users access a virtual machine pool by taking a virtual machine from the pool. When a user takes a virtual machine from a pool, they are provided with any one of the virtual machines in the pool if any are available. That virtual machine will have the same operating system and configuration as that of the template on which the pool was based, but users may not receive the same member of the pool each time they take a virtual machine. Users can also take multiple virtual machines from the same virtual machine pool depending on the configuration of that pool.
Virtual machines in a pool are stateless, meaning that data is not persistent across reboots. Virtual machines in a pool are started when taken by a user, and shut down when the user is finished.
Virtual machine pools can also contain pre-started virtual machines. Pre-started virtual machines are kept in an up state, and remain idle until they are taken by a user. This allows users to start using such virtual machines immediately, but these virtual machines will consume system resources even while not in use due to being idle.

Note

Virtual machines taken from a pool are not stateless when accessed from the Administration Portal. This is because administrators need to be able to write changes to the disk if necessary.

10.2. Virtual Machine Pool Tasks

10.2.1. Creating a Virtual Machine Pool

Summary
You can create a virtual machine pool that contains multiple virtual machines that have been created based on a common template. Virtual machine pools provide generic virtual machines to end users on demand.

Procedure 10.1. Creating a Virtual Machine Pool

  1. In flat mode, click the Pools resource tab to display a list of virtual machine pools.
  2. Click the New button to open the New Pool window.
    • Use the drop down-list to select the Cluster or use the selected default.
    • Use the Based on Template drop-down list to select a template or use the selected default. A template provides standard settings for all the virtual machines in the pool.
    • Use the Operating System drop-down list to select an Operating System or use the default provided by the template.
    • Use the Optimized for drop-down list to optimize virtual machines for either Desktop use or Server use.
  3. Enter the Name, Description, and Number of VMs for the pool.
  4. Select the Maximum number of VMs per user that a single user is allowed to run in a session. The minimum is one.
  5. Click the Show Advanced Options button, and then select the System tab. Enter the Memory Size to be used for each virtual machine in the pool and the number of Total Virtual CPUs, or use the defaults, which are set in the template.
  6. If applicable, click the Advanced Parameters disclosure button and use the drop-down lists to select the Cores per Virtual Socket and Virtual Sockets. The number of these you can set depends on the number of Total Virtual CPUs you have specified.
  7. Click the Pool tab and select a Pool Type:
    • Manual - The administrator is responsible for explicitly returning the virtual machine to the pool. The virtual machine reverts to the original base image after the administrator returns it to the pool.
    • Automatic - When the virtual machine is shut down, it automatically reverts to its base image and is returned to the virtual machine pool.
  8. The Initial Run, Console, Host, Resource Allocation, Boot Options, and Custom Properties tabs are not mandatory but define options for your pool. These tabs feature settings and controls that are identical to the settings and controls in the New Virtual Machines window.
  9. Click OK to create the pool.
Result
You have created and configured a virtual machine pool with the specified number of identical virtual machines.
You can view these virtual machines in the Virtual Machines resource tab, or in the details pane of the Pools resource tab; a virtual machine in a pool is distinguished from independent virtual machines by its icon.

10.2.2. Editing a Virtual Machine Pool

Summary
After a virtual machine pool has been created, its properties can be edited. The properties available when editing a virtual machine pool are identical to those available when creating a new virtual machine pool except that the Number of VMs property is replaced by Increase number of VMs in pool by.

Procedure 10.2. Editing a Virtual Machine Pool

  1. Use the Pools resource tab, tree mode, or the search function to find and select the virtual machine pool in the results list.
  2. Click Edit to open the Edit Pool window.
  3. Edit the properties of the virtual machine pool.
  4. Click OK.
Result
The properties of an existing virtual machine pool have been edited.

10.2.3. Prestarting Virtual Machines in a Pool

The virtual machines in a virtual machine pool are powered down by default. When a user requests a virtual machine from a pool, a machine is powered up and assigned to the user. In contrast, a prestarted virtual machine is already running and waiting to be assigned to a user, decreasing the amount of time a user has to wait before being able to access a machine. When a prestarted virtual machine is shut down it is returned to the pool and restored to its original state. The maximum number of prestarted virtual machines is the number of virtual machines in the pool.
Summary
Prestarted virtual machines are suitable for environments in which users require immediate access to virtual machines which are not specifically assigned to them. Only automatic pools can have prestarted virtual machines.

Procedure 10.3. Prestarting Virtual Machines in a Pool

  1. Use the Pools resource tab, tree mode, or the search function to find and select the virtual machine pool in the results list.
  2. Click Edit to open the Edit Pool window.
  3. Enter the number of virtual machines to be prestarted in the Prestarted VMs field.
  4. Select the Pool tab. Ensure Pool Type is set to Automatic.
  5. Click OK.
Result
You have set a number of prestarted virtual machines in a pool. The prestarted machines are running and available for use.

10.2.4. Adding Virtual Machines to a Virtual Machine Pool

Summary
If you require more virtual machines than originally provisioned in a virtual machine pool, add more machines to the pool.

Procedure 10.4. Adding Virtual Machines to a Virtual Machine Pool

  1. Use the Pools resource tab, tree mode, or the search function to find and select the virtual machine pool in the results list.
  2. Click Edit to open the Edit Pool window.
  3. Enter the number of additional virtual machines to add in the Increase number of VMs in pool by field.
  4. Click OK.
Result
You have added more virtual machines to the virtual machine pool.

10.2.5. Detaching Virtual Machines from a Virtual Machine Pool

Summary
You can detach virtual machines from a virtual machine pool. Detaching a virtual machine removes it from the pool to become an independent virtual machine.

Procedure 10.5. Detaching Virtual Machines from a Virtual Machine Pool

  1. Use the Pools resource tab, tree mode, or the search function to find and select the virtual machine pool in the results list.
  2. Ensure the virtual machine has a status of Down because you cannot detach a running virtual machine.
    Click the Virtual Machines tab in the details pane to list the virtual machines in the pool.
  3. Select one or more virtual machines and click Detach to open the Detach Virtual Machine(s) confirmation window.
  4. Click OK to detach the virtual machine from the pool.

Note

The virtual machine still exists in the environment and can be viewed and accessed from the Virtual Machines resource tab. Note that the icon changes to denote that the detached virtual machine is an independent virtual machine.
Result
You have detached a virtual machine from the virtual machine pool.

10.2.6. Removing a Virtual Machine Pool

Summary
You can remove a virtual machine pool from a data center. You must first either delete or detach all of the virtual machines in the pool. Detaching virtual machines from the pool will preserve them as independent virtual machines.

Procedure 10.6. Removing a Virtual Machine Pool

  1. Use the Pools resource tab, tree mode, or the search function to find and select the virtual machine pool in the results list.
  2. Click Remove to open the Remove Pool(s) confirmation window.
  3. Click OK to remove the pool.
Result
You have removed the pool from the data center.

10.3. Pools and Permissions

10.3.1. Managing System Permissions for a Virtual Machine Pool

The system administrator, as the SuperUser, manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for empowering a user with certain administrative privileges that limit them to a specific resource: a DataCenterAdmin role has administrator privileges only for the assigned data center, a ClusterAdmin has administrator privileges only for the assigned cluster, and so forth.
A virtual machine pool administrator is a system administration role for virtual machine pools in a data center. This role can be applied to specific virtual machine pools, to a data center, or to the whole virtualized environment; this is useful to allow different users to manage certain virtual machine pool resources.
The virtual machine pool administrator role permits the following actions:
  • Create, edit, and remove pools; and
  • Add and detach virtual machines from the pool.

Note

You can only assign roles and permissions to existing users.

10.3.2. Virtual Machine Pool Administrator Roles Explained

Pool Permission Roles
The table below describes the administrator roles and privileges applicable to pool administration.

Table 10.1. Red Hat Enterprise Virtualization System Administrator Roles

Role Privileges Notes
VmPoolAdmin System Administrator role of a virtual pool. Can create, delete, and configure a virtual pool, assign and remove virtual pool users, and perform basic operations on a virtual machine.
ClusterAdmin Cluster Administrator Can use, create, delete, manage all virtual machine pools in a specific cluster.

10.3.3. Assigning an Administrator or User Role to a Resource

Summary
Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 10.7. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add to open the Add Permission to User window.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down menu.
  6. Click OK to assign the role and close the window.
Result
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

10.3.4. Removing an Administrator or User Role from a Resource

Summary
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 10.8. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK to remove the user role.
Result
You have removed the user's role, and the associated permissions, from the resource.

Chapter 11. Virtual Machine Disks

11.1. Understanding Virtual Machine Storage

Red Hat Enterprise Virtualization supports three storage types: NFS, iSCSI, and FCP.
In each type, a host known as the Storage Pool Manager (SPM) manages access between hosts and storage. The SPM host is the only node that has full access within the storage pool; the SPM can modify both the storage domain metadata and the pool metadata. All other hosts can only access virtual machine hard disk image data.
By default in an NFS, local, or POSIX-compliant data center, the SPM creates the virtual disk using a thin provisioned format as a file in a file system.
In iSCSI and other block-based data centers, the SPM creates a volume group on top of the Logical Unit Numbers (LUNs) provided, and makes logical volumes to use as virtual machine disks. Virtual machine disks on block-based storage are preallocated by default.
If the virtual disk is preallocated, a logical volume of the specified size in GB is created. The virtual disk can be mounted on a Red Hat Enterprise Linux server using kpartx, vgscan, vgchange, and mount to investigate the virtual machine's processes or problems.
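The tools mentioned above might be used along the following lines. This is a minimal sketch only: the volume group and logical volume names are placeholders for the values in your own storage domain, the commands must be run as root, and the exact partition mapping name created by kpartx can vary.

```shell
# Inspect a preallocated virtual disk image from a Red Hat Enterprise Linux server.
# "my_storage_domain_vg" and "my_disk_lv" are placeholder names.
vgscan                                            # rescan for volume groups on the attached storage
vgchange -ay my_storage_domain_vg                 # activate the volume group's logical volumes
kpartx -av /dev/my_storage_domain_vg/my_disk_lv   # map the partitions inside the disk image
# Mount the first mapped partition read-only for inspection
# (the "p1" suffix below is typical but may vary by kpartx version).
mount -o ro /dev/mapper/my_disk_lvp1 /mnt/vm-disk
# ... inspect the guest file system under /mnt/vm-disk ...
umount /mnt/vm-disk                               # clean up when finished
kpartx -d /dev/my_storage_domain_vg/my_disk_lv    # remove the partition mappings
vgchange -an my_storage_domain_vg                 # deactivate the volume group again
```

Mounting read-only and deactivating the volume group afterwards avoids interfering with the storage pool while you investigate.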
If the virtual disk is thin provisioned, a 1 GB logical volume is created. The logical volume is continuously monitored by the host on which the virtual machine is running. As soon as the usage nears a threshold, the host notifies the SPM, and the SPM extends the logical volume by 1 GB. The host is responsible for resuming the virtual machine after the logical volume has been extended. If the virtual machine goes into a paused state, it means that the SPM could not extend the disk in time. This occurs if the SPM is too busy or if there is not enough storage space.
A virtual disk with a preallocated (RAW) format has significantly faster write speeds than a virtual disk with a thin provisioned (Qcow2) format. A thin provisioned virtual disk, however, takes significantly less time to create. The thin provisioned format is suitable for non-I/O-intensive virtual machines.
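On file-based storage, the difference between the two formats can be seen with ordinary coreutils. The sketch below, using small sizes purely for illustration, creates one fully written (preallocated-style) image and one sparse image: both report the same apparent size, but the sparse file occupies almost no blocks until data is actually written.

```shell
# Compare a preallocated and a sparse (thin provisioned) image at the file level.
cd "$(mktemp -d)"

# Preallocated: every block is written up front, so allocation equals size.
dd if=/dev/zero of=prealloc.img bs=1M count=8 status=none

# Sparse: the file reports its full size but occupies almost no blocks.
truncate -s 8M sparse.img

# %s is the apparent size in bytes; %b is the number of allocated 512-byte blocks.
stat -c '%n apparent=%s allocated_blocks=%b' prealloc.img sparse.img
```

This mirrors why preallocation gives better write performance (no allocation at runtime) while thin provisioning creates faster and permits over-commitment.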
Red Hat Enterprise Virtualization 3.3 introduces online virtual drive resizing.

11.2. Understanding Virtual Disks

Virtual disks are one of two types: Thin Provisioned or Preallocated. Preallocated disks are RAW formatted. Thin provisioned disks are Qcow2 formatted.
  • Preallocated
    A preallocated virtual disk has reserved storage of the same size as the virtual disk itself. The backing storage device (file/block device) is presented as is to the virtual machine with no additional layering in between. This results in better performance because no storage allocation is required during runtime.
    On SAN (iSCSI, FCP) this is achieved by creating a block device with the same size as the virtual disk. On NFS this is achieved by filling the backing hard disk image file with zeros. Preallocating storage on an NFS storage domain presumes that the backing storage is not Qcow2 formatted and zeros will not be deduplicated in the hard disk image file. (If these assumptions are incorrect, do not select Preallocated for NFS virtual disks).
  • Thin Provisioned
    For sparse virtual disks, backing storage is not reserved and is allocated as needed during runtime. This allows for storage over-commitment under the assumption that most disks are not fully utilized, so storage capacity can be used more efficiently. This requires the backing storage to monitor write requests and can cause some performance issues. On NFS, backing storage is achieved by using sparse files. On SAN, this is achieved by creating a block device smaller than the virtual disk's defined size and communicating with the hypervisor to monitor necessary allocations. This does not require support from the underlying storage devices.
The possible combinations of storage types and formats are described in the following table.

Table 11.1. Permitted Storage Combinations

Storage Format Type Note
NFS or iSCSI/FCP RAW or Qcow2 Sparse or Preallocated  
NFS RAW Preallocated A file with an initial size which equals the amount of storage defined for the virtual disk, and has no formatting.
NFS RAW Sparse A file with an initial size which is close to zero, and has no formatting.
NFS Qcow2 Sparse A file with an initial size which is close to zero, and has RAW formatting. Subsequent layers will be Qcow2 formatted.
SAN RAW Preallocated A block device with an initial size which equals the amount of storage defined for the virtual disk, and has no formatting.
SAN Qcow2 Preallocated A block device with an initial size which equals the amount of storage defined for the virtual disk, and has Qcow2 formatting.
SAN Qcow2 Sparse A block device with an initial size which is much smaller than the size defined for the virtual disk (currently 1 GB), and has Qcow2 formatting for which space is allocated as needed (currently in 1 GB increments).

11.3. Shareable Disks in Red Hat Enterprise Virtualization

Some applications require storage to be shared between servers. Red Hat Enterprise Virtualization allows you to mark virtual machine hard disks as shareable and attach those disks to virtual machines. That way a single virtual disk can be used by multiple cluster-aware guests.
Shared disks are not appropriate for every situation. For applications like clustered database servers, and other highly available services, shared disks are appropriate. Attaching a shared disk to multiple guests that are not cluster-aware is likely to cause data corruption because their reads and writes to the disk are not coordinated.
You cannot take a snapshot of a shared disk. Virtual disks that have snapshots taken of them cannot later be marked shareable.
You can mark a disk shareable either when you create it, or by editing the disk later.

11.4. Virtual Disk Tasks

11.4.1. Creating Floating Virtual Disks

Summary
You can create a virtual disk that does not belong to any virtual machines. You can then attach this disk to a single virtual machine, or to multiple virtual machines if the disk is shareable.

Procedure 11.1. Creating Floating Virtual Disks

  1. Select the Disks resource tab.
  2. Click Add to open the Add Virtual Disk window.
    Add Virtual Disk Window

    Figure 11.1. Add Virtual Disk Window

  3. Use the radio buttons to specify whether the virtual disk will be an Internal or External (Direct Lun) disk.
  4. Enter the Size(GB), Alias, and Description of the virtual disk.
  5. Use the drop-down menus to select the Interface, Allocation Policy, Data Center, and Storage Domain of the virtual disk.
  6. Select the Wipe After Delete, Is Bootable and Is Shareable check boxes to enable each of these options.
  7. Click OK.
Result
You have created a virtual disk that can be attached to one or more virtual machines depending on its settings.

11.4.2. Explanation of Settings in the New Virtual Disk Window

Table 11.2. Add Virtual Disk Settings: Internal

Field Name
Description
Size(GB)
The size of the new virtual disk in GB.
Alias
The name of the virtual disk, limited to 40 characters.
Description
A description of the virtual disk. This field is recommended but not mandatory.
Interface
The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Red Hat Enterprise Linux 5 and higher include these drivers. Windows does not include these drivers, but they can be installed from the guest tools ISO or virtual floppy disk. IDE devices do not require special drivers.
Allocation Policy
The provisioning policy for the new virtual disk. Preallocated allocates the entire size of the disk on the storage domain at the time the virtual disk is created. Thin Provision allocates 1 GB at the time the virtual disk is created and sets a maximum limit on the size to which the disk can grow. Preallocated virtual disks take more time to create than thinly provisioned virtual disks, but have better read and write performance. Preallocated virtual disks are recommended for servers. Thinly provisioned disks are faster to create than preallocated disks and allow for storage over-commitment. Thinly provisioned virtual disks are recommended for desktops.
Data Center
The data center in which the virtual disk will be available.
Storage Domain
The storage domain in which the virtual disk will be stored. The drop-down list shows all storage domains available in the given cluster, and also shows the total space and currently available space in the storage domain.
Wipe after delete
Allows you to enable enhanced security for deletion of sensitive material when the virtual disk is deleted.
Is bootable
Allows you to enable the bootable flag on the virtual disk.
Is Shareable
Allows you to attach the virtual disk to more than one virtual machine at a time.
The External (Direct Lun) settings include all entries from the Internal settings except Size(GB) and Wipe after Delete, plus the additional entries described below. The settings specific to external LUNs can be displayed in either Targets > LUNs or LUNs > Targets, which specifies whether available LUNs are sorted according to the host on which they are discovered or presented in a single list of LUNs.

Table 11.3. Add Virtual Disk Settings: External (Direct Lun)

Field Name
Description
Use Host
The host on which the LUN will be mounted. You can select any host in the data center.
Storage Type
The type of external LUN to add. You can select from either iSCSI or Fibre Channel.
Discover Targets
This section can be expanded when you are using iSCSI external LUNs and Targets > LUNs is selected.
Address - The host name or IP address of the target server.
Port - The port by which to attempt a connection to the target server. The default port is 3260.
User Authentication - Enable this option if the iSCSI server requires user authentication. The User Authentication field is visible when you are using iSCSI external LUNs.
CHAP user name - The user name of a user with permission to log in to LUNs. This field is accessible when the User Authentication check box is selected.
CHAP password - The password of a user with permission to log in to LUNs. This field is accessible when the User Authentication check box is selected.
Fill in the fields in the Discover Targets section and click the Discover button to discover the target server. You can then click the Login All button to list the available LUNs on the target server and, using the radio buttons next to each LUN, select the LUN to add.
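For reference, the discovery and login steps above correspond roughly to the following iscsiadm commands on a host. This is a hedged sketch: the target address (192.0.2.10), the IQN, and the CHAP credentials are placeholders, and the CHAP settings are only needed when the target requires authentication.

```shell
# Discover available targets on the portal (default iSCSI port is 3260).
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

# Optionally set CHAP credentials on the discovered node record
# (all values below are placeholders).
iscsiadm -m node -T iqn.2014-01.com.example:storage.lun1 -p 192.0.2.10:3260 \
    -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2014-01.com.example:storage.lun1 -p 192.0.2.10:3260 \
    -o update -n node.session.auth.username -v chapuser
iscsiadm -m node -T iqn.2014-01.com.example:storage.lun1 -p 192.0.2.10:3260 \
    -o update -n node.session.auth.password -v chappass

# Log in to the target to make its LUNs visible as block devices on the host.
iscsiadm -m node -T iqn.2014-01.com.example:storage.lun1 -p 192.0.2.10:3260 --login
```

In the Administration Portal this is all handled through the Discover Targets section; the commands are shown only to clarify what the Discover and Login All buttons do on the selected host.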
Using LUNs directly as virtual machine hard disk images removes a layer of abstraction between your virtual machines and their data.
The following considerations must be made when using a direct LUN as a virtual machine hard disk image:
  • Live storage migration of direct LUN hard disk images is not supported.
  • Direct LUN disks are not included in virtual machine exports.
  • Direct LUN disks are not included in virtual machine snapshots.

11.4.3. Moving a Virtual Machine Hard Disk Between Data Domains

Summary
You would like to move a virtual hard disk from one data domain to another. You might do this to take advantage of high performance storage, or because you would like to decommission one of your storage domains that contains virtual hard disks.
If the virtual disk is attached to a virtual machine that was created from a template with the thin provisioning storage allocation option selected, the template's disks must be copied to the same data domain as the virtual machine disk.

Procedure 11.2. Moving a Virtual Machine Hard Disk Between Data Domains

  1. Select the Disks resource tab.
  2. Select the virtual disk or disks to move.
  3. Click Move to open the Move Disk(s) window.
  4. Use the drop-down menu or menus to select the Target data domain.
  5. Click OK to move the disks and close the window.
Result
Disks have a Locked status while being moved. Upon completion, the virtual disk has been moved from the source domain to the target domain.

11.4.4. Importing a Virtual Disk Image from an OpenStack Image Service

Summary
Virtual disk images managed by an OpenStack Image Service can be imported into the Red Hat Enterprise Virtualization Manager if that OpenStack Image Service has been added to the Manager as an external provider.
  1. Click the Storage resource tab and select the OpenStack Image Service domain from the results list.
  2. Select the image to import in the Images tab of the details pane.
  3. Click Import to open the Import Image(s) window.
  4. From the Data Center drop-down menu, select the data center into which the virtual disk image will be imported.
  5. From the Domain Name drop-down menu, select the storage domain in which the virtual disk image will be stored.
  6. Optionally, select a quota from the Quota drop-down menu to apply a quota to the virtual disk image.
  7. Click OK to import the image.
Result
The image is imported as a floating disk and is displayed in the results list of the Disks resource tab. It can now be attached to a virtual machine.

11.4.5. Exporting a Virtual Machine Disk to an OpenStack Image Service

Summary
Virtual machine disks can be exported to an OpenStack Image Service that has been added to the Manager as an external provider.
  1. Click the Disks resource tab.
  2. Select the disks to export.
  3. Click the Export button to open the Export Disk window.
  4. From the Domain Name drop-down list, select the OpenStack Image Service to which the disks will be exported.
  5. From the Quota drop-down list, select a quota for the disks if a quota is to be applied.
  6. Click OK.
Result
The virtual machine disks are exported to the specified OpenStack Image Service where they are managed as virtual machine disk images.

Important

Virtual machine disks can only be exported if they do not have multiple volumes, are not thinly provisioned, and do not have any snapshots.

11.5. Virtual Disks and Permissions

11.5.1. Managing System Permissions for a Virtual Disk

The system administrator, as the SuperUser, manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for empowering a user with certain administrative privileges that limit them to a specific resource: a DataCenterAdmin role has administrator privileges only for the assigned data center, a ClusterAdmin has administrator privileges only for the assigned cluster, a StorageAdmin has administrator privileges only for the assigned storage domain, and so forth.
Red Hat Enterprise Virtualization Manager provides two default virtual disk user roles, but no default virtual disk administrator roles. One of these user roles, the DiskCreator role, enables the administration of virtual disks from the User Portal. This role can be applied to specific virtual machines, to a data center, to a specific storage domain, or to the whole virtualized environment; this is useful to allow different users to manage different virtual resources.
The virtual disk creator role permits the following actions:
  • Create, edit, and remove virtual disks associated with a virtual machine or other resources; and
  • Edit user permissions for virtual disks.

Note

You can only assign roles and permissions to existing users.

11.5.2. Virtual Disk User Roles Explained

Virtual Disk User Permission Roles
The table below describes the user roles and privileges applicable to using and administrating virtual machine disks in the User Portal.

Table 11.4. Red Hat Enterprise Virtualization System Administrator Roles

Role Privileges Notes
DiskOperator Virtual disk user. Can use, view and edit virtual disks. Inherits permissions to use the virtual machine to which the virtual disk is attached.
DiskCreator Can create, edit, manage and remove virtual machine disks within assigned clusters or data centers. This role is not applied to a specific virtual disk; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers, clusters, or storage domains.

11.5.3. Assigning an Administrator or User Role to a Resource

Summary
Assign administrator or user roles to resources to allow users to access or manage that resource.

Procedure 11.3. Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add to open the Add Permission to User window.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down menu.
  6. Click OK to assign the role and close the window.
Result
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

11.5.4. Removing an Administrator or User Role from a Resource

Summary
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Procedure 11.4. Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab of the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK to remove the user role.
Result
You have removed the user's role, and the associated permissions, from the resource.

Chapter 12. Red Hat Storage (GlusterFS) Volumes

12.1. Introduction to Red Hat Storage (GlusterFS) Volumes

Red Hat Storage volumes combine storage from more than one Red Hat Storage server into a single global namespace. A volume is a collection of bricks, where each brick is a mountpoint or directory on a Red Hat Storage Server in the trusted storage pool.
Most Red Hat Storage management operations happen on the volume.
You can use the Administration Portal to create and start new volumes. You can monitor volumes in your Red Hat Storage cluster from the Volumes tab.
While volumes can be created and managed from the Administration Portal, bricks must be created on the individual Red Hat Storage nodes before they can be added to volumes using the Administration Portal.

12.2. Introduction to Red Hat Storage (GlusterFS) Bricks

Bricks are the basic unit of storage in Red Hat Storage, and they are combined into volumes. Each brick is a directory or mountpoint. XFS is the recommended brick filesystem.
When adding a brick to a volume, the bric