Technical Reference
The Technical Architecture of Red Hat Virtualization Environments
Abstract
Chapter 1. Introduction
1.1. Red Hat Virtualization Manager

Figure 1.1. Red Hat Virtualization Manager Architecture
1.2. Red Hat Virtualization Host

Figure 1.2. Host Architecture
- Kernel-based Virtual Machine (KVM)
- The Kernel-based Virtual Machine (KVM) is a loadable kernel module that provides full virtualization through the use of the Intel VT or AMD-V hardware extensions. Though KVM itself runs in kernel space, the guests running upon it run as individual QEMU processes in user space. KVM allows a host to make its physical hardware available to virtual machines.
- QEMU
- QEMU is a multi-platform emulator used to provide full system emulation. QEMU emulates a full system, for example a PC, including one or more processors and peripherals. QEMU can be used to launch different operating systems or to debug system code. QEMU, working in conjunction with KVM and a processor with appropriate virtualization extensions, provides full hardware-assisted virtualization.
- Red Hat Virtualization Manager Host Agent, VDSM
- In Red Hat Virtualization, VDSM initiates actions on virtual machines and storage. It also facilitates inter-host communication. VDSM monitors host resources such as memory, storage, and networking. Additionally, VDSM manages tasks such as virtual machine creation, statistics accumulation, and log collection. A VDSM instance runs on each host and receives management commands from the Red Hat Virtualization Manager using the re-configurable port 54321.
- VDSM-REG
- VDSM uses VDSM-REG to register each host with the Red Hat Virtualization Manager. VDSM-REG supplies information about itself and its host using port 80 or port 443.
- libvirt
- Libvirt facilitates the management of virtual machines and their associated virtual devices. When Red Hat Virtualization Manager initiates virtual machine life-cycle commands (start, stop, reboot), VDSM invokes libvirt on the relevant host machines to execute them.
- Storage Pool Manager, SPM
- The Storage Pool Manager (SPM) is a role assigned to one host in a data center. The SPM host has sole authority to make all storage domain structure metadata changes for the data center. This includes creation, deletion, and manipulation of virtual disk images, snapshots, and templates. It also includes allocation of storage for sparse block devices on a Storage Area Network (SAN). The role of SPM can be migrated to any host in a data center. As a result, all hosts in a data center must have access to all the storage domains defined in the data center. Red Hat Virtualization Manager ensures that the SPM is always available. In case of storage connectivity errors, the Manager reassigns the SPM role to another host.
- Guest Operating System
- Guest operating systems can be installed without modification on virtual machines in a Red Hat Virtualization environment. The guest operating system, and any applications on the guest, are unaware of the virtualized environment and run normally. Red Hat provides enhanced device drivers that allow faster and more efficient access to virtualized devices. You can also install the Red Hat Virtualization Guest Agent on guests, which provides enhanced guest information to the management console.
1.3. Interfaces for Accessing the Manager
- User Portal
- Desktop virtualization provides users with a desktop environment that is similar to a personal computer's desktop environment. The User Portal delivers Virtual Desktop Infrastructure to users. Users access the User Portal through a web browser to display and access their assigned virtual desktops. The actions available to a user in the User Portal are set by a system administrator. Standard users can start, stop, and use desktops that are assigned to them by the system administrator. Power users can perform some administrative actions. Both types of user access the User Portal from the same URL, and are presented with options appropriate to their permission level on login.
- Standard User Access
Standard users are able to power their virtual desktops on and off and connect to them through the User Portal. Direct connection to virtual machines is facilitated with Simple Protocol for Independent Computing Environments (SPICE) or Virtual Network Computing (VNC) clients. Both protocols provide the user with an environment similar to a locally installed desktop environment. The administrator specifies the protocol used to connect to a virtual machine at the time of the virtual machine's creation.
More information on the actions available from the User Portal, as well as supported browsers and clients, can be found in the Introduction to the User Portal.
- Power User Access
The Red Hat Virtualization User Portal provides power users with a graphical user interface to create, use, and monitor virtual resources. System administrators can delegate some administration tasks by granting users power user access. In addition to the tasks that can be performed by standard users, power users can:
- Create, edit, and remove virtual machines.
- Manage virtual disks and network interfaces.
- Assign user permissions to virtual machines.
- Create and use templates to rapidly deploy virtual machines.
- Monitor resource usage and high-severity events.
- Create and use snapshots to restore virtual machines to previous states.
Power users can be delegated virtual machine administration tasks. Data center and cluster level administration tasks are reserved for the environment administrator.
- Administration Portal
- The Administration Portal is the graphical administration interface of the Red Hat Virtualization Manager server. Administrators can use it to monitor, create, and maintain all elements of the virtualized environment from a web browser. Tasks that can be performed from the Administration Portal include:
- Creation and management of virtual infrastructure (networks, storage domains).
- Installation and management of hosts.
- Creation and management of logical entities (data centers, clusters).
- Creation and management of virtual machines.
- Red Hat Virtualization user and permission management.
The Administration Portal is displayed using JavaScript. Administration Portal functions are discussed in further detail in the Red Hat Virtualization Administration Guide. Information on the browsers and platforms that are supported by the Administration Portal can be found in the Red Hat Virtualization Installation Guide.
- Representational State Transfer (REST) API
- The Red Hat Virtualization REST API provides a software interface for the interrogation and control of the Red Hat Virtualization environment. The REST API can be used by any programming language that supports HTTP actions. Using the REST API, developers and administrators can do the following (an example request is shown after this list):
- Integrate with enterprise IT systems.
- Integrate with third party virtualization software.
- Perform automated maintenance and error checking tasks.
- Use scripts to automate repetitive tasks in a Red Hat Virtualization environment.
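As a minimal illustration, the API entry point can be queried with any HTTP client. The hostname and credentials below are placeholders for your own environment:
# curl -k -u 'admin@internal:password' -H 'Accept: application/xml' https://manager.example.com/ovirt-engine/api
The response is an XML document describing the API entry point, including links to collections such as hosts, vms, and storagedomains that can be interrogated and manipulated.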
See the Red Hat Virtualization REST API Guide for the API specification and usage examples.
1.4. Components that Support the Manager
- Red Hat JBoss Enterprise Application Platform
- Red Hat JBoss Enterprise Application Platform is a Java application server. It provides a framework to support efficient development and delivery of cross-platform Java applications. The Red Hat Virtualization Manager is delivered using Red Hat JBoss Enterprise Application Platform.
Important
The version of the Red Hat JBoss Enterprise Application Platform bundled with Red Hat Virtualization Manager is not to be used to serve other applications. It has been customized for the specific purpose of serving the Red Hat Virtualization Manager. Using the Red Hat JBoss Enterprise Application Platform that is included with the Manager for additional purposes adversely affects its ability to service the Red Hat Virtualization environment.
- Gathering Reports and Historical Data
- The Red Hat Virtualization Manager includes a data warehouse that collects monitoring data about hosts, virtual machines, and storage. A number of pre-defined reports are available. Customers can analyze their environments and create reports using any query tools that support SQL. The Red Hat Virtualization Manager installation process creates two databases. These databases are created on a PostgreSQL instance that is selected during installation.
- The engine database is the primary data store used by the Red Hat Virtualization Manager. Information about the virtualization environment like its state, configuration, and performance are stored in this database.
- The ovirt_engine_history database contains configuration information and statistical metrics which are collated over time from the engine operational database. The configuration data in the engine database is examined every minute, and changes are replicated to the ovirt_engine_history database. Tracking the changes to the database provides information on the objects in the database. This enables you to analyze and enhance the performance of your Red Hat Virtualization environment and resolve difficulties. For more information on generating reports based on the ovirt_engine_history database, see the History Database in the Red Hat Virtualization Data Warehouse Guide.
Important
The replication of data in the ovirt_engine_history database is performed by the RHEVM History Service, ovirt-engine-dwhd.
- Directory services
- Directory services provide centralized network-based storage of user and organizational information. Types of information stored include application settings, user profiles, group data, policies, and access control. The Red Hat Virtualization Manager supports Active Directory, Identity Management (IdM), OpenLDAP, and Red Hat Directory Server 9. There is also a local, internal domain for administration purposes only. This internal domain has only one user: the admin user.
1.5. Storage
- Data storage domain
- Data domains hold the virtual hard disk images of all the virtual machines running in the environment. Templates and snapshots of the virtual machines are also stored in the data domain. A data domain cannot be shared across data centers.
- Export storage domain
- An export domain is a temporary storage repository that is used to copy and move images between data centers and Red Hat Virtualization environments. The export domain can be used to back up virtual machines and templates. An export domain can be moved between data centers, but can only be active in one data center at a time.
- ISO storage domain
- ISO domains store ISO files, which are logical CD-ROMs used to install operating systems and applications for the virtual machines. As a logical entity that replaces a library of physical CD-ROMs or DVDs, an ISO domain removes the data center's need for physical media. An ISO domain can be shared across different data centers.
1.6. Network

Figure 1.3. Network Architecture
- Networking Infrastructure Layer
- The Red Hat Virtualization network architecture relies on some common hardware and software devices:
- Network Interface Controllers (NICs) are physical network interface devices that connect a host to the network.
- Virtual NICs (VNICs) are logical NICs that operate using the host's physical NICs. They provide network connectivity to virtual machines.
- Bonds bind multiple NICs into a single interface.
- Bridges are a packet-forwarding technique for packet-switching networks. They form the basis of virtual machine logical networks.
- Logical Networks
- Logical networks allow segregation of network traffic based on environment requirements. The types of logical network are:
- logical networks that carry virtual machine network traffic,
- logical networks that do not carry virtual machine network traffic,
- optional logical networks,
- and required networks.
All logical networks can be either required or optional. A logical network that carries virtual machine network traffic is implemented at the host level as a software bridge device. By default, one logical network is defined during the installation of the Red Hat Virtualization Manager: the ovirtmgmt management network. Other logical networks that can be added by an administrator are a dedicated storage logical network and a dedicated display logical network. Logical networks that do not carry virtual machine traffic do not have an associated bridge device on hosts. They are associated with host network interfaces directly. Red Hat Virtualization segregates management-related network traffic from migration-related network traffic. This makes it possible to use a dedicated network (without routing) for live migration, and ensures that the management network (ovirtmgmt) does not lose its connection to hypervisors during migrations.
- Explanation of logical networks on different layers
- Logical networks have different implications for each layer of the virtualization environment.
Data Center Layer
Logical networks are defined at the data center level. Each data center has the ovirtmgmt management network by default. Further logical networks are optional but recommended. Designation as a VM Network and a custom MTU can be set at the data center level. A logical network that is defined for a data center must also be added to the clusters that use the logical network.
Cluster Layer
Logical networks are made available from a data center, and must be added to the clusters that will use them. Each cluster is connected to the management network by default. You can optionally add to a cluster logical networks that have been defined for the cluster's parent data center. When a required logical network has been added to a cluster, it must be implemented for each host in the cluster. Optional logical networks can be added to hosts as needed.
Host Layer
Virtual machine logical networks are implemented for each host in a cluster as a software bridge device associated with a given network interface. Non-virtual machine logical networks do not have associated bridges, and are associated with host network interfaces directly. Each host has the management network implemented as a bridge using one of its network devices as a result of being included in a Red Hat Virtualization environment. Further required logical networks that have been added to a cluster must be associated with network interfaces on each host to become operational for the cluster.
Virtual Machine Layer
Logical networks can be made available to virtual machines in the same way that a network can be made available to a physical machine. A virtual machine can have its virtual NIC connected to any virtual machine logical network that has been implemented on the host that runs it. The virtual machine then gains connectivity to any other devices or destinations that are available on the logical network it is connected to.
Example 1.1. Management Network
The management logical network, named ovirtmgmt, is created automatically when the Red Hat Virtualization Manager is installed. The ovirtmgmt network is dedicated to management traffic between the Red Hat Virtualization Manager and hosts. If no other specifically-purposed bridges are set up, ovirtmgmt is the default bridge for all traffic.
1.7. Data Centers
- The storage container holds information about storage types and storage domains, including connectivity information for storage domains. Storage is defined for a data center, and available to all clusters in the data center. All host clusters within a data center have access to the same storage domains.
- The network container holds information about the data center's logical networks. This includes details such as network addresses, VLAN tags and STP support. Logical networks are defined for a data center, and are optionally implemented at the cluster level.
- The cluster container holds clusters. Clusters are groups of hosts with compatible processor cores, either AMD or Intel processors. Clusters are migration domains; virtual machines can be live-migrated to any host within a cluster, and not to other clusters. One data center can hold multiple clusters, and each cluster can contain multiple hosts.
Chapter 2. Storage
2.1. Storage Domains Overview
2.2. Types of Storage Backing Storage Domains
- File Based Storage
- The file based storage types supported by Red Hat Virtualization are NFS, GlusterFS, other POSIX compliant file systems, and storage local to hosts. File based storage is managed externally to the Red Hat Virtualization environment. NFS storage is managed by a Red Hat Enterprise Linux NFS server, or other third party network attached storage server. Hosts can manage their own local storage file systems.
- Block Based Storage
- Block storage uses unformatted block devices. Block devices are aggregated into volume groups by the Logical Volume Manager (LVM). An instance of LVM runs on all hosts, unaware of the instances running on other hosts. VDSM adds clustering logic on top of LVM by scanning volume groups for changes. When changes are detected, VDSM updates individual hosts by telling them to refresh their volume group information. The hosts divide the volume group into logical volumes, writing logical volume metadata to disk. If more storage capacity is added to an existing storage domain, the Red Hat Virtualization Manager causes VDSM on each host to refresh volume group information. A Logical Unit Number (LUN) is an individual block device. One of the supported block storage protocols, iSCSI, FCoE, or SAS, is used to connect to a LUN. The Red Hat Virtualization Manager manages software iSCSI connections to the LUNs. All other block storage connections are managed externally to the Red Hat Virtualization environment. Any changes in a block based storage environment, such as the creation of logical volumes, extension or deletion of logical volumes, and the addition of a new LUN, are handled by LVM on a specially selected host called the Storage Pool Manager. Changes are then synced by VDSM, which refreshes storage metadata across all hosts in the cluster.
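Because each block storage domain is backed by an LVM volume group, the LVM objects that VDSM manages can be inspected directly on a host. The following is only a minimal sketch; it assumes the volume group name matches the storage domain identifier, and the exact names on your hosts will differ:
# vgs -o vg_name,vg_size,vg_free
# lvs -o lv_name,lv_size,lv_tags <storage_domain_uuid>
In this sketch, <storage_domain_uuid> is a placeholder for the volume group name, which for a block storage domain is normally the storage domain identifier.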
2.3. Storage Domain Types
- The Data Storage Domain stores the hard disk images of all virtual machines in the Red Hat Virtualization environment. Disk images may contain an installed operating system or data stored or generated by a virtual machine. Data storage domains support NFS, iSCSI, FCP, GlusterFS and POSIX compliant storage. A data domain cannot be shared between multiple data centers.
- The Export Storage Domain provides transitory storage for hard disk images and virtual machine templates being transferred between data centers. Additionally, export storage domains store backed up copies of virtual machines. Export storage domains support NFS storage. Multiple data centers can access a single export storage domain but only one data center can use it at a time.
- The ISO Storage Domain stores ISO files, also called images. ISO files are representations of physical CDs or DVDs. In the Red Hat Virtualization environment the common types of ISO files are operating system installation disks, application installation disks, and guest agent installation disks. These images can be attached to virtual machines and booted in the same way that physical disks are inserted into a disk drive and booted. ISO storage domains allow all hosts within the data center to share ISOs, eliminating the need for physical optical media.
2.4. Storage Formats for Virtual Disk Images
- QCOW2 Formatted Virtual Machine Storage
- QCOW2 is a storage format for virtual disk images. QCOW stands for QEMU copy on write. The QCOW2 format decouples the physical storage layer from the virtual layer by adding a mapping between logical and physical blocks. Each logical block is mapped to its physical offset, which enables storage over-commitment and virtual machine snapshots, where each QCOW volume only represents changes made to an underlying disk image. The initial mapping points all logical blocks to the offsets in the backing file or volume. When a virtual machine writes data to a QCOW2 volume after a snapshot, the relevant block is read from the backing volume, modified with the new information, and written into a new snapshot QCOW2 volume. Then the map is updated to point to the new place.
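The copy-on-write relationship between a QCOW2 volume and its backing volume can be illustrated with the qemu-img tool. This is an illustrative sketch only; the file names are hypothetical, and within Red Hat Virtualization these volumes are created and managed by VDSM rather than by hand (on newer QEMU versions, also pass -F raw to declare the backing file format explicitly):
# qemu-img create -f raw base.img 10G
# qemu-img create -f qcow2 -b base.img overlay.qcow2
# qemu-img info overlay.qcow2
The overlay.qcow2 volume records only the blocks that change after it is created; unmodified blocks continue to be read from base.img.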
- RAW
- The RAW storage format has a performance advantage over QCOW2 in that no formatting is applied to virtual disk images stored in the RAW format. Virtual machine data operations on disk images stored in RAW format require no additional work from hosts. When a virtual machine writes data to a given offset in its virtual disk, the I/O is written to the same offset on the backing file or logical volume. RAW format requires that the entire space of the defined image be preallocated unless using externally managed thin provisioned LUNs from a storage array.
2.5. Virtual Disk Image Storage Allocation Policies
- Preallocated Storage
- All of the storage required for a virtual disk image is allocated prior to virtual machine creation. If a 20 GB disk image is created for a virtual machine, the disk image uses 20 GB of storage domain capacity. Preallocated disk images cannot be enlarged. Preallocating storage can mean faster write times because no storage allocation takes place during runtime, at the cost of flexibility. Allocating storage this way reduces the capacity of the Red Hat Virtualization Manager to overcommit storage. Preallocated storage is recommended for virtual machines used for high intensity I/O tasks with less tolerance for latency in storage. Generally, server virtual machines fit this description.
Note
If thin provisioning functionality provided by your storage back-end is being used, preallocated storage should still be selected from the Administration Portal when provisioning storage for virtual machines.
- Sparsely Allocated Storage
- The upper size limit for a virtual disk image is set at virtual machine creation time. Initially, the disk image does not use any storage domain capacity. Usage grows as the virtual machine writes data to disk, until the upper limit is reached. Capacity is not returned to the storage domain when data in the disk image is removed. Sparsely allocated storage is appropriate for virtual machines with low or medium intensity I/O tasks with some tolerance for latency in storage. Generally, desktop virtual machines fit this description.
Note
If thin provisioning functionality is provided by your storage back-end, it should be used as the preferred implementation of thin provisioning. Storage should be provisioned from the graphical user interface as preallocated, leaving thin provisioning to the back-end solution.
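The difference between the two allocation policies can be seen with a simple sketch using qemu-img outside of Red Hat Virtualization. The file names are hypothetical; within Red Hat Virtualization the allocation policy is selected in the Administration Portal and the volumes are created by VDSM:
# qemu-img create -f raw -o preallocation=full preallocated.img 20G
# qemu-img create -f qcow2 sparse.qcow2 20G
# ls -lsh preallocated.img sparse.qcow2
The preallocated image consumes its full 20 GB immediately, while the sparse image starts near zero and grows as the virtual machine writes data.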
2.6. Storage Metadata Versions in Red Hat Virtualization
- V1 metadata (Red Hat Virtualization 2.x series)
Each storage domain contains metadata describing its own structure, and all of the names of physical volumes that are used to back virtual disk images. Master domains additionally contain metadata for all the domains and physical volume names in the storage pool. The total size of this metadata is limited to 2 KB, limiting the number of storage domains that can be in a pool. Template and virtual machine base images are read only. V1 metadata is applicable to NFS, iSCSI, and FC storage domains.
- V2 metadata (Red Hat Enterprise Virtualization 3.0)
All storage domain and pool metadata is stored as logical volume tags rather than written to a logical volume. Metadata about virtual disk volumes is still stored in a logical volume on the domains. Physical volume names are no longer included in the metadata. Template and virtual machine base images are read only. V2 metadata is applicable to iSCSI and FC storage domains.
- V3 metadata (Red Hat Enterprise Virtualization 3.1+)
All storage domain and pool metadata is stored as logical volume tags rather than written to a logical volume. Metadata about virtual disk volumes is still stored in a logical volume on the domains. Virtual machine and template base images are no longer read only. This change enables live snapshots, live storage migration, and clone from snapshot. Support for unicode metadata is added, for non-English volume names. V3 metadata is applicable to NFS, GlusterFS, POSIX, iSCSI, and FC storage domains.
2.7. Storage Domain Autorecovery in Red Hat Virtualization
2.8. The Storage Pool Manager
The storage-centric lease is written to a special logical volume in the master storage domain called leases. Metadata about the structure of the data domain is written to a special logical volume called metadata. Changes to the metadata logical volume are protected against by the leases logical volume.
The Red Hat Virtualization Manager assigns the SPM role by sending the spmStart command to a host, causing VDSM on that host to attempt to assume the storage-centric lease. If the host is successful it becomes the SPM and retains the storage-centric lease until the Red Hat Virtualization Manager requests that a new host assume the role of SPM.
The SPM role and the storage-centric lease are reassigned by the Red Hat Virtualization Manager if:
- the SPM host cannot access all storage domains, but can access the master storage domain.
- the SPM host is unable to renew the lease because of a loss of storage connectivity, or because the lease volume is full and no write operation can be performed.
- the SPM host crashes.

Figure 2.1. The Storage Pool Manager Exclusively Writes Structural Metadata.
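The storage-centric lease itself is handled by sanlock on the hosts. As a rough illustration (the output depends entirely on the environment and is shown here only as a sketch), the lockspaces and resources currently held on a host, including the SPM lease on the SPM host, can be listed with:
# sanlock client status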
2.9. Storage Pool Manager Selection Process
- The "getSPMstatus" command: the Manager uses VDSM to check with the host that had SPM status last and receives one of "SPM", "Contending", or "Free".
- The metadata volume for a storage domain contains the last host with SPM status.
- The metadata volume for a storage domain contains the version of the last host with SPM status.
2.10. Exclusive Resources and Sanlock in Red Hat Virtualization
2.11. Thin Provisioning and Storage Over-Commitment
Note
2.12. Logical Volume Extension
Chapter 3. Network
3.1. Network Architecture
3.2. Introduction: Basic Networking Terms
- A Network Interface Controller (NIC)
- A Bridge
- A Bond
- A Virtual NIC
- A Virtual LAN (VLAN)
3.3. Network Interface Controller
3.4. Bridge
3.5. Bonds
Important
Bonding Modes
Mode 0 (round-robin policy)
- Transmits packets through network interface cards in sequential order. Packets are transmitted in a loop that begins with the first available network interface card in the bond and ends with the last available network interface card in the bond. All subsequent loops then start with the first available network interface card. Mode 0 offers fault tolerance and balances the load across all network interface cards in the bond. However, Mode 0 cannot be used in conjunction with bridges, and is therefore not compatible with virtual machine logical networks.
Mode 1 (active-backup policy)
- Sets all network interface cards to a backup state while one network interface card remains active. In the event of failure in the active network interface card, one of the backup network interface cards replaces that network interface card as the only active network interface card in the bond. The MAC address of the bond in Mode 1 is visible on only one port to prevent any confusion that might otherwise be caused if the MAC address of the bond changed to reflect that of the active network interface card. Mode 1 provides fault tolerance and is supported in Red Hat Virtualization.
Mode 2 (XOR policy)
- Selects the network interface card through which to transmit packets based on the result of an XOR operation on the source and destination MAC addresses modulo network interface card slave count. This calculation ensures that the same network interface card is selected for each destination MAC address used. Mode 2 provides fault tolerance and load balancing and is supported in Red Hat Virtualization.
Mode 3 (broadcast policy)
- Transmits all packets to all network interface cards. Mode 3 provides fault tolerance and is supported in Red Hat Virtualization.
Mode 4 (IEEE 802.3ad policy)
- Creates aggregation groups in which the interfaces share the same speed and duplex settings. Mode 4 uses all network interface cards in the active aggregation group in accordance with the IEEE 802.3ad specification and is supported in Red Hat Virtualization.
Mode 5 (adaptive transmit load balancing policy)
- Ensures the distribution of outgoing traffic accounts for the load on each network interface card in the bond and that the current network interface card receives all incoming traffic. If the network interface card assigned to receive traffic fails, another network interface card is assigned to the role of receiving incoming traffic. Mode 5 cannot be used in conjunction with bridges, therefore it is not compatible with virtual machine logical networks.
Mode 6 (adaptive load balancing policy)
- Combines Mode 5 (adaptive transmit load balancing policy) with receive load balancing for IPv4 traffic without any special switch requirements. ARP negotiation is used for balancing the receive load. Mode 6 cannot be used in conjunction with bridges, therefore it is not compatible with virtual machine logical networks.
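On a Red Hat Enterprise Linux host, a bond is expressed as ifcfg files under /etc/sysconfig/network-scripts. In Red Hat Virtualization these files are written by VDSM when an administrator configures bonding from the Manager, so the following is only an illustrative sketch with hypothetical interface names, showing a Mode 4 (IEEE 802.3ad) bond:
/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
BONDING_OPTS="mode=4 miimon=100"
ONBOOT=yes
BOOTPROTO=none
/etc/sysconfig/network-scripts/ifcfg-eth0 (one such file per slave interface):
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none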
3.6. Switch Configuration for Bonding
Important
3.7. Virtual Network Interface Cards
libvirt assigns the virtual network interface card a PCI address. The MAC address and PCI address are then used to obtain the name of the virtual network interface card (for example, eth0) in the virtual machine.
Running the ip addr show command on a virtualization host shows all of the virtual network interface cards that are associated with virtual machines on that host. Also visible are any network bridges that have been created to back logical networks, and any network interface cards used by the host.
[root@rhev-host-01 ~]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:21:86:a2:85:cd brd ff:ff:ff:ff:ff:ff
    inet6 fe80::221:86ff:fea2:85cd/64 scope link
       valid_lft forever preferred_lft forever
3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 00:21:6b:cc:14:6c brd ff:ff:ff:ff:ff:ff
5: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 4a:d5:52:c2:7f:4b brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
7: bond4: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
8: bond1: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
9: bond2: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
10: bond3: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
11: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:21:86:a2:85:cd brd ff:ff:ff:ff:ff:ff
    inet 10.64.32.134/23 brd 10.64.33.255 scope global ovirtmgmt
    inet6 fe80::221:86ff:fea2:85cd/64 scope link
       valid_lft forever preferred_lft forever
The output above shows one loopback device (lo), one Ethernet device (eth0), one wireless device (wlan0), one VDSM dummy device (;vdsmdummy;), five bond devices (bond0, bond4, bond1, bond2, bond3), and one network bridge (ovirtmgmt).
Bridge membership can be displayed using the brctl show command:
[root@rhev-host-01 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
ovirtmgmt       8000.e41f13b7fdd4       no              vnet002
                                                        vnet001
                                                        vnet000
                                                        eth0
The output of the brctl show command shows that the virtio virtual network interface cards are members of the ovirtmgmt bridge. All of the virtual machines that the virtual network interface cards are associated with are connected to the ovirtmgmt logical network. The eth0 network interface card is also a member of the ovirtmgmt bridge. The eth0 device is cabled to a switch that provides connectivity beyond the host.
3.8. Virtual LAN (VLAN)
3.9. Network Labels
Network Label Associations
- When you attach a label to a logical network, that logical network will be automatically associated with any physical host network interfaces with the given label.
- When you attach a label to a physical host network interface, any logical networks with the given label will be automatically associated with that physical host network interface.
- Changing the label attached to a logical network or physical host network interface acts in the same way as removing a label and adding a new label. The association between related logical networks or physical host network interfaces is updated.
Network Labels and Clusters
- When a labeled logical network is added to a cluster and there is a physical host network interface in that cluster with the same label, the logical network is automatically added to that physical host network interface.
- When a labeled logical network is detached from a cluster and there is a physical host network interface in that cluster with the same label, the logical network is automatically detached from that physical host network interface.
Network Labels and Logical Networks With Roles
- When a labeled logical network is assigned to act as a display network or migration network, that logical network is then configured on the physical host network interface using DHCP so that the logical network can be assigned an IP address. Setting a label on a role network (for instance, "a migration network" or "a display network") causes a mass deployment of that network on all hosts. Such mass additions of networks are achieved through the use of DHCP; this method was chosen over static addressing because manually assigning many static IP addresses does not scale.
3.10. Cluster Networking
- Clusters
- Logical Networks

Figure 3.1. Networking within a cluster
3.11. Logical Networks
The ovirtmgmt network is created by default during the installation of the Red Hat Virtualization Manager to be used for management communication between the Manager and hosts. A typical use for logical networks is to group network traffic with similar requirements and usage together. In many cases, a storage network and a display network are created by an administrator to isolate traffic of each respective type for optimization and troubleshooting. The types of logical network are:
- logical networks that carry virtual machine network traffic,
- logical networks that do not carry virtual machine network traffic,
- optional logical networks,
- and required networks.

Figure 3.2. The ovirtmgmt logical network.
Example 3.1. Example usage of a logical network.
The hosts in this example use the default logical network, ovirtmgmt, for all networking functions. The system administrator responsible for Pink decides to isolate network testing for a web server by placing the web server and some client virtual machines on a separate logical network. She decides to call the new logical network network_testing.
She applies the network_testing network to White while the host is in maintenance mode. Next she activates White, migrates all of the running virtual machines off of Red, and repeats the process for Red.
When each host has the network_testing logical network bridged to a physical network interface, the network_testing logical network becomes Operational and is ready to be used by virtual machines.
3.12. Required Networks, Optional Networks, and Virtual Machine Networks
3.13. Virtual Machine Connectivity
When a virtual machine that uses the ovirtmgmt logical network starts, its VNIC is added as a member of the ovirtmgmt bridge of the host on which that virtual machine runs.
3.14. Port Mirroring
- Hot plugging vNICs with a profile that has port mirroring enabled is not supported.
- Port mirroring cannot be altered when the vNIC profile is attached to a virtual machine.
Important
3.15. Host Networking Configurations
- Bridge and NIC configuration.
- Bridge, VLAN, and NIC configuration.
- Bridge, Bond, and VLAN configuration.
- Multiple Bridge, Multiple VLAN, and NIC configuration.
3.16. Bridge Configuration

Figure 3.3. Bridge and NIC configuration
One bridge, named ovirtmgmt, is created when the Red Hat Virtualization Manager installs. On installation, the Red Hat Virtualization Manager installs VDSM on the host. The VDSM installation process creates the bridge ovirtmgmt. The ovirtmgmt bridge then obtains the IP address of the host to enable management communication for the host.
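Expressed as ifcfg files on a Red Hat Enterprise Linux host, a bridge and NIC configuration of this kind looks roughly like the following. The files are generated by VDSM rather than written by hand, and the interface name and addressing method shown here are assumptions:
/etc/sysconfig/network-scripts/ifcfg-ovirtmgmt:
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=dhcp
/etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
BRIDGE=ovirtmgmt
ONBOOT=yes
BOOTPROTO=none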
3.17. VLAN Configuration

Figure 3.4. Bridge, VLAN, and NIC configuration
3.18. Bridge and Bond Configuration

Figure 3.5. Bridge, Bond, and NIC configuration
3.19. Multiple Bridge, Multiple VLAN, and NIC Configuration

Figure 3.6. Multiple Bridge, Multiple VLAN, and NIC configuration
3.20. Multiple Bridge, Multiple VLAN, and Bond Configuration

Figure 3.7. Multiple Bridge, Multiple VLAN, and Multiple NIC with Bond connection
Chapter 4. Power Management
4.1. Introduction to Power Management and Fencing
4.2. Power Management by Proxy in Red Hat Virtualization
- Any host in the same cluster as the host requiring fencing.
- Any host in the same data center as the host requiring fencing.
4.3. Power Management
- American Power Conversion (apc).
- Bladecenter.
- Cisco Unified Computing System (cisco_ucs).
- Dell Remote Access Card 5 (drac5).
- Dell Remote Access Card 7 (drac7).
- Electronic Power Switch (eps).
- HP BladeSystem (hpblade).
- Integrated Lights Out (ilo, ilo2, ilo3, ilo4).
- Intelligent Platform Management Interface (ipmilan).
- Remote Supervisor Adapter (rsa).
- rsb.
- Western Telematic, Inc (wti).
Note
- Status: check the status of the host.
- Start: power on the host.
- Stop: power down the host.
- Restart: restart the host. This is implemented as stop, wait, status, start, wait, status.
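Each of these actions maps to an invocation of the configured fence agent. For troubleshooting, an agent can also be run manually from another host. The following is only an illustrative sketch: the address and credentials are placeholders, and option names vary between fence agent versions:
# fence_ipmilan -a 192.0.2.10 -l admin -p password -o status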
4.4. Fencing
4.5. Soft-Fencing Hosts
- On the first network failure, the status of the host changes to "connecting".
- The Manager then makes three attempts to ask VDSM for its status, or it waits for an interval determined by the load on the host. The length of the interval is determined by the formula: TimeoutToResetVdsInSeconds (the default is 60 seconds) + [DelayResetPerVmInSeconds (the default is 0.5 seconds)] * (the count of running virtual machines on the host) + [DelayResetForSpmInSeconds (the default is 20 seconds)] * 1 (if the host runs as SPM) or 0 (if the host does not run as SPM). To give VDSM the maximum amount of time to respond, the Manager chooses the longer of the two options mentioned above (three attempts to retrieve the status of VDSM or the interval determined by the above formula). A worked example follows this list.
- If the host does not respond when that interval has elapsed,
vdsm restart
is executed via SSH. - If
vdsm restart
does not succeed in re-establishing the connection between the host and the Manager, the status of the host changes toNon Responsive
and, if power management is configured, fencing is handed off to the external fencing agent.
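For example, using the default values for a host that is running 10 virtual machines and does not hold the SPM role, the interval is 60 + (0.5 × 10) + (20 × 0) = 65 seconds.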
Note
4.6. Using Multiple Power Management Fencing Agents
- Concurrent: Both primary and secondary agents have to respond to the Stop command for the host to be stopped. If one agent responds to the Start command, the host will go up.
- Sequential: To stop or start a host, the primary agent is used first, and if it fails, the secondary agent is used.
Chapter 5. Load Balancing, Scheduling, and Migration
5.1. Load Balancing, Scheduling, and Migration
- Virtual machine start - Resources are checked to determine on which host a virtual machine will start.
- Virtual machine migration - Resources are checked in order to determine an appropriate target host.
- Time elapses - Resources are checked at a regular interval to determine whether individual host load is in compliance with cluster load balancing policy.
5.2. Load Balancing Policy
5.3. Load Balancing Policy: VM_Evenly_Distributed
5.4. Load Balancing Policy: Evenly_Distributed

Figure 5.1. Evenly Distributed Scheduling Policy
5.5. Load Balancing Policy: Power_Saving

Figure 5.2. Power Saving Scheduling Policy
5.6. Load Balancing Policy: None
5.7. Load Balancing Policy: InClusterUpgrade
Important
5.8. Highly Available Virtual Machine Reservation
5.9. Scheduling
5.10. Migration
- A bandwidth limit of 52 MiBps (mebibytes per second) is imposed on each virtual machine migration.
- A migration will time out after 64 seconds per GB of virtual machine memory (a worked example follows this list).
- A migration will abort if progress is stalled for 240 seconds.
- Concurrent outgoing migrations are limited to one per CPU core per host, or 2, whichever is smaller.
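For example, under the timeout rule above, a virtual machine with 8 GB of memory is allowed 8 × 64 = 512 seconds for its migration to complete before it times out.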
Chapter 6. Directory Services
6.1. Directory Services
- Portal logins (User, Power User, Administrator, REST API).
- Queries to display user information.
- Adding the Manager to a domain.
6.2. Local Authentication: Internal Domain
The internal domain has only one user: the admin@internal user. Taking this approach to initial authentication allows Red Hat Virtualization to be evaluated without requiring a complete, functional directory server, and ensures an administrative account is available to troubleshoot any issues with external directory services.
6.3. Remote Authentication Using GSSAPI
The Red Hat Virtualization Manager uses the engine-manage-domains tool to become a part of an RHDS, AD, or IdM domain. This requires that the Manager be provided with credentials for an account from the RHDS, AD, or IdM directory server for the domain with sufficient privileges to join a system to the domain. After domains have been added, domain users can be authenticated by the Red Hat Virtualization Manager against the directory server using a password. The Manager uses a framework called the Simple Authentication and Security Layer (SASL) which in turn uses the Generic Security Services Application Program Interface (GSSAPI) to securely verify the identity of a user, and ascertain the authorization level available to the user.

Figure 6.1. GSSAPI Authentication
Chapter 7. Templates and Pools
7.1. Templates and Pools
7.2. Templates
7.3. Pools
Example 7.1. Example Pool Usage
Chapter 8. Virtual Machine Snapshots
8.1. Snapshots
Note
- Creation, which involves the first snapshot created for a virtual machine.
- Previews, which involves previewing a snapshot to determine whether or not to restore the system data to the point in time that the snapshot was taken.
- Deletion, which involves deleting a restoration point that is no longer required.
8.2. Live Snapshots in Red Hat Virtualization
Virtual machines should have the qemu-guest-agent installed and running to enable quiescing before snapshots.
8.3. Snapshot Creation

Figure 8.1. Initial Snapshot Creation

Figure 8.2. Additional Snapshot Creation
8.4. Snapshot Previews

Figure 8.3. Preview Snapshot
8.5. Snapshot Deletion
For example, suppose the snapshot being deleted, Delete_snapshot, is 200 GB, and the subsequent snapshot, Next_snapshot, is 100 GB. Delete_snapshot is deleted and a new logical volume is created, temporarily named Snapshot_merge, with a size of 200 GB. Snapshot_merge ultimately resizes to 300 GB to accommodate the total merged contents of both Delete_snapshot and Next_snapshot. Next_snapshot is then renamed Delete_me_too_snapshot so that Snapshot_merge can be renamed Next_snapshot. Finally, Delete_snapshot and Delete_me_too_snapshot are deleted.

Figure 8.4. Snapshot Deletion
Chapter 9. Hardware Drivers and Devices
9.1. Virtualized Hardware
- Emulated devices
- Emulated devices, sometimes referred to as virtual devices, exist entirely in software. Emulated device drivers are a translation layer between the operating system running on the host (which manages the source device) and the operating systems running on the guests. The device level instructions directed to and from the emulated device are intercepted and translated by the hypervisor. Any device of the same type as that being emulated and recognized by the Linux kernel is able to be used as the backing source device for the emulated drivers.
- Para-virtualized Devices
- Para-virtualized devices require the installation of device drivers on the guest operating system providing it with an interface to communicate with the hypervisor on the host machine. This interface is used to allow traditionally intensive tasks such as disk I/O to be performed outside of the virtualized environment. Lowering the overhead inherent in virtualization in this manner is intended to allow guest operating system performance closer to that expected when running directly on physical hardware.
- Physically shared devices
- Certain hardware platforms allow virtualized guests to directly access various hardware devices and components. This process in virtualization is known as passthrough or device assignment. Passthrough allows devices to appear and behave as if they were physically attached to the guest operating system.
9.2. Stable Device Addresses in Red Hat Virtualization
9.3. Central Processing Unit (CPU)
Note
9.4. System Devices
- the host bridge,
- the ISA bridge and USB bridge (The USB and ISA bridges are the same device),
- the graphics card (using either the Cirrus or qxl driver), and
- the memory balloon device.
9.5. Network Devices
- The e1000 network interface controller exposes a virtualized Intel PRO/1000 (e1000) to guests.
- The virtio network interface controller exposes a para-virtualized network device to guests.
- The rtl8139 network interface controller exposes a virtualized Realtek Semiconductor Corp RTL8139 to guests.
9.6. Graphics Devices
- The cirrus emulates a Cirrus CLGD 5446 PCI VGA card.
- The vga emulates a dummy VGA card with Bochs VESA extensions (hardware level, including all non-standard modes).
9.7. Storage Devices
- The IDE driver exposes an emulated block device to guests. The emulated IDE driver can be used to attach any combination of up to four virtualized IDE hard disks or virtualized IDE CD-ROM drives to each virtualized guest. The emulated IDE driver is also used to provide virtualized DVD-ROM drives.
- The VirtIO driver exposes a para-virtualized block device to guests. The para-virtualized block driver is a driver for all storage devices supported by the hypervisor attached to the virtualized guest (except for floppy disk drives, which must be emulated).
9.8. Sound Devices
- The ac97 emulates an Intel 82801AA AC97 Audio compatible sound card.
- The es1370 emulates an ENSONIQ AudioPCI ES1370 sound card.
9.9. Serial Driver
The para-virtualized serial driver (virtio-serial) is a bytestream-oriented, character stream driver. The para-virtualized serial driver provides a simple communication interface between the host's user space and the guest's user space where networking is not available or is unusable.
9.10. Balloon Driver
Chapter 10. Minimum Requirements and Technical Limitations
10.1. Minimum Requirements and Supported Limits
10.2. Resource Limitations
Table 10.1. Resource Limitations
Item | Limitations |
---|---|
Storage Domains | A minimum of 2 storage domains per data center is recommended. |
Hosts | Red Hat supports a maximum of 200 hosts per Red Hat Virtualization Manager. |
10.3. Cluster Limitations
- All managed hypervisors must be in a cluster.
- All managed hypervisors within a cluster must have the same CPU type. Intel and AMD CPUs cannot co-exist within the same cluster.
Note
10.4. Storage Domain Limitations
Table 10.2. Storage Domain Limitations
Item | Limitations |
---|---|
Storage Types |
Supported storage types are: NFS, iSCSI, FCP, GlusterFS, other POSIX compliant file systems, and local storage attached to hosts.
New ISO and export storage domains in Red Hat Virtualization 4.0 can be provided by any file-based storage (NFS, POSIX, or GlusterFS).
|
Logical Unit Numbers (LUNs) | No more than 300 LUNs are permitted for each storage domain that is provided by iSCSI or FCP. |
Logical Volumes (LVs) |
In Red Hat Virtualization, logical volumes represent virtual disks for virtual machines, templates, and virtual machine snapshots.
No more than 350 logical volumes are recommended for each storage domain that is provided by iSCSI or FCP. If the number of logical volumes in a given storage domain exceeds this number, splitting available storage into separate storage domains with no more than 350 logical volumes each is recommended.
The root cause of this limitation is the size of LVM metadata. As the number of logical volumes increases, the LVM metadata associated with those logical volumes also increases. When this metadata exceeds 1 MB in size, the performance of provisioning operations such as creating new disks or snapshots decreases, and lvextend operations for thinly provisioning a logical volume when running a QCOW disk take longer to run.
Further detail about logical volumes is available in https://access.redhat.com/solutions/441203.
|
Note
10.5. Red Hat Virtualization Manager Limitations
Table 10.3. Red Hat Virtualization Manager Limitations
Item | Limitations |
---|---|
RAM |
|
PCI Devices |
|
Storage |
|
Note
10.6. Hypervisor Requirements
Table 10.4. Red Hat Virtualization Host Requirements and Supported Limits
Item | Support Limit |
---|---|
CPU |
A minimum of 1 physical CPU is required. Red Hat Virtualization supports the use of these CPU models in hosts:
All CPUs must have support for the Intel® 64 or AMD64 CPU extensions, and the AMD-V™ or Intel VT® hardware virtualization extensions enabled. Support for the No eXecute flag (NX) is also required.
|
RAM |
The amount of RAM required for each virtual machine varies depending on:
Additionally, KVM can over-commit physical RAM for virtual machines. It does this by only allocating RAM for virtual machines as required and shifting underutilized virtual machines into swap.
See https://access.redhat.com/articles/rhel-limits for the maximum and minimum supported RAM.
|
Storage |
The minimum supported internal storage for a host is the total of the following list:
Note that these are the minimum storage requirements for host installation. Using the default allocations, which consume more storage space, is recommended.
|
PCI Devices |
At least one network controller is required with a recommended minimum bandwidth of 1 Gbps.
|
Important
Virtualization hardware is unavailable. (No virtualization hardware was detected on this system)
- At the host boot screen press any key and select the Boot or Boot with serial console entry from the list. Press Tab to edit the kernel parameters for the selected option. After the last kernel parameter listed, ensure there is a space and append the rescue parameter.
- Press Enter to boot into rescue mode.
- At the prompt which appears, determine that your processor has the virtualization extensions and that they are enabled by running this command:
# grep -E 'svm|vmx' /proc/cpuinfo
If any output is shown, the processor is hardware virtualization capable. If no output is shown, it is still possible that your processor supports hardware virtualization. In some circumstances manufacturers disable the virtualization extensions in the BIOS. Where you believe this to be the case, consult the system's BIOS and the motherboard manual provided by the manufacturer.
- As an additional check, verify that the kvm modules are loaded in the kernel:
# lsmod | grep kvm
If the output includes kvm_intel or kvm_amd, then the kvm hardware virtualization modules are loaded and your system meets requirements.
10.7. Guest Requirements and Support Limits
Table 10.5. Virtualized Hardware
Item | Limitations |
---|---|
CPU |
A maximum of 240 virtualized CPUs per guest is supported in Red Hat Enterprise Linux 7.
|
RAM |
Different guests have different RAM requirements. The amount of RAM required for each guest varies based on the requirements of the guest operating system and the load under which the guest is operating.
See https://access.redhat.com/articles/rhel-kvm-limits for the maximum and minimum supported RAM for guest machines.
|
PCI devices |
A maximum of 31 virtualized PCI devices per guest is supported. A number of system devices count against this limit, some of which are mandatory. Mandatory devices which count against the PCI devices limit include the PCI host bridge, ISA bridge, USB bridge, board bridge, graphics card, and the IDE or VirtIO block device.
|
Storage |
A maximum of 28 virtualized storage devices per guest is supported, composed of a possible 3 IDE and 25 Virtio.
|
10.8. SPICE Limitations
10.9. Additional References
- Red Hat Enterprise Linux - System Administrator's Guide
- A guide to the deployment, configuration and administration of Red Hat Enterprise Linux.
- Red Hat Enterprise Linux - DM-Multipath Guide
- A guide to the use of Device-Mapper Multipathing on Red Hat Enterprise Linux.
- Red Hat Enterprise Linux - Installation Guide
- A guide to the installation of Red Hat Enterprise Linux.
- Red Hat Enterprise Linux - Storage Administration Guide
- A guide to the management of storage devices and file systems on Red Hat Enterprise Linux.
- Red Hat Enterprise Linux - Virtualization Deployment and Administration Guide
- A guide to the installation, configuration, administration and troubleshooting of virtualization technologies in Red Hat Enterprise Linux.