Virtualization
OpenShift Virtualization installation, usage, and release notes
Chapter 1. About OpenShift Virtualization
Learn about OpenShift Virtualization’s capabilities and support scope.
1.1. What you can do with OpenShift Virtualization
OpenShift Virtualization is an add-on to OpenShift Container Platform that allows you to run and manage virtual machine workloads alongside container workloads.
OpenShift Virtualization adds new objects into your OpenShift Container Platform cluster by using Kubernetes custom resources to enable virtualization tasks. These tasks include:
- Creating and managing Linux and Windows virtual machines
- Connecting to virtual machines through a variety of consoles and CLI tools
- Importing and cloning existing virtual machines
- Managing network interface controllers and storage disks attached to virtual machines
- Live migrating virtual machines between nodes
An enhanced web console provides a graphical portal to manage these virtualized resources alongside the OpenShift Container Platform cluster containers and infrastructure.
OpenShift Virtualization is designed and tested to work well with Red Hat OpenShift Data Foundation features.
You can use OpenShift Virtualization with the OVN-Kubernetes, OpenShift SDN, or one of the other certified default Container Network Interface (CNI) network providers listed in Certified OpenShift CNI Plug-ins.
1.1.1. OpenShift Virtualization supported cluster version
OpenShift Virtualization 4.10 is supported for use on OpenShift Container Platform 4.10 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.
Chapter 2. Start here with OpenShift Virtualization
Use the following tables to find content to help you learn about and use OpenShift Virtualization.
2.1. Cluster administrator
- Installing OpenShift Virtualization using the OpenShift Virtualization console or CLI
2.2. Virtualization administrator
- Connecting virtual machines to the default pod network and to multiple networks
- Importing virtual machines with the Migration Toolkit for Virtualization
2.3. Virtual machine administrator / developer
- Passing configuration data to virtual machines using secrets, configuration maps, and service accounts
Chapter 3. OpenShift Virtualization release notes
3.1. About Red Hat OpenShift Virtualization
Red Hat OpenShift Virtualization enables you to bring traditional virtual machines (VMs) into OpenShift Container Platform where they run alongside containers, and are managed as native Kubernetes objects.
You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShift SDN default Container Network Interface (CNI) network provider.
Learn more about what you can do with OpenShift Virtualization.
3.1.1. OpenShift Virtualization supported cluster version
OpenShift Virtualization 4.10 is supported for use on OpenShift Container Platform 4.10 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.
3.1.2. Supported guest operating systems
To view the supported guest operating systems for OpenShift Virtualization, refer to Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization and OpenShift Virtualization.
3.2. Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
3.3. New and changed features
OpenShift Virtualization is certified in Microsoft’s Windows Server Virtualization Validation Program (SVVP) to run Windows Server workloads.
The SVVP Certification applies to:
- Red Hat Enterprise Linux CoreOS workers. In the Microsoft SVVP Catalog, they are named Red Hat OpenShift Container Platform 4 on RHEL CoreOS 8.
- Intel and AMD CPUs.
- OpenShift Virtualization is now integrated with OpenShift Service Mesh. You can connect virtual machines to a service mesh to monitor, visualize, and control traffic between pods that run virtual machine workloads on the default pod network with IPv4.
- OpenShift Virtualization now provides a unified API for the automatic import and update of pre-defined boot sources.
3.3.1. Quick starts
- Quick start tours are available for several OpenShift Virtualization features. To view the tours, click the Help icon ? in the menu bar on the header of the OpenShift Virtualization console and then select Quick Starts. You can filter the available tours by entering the virtual machine keyword in the Filter field.
3.3.2. Installation
- OpenShift Virtualization workloads, such as virt-launcher pods, now automatically update if they support live migration. You can configure workload update strategies or opt out of future automatic updates by editing the HyperConverged custom resource.
You can now use OpenShift Virtualization with single node clusters, also known as Single Node OpenShift (SNO).
Note: Single node clusters are not configured for high-availability operation, which results in significant changes to OpenShift Virtualization behavior.
- Resource requests and priority classes are now defined for all OpenShift Virtualization control plane components.
3.3.3. Networking
- You can now configure multiple nmstate-enabled nodes concurrently by using a single NodeNetworkConfigurationPolicy manifest.
- Live migration is now supported by default for virtual machines that are attached to an SR-IOV network interface.
3.3.4. Storage
- Online snapshots are supported for virtual machines that have hot-plugged virtual disks. However, hot-plugged disks that are not in the virtual machine specification are not included in the snapshot.
- You can use the Kubernetes Container Storage Interface (CSI) driver with the hostpath provisioner (HPP) to configure local storage for your virtual machines. Using the CSI driver minimizes disruption to your existing OpenShift Container Platform nodes and clusters when configuring local storage.
3.3.5. Web console
- The OpenShift Virtualization dashboard provides resource consumption data for virtual machines and associated pods. The visualization metrics displayed in the OpenShift Virtualization dashboard are based on Prometheus Query Language (PromQL) queries.
3.4. Deprecated and removed features
3.4.1. Deprecated features
Deprecated features are included in the current release and supported. However, they will be removed in a future release and are not recommended for new deployments.
- In a future release, support for the legacy HPP custom resource, and the associated storage class, will be deprecated. Beginning in OpenShift Virtualization 4.10, the HPP Operator uses the Kubernetes Container Storage Interface (CSI) driver to configure local storage. The Operator continues to support the existing (legacy) format of the HPP custom resource and the associated storage class. If you use the HPP Operator, plan to create a storage class for the CSI driver as part of your migration strategy.
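For example, a storage class for the HPP CSI driver might resemble the following sketch. The class name and the storagePool parameter value are placeholders for illustration; confirm the exact parameters against your HPP Operator configuration:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi # hypothetical storage class name
provisioner: kubevirt.io.hostpath-provisioner # CSI driver used by the HPP Operator
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  storagePool: my-storage-pool # assumed parameter; use the name of your HPP storage pool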
3.4.2. Removed features
Removed features are not supported in the current release.
- The VM Import Operator has been removed from OpenShift Virtualization with this release. It is replaced by the Migration Toolkit for Virtualization.
This release removes the template for CentOS Linux 8, which reached End of Life (EOL) on December 31, 2021. However, OpenShift Container Platform now includes templates for CentOS Stream 8 and CentOS Stream 9.
Note: All CentOS distributions are community-supported.
3.5. Technology Preview features
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. The Red Hat Customer Portal provides the Technology Preview Features Support Scope for these features:
- You can now use the Red Hat Enterprise Linux 9 Beta template to create virtual machines.
- You can now deploy OpenShift Virtualization on AWS bare metal nodes.
- OpenShift Virtualization critical alerts now have corresponding descriptions of problems that require immediate attention, reasons for why each alert occurs, a troubleshooting process to diagnose the source of the problem, and steps for resolving each alert.
- A cluster administrator can now back up namespaces that contain VMs by using the OpenShift API for Data Protection with the OpenShift Virtualization plug-in.
- Administrators can now declaratively create and expose mediated devices such as virtual graphics processing units (vGPUs) by editing the HyperConverged CR. Virtual machine owners can then assign these devices to VMs.
- You can transfer the static IP configuration of the NIC attached to the bridge by applying a single NodeNetworkConfigurationPolicy manifest to the cluster.
- You can now install OpenShift Virtualization on IBM Cloud bare-metal servers. Bare-metal servers offered by other cloud providers are not supported.
3.6. Bug fixes
- If you initiate a cloning operation before the clone source becomes available, the cloning operation now completes successfully without using a workaround. (BZ#1855182)
- Editing a virtual machine fails if the VM references a deleted template that was provided by OpenShift Virtualization before version 4.8. In OpenShift Virtualization 4.8 and later, deleted OpenShift Virtualization-provided templates are automatically recreated by the OpenShift Virtualization Operator. (BZ#1929165)
- You can now successfully use the Send Keys and Disconnect buttons when using a virtual machine with a VNC console. (BZ#1964789)
- When you create a virtual machine, its unique fully qualified domain name (FQDN) now contains the cluster domain name. (BZ#1998300)
- If you hot-plug a virtual disk and then force delete the virt-launcher pod, you no longer lose data. (BZ#2007397)
- OpenShift Virtualization now issues a HPPSharingPoolPathWithOS alert if you try to install the hostpath provisioner (HPP) on a path that shares the filesystem with other critical components. To use the HPP to provide storage for virtual machine disks, configure it with dedicated storage that is separate from the node's root filesystem. Otherwise, the node might run out of storage and become non-functional. (BZ#2038985)
- If you provision a virtual machine disk, OpenShift Virtualization now allocates a persistent volume claim (PVC) that is just large enough to accommodate the requested disk size, rather than issuing a KubePersistentVolumeFillingUp alert for each VM disk PVC. You can monitor disk usage from within the virtual machine itself. (BZ#2039489)
- You can now create a virtual machine snapshot for VMs with hot-plugged disks. (BZ#2042908)
- You can now successfully import a VM image when using a cluster-wide proxy configuration. (BZ#2046271)
3.7. Known issues
If a single node contains more than 50 images, pod scheduling might be imbalanced across nodes. This is because the list of images on a node is shortened to 50 by default. (BZ#1984442)
- As a workaround, you can disable the image limit by editing the KubeletConfig object and setting the value of nodeStatusMaxImages to -1, as shown in the sketch below.
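A minimal sketch of such a KubeletConfig object follows. The object name and the machine config pool label are assumptions for illustration; the label must match a label on the MachineConfigPool that you want to target:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-node-status-max-images # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: enabled # assumed label on the target MachineConfigPool
  kubeletConfig:
    nodeStatusMaxImages: -1 # disables the image limit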
If you deploy the hostpath provisioner on a cluster where any node has a fully qualified domain name (FQDN) that exceeds 42 characters, the provisioner fails to bind PVCs. (BZ#2057157)
Example error message
E0222 17:52:54.088950 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: unable to parse requirement: values[0][csi.storage.k8s.io/managed-by]: Invalid value: "external-provisioner-<node_FQDN>": must be no more than 63 characters 1
- 1 Though the error message refers to a maximum of 63 characters, this includes the external-provisioner- string that is prefixed to the node's FQDN.
As a workaround, disable the storageCapacity option in the hostpath provisioner CSI driver by running the following command:
$ oc patch csidriver kubevirt.io.hostpath-provisioner --type merge --patch '{"spec": {"storageCapacity": false}}'
If your OpenShift Container Platform cluster uses OVN-Kubernetes as the default Container Network Interface (CNI) provider, you cannot attach a Linux bridge or bonding device to a host’s default interface because of a change in the host network topology of OVN-Kubernetes. (BZ#1885605)
- As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN default CNI provider.
Running virtual machines that cannot be live migrated might block an OpenShift Container Platform cluster upgrade. This includes virtual machines that use hostpath provisioner storage or SR-IOV network interfaces.
As a workaround, you can reconfigure the virtual machines so that they can be powered off during a cluster upgrade. In the spec section of the virtual machine configuration file, modify the evictionStrategy and runStrategy fields:
- Remove the evictionStrategy: LiveMigrate field. See Configuring virtual machine eviction strategy for more information on how to configure eviction strategy.
- Set the runStrategy field to Always.
Set the default CPU model by running the following command:
Note: You must make this change before starting the virtual machines that support live migration.
$ oc annotate --overwrite -n openshift-cnv hyperconverged kubevirt-hyperconverged kubevirt.kubevirt.io/jsonpatch='[ { "op": "add", "path": "/spec/configuration/cpuModel", "value": "<cpu_model>" } ]' 1
- 1 Replace <cpu_model> with the actual CPU model value. You can determine this value by running oc describe node <node> for all nodes and looking at the cpu-model-<name> labels. Select the CPU model that is present on all of your nodes.
If you use Red Hat Ceph Storage or Red Hat OpenShift Data Foundation Storage, cloning more than 100 VMs at once might fail. (BZ#1989527)
As a workaround, you can perform a host-assisted copy by setting spec.cloneStrategy: copy in the storage profile manifest. For example:
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <provisioner_class>
# ...
spec:
  claimPropertySets:
  - accessModes:
    - ReadWriteOnce
    volumeMode: Filesystem
  cloneStrategy: copy 1
status:
  provisioner: <provisioner>
  storageClass: <provisioner_class>
- 1 The default cloning method is set to copy.
In some instances, multiple virtual machines can mount the same PVC in read-write mode, which might result in data corruption. (BZ#1992753)
- As a workaround, avoid using a single PVC in read-write mode with multiple VMs.
The Pod Disruption Budget (PDB) prevents pod disruptions for migratable virtual machine images. If the PDB detects pod disruption, then openshift-monitoring sends a PodDisruptionBudgetAtLimit alert every 60 minutes for virtual machine images that use the LiveMigrate eviction strategy. (BZ#2026733)
- As a workaround, silence the alerts.
On a large cluster, the OpenShift Virtualization MAC pool manager might take too much time to boot and OpenShift Virtualization might not become ready. (BZ#2035344)
As a workaround, if you do not require MAC pooling functionality, then disable this sub-component by running the following command:
$ oc annotate --overwrite -n openshift-cnv hco kubevirt-hyperconverged 'networkaddonsconfigs.kubevirt.io/jsonpatch=[ { "op": "replace", "path": "/spec/kubeMacPool", "value": null } ]'
OpenShift Virtualization links a service account token in use by a pod to that specific pod. OpenShift Virtualization implements a service account volume by creating a disk image that contains a token. If you migrate a VM, then the service account volume becomes invalid. (BZ#2037611)
- As a workaround, use user accounts rather than service accounts because user account tokens are not bound to a specific pod.
- If a VM crashes or hangs during shutdown, new shutdown requests do not stop the VM. (BZ#2040766)
If you configure the HyperConverged custom resource (CR) to enable mediated devices before drivers are installed, enablement of mediated devices does not occur. This issue can be triggered by updates. For example, if virt-handler is updated before the daemon set that installs NVIDIA drivers, then nodes cannot provide virtual machine GPUs. (BZ#2046298)
As a workaround:
- Remove mediatedDevicesConfiguration and permittedHostDevices from the HyperConverged CR.
- Update both mediatedDevicesConfiguration and permittedHostDevices stanzas with the configuration you want to use.
- YAML examples in the VM wizard are hardcoded and do not always contain the latest upstream changes. (BZ#2055492)
If you clone more than 100 VMs using the csi-clone cloning strategy, then the Ceph CSI might not purge the clones. Manually deleting the clones can also fail. (BZ#2055595)
- As a workaround, you can restart the ceph-mgr to purge the VM clones.
A non-privileged user cannot use the Add Network Interface button on the VM Network Interfaces tab. (BZ#2056420)
- As a workaround, non-privileged users can add additional network interfaces while creating the VM by using the VM wizard.
A non-privileged user cannot add disks to a VM due to RBAC rules. (BZ#2056421)
- As a workaround, manually add the RBAC rule to allow specific users to add disks.
The web console does not display virtual machine templates that are deployed to a custom namespace. Only templates deployed to the default namespace display in the web console. (BZ#2054650)
- As a workaround, avoid deploying templates to a custom namespace.
On a Single Node OpenShift (SNO) cluster, updating the cluster fails if a VMI has the spec.evictionStrategy field set to LiveMigrate. For live migration to succeed, the cluster must have more than one worker node. (BZ#2073880)
There are two workaround options:
- Remove the spec.evictionStrategy field from the VM declaration.
- Manually stop the VM before you update OpenShift Container Platform.
Chapter 4. Installing
4.1. Configuring your cluster for OpenShift Virtualization
Before you install OpenShift Virtualization, ensure that your OpenShift Container Platform cluster meets the following requirements:
Your cluster must be installed on on-premise bare metal infrastructure with Red Hat Enterprise Linux CoreOS (RHCOS) workers. You can use any installation method including user-provisioned, installer-provisioned, or assisted installer to deploy your cluster.
Note: OpenShift Virtualization only supports RHCOS worker nodes. RHEL 7 or RHEL 8 nodes are not supported.
You can install OpenShift Virtualization on Amazon Web Services (AWS) bare metal instances. Bare metal instances offered by other cloud providers are not supported.
Important: Installing OpenShift Virtualization on AWS bare metal instances is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
Additionally, there are three options to maintain high availability (HA) of virtual machines:
Use installer-provisioned infrastructure and deploy machine health checks.
Note: In OpenShift Virtualization clusters installed using installer-provisioned infrastructure and with MachineHealthCheck properly configured, if a node fails the MachineHealthCheck and becomes unavailable to the cluster, it is recycled. What happens next with VMs that ran on the failed node depends on a series of conditions. See About RunStrategies for virtual machines for more detailed information about the potential outcomes and how RunStrategies affect those outcomes.
If you are not using installer-provisioned infrastructure, use either a monitoring system or a qualified human to monitor node availability. When a node is lost, shut it down and run oc delete node <lost_node>.
Note: Without an external monitoring system or a qualified human monitoring node health, virtual machines lose high availability.
Use the Node Health Check Operator on any OpenShift Container Platform cluster to deploy the NodeHealthCheck controller. The controller identifies unhealthy nodes and uses the Poison Pill Operator to remediate the unhealthy nodes.
Important: Node Health Check Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
- Shared storage is required to enable live migration.
- You must manage your Compute nodes according to the number and size of the virtual machines that you want to host in the cluster.
- If you have limited internet connectivity, you can configure proxy support in Operator Lifecycle Manager to access the Red Hat-provided OperatorHub. If you are using a restricted network with no internet connectivity, you must configure Operator Lifecycle Manager for restricted networks.
- If your cluster uses worker nodes with different CPUs, live migration failures can occur because different CPUs have different capacities. To avoid such failures, use CPUs with appropriate capacity for each node and set node affinity on your virtual machines to ensure successful migration. See Configuring a required node affinity rule for more information.
All CPUs must be supported by Red Hat Enterprise Linux 8 and meet the following requirements (a quick verification is sketched after this list):
- Intel 64 or AMD64 CPU extensions are supported
- Intel VT or AMD-V hardware virtualization extensions are enabled
- The no-execute (NX) flag is enabled
- If FIPS mode is enabled for your cluster, no additional setup is needed for OpenShift Virtualization. Support for FIPS cryptography must be enabled before the operating system that your cluster uses boots for the first time.
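As a quick check of the virtualization and NX requirements on an existing host, you can inspect the CPU flags. This is a generic Linux check rather than an OpenShift Virtualization command; the first command prints a non-zero count when Intel VT (vmx) or AMD-V (svm) is enabled, and the second prints a non-zero count when the NX flag is present:
$ grep -E -wc 'vmx|svm' /proc/cpuinfo
$ grep -wc nx /proc/cpuinfo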
OpenShift Virtualization works with OpenShift Container Platform by default, but the following installation configurations are recommended:
- Configure monitoring in the cluster.
To obtain an evaluation version of OpenShift Container Platform, download a trial from the OpenShift Container Platform home page.
4.1.1. Changes in OpenShift Virtualization behavior on a single node cluster
You can use OpenShift Virtualization with single node clusters, also known as Single Node OpenShift (SNO). These clusters are not configured for high-availability operation, which results in significant changes to OpenShift Virtualization behavior.
- OpenShift Virtualization components have only one replica, and pod disruption budgets do not apply.
- Live migration is not supported.
- Due to differences in storage behavior, some virtual machine templates are incompatible with SNO. To ensure compatibility, do not set the evictionStrategy field for any templates or virtual machines that use data volumes or storage profiles.
4.1.2. Additional hardware requirements for OpenShift Virtualization
OpenShift Virtualization is an add-on to OpenShift Container Platform and imposes additional overhead that you must account for when planning a cluster. Each cluster machine must accommodate the following overhead requirements in addition to the OpenShift Container Platform requirements. Oversubscribing the physical resources in a cluster can affect performance.
The numbers noted in this documentation are based on Red Hat’s test methodology and setup. These numbers can vary based on your own individual setup and environments.
4.1.2.1. Memory overhead
Calculate the memory overhead values for OpenShift Virtualization by using the equations below.
Cluster memory overhead
Memory overhead per infrastructure node ≈ 150 MiB
Memory overhead per worker node ≈ 360 MiB
Additionally, OpenShift Virtualization environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes.
Virtual machine memory overhead
Memory overhead per virtual machine ≈ (1.002 * requested memory) + 146 MiB + 8 MiB * (number of vCPUs) + 16 MiB * (number of graphics devices)
If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB additional memory overhead for each device.
4.1.2.2. CPU overhead
Calculate the cluster processor overhead requirements for OpenShift Virtualization by using the equation below. The CPU overhead per virtual machine depends on your individual setup.
Cluster CPU overhead
CPU overhead for infrastructure nodes ≈ 4 cores
OpenShift Virtualization increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes.
CPU overhead for worker nodes ≈ 2 cores + CPU overhead per virtual machine
Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for OpenShift Virtualization management workloads in addition to the CPUs required for virtual machine workloads.
Virtual machine CPU overhead
If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires.
4.1.2.3. Storage overhead
Use the guidelines below to estimate storage overhead requirements for your OpenShift Virtualization environment.
Cluster storage overhead
Aggregated storage overhead per node ≈ 10 GiB
10 GiB is the estimated on-disk storage impact for each node in the cluster when you install OpenShift Virtualization.
Virtual machine storage overhead
Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. OpenShift Virtualization does not currently allocate any additional ephemeral storage for the running container itself.
4.1.2.4. Example
As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores.
4.2. Planning your environment according to OpenShift Virtualization object maximums
Plan your cluster for OpenShift Virtualization by using tested object maximums as guidelines.
4.2.1. About cluster limits for OpenShift Virtualization
To view the supported cluster limits for guests, hosts, and other objects, refer to Supported Limits for OpenShift Virtualization 4.x.
Red Hat has tested these values to ensure optimal cluster and workload performance. Exceeding the documented limits might result in unexpected behavior and degraded performance.
Guidelines for object maximums are based on the largest possible cluster. For smaller clusters, the maximums are lower.
4.3. Specifying nodes for OpenShift Virtualization components
Specify the nodes where you want to deploy OpenShift Virtualization Operators, workloads, and controllers by configuring node placement rules.
You can configure node placement for some components after installing OpenShift Virtualization, but there must not be virtual machines present if you want to configure node placement for workloads.
4.3.1. About node placement for virtualization components
You might want to customize where OpenShift Virtualization deploys its components to ensure that:
- Virtual machines only deploy on nodes that are intended for virtualization workloads.
- Operators only deploy on infrastructure nodes.
- Certain nodes are unaffected by OpenShift Virtualization. For example, you have workloads unrelated to virtualization running on your cluster, and you want those workloads to be isolated from OpenShift Virtualization.
4.3.1.1. How to apply node placement rules to virtualization components
You can specify node placement rules for a component by editing the corresponding object directly or by using the web console.
- For the OpenShift Virtualization Operators that Operator Lifecycle Manager (OLM) deploys, edit the OLM Subscription object directly. Currently, you cannot configure node placement rules for the Subscription object by using the web console.
- For components that the OpenShift Virtualization Operators deploy, edit the HyperConverged object directly or configure it by using the web console during OpenShift Virtualization installation.
- For the hostpath provisioner, edit the HostPathProvisioner object directly or configure it by using the web console.
Warning: You must schedule the hostpath provisioner and the virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run.
Depending on the object, you can use one or more of the following rule types:
nodeSelector
- Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
affinity
- Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, rather than a hard requirement, so that pods are still scheduled if the rule is not satisfied.
tolerations
- Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint.
4.3.1.2. Node placement in the OLM Subscription object
To specify the nodes where OLM deploys the OpenShift Virtualization Operators, edit the Subscription
object during OpenShift Virtualization installation. You can include node placement rules in the spec.config
field, as shown in the following example:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.10.1
  channel: "stable"
  config: 1
- 1 The config field supports nodeSelector and tolerations, but it does not support affinity.
4.3.1.3. Node placement in the HyperConverged object
To specify the nodes where OpenShift Virtualization deploys its components, you can include the nodePlacement
object in the HyperConverged Cluster custom resource (CR) file that you create during OpenShift Virtualization installation. You can include nodePlacement
under the spec.infra
and spec.workloads
fields, as shown in the following example:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement: 1
    ...
  workloads:
    nodePlacement:
    ...
- 1 The nodePlacement fields support nodeSelector, affinity, and tolerations fields.
4.3.1.4. Node placement in the HostPathProvisioner object
You can configure node placement rules in the spec.workload
field of the HostPathProvisioner
object that you create when you install the hostpath provisioner.
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload: 1
- 1 The workload field supports nodeSelector, affinity, and tolerations fields.
4.3.1.5. Additional resources
- Specifying nodes for virtual machines
- Placing pods on specific nodes using node selectors
- Controlling pod placement on nodes using node affinity rules
- Controlling pod placement using node taints
- Installing OpenShift Virtualization using the CLI
- Installing OpenShift Virtualization using the web console
- Configuring local storage for virtual machines
4.3.2. Example manifests
The following example YAML files use nodePlacement
, affinity
, and tolerations
objects to customize node placement for OpenShift Virtualization components.
4.3.2.1. Operator Lifecycle Manager Subscription object
4.3.2.1.1. Example: Node placement with nodeSelector in the OLM Subscription object
In this example, nodeSelector
is configured so that OLM places the OpenShift Virtualization Operators on nodes that are labeled with example.io/example-infra-key = example-infra-value
.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.10.1
  channel: "stable"
  config:
    nodeSelector:
      example.io/example-infra-key: example-infra-value
4.3.2.1.2. Example: Node placement with tolerations in the OLM Subscription object
In this example, nodes that are reserved for OLM to deploy OpenShift Virtualization Operators are labeled with the key=virtualization:NoSchedule
taint. Only pods with the matching tolerations are scheduled to these nodes.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.10.1
  channel: "stable"
  config:
    tolerations:
    - key: "key"
      operator: "Equal"
      value: "virtualization"
      effect: "NoSchedule"
4.3.2.2. HyperConverged object
4.3.2.2.1. Example: Node placement with nodeSelector in the HyperConverged Cluster CR
In this example, nodeSelector
is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-infra-value
and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value
.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      nodeSelector:
        example.io/example-infra-key: example-infra-value
  workloads:
    nodePlacement:
      nodeSelector:
        example.io/example-workloads-key: example-workloads-value
4.3.2.2.2. Example: Node placement with affinity in the HyperConverged Cluster CR
In this example, affinity
is configured so that infrastructure resources are placed on nodes that are labeled with example.io/example-infra-key = example-value
and workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value
. Nodes that have more than eight CPUs are preferred for workloads, but if they are not available, pods are still scheduled.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  infra:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-infra-key
                operator: In
                values:
                - example-infra-value
  workloads:
    nodePlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: example.io/example-workloads-key
                operator: In
                values:
                - example-workloads-value
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: example.io/num-cpus
                operator: Gt
                values:
                - 8
4.3.2.2.3. Example: Node placement with tolerations in the HyperConverged Cluster CR
In this example, nodes that are reserved for OpenShift Virtualization components are labeled with the key=virtualization:NoSchedule
taint. Only pods with the matching tolerations are scheduled to these nodes.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  workloads:
    nodePlacement:
      tolerations:
      - key: "key"
        operator: "Equal"
        value: "virtualization"
        effect: "NoSchedule"
4.3.2.3. HostPathProvisioner object
4.3.2.3.1. Example: Node placement with nodeSelector in the HostPathProvisioner object
In this example, nodeSelector
is configured so that workloads are placed on nodes labeled with example.io/example-workloads-key = example-workloads-value
.
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"
    useNamingPrefix: false
  workload:
    nodeSelector:
      example.io/example-workloads-key: example-workloads-value
4.4. Installing OpenShift Virtualization using the web console
Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster.
You can use the OpenShift Container Platform 4.10 web console to subscribe to and deploy the OpenShift Virtualization Operators.
4.4.1. Installing the OpenShift Virtualization Operator
You can install the OpenShift Virtualization Operator from the OpenShift Container Platform web console.
Prerequisites
- Install OpenShift Container Platform 4.10 on your cluster.
- Log in to the OpenShift Container Platform web console as a user with cluster-admin permissions.
Procedure
- From the Administrator perspective, click Operators → OperatorHub.
- In the Filter by keyword field, type OpenShift Virtualization.
- Select the OpenShift Virtualization tile.
- Read the information about the Operator and click Install.
On the Install Operator page:
- Select stable from the list of available Update Channel options. This ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
For Installed Namespace, ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-cnv namespace, which is automatically created if it does not exist.
Warning: Attempting to install the OpenShift Virtualization Operator in a namespace other than openshift-cnv causes the installation to fail.
For Approval Strategy, it is highly recommended that you select Automatic, which is the default value, so that OpenShift Virtualization automatically updates when a new version is available in the stable update channel.
While it is possible to select the Manual approval strategy, this is inadvisable because of the high risk that it presents to the supportability and functionality of your cluster. Only select Manual if you fully understand these risks and cannot use Automatic.
Warning: Because OpenShift Virtualization is only supported when used with the corresponding OpenShift Container Platform version, missing OpenShift Virtualization updates can cause your cluster to become unsupported.
- Click Install to make the Operator available to the openshift-cnv namespace.
- When the Operator installs successfully, click Create HyperConverged.
- Optional: Configure Infra and Workloads node placement options for OpenShift Virtualization components.
- Click Create to launch OpenShift Virtualization.
Verification
- Navigate to the Workloads → Pods page and monitor the OpenShift Virtualization pods until they are all Running. After all the pods display the Running state, you can use OpenShift Virtualization.
4.4.2. Next steps
You might want to additionally configure the following components:
- The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.
4.5. Installing OpenShift Virtualization using the CLI
Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster. You can subscribe to and deploy the OpenShift Virtualization Operators by using the command line to apply manifests to your cluster.
To specify the nodes where you want OpenShift Virtualization to install its components, configure node placement rules.
4.5.1. Prerequisites
- Install OpenShift Container Platform 4.10 on your cluster.
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
4.5.2. Subscribing to the OpenShift Virtualization catalog by using the CLI
Before you install OpenShift Virtualization, you must subscribe to the OpenShift Virtualization catalog. Subscribing gives the openshift-cnv
namespace access to the OpenShift Virtualization Operators.
To subscribe, configure Namespace
, OperatorGroup
, and Subscription
objects by applying a single manifest to your cluster.
Procedure
Create a YAML file that contains the following manifest:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
  - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  startingCSV: kubevirt-hyperconverged-operator.v4.10.1
  channel: "stable" 1
- 1 Using the stable channel ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version.
Create the required Namespace, OperatorGroup, and Subscription objects for OpenShift Virtualization by running the following command:
$ oc apply -f <file name>.yaml
You can configure certificate rotation parameters in the YAML file.
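As an illustration, certificate rotation parameters can be expressed in a certConfig stanza of the HyperConverged custom resource. The durations shown here are placeholder values and the stanza layout is an assumption used to sketch the idea; verify the exact schema against the HyperConverged CRD on your cluster:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  certConfig: # assumed stanza for certificate rotation parameters
    ca:
      duration: 48h0m0s # example CA certificate lifetime
      renewBefore: 24h0m0s # example renewal window before expiry
    server:
      duration: 24h0m0s
      renewBefore: 12h0m0s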
4.5.3. Deploying the OpenShift Virtualization Operator by using the CLI
You can deploy the OpenShift Virtualization Operator by using the oc
CLI.
Prerequisites
- An active subscription to the OpenShift Virtualization catalog in the openshift-cnv namespace.
Procedure
Create a YAML file that contains the following manifest:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
Deploy the OpenShift Virtualization Operator by running the following command:
$ oc apply -f <file_name>.yaml
Verification
Ensure that OpenShift Virtualization deployed successfully by watching the PHASE of the cluster service version (CSV) in the openshift-cnv namespace. Run the following command:
$ watch oc get csv -n openshift-cnv
The following output displays if deployment was successful:
Example output
NAME                                       DISPLAY                    VERSION   REPLACES   PHASE
kubevirt-hyperconverged-operator.v4.10.1   OpenShift Virtualization   4.10.1               Succeeded
4.5.4. Next steps
You might want to additionally configure the following components:
- The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.
4.6. Enabling the virtctl client
The virtctl
client is a command-line utility for managing OpenShift Virtualization resources. It is available for Linux, macOS, and Windows distributions.
4.6.1. Downloading and installing the virtctl client
4.6.1.1. Downloading the virtctl client
Download the virtctl
client by using the link provided in the ConsoleCLIDownload
custom resource (CR).
Procedure
View the ConsoleCLIDownload object by running the following command:
$ oc get ConsoleCLIDownload virtctl-clidownloads-kubevirt-hyperconverged -o yaml
- Download the virtctl client by using the link listed for your distribution.
4.6.1.2. Installing the virtctl client
Extract and install the virtctl
client after downloading from the appropriate location for your operating system.
Prerequisites
- You must have downloaded the virtctl client.
Procedure
For Linux:
Extract the tarball. The following CLI command extracts it into the same directory as the tarball:
$ tar -xvf <virtctl-version-distribution.arch>.tar.gz
Navigate the extracted folder hierarchy and run the following command to make the virtctl binary executable:
$ chmod +x <virtctl-file-name>
- Move the virtctl binary to a directory in your PATH environment variable. To check your path, run the following command:
$ echo $PATH
For Windows users:
- Unpack and unzip the archive.
- Navigate the extracted folder hierarchy and double-click the virtctl executable file to install the client.
- Move the virtctl binary to a directory in your PATH environment variable. To check your path, run the following command:
C:\> path
For macOS users:
- Unpack and unzip the archive.
- Move the virtctl binary to a directory in your PATH environment variable. To check your path, run the following command:
echo $PATH
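Once the client is installed, you can manage virtual machines with commands such as the following. The virtual machine name, namespace, data volume name, and image path are placeholders, and the commands are shown as a usage sketch rather than an exhaustive reference: start and stop a virtual machine, open its serial console, and upload a disk image to a new data volume.
$ virtctl start <vm_name> -n <namespace>
$ virtctl stop <vm_name> -n <namespace>
$ virtctl console <vm_name>
$ virtctl image-upload dv <datavolume_name> --size=10Gi --image-path=</path/to/image>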
4.6.2. Additional setup options
4.6.2.1. Installing the virtctl client using the yum utility
Install the virtctl client from the kubevirt-virtctl package.
Procedure
Install the kubevirt-virtctl package:
# yum install kubevirt-virtctl
4.6.2.2. Enabling OpenShift Virtualization repositories
Red Hat offers OpenShift Virtualization repositories for both Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 7:
- Red Hat Enterprise Linux 8 repository: cnv-4.10-for-rhel-8-x86_64-rpms
- Red Hat Enterprise Linux 7 repository: rhel-7-server-cnv-4.10-rpms
The process for enabling the repository in subscription-manager is the same in both platforms.
Procedure
Enable the appropriate OpenShift Virtualization repository for your system by running the following command:
# subscription-manager repos --enable <repository>
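For example, on a Red Hat Enterprise Linux 8 system you would enable the repository listed above:
# subscription-manager repos --enable cnv-4.10-for-rhel-8-x86_64-rpms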
4.6.3. Additional resources
- Using the CLI tools for OpenShift Virtualization.
4.7. Uninstalling OpenShift Virtualization using the web console
You can uninstall OpenShift Virtualization by using the OpenShift Container Platform web console.
4.7.1. Prerequisites
- You must have OpenShift Virtualization 4.10 installed.
You must delete all virtual machines, virtual machine instances, and data volumes.
Important: Attempting to uninstall OpenShift Virtualization without deleting these objects results in failure.
4.7.2. Deleting the OpenShift Virtualization Operator Deployment custom resource
To uninstall OpenShift Virtualization, you must first delete the OpenShift Virtualization Operator Deployment custom resource.
Prerequisites
- Create the OpenShift Virtualization Operator Deployment custom resource.
Procedure
- From the OpenShift Container Platform web console, select openshift-cnv from the Projects list.
- Navigate to the Operators → Installed Operators page.
- Click OpenShift Virtualization.
- Click the OpenShift Virtualization Operator Deployment tab.
- Click the Options menu in the row containing the kubevirt-hyperconverged custom resource. In the expanded menu, click Delete HyperConverged Cluster.
- Click Delete in the confirmation window.
- Navigate to the Workloads → Pods page to verify that only the Operator pods are running.
Open a terminal window and clean up the remaining resources by running the following command:
$ oc delete apiservices v1alpha3.subresources.kubevirt.io -n openshift-cnv
4.7.3. Deleting the OpenShift Virtualization catalog subscription
To finish uninstalling OpenShift Virtualization, delete the OpenShift Virtualization catalog subscription.
Prerequisites
- An active subscription to the OpenShift Virtualization catalog
Procedure
- Navigate to the Operators → OperatorHub page.
- Search for OpenShift Virtualization and then select it.
- Click Uninstall.
You can now delete the openshift-cnv
namespace.
4.7.4. Deleting a namespace using the web console
You can delete a namespace by using the OpenShift Container Platform web console.
If you do not have permissions to delete the namespace, the Delete Namespace option is not available.
Procedure
- Navigate to Administration → Namespaces.
- Locate the namespace that you want to delete in the list of namespaces.
- On the far right side of the namespace listing, select Delete Namespace from the Options menu.
- When the Delete Namespace pane opens, enter the name of the namespace that you want to delete in the field.
- Click Delete.
4.8. Uninstalling OpenShift Virtualization using the CLI
You can uninstall OpenShift Virtualization by using the OpenShift Container Platform CLI.
4.8.1. Prerequisites
- You must have OpenShift Virtualization 4.10 installed.
You must delete all virtual machines, virtual machine instances, and data volumes.
Important: Attempting to uninstall OpenShift Virtualization without deleting these objects results in failure.
4.8.2. Deleting OpenShift Virtualization
You can delete OpenShift Virtualization by using the CLI.
Prerequisites
- Install the OpenShift CLI (oc).
- Access to an OpenShift Virtualization cluster using an account with cluster-admin permissions.
When you delete the subscription of the OpenShift Virtualization operator in the OLM by using the CLI, the ClusterServiceVersion
(CSV) object is not deleted from the cluster. To completely uninstall OpenShift Virtualization, you must explicitly delete the CSV.
Procedure
Delete the HyperConverged custom resource:
$ oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv
Delete the subscription of the OpenShift Virtualization operator in the Operator Lifecycle Manager (OLM):
$ oc delete subscription kubevirt-hyperconverged -n openshift-cnv
Set the cluster service version (CSV) name for OpenShift Virtualization as an environment variable:
$ CSV_NAME=$(oc get csv -n openshift-cnv -o=custom-columns=:metadata.name)
Delete the CSV from the OpenShift Virtualization cluster by specifying the CSV name from the previous step:
$ oc delete csv ${CSV_NAME} -n openshift-cnv
OpenShift Virtualization is uninstalled when a confirmation message indicates that the CSV was deleted successfully:
Example output
clusterserviceversion.operators.coreos.com "kubevirt-hyperconverged-operator.v4.10.1" deleted
Chapter 5. Updating OpenShift Virtualization
Learn how Operator Lifecycle Manager (OLM) delivers z-stream and minor version updates for OpenShift Virtualization.
5.1. About updating OpenShift Virtualization
- Operator Lifecycle Manager (OLM) manages the lifecycle of the OpenShift Virtualization Operator. The Marketplace Operator, which is deployed during OpenShift Container Platform installation, makes external Operators available to your cluster.
- OLM provides z-stream and minor version updates for OpenShift Virtualization. Minor version updates become available when you update OpenShift Container Platform to the next minor version. You cannot update OpenShift Virtualization to the next minor version without first updating OpenShift Container Platform.
- OpenShift Virtualization subscriptions use a single update channel that is named stable. The stable channel ensures that your OpenShift Virtualization and OpenShift Container Platform versions are compatible.
If your subscription’s approval strategy is set to Automatic, the update process starts as soon as a new version of the Operator is available in the stable channel. It is highly recommended to use the Automatic approval strategy to maintain a supportable environment. Each minor version of OpenShift Virtualization is only supported if you run the corresponding OpenShift Container Platform version. For example, you must run OpenShift Virtualization 4.10 on OpenShift Container Platform 4.10.
- Though it is possible to select the Manual approval strategy, this is not recommended because it risks the supportability and functionality of your cluster. With the Manual approval strategy, you must manually approve every pending update. If OpenShift Container Platform and OpenShift Virtualization updates are out of sync, your cluster becomes unsupported.
- The amount of time an update takes to complete depends on your network connection. Most automatic updates complete within fifteen minutes.
- Updating OpenShift Virtualization does not interrupt network connections.
- Data volumes and their associated persistent volume claims are preserved during update.
If you have virtual machines running that use hostpath provisioner storage, they cannot be live migrated and might block an OpenShift Container Platform cluster update.
As a workaround, you can reconfigure the virtual machines so that they can be powered off automatically during a cluster update. Remove the evictionStrategy: LiveMigrate field and set the runStrategy field to Always, as in the sketch below.
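A minimal sketch of the relevant portion of a VirtualMachine manifest after this change. The VM name is a placeholder and only the fields involved in the workaround are shown; the evictionStrategy field is simply absent:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm # placeholder name
spec:
  runStrategy: Always # the VM is restarted automatically after it is powered off
  template:
    spec:
      # evictionStrategy: LiveMigrate is removed so the VM can be shut down during the update
      domain:
        devices: {}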
5.2. Configuring automatic workload updates
5.2.1. About workload updates
When you update OpenShift Virtualization, virtual machine workloads, including libvirt
, virt-launcher
, and qemu
, update automatically if they support live migration.
Each virtual machine has a virt-launcher
pod that runs the virtual machine instance (VMI). The virt-launcher
pod runs an instance of libvirt
, which is used to manage the virtual machine (VM) process.
You can configure how workloads are updated by editing the spec.workloadUpdateStrategy
stanza of the HyperConverged
custom resource (CR). There are two available workload update methods: LiveMigrate
and Evict
.
Because the Evict
method shuts down VMI pods, only the LiveMigrate
update strategy is enabled by default.
When LiveMigrate
is the only update strategy enabled:
- VMIs that support live migration are migrated during the update process. The VM guest moves into a new pod with the updated components enabled.
- VMIs that do not support live migration are not disrupted or updated.
- If a VMI has the LiveMigrate eviction strategy but does not support live migration, it is not updated.
If you enable both LiveMigrate
and Evict
:
- VMIs that support live migration use the LiveMigrate update strategy.
- VMIs that do not support live migration use the Evict update strategy. If a VMI is controlled by a VirtualMachine object that has a runStrategy value of always, a new VMI is created in a new pod with updated components.
Migration attempts and timeouts
When updating workloads, live migration fails if a pod is in the Pending
state for the following periods:
- 5 minutes: If the pod is pending because it is Unschedulable.
- 15 minutes: If the pod is stuck in the pending state for any reason.
When a VMI fails to migrate, the virt-controller
tries to migrate it again. It repeats this process until all migratable VMIs are running on new virt-launcher
pods. If a VMI is improperly configured, however, these attempts can repeat indefinitely.
Each attempt corresponds to a migration object. Only the five most recent attempts are held in a buffer. This prevents migration objects from accumulating on the system while retaining information for debugging.
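For example, assuming the VirtualMachineInstanceMigration resource and its vmim short name are available in your cluster, you can list the recent migration attempts in a namespace:
$ oc get vmim -n <namespace>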
5.2.2. Configuring workload update methods
You can configure workload update methods by editing the HyperConverged
custom resource (CR).
Prerequisites
To use live migration as an update method, you must first enable live migration in the cluster.
Note: If a VirtualMachineInstance CR contains evictionStrategy: LiveMigrate and the virtual machine instance (VMI) does not support live migration, the VMI will not update.
Procedure
To open the HyperConverged CR in your default editor, run the following command:
$ oc edit hco -n openshift-cnv kubevirt-hyperconverged
Edit the workloadUpdateStrategy stanza of the HyperConverged CR. For example:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  workloadUpdateStrategy:
    workloadUpdateMethods: 1
    - LiveMigrate 2
    - Evict 3
    batchEvictionSize: 10 4
    batchEvictionInterval: "1m0s" 5
...
- 1 The methods that can be used to perform automated workload updates. The available values are LiveMigrate and Evict. If you enable both options as shown in this example, updates use LiveMigrate for VMIs that support live migration and Evict for any VMIs that do not support live migration. To disable automatic workload updates, you can either remove the workloadUpdateStrategy stanza or set workloadUpdateMethods: [] to leave the array empty.
- 2 The least disruptive update method. VMIs that support live migration are updated by migrating the virtual machine (VM) guest into a new pod with the updated components enabled. If LiveMigrate is the only workload update method listed, VMIs that do not support live migration are not disrupted or updated.
- 3 A disruptive method that shuts down VMI pods during upgrade. Evict is the only update method available if live migration is not enabled in the cluster. If a VMI is controlled by a VirtualMachine object that has runStrategy: always configured, a new VMI is created in a new pod with updated components.
- 4 The number of VMIs that can be forced to be updated at a time by using the Evict method. This does not apply to the LiveMigrate method.
- 5 The interval to wait before evicting the next batch of workloads. This does not apply to the LiveMigrate method.
Note
You can configure live migration limits and timeouts by editing the spec.liveMigrationConfig stanza of the HyperConverged CR.
- To apply your changes, save and exit the editor.
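For reference, a minimal sketch of such a stanza follows. The field names reflect the HyperConverged CR schema, and the values shown are examples only, not recommendations:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  liveMigrationConfig:
    completionTimeoutPerGiB: 800
    parallelMigrationsPerCluster: 5
    parallelOutboundMigrationsPerNode: 2
    progressTimeout: 150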
5.3. Approving pending Operator updates
5.3.1. Manually approving a pending Operator upgrade
If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin.
Prerequisites
- An Operator previously installed using Operator Lifecycle Manager (OLM).
Procedure
- In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
- Operators that have a pending upgrade display a status with Upgrade available. Click the name of the Operator you want to upgrade.
- Click the Subscription tab. Any upgrades requiring approval are displayed next to Upgrade Status. For example, it might display 1 requires approval.
- Click 1 requires approval, then click Preview Install Plan.
- Review the resources that are listed as available for upgrade. When satisfied, click Approve.
- Navigate back to the Operators → Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date.
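If you prefer the command line, an equivalent approach is to approve the pending install plan by patching its spec.approved field. The following is a sketch; <install_plan_name> is a placeholder that you can find with the first command:
$ oc get installplan -n openshift-cnv
$ oc patch installplan <install_plan_name> -n openshift-cnv --type merge --patch '{"spec":{"approved":true}}'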
5.4. Monitoring update status
5.4.1. Monitoring OpenShift Virtualization upgrade status
To monitor the status of an OpenShift Virtualization Operator upgrade, watch the cluster service version (CSV) PHASE
. You can also monitor the CSV conditions in the web console or by running the command provided here.
The PHASE
and conditions values are approximations that are based on available information.
Prerequisites
- Log in to the cluster as a user with the cluster-admin role.
- Install the OpenShift CLI (oc).
Procedure
Run the following command:
$ oc get csv -n openshift-cnv
Review the output, checking the PHASE field. For example:
Example output
VERSION   REPLACES                                    PHASE
4.9.0     kubevirt-hyperconverged-operator.v4.8.2    Installing
4.9.0     kubevirt-hyperconverged-operator.v4.9.0    Replacing
Optional: Monitor the aggregated status of all OpenShift Virtualization component conditions by running the following command:
$ oc get hco -n openshift-cnv kubevirt-hyperconverged \
  -o=jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'
A successful upgrade results in the following output:
Example output
ReconcileComplete  True   Reconcile completed successfully
Available          True   Reconcile completed successfully
Progressing        False  Reconcile completed successfully
Degraded           False  Reconcile completed successfully
Upgradeable        True   Reconcile completed successfully
5.4.2. Viewing outdated OpenShift Virtualization workloads
You can view a list of outdated workloads by using the CLI.
If there are outdated virtualization pods in your cluster, the OutdatedVirtualMachineInstanceWorkloads
alert fires.
Procedure
To view a list of outdated virtual machine instances (VMIs), run the following command:
$ kubectl get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces
Configure workload updates to ensure that VMIs update automatically.
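To follow the list as workloads are updated and the outdated VMIs drain away, you can add the standard watch flag to the same query, for example:
$ kubectl get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces -w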
5.5. Additional resources
Chapter 6. Additional security privileges granted for kubevirt-controller and virt-launcher
The kubevirt-controller
and virt-launcher pods are granted SELinux policies and Security Context Constraints privileges in addition to those granted to typical pods. These privileges enable virtual machines to use OpenShift Virtualization features.
6.1. Extended SELinux policies for virt-launcher pods
The container_t
SELinux policy for virt-launcher pods is extended with the following rules:
- allow process self (tun_socket (relabelfrom relabelto attach_queue))
- allow process sysfs_t (file (write))
- allow process hugetlbfs_t (dir (add_name create write remove_name rmdir setattr))
- allow process hugetlbfs_t (file (create unlink))
These rules enable the following virtualization features:
- Relabeling and attaching queues to its own TUN sockets, which is required to support network multi-queue. Multi-queue enables network performance to scale as the number of available vCPUs increases.
- Writing information to sysfs (/sys) files, which is required to enable Single Root I/O Virtualization (SR-IOV).
- Reading and writing hugetlbfs entries, which is required to support huge pages. Huge pages are a method of managing large amounts of memory by increasing the memory page size.
6.2. Additional OpenShift Container Platform security context constraints and Linux capabilities for the kubevirt-controller service account
Security context constraints (SCCs) control permissions for pods. These permissions include actions that a pod, a collection of containers, can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system.
The kubevirt-controller
is a cluster controller that creates the virt-launcher pods for virtual machines in the cluster. These virt-launcher pods are granted permissions by the kubevirt-controller
service account.
6.2.1. Additional SCCs granted to the kubevirt-controller service account
The kubevirt-controller
service account is granted additional SCCs and Linux capabilities so that it can create virt-launcher pods with the appropriate permissions. These extended permissions allow virtual machines to take advantage of OpenShift Virtualization features that are beyond the scope of typical pods.
The kubevirt-controller
service account is granted the following SCCs:
- scc.AllowHostDirVolumePlugin = true
  This allows virtual machines to use the hostpath volume plug-in.
- scc.AllowPrivilegedContainer = false
  This ensures the virt-launcher pod is not run as a privileged container.
- scc.AllowedCapabilities = []corev1.Capability{"NET_ADMIN", "NET_RAW", "SYS_NICE"}
  This provides the following additional Linux capabilities: NET_ADMIN, NET_RAW, and SYS_NICE.
6.2.2. Viewing the SCC and RBAC definitions for the kubevirt-controller
You can view the SecurityContextConstraints
definition for the kubevirt-controller
by using the oc
tool:
$ oc get scc kubevirt-controller -o yaml
You can view the RBAC definition for the kubevirt-controller
clusterrole by using the oc
tool:
$ oc get clusterrole kubevirt-controller -o yaml
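For example, to check only the extra capabilities and the host-path setting granted by this SCC, you can query individual fields with jsonpath; the field names used here are part of the standard SecurityContextConstraints API:
$ oc get scc kubevirt-controller -o jsonpath='{.allowedCapabilities}{"\n"}{.allowHostDirVolumePlugin}{"\n"}'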
6.3. Additional resources
- The Red Hat Enterprise Linux Virtualization Tuning and Optimization Guide has more information on network multi-queue and huge pages.
-
The
capabilities
man page has more information on the Linux capabilities. -
The
sysfs(5)
man page has more information on sysfs. - The OpenShift Container Platform Authentication guide has more information on Security Context Constraints.
Chapter 7. Using the CLI tools
The two primary CLI tools used for managing resources in the cluster are:
- The OpenShift Virtualization virtctl client
- The OpenShift Container Platform oc client
7.1. Prerequisites
- You must enable the virtctl client.
7.2. OpenShift Container Platform client commands
The OpenShift Container Platform oc
client is a command-line utility for managing OpenShift Container Platform resources, including the VirtualMachine
(vm
) and VirtualMachineInstance
(vmi
) object types.
You can use the -n <namespace>
flag to specify a different project.
Table 7.1. oc
commands
Command | Description |
---|---|
|
Log in to the OpenShift Container Platform cluster as |
| Display a list of objects for the specified object type in the current project. |
| Display details of the specific resource in the current project. |
| Create a resource in the current project from a file name or from stdin. |
| Edit a resource in the current project. |
| Delete a resource in the current project. |
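For example, the following commands list VirtualMachine objects in a project and display the details of one virtual machine instance; the object and project names are placeholders:
$ oc get vm -n <namespace>
$ oc describe vmi <vmi_name> -n <namespace>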
For more comprehensive information on oc
client commands, see the OpenShift Container Platform CLI tools documentation.
7.3. Virtctl client commands
The virtctl
client is a command-line utility for managing OpenShift Virtualization resources.
To view a list of virtctl
commands, run the following command:
$ virtctl help
To view a list of options that you can use with a specific command, run it with the -h
or --help
flag. For example:
$ virtctl image-upload -h
To view a list of global command options that you can use with any virtctl
command, run the following command:
$ virtctl options
The following table contains the virtctl
commands used throughout the OpenShift Virtualization documentation.
Table 7.2. virtctl
client commands
Command | Description |
---|---|
| Start a virtual machine. |
| Stop a virtual machine. |
| Pause a virtual machine or virtual machine instance. The machine state is kept in memory. |
| Unpause a virtual machine or virtual machine instance. |
| Migrate a virtual machine. |
| Restart a virtual machine. |
| Create a service that forwards a designated port of a virtual machine or virtual machine instance and expose the service on the specified port of the node. |
| Connect to a serial console of a virtual machine instance. |
| Open a VNC (Virtual Network Computing) connection to a virtual machine instance. Accessing the graphical console of a virtual machine instance through VNC requires a remote viewer on your local machine. |
| Display the port number and connect manually to the virtual machine instance by using any viewer through the VNC connection. |
| Specify a port number to run the proxy on the specified port, if that port is available. If a port number is not specified, the proxy runs on a random port. |
| Upload a virtual machine image to a data volume that already exists. |
| Upload a virtual machine image to a new data volume. |
| Display the client and server version information. |
|
Display a descriptive list of |
| Return a full list of file systems available on the guest machine. |
| Return guest agent information about the operating system. |
| Return a full list of logged-in users on the guest machine. |
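For example, a typical lifecycle sequence looks like the following; <vm_name> is a placeholder for your virtual machine name:
$ virtctl start <vm_name>
$ virtctl console <vm_name>
$ virtctl stop <vm_name>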
7.4. Creating a container using virtctl guestfs
You can use the virtctl guestfs
command to deploy an interactive container with libguestfs-tools
and a persistent volume claim (PVC) attached to it.
Procedure
To deploy a container with
libguestfs-tools
, mount the PVC, and attach a shell to it, run the following command:$ virtctl guestfs -n <namespace> <pvc_name> 1
- 1
- The PVC name is a required argument. If you do not include it, an error message appears.
7.5. Libguestfs tools and virtctl guestfs
Libguestfs
tools help you access and modify virtual machine (VM) disk images. You can use libguestfs
tools to view and edit files in a guest, clone and build virtual machines, and format and resize disks.
You can also use the virtctl guestfs
command and its sub-commands to modify, inspect, and debug VM disks on a PVC. To see a complete list of possible sub-commands, enter virt-
on the command line and press the Tab key. For example:
Command | Description |
---|---|
| Edit a file interactively in your terminal. |
| Inject an ssh key into the guest and create a login. |
| See how much disk space is used by a VM. |
| See the full list of all RPMs installed on a guest by creating an output file containing the full list. |
|
Display the output file list of all RPMs created using the |
| Seal a virtual machine disk image to be used as a template. |
By default, virtctl guestfs
creates a session with everything needed to manage a VM disk. However, the command also supports several flag options if you want to customize the behavior:
Flag Option | Description |
---|---|
|
Provides help for |
| To use a PVC from a specific namespace.
If you do not use the
If you do not include a |
|
Lists the
You can configure the container to use a custom image by using the |
|
Indicates that
By default,
If a cluster does not have any
If not set, the |
|
Shows the pull policy for the
You can also overwrite the image’s pull policy by setting the |
The command also checks whether a PVC is in use by another pod, in which case an error message appears. However, after the libguestfs-tools process starts, the setup cannot prevent a new pod from using the same PVC. You must verify that there are no active virtctl guestfs pods before starting the VM that accesses the same PVC.
The virtctl guestfs
command accepts only a single PVC attached to the interactive pod.
7.6. Additional resources
Chapter 8. Virtual machines
8.1. Creating virtual machines
Use one of these procedures to create a virtual machine:
- Quick Start guided tour
- Running the wizard
- Pasting a pre-configured YAML file with the virtual machine wizard
- Using the CLI
Do not create virtual machines in openshift-*
namespaces. Instead, create a new namespace or use an existing namespace without the openshift
prefix.
When you create virtual machines from the web console, select a virtual machine template that is configured with a boot source. Virtual machine templates with a boot source are labeled as Available boot source or they display a customized label text. Using templates with an available boot source expedites the process of creating virtual machines.
Templates without a boot source are labeled as Boot source required. You can use these templates if you complete the steps for adding a boot source to the virtual machine.
Due to differences in storage behavior, some virtual machine templates are incompatible with single-node OpenShift (SNO). To ensure compatibility, do not set the evictionStrategy
field for any templates or virtual machines that use data volumes or storage profiles.
8.1.1. Using a Quick Start to create a virtual machine
The web console provides Quick Starts with instructional guided tours for creating virtual machines. You can access the Quick Starts catalog by selecting the Help menu in the Administrator perspective. When you click a Quick Start tile and begin the tour, the system guides you through the process.
Tasks in a Quick Start begin with selecting a Red Hat template. Then, you can add a boot source and import the operating system image. Finally, you can save the custom template and use it to create a virtual machine.
Quick Start tours for creating virtual machines include the following:
- Creating a Red Hat Enterprise Linux virtual machine
- Creating a Windows 10 virtual machine
- Importing a VMware virtual machine
Prerequisites
- Access to the website where you can download the URL link for the operating system image.
Procedure
- In the web console, select Quick Starts from the Help menu.
- Click a tile in the Quick Starts catalog. For example: Creating a Red Hat Enterprise Linux virtual machine.
- Follow the instructions in the guided tour and complete the tasks for importing an operating system image and creating a virtual machine. The Virtual Machines tab displays the virtual machine.
8.1.2. Running the virtual machine wizard to create a virtual machine
The web console features a wizard that guides you through the process of selecting a virtual machine template and creating a virtual machine. Red Hat virtual machine templates are preconfigured with an operating system image, default settings for the operating system, flavor (CPU and memory), and workload type (server). When templates are configured with a boot source, they are labeled with a customized label text or the default label text Available boot source. These templates are then ready to be used for creating virtual machines.
You can select a template from the list of preconfigured templates, review the settings, and create a virtual machine in the Create virtual machine from template wizard. If you choose to customize your virtual machine, the wizard guides you through the General, Networking, Storage, Advanced, and Review steps. All required fields displayed by the wizard are marked by a *.
You can create network interface controllers (NICs) and storage disks later and attach them to virtual machines.
Procedure
- Click Workloads → Virtualization from the side menu.
- From the Virtual Machines tab or the Templates tab, click Create and select Virtual Machine with Wizard.
- Select a template that is configured with a boot source.
- Click Next to go to the Review and create step.
- Clear the Start this virtual machine after creation checkbox if you do not want to start the virtual machine now.
- Click Create virtual machine and exit the wizard or continue with the wizard to customize the virtual machine.
Click Customize virtual machine to go to the General step.
- Optional: Edit the Name field to specify a custom name for the virtual machine.
- Optional: In the Description field, add a description.
Click Next to go to the Networking step. A
nic0
NIC is attached by default.
- Optional: Click Add Network Interface to create additional NICs.
-
Optional: You can remove any or all NICs by clicking the Options menu
and selecting Delete. A virtual machine does not need a NIC attached to be created. You can create NICs after the virtual machine has been created.
Click Next to go to the Storage step.
- Optional: Click Add Disk to create additional disks. These disks can be removed by clicking the Options menu and selecting Delete.
- Optional: Click the Options menu to edit the disk and save your changes.
Click Next to go to the Advanced step and choose one of the following options:
If you selected a Linux template to create the VM, review the details for Cloud-init and configure SSH access.
Note
Statically inject an SSH key by using the custom script in cloud-init or in the wizard. This allows you to securely and remotely manage your virtual machines and to transfer information to and from them. This step is strongly recommended to secure your VM.
- If you selected a Windows template to create the VM, use the SysPrep section to upload answer files in XML format for automated Windows setup.
- Click Next to go to the Review step and review the settings for the virtual machine.
- Click Create Virtual Machine.
Click See virtual machine details to view the Overview for this virtual machine.
The virtual machine is listed in the Virtual Machines tab.
Refer to the virtual machine wizard fields section when running the web console wizard.
8.1.2.1. Virtual machine wizard fields
Name | Parameter | Description |
---|---|---|
Name |
The name can contain lowercase letters ( | |
Description | Optional description field. | |
Operating System | The operating system that is selected for the virtual machine in the template. You cannot edit this field when creating a virtual machine from a template. | |
Boot Source | Import via URL (creates PVC) | Import content from an image available from an HTTP or S3 endpoint. Example: Obtaining a URL link from the web page with the operating system image. |
Clone existing PVC (creates PVC) | Select an existent persistent volume claim available on the cluster and clone it. | |
Import via Registry (creates PVC) |
Provision virtual machine from a bootable operating system container located in a registry accessible from the cluster. Example: | |
PXE (network boot - adds network interface) | Boot an operating system from a server on the network. Requires a PXE bootable network attachment definition. | |
Persistent Volume Claim project | Project name that you want to use for cloning the PVC. | |
Persistent Volume Claim name | PVC name that should apply to this virtual machine template if you are cloning an existing PVC. | |
Mount this as a CD-ROM boot source | A CD-ROM requires an additional disk for installing the operating system. Select the checkbox to add a disk and customize it later. | |
Flavor | Tiny, Small, Medium, Large, Custom | Presets the amount of CPU and memory in a virtual machine template with predefined values that are allocated to the virtual machine, depending on the operating system associated with that template.
If you choose a default template, you can override the |
Workload Type Note If you choose the incorrect Workload Type, there could be performance or resource utilization issues (such as a slow UI). | Desktop |
A virtual machine configuration for use on a desktop. Ideal for consumption on a small scale. Recommended for use with the web console. Use this template class or the Server template class to prioritize VM density over |
Server |
Balances performance and it is compatible with a wide range of server workloads. Use this template class or the Desktop template class to prioritize VM density over | |
High-Performance (requires CPU Manager) |
A virtual machine configuration that is optimized for high-performance workloads. Use this template class to prioritize | |
Start this virtual machine after creation. | This checkbox is selected by default and the virtual machine starts running after creation. Clear the checkbox if you do not want the virtual machine to start when it is created. |
Enable the CPU Manager to use the high-performance workload profile.
8.1.2.2. Networking fields
Name | Description |
---|---|
Name | Name for the network interface controller. |
Model | Indicates the model of the network interface controller. Supported values are e1000e and virtio. |
Network | List of available network attachment definitions. |
Type |
List of available binding methods. For the default pod network, |
MAC Address | MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. |
8.1.2.3. Storage fields
Name | Selection | Description |
---|---|---|
Source | Blank (creates PVC) | Create an empty disk. |
Import via URL (creates PVC) | Import content via URL (HTTP or S3 endpoint). | |
Use an existing PVC | Use a PVC that is already available in the cluster. | |
Clone existing PVC (creates PVC) | Select an existing PVC available in the cluster and clone it. | |
Import via Registry (creates PVC) | Import content via container registry. | |
Container (ephemeral) | Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. | |
Name |
Name of the disk. The name can contain lowercase letters ( | |
Size | Size of the disk in GiB. | |
Type | Type of disk. Example: Disk or CD-ROM | |
Interface | Type of disk device. Supported interfaces are virtIO, SATA, and SCSI. | |
Storage Class | The storage class that is used to create the disk. | |
Advanced → Volume Mode Note Default values are used from the storage profile. | Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem. |
Advanced storage settings
Name | Parameter | Description |
---|---|---|
Volume Mode Note Default values are used from the storage profile. | Filesystem | Stores the virtual disk on a file system-based volume. |
Block |
Stores the virtual disk directly on the block volume. Only use |
8.1.2.4. Cloud-init fields
Name | Description |
---|---|
Hostname | Sets a specific hostname for the virtual machine. |
Authorized SSH Keys | The user’s public key that is copied to ~/.ssh/authorized_keys on the virtual machine. |
Custom script | Replaces other options with a field in which you paste a custom cloud-init script. |
To configure storage class defaults, use storage profiles. For more information, see Customizing the storage profile.
8.1.3. Pasting in a pre-configured YAML file to create a virtual machine
Create a virtual machine by writing or pasting a YAML configuration file. A valid example
virtual machine configuration is provided by default whenever you open the YAML edit screen.
If your YAML configuration is invalid when you click Create, an error message indicates the parameter in which the error occurs. Only one error is shown at a time.
Navigating away from the YAML screen while editing cancels any changes to the configuration you have made.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Click Create and select Virtual Machine With YAML.
Write or paste your virtual machine configuration in the editable window.
-
Alternatively, use the
example
virtual machine provided by default in the YAML screen.
- Optional: Click Download to download the YAML configuration file in its present state.
- Click Create to create the virtual machine.
The virtual machine is listed in the Virtual Machines tab.
8.1.4. Using the CLI to create a virtual machine
Procedure
The spec
object of the virtual machine configuration file references the virtual machine settings, such as the number of cores and the amount of memory, the disk type, and the volumes to use.
-
Attach the virtual machine disk to the virtual machine by referencing the relevant PVC
claimName
as a volume. To create a virtual machine with the OpenShift Container Platform client, run this command:
$ oc create -f <vm.yaml>
- Since virtual machines are created in a Stopped state, run a virtual machine instance by starting it.
A ReplicaSet's purpose is to guarantee the availability of a specified number of identical pods. ReplicaSet is not currently supported in OpenShift Virtualization.
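The following is a minimal sketch of a VirtualMachine configuration that illustrates the domain and volume settings described in the tables below; the VM name, memory value, and container disk image are examples only, not requirements:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: example-vm
    spec:
      domain:
        cpu:
          cores: 1
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
      - name: containerdisk
        containerDisk:
          image: kubevirt/fedora-cloud-container-disk-demo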
Table 8.1. Domain settings
Setting | Description |
---|---|
Cores | The number of cores inside the virtual machine. Must be a value greater than or equal to 1. |
Memory | The amount of RAM that is allocated to the virtual machine by the node. Specify a value in M for Megabyte or Gi for Gigabyte. |
Disks | The name of the volume that is referenced. Must match the name of a volume. |
Table 8.2. Volume settings
Setting | Description |
---|---|
Name | The name of the volume, which must be a DNS label and unique within the virtual machine. |
PersistentVolumeClaim |
The PVC to attach to the virtual machine. The |
8.1.5. Virtual machine storage volume types
Storage volume type | Description |
---|---|
ephemeral | A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim. The ephemeral image is created when the virtual machine starts and stores all writes locally. The ephemeral image is discarded when the virtual machine is stopped, restarted, or deleted. The backing volume (PVC) is not mutated in any way. |
persistentVolumeClaim | Attaches an available PV to a virtual machine. Attaching a PV allows for the virtual machine data to persist between sessions. Importing an existing virtual machine disk into a PVC by using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into OpenShift Container Platform. There are some requirements for the disk to be used within a PVC. |
dataVolume |
Data volumes build on the
Specify |
cloudInitNoCloud | Attaches a disk that contains the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A cloud-init installation is required inside the virtual machine disk. |
containerDisk | References an image, such as a virtual machine disk, that is stored in the container image registry. The image is pulled from the registry and attached to the virtual machine as a disk when the virtual machine is launched.
A Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size. Note
A |
emptyDisk | Creates an additional sparse QCOW2 disk that is tied to the life-cycle of the virtual machine interface. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that otherwise exceeds the limited temporary file system of an ephemeral disk. The disk capacity size must also be provided. |
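As an illustration of how these volume types are referenced in a virtual machine specification, the following fragment is a sketch only; the disk names and the data volume name are placeholders:
volumes:
- name: rootdisk
  dataVolume:
    name: <data_volume_name>
- name: cloudinitdisk
  cloudInitNoCloud:
    userData: |
      #cloud-config
      password: fedora
      chpasswd: { expire: False }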
8.1.6. About RunStrategies for virtual machines
A RunStrategy
for virtual machines determines a virtual machine instance’s (VMI) behavior, depending on a series of conditions. The spec.runStrategy
setting exists in the virtual machine configuration process as an alternative to the spec.running
setting. The spec.runStrategy
setting allows greater flexibility for how VMIs are created and managed, in contrast to the spec.running
setting with only true
or false
responses. However, the two settings are mutually exclusive. Only one of spec.running
or spec.runStrategy
can be used. An error occurs if both are used.
There are four defined RunStrategies.
Always
-
A VMI is always present when a virtual machine is created. A new VMI is created if the original stops for any reason, which is the same behavior as
spec.running: true
. RerunOnFailure
- A VMI is re-created if the previous instance fails due to an error. The instance is not re-created if the virtual machine stops successfully, such as when it shuts down.
Manual
-
The
start
,stop
, andrestart
virtctl client commands can be used to control the VMI’s state and existence. Halted
-
No VMI is present when a virtual machine is created, which is the same behavior as
spec.running: false
.
Different combinations of the start
, stop
and restart
virtctl commands affect which RunStrategy
is used.
The following table follows a VM’s transition from different states. The first column shows the VM’s initial RunStrategy
. Each additional column shows a virtctl command and the new RunStrategy
after that command is run.
Initial RunStrategy | start | stop | restart |
---|---|---|---|
Always | - | Halted | Always |
RerunOnFailure | - | Halted | RerunOnFailure |
Manual | Manual | Manual | Manual |
Halted | Always | - | - |
In OpenShift Virtualization clusters installed using installer-provisioned infrastructure, when a node fails the MachineHealthCheck and becomes unavailable to the cluster, VMs with a RunStrategy of Always
or RerunOnFailure
are rescheduled on a new node.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
runStrategy: Always 1
template:
...
- 1
- The VM's current runStrategy setting.
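For example, stopping a running VM whose runStrategy is Always sets the strategy to Halted, as shown in the table above. The following commands are a sketch; <vm_name> is a placeholder:
$ virtctl stop <vm_name>
$ oc get vm <vm_name> -o jsonpath='{.spec.runStrategy}'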
8.1.7. Additional resources
The
VirtualMachineSpec
definition in the KubeVirt v0.49.0 API Reference provides broader context for the parameters and hierarchy of the virtual machine specification.
Note
The KubeVirt API Reference is the upstream project reference and might contain parameters that are not supported in OpenShift Virtualization.
-
See Prepare a container disk before adding it to a virtual machine as a
containerDisk
volume. - See Deploying machine health checks for further details on deploying and enabling machine health checks.
- See Installer-provisioned infrastructure overview for further details on installer-provisioned infrastructure.
- See Configuring the SR-IOV Network Operator for further details on the SR-IOV Network Operator.
- Customizing the storage profile
8.2. Editing virtual machines
You can update a virtual machine configuration using either the YAML editor in the web console or the OpenShift CLI on the command line. You can also update a subset of the parameters in the Virtual Machine Details screen.
8.2.1. Editing a virtual machine in the web console
Edit select values of a virtual machine in the web console by clicking the pencil icon next to the relevant field. Other values can be edited using the CLI.
Labels and annotations are editable for both preconfigured Red Hat templates and your custom virtual machine templates. All other values are editable only for custom virtual machine templates that users have created using the Red Hat templates or the Create Virtual Machine Template wizard.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine.
- Click the Details tab.
- Click the pencil icon to make a field editable.
- Make the relevant changes and click Save.
If the virtual machine is running, changes to Boot Order or Flavor will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the relevant field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
8.2.2. Editing a virtual machine YAML configuration using the web console
You can edit the YAML configuration of a virtual machine in the web console. Some parameters cannot be modified. If you click Save with an invalid configuration, an error message indicates the parameter that cannot be changed.
If you edit the YAML configuration while the virtual machine is running, changes will not take effect until you restart the virtual machine.
Navigating away from the YAML screen while editing cancels any changes to the configuration you have made.
Procedure
- Click Workloads → Virtualization from the side menu.
- Select a virtual machine.
- Click the YAML tab to display the editable configuration.
- Optional: You can click Download to download the YAML file locally in its current state.
- Edit the file and click Save.
A confirmation message shows that the modification has been successful and includes the updated version number for the object.
8.2.3. Editing a virtual machine YAML configuration using the CLI
Use this procedure to edit a virtual machine YAML configuration using the CLI.
Prerequisites
- You configured a virtual machine with a YAML object configuration file.
-
You installed the
oc
CLI.
Procedure
Run the following command to update the virtual machine configuration:
$ oc edit <object_type> <object_ID>
- Open the object configuration.
- Edit the YAML.
If you edit a running virtual machine, you need to do one of the following:
- Restart the virtual machine.
Run the following command for the new configuration to take effect:
$ oc apply -f <object_config_file>
8.2.4. Adding a virtual disk to a virtual machine
Use this procedure to add a virtual disk to a virtual machine.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the Disks tab.
- Click Add Disk to open the Add Disk window.
In the Add Disk window, specify the Source, Name, Size, Type, Interface, and Storage Class.
- Advanced: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox.
-
Optional: In the Advanced list, specify the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the
kubevirt-storage-class-defaults
config map.
- Click Add.
If the virtual machine is running, the new disk is in the pending restart state and will not be attached until you restart the virtual machine.
The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
To configure storage class defaults, use storage profiles. For more information, see Customizing the storage profile.
8.2.4.1. Storage fields
Name | Selection | Description |
---|---|---|
Source | Blank (creates PVC) | Create an empty disk. |
Import via URL (creates PVC) | Import content via URL (HTTP or S3 endpoint). | |
Use an existing PVC | Use a PVC that is already available in the cluster. | |
Clone existing PVC (creates PVC) | Select an existing PVC available in the cluster and clone it. | |
Import via Registry (creates PVC) | Import content via container registry. | |
Container (ephemeral) | Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. | |
Name |
Name of the disk. The name can contain lowercase letters ( | |
Size | Size of the disk in GiB. | |
Type | Type of disk. Example: Disk or CD-ROM | |
Interface | Type of disk device. Supported interfaces are virtIO, SATA, and SCSI. | |
Storage Class | The storage class that is used to create the disk. | |
Advanced → Volume Mode Note Default values are used from the storage profile. | Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem. |
Advanced storage settings
Name | Parameter | Description |
---|---|---|
Volume Mode Note Default values are used from the storage profile. | Filesystem | Stores the virtual disk on a file system-based volume. |
Block |
Stores the virtual disk directly on the block volume. Only use |
8.2.5. Adding a network interface to a virtual machine
Use this procedure to add a network interface to a virtual machine.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the Network Interfaces tab.
- Click Add Network Interface.
- In the Add Network Interface window, specify the Name, Model, Network, Type, and MAC Address of the network interface.
- Click Add.
If the virtual machine is running, the new network interface is in the pending restart state and changes will not take effect until you restart the virtual machine.
The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
8.2.5.1. Networking fields
Name | Description |
---|---|
Name | Name for the network interface controller. |
Model | Indicates the model of the network interface controller. Supported values are e1000e and virtio. |
Network | List of available network attachment definitions. |
Type |
List of available binding methods. For the default pod network, |
MAC Address | MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. |
8.2.6. Editing CD-ROMs for Virtual Machines
Use the following procedure to edit CD-ROMs for virtual machines.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the Disks tab.
-
Click the Options menu
for the CD-ROM that you want to edit and select Edit.
- In the Edit CD-ROM window, edit the fields: Source, Persistent Volume Claim, Name, Type, and Interface.
- Click Save.
8.2.7. Additional resources
8.3. Editing boot order
You can update the values for a boot order list by using the web console or the CLI.
With Boot Order in the Virtual Machine Overview page, you can:
- Select a disk or network interface controller (NIC) and add it to the boot order list.
- Edit the order of the disks or NICs in the boot order list.
- Remove a disk or NIC from the boot order list, and return it back to the inventory of bootable sources.
8.3.1. Adding items to a boot order list in the web console
Add items to a boot order list by using the web console.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order. If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
- Click Add Source and select a bootable disk or network interface controller (NIC) for the virtual machine.
- Add any additional disks or NICs to the boot order list.
- Click Save.
If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
8.3.2. Editing a boot order list in the web console
Edit the boot order list in the web console.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order.
Choose the appropriate method to move the item in the boot order list:
- If you do not use a screen reader, hover over the arrow icon next to the item that you want to move, drag the item up or down, and drop it in a location of your choice.
- If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice.
- Click Save.
If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
8.3.3. Editing a boot order list in the YAML configuration file
Edit the boot order list in a YAML configuration file by using the CLI.
Procedure
Open the YAML configuration file for the virtual machine by running the following command:
$ oc edit vm example
Edit the YAML file and modify the values for the boot order associated with a disk or network interface controller (NIC). For example:
disks:
  - bootOrder: 1 1
    disk:
      bus: virtio
    name: containerdisk
  - disk:
      bus: virtio
    name: cloudinitdisk
  - cdrom:
      bus: virtio
    name: cd-drive-1
interfaces:
  - bootOrder: 2 2
    macAddress: '02:96:c4:00:00'
    masquerade: {}
    name: default
- Save the YAML file.
- Click reload the content to apply the updated boot order values from the YAML file to the boot order list in the web console.
8.3.4. Removing items from a boot order list in the web console
Remove items from a boot order list by using the web console.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order.
-
Click the Remove icon
next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
8.4. Deleting virtual machines
You can delete a virtual machine from the web console or by using the oc
command line interface.
8.4.1. Deleting a virtual machine using the web console
Deleting a virtual machine permanently removes it from the cluster.
When you delete a virtual machine, the data volume it uses is automatically deleted.
Procedure
- In the OpenShift Virtualization console, click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
Click the Options menu
of the virtual machine that you want to delete and select Delete Virtual Machine.
- Alternatively, click the virtual machine name to open the Virtual Machine Overview screen and click Actions → Delete Virtual Machine.
- In the confirmation pop-up window, click Delete to permanently delete the virtual machine.
8.4.2. Deleting a virtual machine by using the CLI
You can delete a virtual machine by using the oc
command line interface (CLI). The oc
client enables you to perform actions on multiple virtual machines.
When you delete a virtual machine, the data volume it uses is automatically deleted.
Prerequisites
- Identify the name of the virtual machine that you want to delete.
Procedure
Delete the virtual machine by running the following command:
$ oc delete vm <vm_name>
NoteThis command only deletes objects that exist in the current project. Specify the
-n <project_name>
option if the object you want to delete is in a different project or namespace.
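Because the oc client can act on several objects at once, you can also delete multiple virtual machines in one command or select them by label. The names and label shown here are placeholders:
$ oc delete vm <vm_name_1> <vm_name_2>
$ oc delete vm -l <label_key>=<label_value>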
8.5. Managing virtual machine instances
If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or the command-line interface (CLI).
8.5.1. About virtual machine instances
A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc
command-line interface (CLI).
A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs:
- List standalone VMIs and their details.
- Edit labels and annotations for a standalone VMI.
- Delete a standalone VMI.
When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects.
Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs.
8.5.2. Listing all virtual machine instances using the CLI
You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc
command-line interface (CLI).
Procedure
List all VMIs by running the following command:
$ oc get vmis
8.5.3. Listing standalone virtual machine instances using the web console
Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs).
VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI.
Procedure
- Click Workloads → Virtualization from the side menu. A list of VMs and standalone VMIs displays. You can identify standalone VMIs by the dark colored badges that display next to the virtual machine instance names.
8.5.4. Editing a standalone virtual machine instance using the web console
You can edit annotations and labels for a standalone virtual machine instance (VMI) using the web console. Other items displayed in the Details page for a standalone VMI are not editable.
Procedure
- Click Workloads → Virtualization from the side menu. A list of virtual machines (VMs) and standalone VMIs displays.
- Click the name of a standalone VMI to open the Virtual Machine Instance Overview screen.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Annotations.
- Make the relevant changes and click Save.
To edit labels for a standalone VMI, click Actions and select Edit Labels. Make the relevant changes and click Save.
8.5.5. Deleting a standalone virtual machine instance using the CLI
You can delete a standalone virtual machine instance (VMI) by using the oc
command-line interface (CLI).
Prerequisites
- Identify the name of the VMI that you want to delete.
Procedure
Delete the VMI by running the following command:
$ oc delete vmi <vmi_name>
8.5.6. Deleting a standalone virtual machine instance using the web console
Delete a standalone virtual machine instance (VMI) from the web console.
Procedure
- In the OpenShift Container Platform web console, click Workloads → Virtualization from the side menu.
Click the ⋮ button of the standalone virtual machine instance (VMI) that you want to delete and select Delete Virtual Machine Instance.
- Alternatively, click the name of the standalone VMI. The Virtual Machine Instance Overview page displays.
- Select Actions → Delete Virtual Machine Instance.
- In the confirmation pop-up window, click Delete to permanently delete the standalone VMI.
8.6. Controlling virtual machine states
You can stop, start, restart, and unpause virtual machines from the web console.
To control virtual machines from the command-line interface (CLI), use the virtctl
client.
8.6.1. Starting a virtual machine
You can start a virtual machine from the web console.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Find the row that contains the virtual machine that you want to start.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple virtual machines:
-
Click the Options menu
located at the far right end of the row.
To view comprehensive information about the selected virtual machine before you start it:
- Access the Virtual Machine Overview screen by clicking the name of the virtual machine.
- Click Actions.
- Select Start Virtual Machine.
- In the confirmation window, click Start to start the virtual machine.
When you start a virtual machine that is provisioned from a URL
source for the first time, the virtual machine has a status of Importing while OpenShift Virtualization imports the disk image from the URL endpoint. Depending on the size of the image, this process might take several minutes.
8.6.2. Restarting a virtual machine
You can restart a running virtual machine from the web console.
To avoid errors, do not restart a virtual machine while it has a status of Importing.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Find the row that contains the virtual machine that you want to restart.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple virtual machines:
-
Click the Options menu
located at the far right end of the row.
To view comprehensive information about the selected virtual machine before you restart it:
- Access the Virtual Machine Overview screen by clicking the name of the virtual machine.
- Click Actions.
- Select Restart Virtual Machine.
- In the confirmation window, click Restart to restart the virtual machine.
8.6.3. Stopping a virtual machine
You can stop a virtual machine from the web console.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Find the row that contains the virtual machine that you want to stop.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple virtual machines:
-
Click the Options menu
located at the far right end of the row.
To view comprehensive information about the selected virtual machine before you stop it:
- Access the Virtual Machine Overview screen by clicking the name of the virtual machine.
- Click Actions.
- Select Stop Virtual Machine.
- In the confirmation window, click Stop to stop the virtual machine.
8.6.4. Unpausing a virtual machine
You can unpause a paused virtual machine from the web console.
Prerequisites
At least one of your virtual machines must have a status of Paused.
NoteYou can pause virtual machines by using the
virtctl
client.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Find the row that contains the virtual machine that you want to unpause.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple virtual machines:
- In the Status column, click Paused.
To view comprehensive information about the selected virtual machine before you unpause it:
- Access the Virtual Machine Overview screen by clicking the name of the virtual machine.
- Click the pencil icon that is located on the right side of Status.
- In the confirmation window, click Unpause to unpause the virtual machine.
8.7. Accessing virtual machine consoles
OpenShift Virtualization provides different virtual machine consoles that you can use to accomplish different product tasks. You can access these consoles through the web console and by using CLI commands.
8.7.1. About virtual machine console sessions
You can connect to the VNC and serial consoles of a running virtual machine from the Console tab on the Virtual Machine Details page of the web console.
The VNC Console opens by default when you navigate to the Console tab. You can open a connection to the serial console by clicking the VNC Console drop-down list and selecting Serial Console.
Console sessions remain active in the background unless they are disconnected. To ensure that only one console session is open at a time, select the Disconnect before switching checkbox before switching consoles.
You can open the active console session in a detached window by clicking Open Console in New Window or by clicking Actions → Open Console.
Options for the VNC Console
- Send key combinations to the virtual machine by clicking Send Key.
Options for the Serial Console
- Manually disconnect the Serial Console session from the virtual machine by clicking Disconnect.
- Manually open a Serial Console session to the virtual machine by clicking Reconnect.
8.7.2. Connecting to the virtual machine with the web console
8.7.2.1. Connecting to the terminal
You can connect to a virtual machine by using the web console.
Procedure
- Ensure you are in the correct project. If not, click the Project list and select the appropriate project.
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
-
In the Details tab, click the
virt-launcher-<vm-name>
pod. - Click the Terminal tab. If the terminal is blank, select the terminal and press any key to initiate connection.
8.7.2.2. Connecting to the serial console
Connect to the Serial Console of a running virtual machine from the Console tab in the Virtual Machine Overview screen of the web console.
Procedure
- In the OpenShift Virtualization console, click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview page.
- Click Console. The VNC console opens by default.
- Click the VNC Console drop-down list and select Serial Console.
- Optional: Open the serial console in a separate window by clicking Open Console in New Window.
8.7.2.3. Connecting to the VNC console
Connect to the VNC console of a running virtual machine from the Console tab in the Virtual Machine Overview screen of the web console.
Procedure
- In the OpenShift Virtualization console, click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview page.
- Click the Console tab. The VNC console opens by default.
- Optional: Open the VNC console in a separate window by clicking Open Console in New Window.
8.7.2.4. Connecting to the RDP console
The desktop viewer console, which utilizes the Remote Desktop Protocol (RDP), provides a better console experience for connecting to Windows virtual machines.
To connect to a Windows virtual machine with RDP, download the console.rdp
file for the virtual machine from the Consoles tab in the Virtual Machine Details screen of the web console and supply it to your preferred RDP client.
Prerequisites
-
A running Windows virtual machine with the QEMU guest agent installed. The
qemu-guest-agent
is included in the VirtIO drivers. - A layer-2 NIC attached to the virtual machine.
- An RDP client installed on a machine on the same network as the Windows virtual machine.
Procedure
- In the OpenShift Virtualization console, click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a Windows virtual machine to open the Virtual Machine Overview screen.
- Click the Console tab.
- In the Console list, select Desktop Viewer.
- In the Network Interface list, select the layer-2 NIC.
- Click Launch Remote Desktop to download the console.rdp file.
- Open an RDP client and reference the console.rdp file. For example, using remmina:
$ remmina --connect /path/to/console.rdp
- Enter the Administrator user name and password to connect to the Windows virtual machine.
8.7.3. Accessing virtual machine consoles by using CLI commands
8.7.3.1. Accessing a virtual machine instance via SSH
You can use SSH to access a virtual machine (VM) after you expose port 22 on it.
The virtctl expose
command forwards a virtual machine instance (VMI) port to a node port and creates a service for enabled access. The following example creates the fedora-vm-ssh
service that forwards traffic from a specific port of cluster nodes to port 22 of the <fedora-vm>
virtual machine.
Prerequisites
- You must be in the same project as the VMI.
- The VMI you want to access must be connected to the default pod network by using the masquerade binding method.
- The VMI you want to access must be running.
- Install the OpenShift CLI (oc).
Procedure
Run the following command to create the fedora-vm-ssh service:
$ virtctl expose vm <fedora-vm> --port=22 --name=fedora-vm-ssh --type=NodePort 1
- 1
- <fedora-vm> is the name of the VM that you run the fedora-vm-ssh service on.
Check the service to find out which port the service acquired:
$ oc get svc
Example output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE fedora-vm-ssh NodePort 127.0.0.1 <none> 22:32551/TCP 6s
In this example, the service acquired the 32551 port.
Log in to the VMI via SSH. Use the ipAddress of any of the cluster nodes and the port that you found in the previous step:
$ ssh username@<node_IP_address> -p 32551
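If you are not sure which node IP address to use, you can list the addresses of all cluster nodes first. This is a minimal check; any INTERNAL-IP from the output works, because a NodePort service listens on every node:
$ oc get nodes -o wide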
8.7.3.2. Accessing a virtual machine via SSH with YAML configurations
You can enable an SSH connection to a virtual machine (VM) without the need to run the virtctl expose
command. When the YAML file for the VM and the YAML file for the service are configured and applied, the service forwards the SSH traffic to the VM.
The following examples show the configurations for the VM’s YAML file and the service YAML file.
Prerequisites
- Install the OpenShift CLI (oc).
- Create a namespace for the VM’s YAML file by using the oc create namespace command and specifying a name for the namespace.
Procedure
In the YAML file for the VM, add the label and a value for exposing the service for SSH connections. Enable the
masquerade
feature for the interface:Example
VirtualMachine
definition... apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: namespace: ssh-ns 1 name: vm-ssh spec: running: false template: metadata: labels: kubevirt.io/vm: vm-ssh special: vm-ssh 2 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} 3 name: testmasquerade 4 rng: {} machine: type: "" resources: requests: memory: 1024M networks: - name: testmasquerade pod: {} volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demo - name: cloudinitdisk cloudInitNoCloud: userData: | #!/bin/bash echo "fedora" | passwd fedora --stdin ...
- 1
- Name of the namespace created by the
oc create namespace
command. - 2
- Label used by the service to identify the virtual machine instances that are enabled for SSH traffic connections. The label can be any key:value pair that is added as a label to this YAML file and as a selector in the service YAML file.
- The interface type is
masquerade
. - 4
- The name of this interface is
testmasquerade
.
Create the VM:
$ oc create -f <path_for_the_VM_YAML_file>
Start the VM:
$ virtctl start vm-ssh
In the YAML file for the service, specify the service name, port number, and the target port.
Example
Service
definition... apiVersion: v1 kind: Service metadata: name: svc-ssh 1 namespace: ssh-ns 2 spec: ports: - targetPort: 22 3 protocol: TCP port: 27017 selector: special: vm-ssh 4 type: NodePort ...
Create the service:
$ oc create -f <path_for_the_service_YAML_file>
Verify that the VM is running:
$ oc get vmi
Example output
NAME AGE PHASE IP NODENAME vm-ssh 6s Running 10.244.196.152 node01
Check the service to find out which port the service acquired:
$ oc get svc
Example output
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc-ssh NodePort 10.106.236.208 <none> 27017:30093/TCP 22s
In this example, the service acquired the port number 30093.
Run the following command to obtain the IP address for the node:
$ oc get node <node_name> -o wide
Example output
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP node01 Ready worker 6d22h v1.23.0 192.168.55.101 <none>
Log in to the VM via SSH by specifying the IP address of the node where the VM is running and the port number. Use the port number displayed by the
oc get svc
command and the IP address of the node displayed by theoc get node
command. The following example shows thessh
command with the username, node’s IP address, and the port number:$ ssh fedora@192.168.55.101 -p 30093
8.7.3.3. Accessing the serial console of a virtual machine instance
The virtctl console
command opens a serial console to the specified virtual machine instance.
Prerequisites
- The virt-viewer package must be installed.
- The virtual machine instance you want to access must be running.
Procedure
Connect to the serial console with
virtctl
:$ virtctl console <VMI>
8.7.3.4. Accessing the graphical console of a virtual machine instance with VNC
The virtctl
client utility can use the remote-viewer
function to open a graphical console to a running virtual machine instance. This capability is included in the virt-viewer
package.
Prerequisites
- The virt-viewer package must be installed.
- The virtual machine instance you want to access must be running.
If you use virtctl
via SSH on a remote machine, you must forward the X session to your machine.
Procedure
Connect to the graphical interface with the
virtctl
utility:$ virtctl vnc <VMI>
If the command fails, use the -v flag to collect troubleshooting information:
$ virtctl vnc <VMI> -v 4
8.7.3.5. Connecting to a Windows virtual machine with an RDP console
The Remote Desktop Protocol (RDP) provides a better console experience for connecting to Windows virtual machines.
To connect to a Windows virtual machine with RDP, specify the IP address of the attached L2 NIC to your RDP client.
Prerequisites
- A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent is included in the VirtIO drivers.
- A layer 2 NIC attached to the virtual machine.
- An RDP client installed on a machine on the same network as the Windows virtual machine.
Procedure
Log in to the OpenShift Virtualization cluster through the oc CLI tool as a user with an access token:
$ oc login -u <user> https://<cluster.example.com>:8443
Use oc describe vmi to display the configuration of the running Windows virtual machine:
$ oc describe vmi <windows-vmi-name>
Example output
... spec: networks: - name: default pod: {} - multus: networkName: cnv-bridge name: bridge-net ... status: interfaces: - interfaceName: eth0 ipAddress: 198.51.100.0/24 ipAddresses: 198.51.100.0/24 mac: a0:36:9f:0f:b1:70 name: default - interfaceName: eth1 ipAddress: 192.0.2.0/24 ipAddresses: 192.0.2.0/24 2001:db8::/32 mac: 00:17:a4:77:77:25 name: bridge-net ...
- Identify and copy the IP address of the layer 2 network interface. This is 192.0.2.0 in the above example, or 2001:db8:: if you prefer IPv6.
- Open an RDP client and use the IP address copied in the previous step for the connection.
- Enter the Administrator user name and password to connect to the Windows virtual machine.
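As an alternative to reading the full oc describe vmi output, you can extract the layer-2 interface address with a JSONPath query. This is a minimal sketch that assumes the secondary interface is named bridge-net, as in the example above:
$ oc get vmi <windows-vmi-name> -o jsonpath='{.status.interfaces[?(@.name=="bridge-net")].ipAddress}'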
8.8. Automating Windows installation with sysprep
You can use Microsoft DVD images and sysprep
to automate the installation, setup, and software provisioning of Windows virtual machines.
8.8.1. Using a Windows DVD to create a VM disk image
Microsoft does not provide disk images for download, but you can create a disk image using a Windows DVD. This disk image can then be used to create virtual machines.
Procedure
- In the OpenShift Virtualization web console, click Storage → PersistentVolumeClaims → Create PersistentVolumeClaim With Data upload form.
- Select the intended project.
- Set the Persistent Volume Claim Name.
- Upload the VM disk image from the Windows DVD. The image is now available as a boot source to create a new Windows VM.
8.8.2. Using a disk image to install Windows
After creating a disk image using a Windows DVD, you can then use that disk image to install Windows on your VM.
Procedure
- Use the OpenShift Virtualization web console VM wizard to create a new Windows VM, using the template available for your version of Windows.
- Select the DVD image as the boot source.
- Uncheck Clone available operating system source to this Virtual Machine.
- Clear the Start this virtual machine after creation checkbox.
- Click Customize virtual machine → Advanced.
- Under Sysprep, specify the autounattend.xml answer file settings by following Microsoft guidelines.
- In the YAML, replace running: false with runStrategy: RerunOnFailure, and save. The VM will start automatically. The sysprep disk containing the autounattend.xml answer file is now attached to the VM.
8.8.3. Generalizing a Windows VM using sysprep
Generalizing an image removes all system-specific configuration data from the image so that the image can be deployed on any virtual machine.
Before generalizing the VM, you must ensure the sysprep
tool cannot detect an answer file after the unattended Windows installation.
Procedure
Remove the sysprep disk:
- In the web console, select Virtualization → Virtual Machines, and select the relevant VM.
- Click Disks.
- Click the Options menu for the sysprep disk, then click Delete.
- Click Detach in the Detach sysprep disk dialog.
- Rename C:\Windows\Panther\unattend.xml to avoid detection by the sysprep tool.
Start the sysprep program by running the following command:
%WINDIR%\System32\Sysprep\sysprep.exe /generalize /shutdown /oobe /mode:vm
-
After the
sysprep
tool completes, the Windows VM shuts down. The disk image of the VM is now available to use as an installation image for Windows VMs.
You can now specialize the VM.
8.8.4. Specializing a Windows VM
Specializing a virtual machine configures the computer-specific information from the image onto the VM.
You must generalize the root disk before specializing the virtual machine.
Procedure
- Use the OpenShift Virtualization web console VM wizard to create a new Windows VM.
-
When selecting the
Boot Source
, chooseClone existing PVC
, and clone the PVC from the initial VM root disk. - Click Customize virtual machine → Advanced
-
Under Sysprep, specify the
unattend.xml
answer file settings following Microsoft guidelines. -
Add filler information to the
autounattend.xml
answer file settings. -
Start the VM. On first boot, Windows will use the
unattend.xml
answer file to specialize the VM. The VM is now ready to use.
8.8.5. Additional resources
8.9. Triggering virtual machine failover by resolving a failed node
If a node fails and machine health checks are not deployed on your cluster, virtual machines (VMs) with RunStrategy: Always
configured are not automatically relocated to healthy nodes. To trigger VM failover, you must manually delete the Node
object.
If you installed your cluster by using installer-provisioned infrastructure and you properly configured machine health checks:
- Failed nodes are automatically recycled.
- Virtual machines with RunStrategy set to Always or RerunOnFailure are automatically scheduled on healthy nodes.
8.9.1. Prerequisites
- A node where a virtual machine was running has the NotReady condition.
- The virtual machine that was running on the failed node has RunStrategy set to Always.
- You have installed the OpenShift CLI (oc).
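You can confirm the node condition before you delete the Node object. This is a minimal check; the failed node reports NotReady in the STATUS column:
$ oc get nodes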
8.9.2. Deleting nodes from a bare metal cluster
When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods.
Procedure
Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps:
Mark the node as unschedulable:
$ oc adm cordon <node_name>
Drain all pods on the node:
$ oc adm drain <node_name> --force=true
This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed.
Delete the node from the cluster:
$ oc delete node <node_name>
Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node.
- If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster.
8.9.3. Verifying virtual machine failover
After all resources are terminated on the unhealthy node, a new virtual machine instance (VMI) is automatically created on a healthy node for each relocated VM. To confirm that the VMI was created, view all VMIs by using the oc
CLI.
8.9.3.1. Listing all virtual machine instances using the CLI
You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc
command-line interface (CLI).
Procedure
List all VMIs by running the following command:
$ oc get vmis
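To confirm which node a specific VMI landed on after failover, you can query its status directly. This is a minimal sketch; it assumes the node name is reported in status.nodeName, which you can also confirm by inspecting the full output of oc get vmi <vm_name> -o yaml:
$ oc get vmi <vm_name> -o jsonpath='{.status.nodeName}'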
8.10. Installing the QEMU guest agent on virtual machines
The QEMU guest agent is a daemon that runs on the virtual machine and passes information to the host about the virtual machine, users, file systems, and secondary networks.
8.10.1. Installing QEMU guest agent on a Linux virtual machine
The qemu-guest-agent
is widely available and installed by default on Red Hat virtual machines. Install the agent and start the service.
To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that AgentConnected
is listed in the VM spec.
To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.
The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM’s file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.
Procedure
- Access the virtual machine command line through one of the consoles or by SSH.
Install the QEMU guest agent on the virtual machine:
$ yum install -y qemu-guest-agent
Ensure the service is persistent and start it:
$ systemctl enable --now qemu-guest-agent
You can also install and start the QEMU guest agent by using the custom script field in the cloud-init section of the wizard when creating either virtual machines or virtual machine templates in the web console.
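To verify that the agent is connected after installation, you can inspect the virtual machine instance from the CLI. This is a minimal check; it assumes that the AgentConnected condition described above appears in the instance output:
$ oc get vmi <vm_name> -o yaml | grep -i agentconnected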
8.10.2. Installing QEMU guest agent on a Windows virtual machine
For Windows virtual machines, the QEMU guest agent is included in the VirtIO drivers. Install the drivers on an existing or new Windows system.
To check if your virtual machine (VM) has the QEMU guest agent installed and running, verify that AgentConnected
is listed in the VM spec.
To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent.
The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM’s file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI.
8.10.2.1. Installing VirtIO drivers on an existing Windows virtual machine
Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine.
This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps.
Procedure
- Start the virtual machine and connect to a graphical console.
- Log in to a Windows user session.
Open Device Manager and expand Other devices to list any Unknown device.
- Open the Device Properties to identify the unknown device. Right-click the device and select Properties.
- Click the Details tab and select Hardware Ids in the Property list.
- Compare the Value for the Hardware Ids with the supported VirtIO drivers.
- Right-click the device and select Update Driver Software.
- Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
- Click Next to install the driver.
- Repeat this process for all the necessary VirtIO drivers.
- After the driver installs, click Close to close the window.
- Reboot the virtual machine to complete the driver installation.
8.10.2.2. Installing VirtIO drivers during Windows installation
Install the VirtIO drivers from the attached SATA CD driver during Windows installation.
This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing.
Procedure
- Start the virtual machine and connect to a graphical console.
- Begin the Windows installation process.
- Select the Advanced installation.
- The storage destination will not be recognized until the driver is loaded. Click Load driver.
- The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
- Repeat the previous two steps for all required drivers.
- Complete the Windows installation.
8.11. Viewing the QEMU guest agent information for virtual machines
When the QEMU guest agent runs on the virtual machine, you can use the web console to view information about the virtual machine, users, file systems, and secondary networks.
8.11.1. Prerequisites
- Install the QEMU guest agent on the virtual machine.
8.11.2. About the QEMU guest agent information in the web console
When the QEMU guest agent is installed, the Details pane within the Virtual Machine Overview tab and the Details tab display information about the hostname, operating system, time zone, and logged in users.
The Virtual Machine Overview shows information about the guest operating system installed on the virtual machine. The Details tab displays a table with information for logged in users. The Disks tab displays a table with information for file systems.
If the QEMU guest agent is not installed, the Virtual Machine Overview tab and the Details tab display information about the operating system that was specified when the virtual machine was created.
8.11.3. Viewing the QEMU guest agent information in the web console
You can use the web console to view information for virtual machines that is passed by the QEMU guest agent to the host.
Procedure
- Click Workloads → Virtual Machines from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine name to open the Virtual Machine Overview screen and view the Details pane.
- Click Logged in users to view the Details tab that shows information for users.
- Click the Disks tab to view information about the file systems.
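If you prefer the CLI, the same guest agent data is reported on the VirtualMachineInstance object. This is a minimal sketch; it assumes the guest agent populates the VMI status (for example, the guestOSInfo block), which you can confirm in the full YAML output:
$ oc get vmi <vm_name> -o yaml
Depending on your virtctl version, virtctl guestosinfo <vm_name>, virtctl userlist <vm_name>, and virtctl fslist <vm_name> might also print this information directly.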
8.12. Managing config maps, secrets, and service accounts in virtual machines
You can use secrets, config maps, and service accounts to pass configuration data to virtual machines. For example, you can:
- Give a virtual machine access to a service that requires credentials by adding a secret to the virtual machine.
- Store non-confidential configuration data in a config map so that a pod or another object can consume the data.
- Allow a component to access the API server by associating a service account with that component.
OpenShift Virtualization exposes secrets, config maps, and service accounts as virtual machine disks so that you can use them across platforms without additional overhead.
8.12.1. Adding a secret, config map, or service account to a virtual machine
Add a secret, config map, or service account to a virtual machine by using the OpenShift Container Platform web console.
Prerequisites
- The secret, config map, or service account that you want to add must exist in the same namespace as the target virtual machine.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the Environment tab.
- Click Select a resource and select a secret, config map, or service account from the list. A six character serial number is automatically generated for the selected resource.
- Click Save.
- Optional. Add another object by clicking Add Config Map, Secret or Service Account.
- You can reset the form to the last saved state by clicking Reload.
- The Environment resources are added to the virtual machine as disks. You can mount the secret, config map, or service account as you would mount any other disk.
- If the virtual machine is running, changes will not take effect until you restart the virtual machine. The newly added resources are marked as pending changes for both the Environment and Disks tab in the Pending Changes banner at the top of the page.
Verification
- From the Virtual Machine Overview page, click the Disks tab.
- Check to ensure that the secret, config map, or service account is included in the list of disks.
Optional. Choose the appropriate method to apply your changes:
- If the virtual machine is running, restart the virtual machine by clicking Actions → Restart Virtual Machine.
- If the virtual machine is stopped, start the virtual machine by clicking Actions → Start Virtual Machine.
You can now mount the secret, config map, or service account as you would mount any other disk.
8.12.2. Removing a secret, config map, or service account from a virtual machine
Remove a secret, config map, or service account from a virtual machine by using the OpenShift Container Platform web console.
Prerequisites
- You must have at least one secret, config map, or service account that is attached to a virtual machine.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the Environment tab.
-
Find the item that you want to delete in the list, and click Remove
on the right side of the item.
- Click Save.
You can reset the form to the last saved state by clicking Reload.
Verification
- From the Virtual Machine Overview page, click the Disks tab.
- Check to ensure that the secret, config map, or service account that you removed is no longer included in the list of disks.
8.12.3. Additional resources
8.13. Installing VirtIO driver on an existing Windows virtual machine
8.13.1. About VirtIO drivers
VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in OpenShift Virtualization. The supported drivers are available in the container-native-virtualization/virtio-win
container disk of the Red Hat Ecosystem Catalog.
The container-native-virtualization/virtio-win
container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install the VirtIO drivers during Windows installation on the virtual machine or add them to an existing Windows installation.
After the drivers are installed, the container-native-virtualization/virtio-win
container disk can be removed from the virtual machine.
See also: Installing Virtio drivers on a new Windows virtual machine.
8.13.2. Supported VirtIO drivers for Microsoft Windows virtual machines
Table 8.3. Supported drivers
Driver name | Hardware ID | Description |
---|---|---|
viostor |
VEN_1AF4&DEV_1001 | The block driver. Sometimes displays as an SCSI Controller in the Other devices group. |
viorng |
VEN_1AF4&DEV_1005 | The entropy source driver. Sometimes displays as a PCI Device in the Other devices group. |
NetKVM |
VEN_1AF4&DEV_1000 | The network driver. Sometimes displays as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. |
8.13.3. Adding VirtIO drivers container disk to a virtual machine
OpenShift Virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Ecosystem Catalog. To install these drivers to a Windows virtual machine, attach the container-native-virtualization/virtio-win
container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file.
Prerequisites
-
Download the
container-native-virtualization/virtio-win
container disk from the Red Hat Ecosystem Catalog. This is not mandatory, because the container disk will be downloaded from the Red Hat registry if it is not already present in the cluster, but it can reduce installation time.
Procedure
Add the
container-native-virtualization/virtio-win
container disk as a cdrom
disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster.spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk
- 1
- OpenShift Virtualization boots virtual machine disks in the order defined in the
VirtualMachine
configuration file. You can either define other disks for the virtual machine before thecontainer-native-virtualization/virtio-win
container disk or use the optionalbootOrder
parameter to ensure the virtual machine boots from the correct disk. If you specify thebootOrder
for a disk, it must be specified for all disks in the configuration.
The disk is available once the virtual machine has started:
- If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect.
- If the virtual machine is not running, use virtctl start <vm>.
After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive.
8.13.4. Installing VirtIO drivers on an existing Windows virtual machine
Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine.
This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps.
Procedure
- Start the virtual machine and connect to a graphical console.
- Log in to a Windows user session.
Open Device Manager and expand Other devices to list any Unknown device.
- Open the Device Properties to identify the unknown device. Right-click the device and select Properties.
- Click the Details tab and select Hardware Ids in the Property list.
- Compare the Value for the Hardware Ids with the supported VirtIO drivers.
- Right-click the device and select Update Driver Software.
- Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
- Click Next to install the driver.
- Repeat this process for all the necessary VirtIO drivers.
- After the driver installs, click Close to close the window.
- Reboot the virtual machine to complete the driver installation.
8.13.5. Removing the VirtIO container disk from a virtual machine
After installing all required VirtIO drivers to the virtual machine, the container-native-virtualization/virtio-win
container disk no longer needs to be attached to the virtual machine. Remove the container-native-virtualization/virtio-win
container disk from the virtual machine configuration file.
Procedure
Edit the configuration file and remove the disk and the volume:
$ oc edit vm <vm-name>
spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk
- Reboot the virtual machine for the changes to take effect.
8.14. Installing VirtIO driver on a new Windows virtual machine
8.14.1. Prerequisites
- Windows installation media that is accessible by the virtual machine, for example, an ISO that is imported into a data volume and attached to the virtual machine.
8.14.2. About VirtIO drivers
VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in OpenShift Virtualization. The supported drivers are available in the container-native-virtualization/virtio-win
container disk of the Red Hat Ecosystem Catalog.
The container-native-virtualization/virtio-win
container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install the VirtIO drivers during Windows installation on the virtual machine or add them to an existing Windows installation.
After the drivers are installed, the container-native-virtualization/virtio-win
container disk can be removed from the virtual machine.
See also: Installing VirtIO driver on an existing Windows virtual machine.
8.14.3. Supported VirtIO drivers for Microsoft Windows virtual machines
Table 8.4. Supported drivers
Driver name | Hardware ID | Description |
---|---|---|
viostor |
VEN_1AF4&DEV_1001 | The block driver. Sometimes displays as an SCSI Controller in the Other devices group. |
viorng |
VEN_1AF4&DEV_1005 | The entropy source driver. Sometimes displays as a PCI Device in the Other devices group. |
NetKVM |
VEN_1AF4&DEV_1000 | The network driver. Sometimes displays as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. |
8.14.4. Adding VirtIO drivers container disk to a virtual machine
OpenShift Virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Ecosystem Catalog. To install these drivers to a Windows virtual machine, attach the container-native-virtualization/virtio-win
container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file.
Prerequisites
-
Download the
container-native-virtualization/virtio-win
container disk from the Red Hat Ecosystem Catalog. This is not mandatory, because the container disk will be downloaded from the Red Hat registry if it is not already present in the cluster, but it can reduce installation time.
Procedure
Add the
container-native-virtualization/virtio-win
container disk as a cdrom
disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster.spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk
- 1
- OpenShift Virtualization boots virtual machine disks in the order defined in the
VirtualMachine
configuration file. You can either define other disks for the virtual machine before thecontainer-native-virtualization/virtio-win
container disk or use the optionalbootOrder
parameter to ensure the virtual machine boots from the correct disk. If you specify thebootOrder
for a disk, it must be specified for all disks in the configuration.
The disk is available once the virtual machine has started:
- If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect.
- If the virtual machine is not running, use virtctl start <vm>.
After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive.
8.14.5. Installing VirtIO drivers during Windows installation
Install the VirtIO drivers from the attached SATA CD driver during Windows installation.
This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing.
Procedure
- Start the virtual machine and connect to a graphical console.
- Begin the Windows installation process.
- Select the Advanced installation.
- The storage destination will not be recognized until the driver is loaded. Click Load driver.
- The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
- Repeat the previous two steps for all required drivers.
- Complete the Windows installation.
8.14.6. Removing the VirtIO container disk from a virtual machine
After installing all required VirtIO drivers to the virtual machine, the container-native-virtualization/virtio-win
container disk no longer needs to be attached to the virtual machine. Remove the container-native-virtualization/virtio-win
container disk from the virtual machine configuration file.
Procedure
Edit the configuration file and remove the disk and the volume:
$ oc edit vm <vm-name>
spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk
- Reboot the virtual machine for the changes to take effect.
8.15. Advanced virtual machine management
8.15.1. Specifying nodes for virtual machines
You can place virtual machines (VMs) on specific nodes by using node placement rules.
8.15.1.1. About node placement for virtual machines
To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if:
- You have several VMs. To ensure fault tolerance, you want them to run on different nodes.
- You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node.
- Your VMs require specific hardware features that are not present on all available nodes.
- You have a pod that adds capabilities to a node, and you want to place a VM on that node so that it can use those capabilities.
Virtual machine placement relies on any existing node placement rules for workloads. If workloads are excluded from specific nodes on the component level, virtual machines cannot be placed on those nodes.
You can use the following rule types in the spec
field of a VirtualMachine
manifest:
nodeSelector
- Allows virtual machines to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
affinity
Enables you to use more expressive syntax to set rules that match nodes with virtual machines. For example, you can specify that a rule is a preference, rather than a hard requirement, so that virtual machines are still scheduled if the rule is not satisfied. Pod affinity, pod anti-affinity, and node affinity are supported for virtual machine placement. Pod affinity works for virtual machines because the VirtualMachine workload type is based on the Pod object.
Note: Affinity rules only apply during scheduling. OpenShift Container Platform does not reschedule running workloads if the constraints are no longer met.
tolerations
- Allows virtual machines to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts virtual machines that tolerate the taint.
8.15.1.2. Node placement examples
The following example YAML file snippets use nodePlacement
, affinity
, and tolerations
fields to customize node placement for virtual machines.
8.15.1.2.1. Example: VM node placement with nodeSelector
In this example, the virtual machine requires a node that has metadata containing both example-key-1 = example-value-1
and example-key-2 = example-value-2
labels.
If there are no nodes that fit this description, the virtual machine is not scheduled.
Example VM manifest
metadata:
  name: example-vm-node-selector
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      nodeSelector:
        example-key-1: example-value-1
        example-key-2: example-value-2
...
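For the nodeSelector rule to match, the target node must carry both labels. If they are not already present, you can add them with a command like the following; the node name is a placeholder:
$ oc label node <node_name> example-key-1=example-value-1 example-key-2=example-value-2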
8.15.1.2.2. Example: VM node placement with pod affinity and pod anti-affinity
In this example, the VM must be scheduled on a node that has a running pod with the label example-key-1 = example-value-1
. If there is no such pod running on any node, the VM is not scheduled.
If possible, the VM is not scheduled on a node that has any pod with the label example-key-2 = example-value-2
. However, if all candidate nodes have a pod with this label, the scheduler ignores this constraint.
Example VM manifest
metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname ...
- 1
- If you use the
requiredDuringSchedulingIgnoredDuringExecution
rule type, the VM is not scheduled if the constraint is not met. - 2
- If you use the
preferredDuringSchedulingIgnoredDuringExecution
rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
8.15.1.2.3. Example: VM node placement with node affinity
In this example, the VM must be scheduled on a node that has the label example.io/example-key = example-value-1
or the label example.io/example-key = example-value-2
. The constraint is met if only one of the labels is present on the node. If neither label is present, the VM is not scheduled.
If possible, the scheduler avoids nodes that have the label example-node-label-key = example-node-label-value
. However, if all candidate nodes have this label, the scheduler ignores this constraint.
Example VM manifest
metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value ...
- 1
- If you use the
requiredDuringSchedulingIgnoredDuringExecution
rule type, the VM is not scheduled if the constraint is not met. - 2
- If you use the
preferredDuringSchedulingIgnoredDuringExecution
rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
8.15.1.2.4. Example: VM node placement with tolerations
In this example, nodes that are reserved for virtual machines are already labeled with the key=virtualization:NoSchedule
taint. Because this virtual machine has matching tolerations
, it can schedule onto the tainted nodes.
A virtual machine that tolerates a taint is not required to schedule onto a node with that taint.
Example VM manifest
metadata:
  name: example-vm-tolerations
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "virtualization"
    effect: "NoSchedule"
...
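For this example to be meaningful, the reserved nodes must already carry the matching taint. If you need to apply it yourself, the following command is a minimal sketch that uses the key and value from the manifest above:
$ oc adm taint nodes <node_name> key=virtualization:NoSchedule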
8.15.1.3. Additional resources
8.15.2. Configuring certificate rotation
Configure certificate rotation parameters to replace existing certificates.
8.15.2.1. Configuring certificate rotation
You can do this during OpenShift Virtualization installation in the web console or after installation in the HyperConverged
custom resource (CR).
Procedure
Open the
HyperConverged
CR by running the following command:$ oc edit hco -n openshift-cnv kubevirt-hyperconverged
Edit the spec.certConfig fields as shown in the following example. To avoid overloading the system, ensure that all values are greater than or equal to 10 minutes. Express all values as strings that comply with the golang ParseDuration format.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  certConfig:
    ca:
      duration: 48h0m0s
      renewBefore: 24h0m0s 1
    server:
      duration: 24h0m0s 2
      renewBefore: 12h0m0s 3
- Apply the YAML file to your cluster.
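To confirm the values that are currently in effect, you can read the certConfig stanza back from the HyperConverged CR. This is a minimal check that uses the same resource name as above:
$ oc get hco kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.certConfig}'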
8.15.2.2. Troubleshooting certificate rotation parameters
Deleting one or more certConfig
values causes them to revert to the default values, unless the default values conflict with one of the following conditions:
-
The value of
ca.renewBefore
must be less than or equal to the value ofca.duration
. -
The value of
server.duration
must be less than or equal to the value ofca.duration
. -
The value of
server.renewBefore
must be less than or equal to the value ofserver.duration
.
If the default values conflict with these conditions, you will receive an error.
If you remove the server.duration
value in the following example, the default value of 24h0m0s
is greater than the value of ca.duration
, conflicting with the specified conditions.
Example
certConfig:
  ca:
    duration: 4h0m0s
    renewBefore: 1h0m0s
  server:
    duration: 4h0m0s
    renewBefore: 4h0m0s
This results in the following error message:
error: hyperconvergeds.hco.kubevirt.io "kubevirt-hyperconverged" could not be patched: admission webhook "validate-hco.kubevirt.io" denied the request: spec.certConfig: ca.duration is smaller than server.duration
The error message only mentions the first conflict. Review all certConfig values before you proceed.
8.15.3. Automating management tasks
You can automate OpenShift Virtualization management tasks by using Red Hat Ansible Automation Platform. Learn the basics by using an Ansible Playbook to create a new virtual machine.
8.15.3.1. About Red Hat Ansible Automation
Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. Ansible includes support for OpenShift Virtualization, and Ansible modules enable you to automate cluster management tasks such as template, persistent volume claim, and virtual machine operations.
Ansible provides a way to automate OpenShift Virtualization management, which you can also accomplish by using the oc
CLI tool or APIs. Ansible is unique because it allows you to integrate KubeVirt modules with other Ansible modules.
8.15.3.2. Automating virtual machine creation
You can use the kubevirt_vm
Ansible Playbook to create virtual machines in your OpenShift Container Platform cluster using Red Hat Ansible Automation Platform.
Prerequisites
- Red Hat Ansible Engine version 2.8 or newer
Procedure
Edit an Ansible Playbook YAML file so that it includes the
kubevirt_vm
task:kubevirt_vm: namespace: name: cpu_cores: memory: disks: - name: volume: containerDisk: image: disk: bus:
Note: This snippet only includes the kubevirt_vm portion of the playbook.
Edit the values to reflect the virtual machine you want to create, including the namespace, the number of cpu_cores, the memory, and the disks. For example:
kubevirt_vm:
  namespace: default
  name: vm1
  cpu_cores: 1
  memory: 64Mi
  disks:
    - name: containerdisk
      volume:
        containerDisk:
          image: kubevirt/cirros-container-disk-demo:latest
      disk:
        bus: virtio
If you want the virtual machine to boot immediately after creation, add
state: running
to the YAML file. For example:kubevirt_vm: namespace: default name: vm1 state: running 1 cpu_cores: 1
- 1
- Changing this value to
state: absent
deletes the virtual machine, if it already exists.
Run the
ansible-playbook
command, using your playbook’s file name as the only argument:$ ansible-playbook create-vm.yaml
Review the output to determine if the play was successful:
Example output
(...) TASK [Create my first VM] ************************************************************************ changed: [localhost] PLAY RECAP ******************************************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
If you did not include state: running in your playbook file and you want to boot the VM now, edit the file so that it includes state: running and run the playbook again:
$ ansible-playbook create-vm.yaml
To verify that the virtual machine was created, try to access the VM console.
8.15.3.3. Example: Ansible Playbook for creating virtual machines
You can use the kubevirt_vm
Ansible Playbook to automate virtual machine creation.
The following YAML file is an example of the kubevirt_vm
playbook. It includes sample values that you must replace with your own information if you run the playbook.
--- - name: Ansible Playbook 1 hosts: localhost connection: local tasks: - name: Create my first VM kubevirt_vm: namespace: default name: vm1 cpu_cores: 1 memory: 64Mi disks: - name: containerdisk volume: containerDisk: image: kubevirt/cirros-container-disk-demo:latest disk: bus: virtio
Additional information
8.15.4. Using EFI mode for virtual machines
You can boot a virtual machine (VM) in Extensible Firmware Interface (EFI) mode.
8.15.4.1. About EFI mode for virtual machines
Extensible Firmware Interface (EFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. EFI supports more modern features and customization options than BIOS, enabling faster boot times.
It stores all the information about initialization and startup in a file with a .efi
extension, which is stored on a special partition called EFI System Partition (ESP). The ESP also contains the boot loader programs for the operating system that is installed on the computer.
8.15.4.2. Booting virtual machines in EFI mode
You can configure a virtual machine to boot in EFI mode by editing the VM manifest.
Prerequisites
-
Install the OpenShift CLI (
oc
).
Procedure
Create a YAML file that defines a VM object. Use the firmware stanza of the example YAML file:
Booting in EFI mode with secure boot active
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    special: vm-secureboot
  name: vm-secureboot
spec:
  template:
    metadata:
      labels:
        special: vm-secureboot
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
        features:
          acpi: {}
          smm:
            enabled: true 1
        firmware:
          bootloader:
            efi:
              secureBoot: true 2
#...
- 1
- OpenShift Virtualization requires System Management Mode (
SMM
) to be enabled for Secure Boot in EFI mode to occur. - 2
- OpenShift Virtualization supports a VM with or without Secure Boot when using EFI mode. If Secure Boot is enabled, then EFI mode is required. However, EFI mode can be enabled without using Secure Boot.
Apply the manifest to your cluster by running the following command:
$ oc create -f <file_name>.yaml
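To confirm that the firmware settings were applied, you can read them back from the created VM. This is a minimal sketch based on the manifest above:
$ oc get vm vm-secureboot -o jsonpath='{.spec.template.spec.domain.firmware}'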
8.15.5. Configuring PXE booting for virtual machines
PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host.
8.15.5.1. Prerequisites
- A Linux bridge must be connected.
- The PXE server must be connected to the same VLAN as the bridge.
8.15.5.2. OpenShift Virtualization networking glossary
OpenShift Virtualization provides advanced networking functionality by using custom resources and plug-ins.
The following terms are used throughout OpenShift Virtualization documentation:
- Container Network Interface (CNI)
- a Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plug-ins to build upon the basic Kubernetes networking functionality.
- Multus
- a "meta" CNI plug-in that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
- Custom resource definition (CRD)
- a Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
- Network attachment definition (NAD)
- a CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
- Node network configuration policy (NNCP)
-
a description of the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a
NodeNetworkConfigurationPolicy
manifest to the cluster. - Preboot eXecution Environment (PXE)
- an interface that enables an administrator to boot a client machine from a server over the network. Network booting allows you to remotely load operating systems and other software onto the client.
8.15.5.3. PXE booting with a specified MAC address
As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition
object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server.
Prerequisites
- A Linux bridge must be connected.
- The PXE server must be connected to the same VLAN as the bridge.
Procedure
Configure a PXE network on the cluster:
Create the network attachment definition file for PXE network
pxe-net-conf
:apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf spec: config: '{ "cniVersion": "0.3.1", "name": "pxe-net-conf", "plugins": [ { "type": "cnv-bridge", "bridge": "br1", "vlan": 1 1 }, { "type": "cnv-tuning" 2 } ] }'
Note: The virtual machine instance will be attached to the bridge
br1
through an access port with the requested VLAN.
Create the network attachment definition by using the file you created in the previous step:
$ oc create -f pxe-net-conf.yaml
Edit the virtual machine instance configuration file to include the details of the interface and network.
Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically.
Ensure that
bootOrder
is set to1
so that the interface boots first. In this example, the interface is connected to a network called<pxe-net>
:interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1
Note: Boot order is global for interfaces and disks.
Assign a boot device number to the disk to ensure proper booting after operating system provisioning.
Set the disk
bootOrder
value to2
:devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2
Specify that the network is connected to the previously created network attachment definition. In this scenario,
<pxe-net>
is connected to the network attachment definition called<pxe-net-conf>
:networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf
Create the virtual machine instance:
$ oc create -f vmi-pxe-boot.yaml
Example output
virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created
Wait for the virtual machine instance to run:
$ oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running
View the virtual machine instance using VNC:
$ virtctl vnc vmi-pxe-boot
- Watch the boot screen to verify that the PXE boot is successful.
Log in to the virtual machine instance:
$ virtctl console vmi-pxe-boot
Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, we used eth1 for the PXE boot, without an IP address. The other interface, eth0, got an IP address from OpenShift Container Platform.
$ ip addr
Example output
... 3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff
8.15.5.4. Template: Virtual machine configuration file for PXE booting
apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: special: vm-pxe-boot name: vm-pxe-boot spec: template: metadata: labels: special: vm-pxe-boot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2 - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1 machine: type: "" resources: requests: memory: 1024M networks: - name: default pod: {} - multus: networkName: pxe-net-conf name: pxe-net terminationGracePeriodSeconds: 180 volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demo - cloudInitNoCloud: userData: | #!/bin/bash echo "fedora" | passwd fedora --stdin name: cloudinitdisk status: {}
8.15.6. Managing guest memory
If you want to adjust guest memory settings to suit a specific use case, you can do so by editing the guest’s YAML configuration file. OpenShift Virtualization allows you to configure guest memory overcommitment and disable guest memory overhead accounting.
The following procedures increase the chance that virtual machine processes will be killed due to memory pressure. Proceed only if you understand the risks.
8.15.6.1. Configuring guest memory overcommitment
If your virtual workload requires more memory than available, you can use memory overcommitment to allocate all or most of the host’s memory to your virtual machine instances (VMIs). Enabling memory overcommitment means that you can maximize resources that are normally reserved for the host.
For example, if the host has 32 GB RAM, you can use memory overcommitment to fit 8 virtual machines (VMs) with 4 GB RAM each. This allocation works under the assumption that the virtual machines will not use all of their memory at the same time.
Memory overcommitment increases the potential for virtual machine processes to be killed due to memory pressure (OOM killed).
The potential for a VM to be OOM killed varies based on your specific configuration, node memory, available swap space, virtual machine memory consumption, the use of kernel same-page merging (KSM), and other factors.
Procedure
To explicitly tell the virtual machine instance that it has more memory available than was requested from the cluster, edit the virtual machine configuration file and set spec.domain.memory.guest to a higher value than spec.domain.resources.requests.memory. This process is called memory overcommitment.
In this example, 1024M is requested from the cluster, but the virtual machine instance is told that it has 2048M available. As long as there is enough free memory available on the node, the virtual machine instance will consume up to 2048M.
kind: VirtualMachine
spec:
  template:
    domain:
      resources:
        requests:
          memory: 1024M
      memory:
        guest: 2048M
Note: The same eviction rules as those for pods apply to the virtual machine instance if the node is under memory pressure.
Create the virtual machine:
$ oc create -f <file_name>.yaml
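After the guest boots, you can verify from inside it that the overcommitted amount is visible. This is a quick check that assumes a Linux guest; the reported total should be close to the 2048M guest value rather than the 1024M request:
$ free -m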
8.15.6.2. Disabling guest memory overhead accounting
A small amount of memory is requested by each virtual machine instance in addition to the amount that you request. This additional memory is used for the infrastructure that wraps each VirtualMachineInstance
process.
Though it is not usually advisable, it is possible to increase the virtual machine instance density on the node by disabling guest memory overhead accounting.
Disabling guest memory overhead accounting increases the potential for virtual machine processes to be killed due to memory pressure (OOM killed).
The potential for a VM to be OOM killed varies based on your specific configuration, node memory, available swap space, virtual machine memory consumption, the use of kernel same-page merging (KSM), and other factors.
Procedure
To disable guest memory overhead accounting, edit the YAML configuration file and set the
overcommitGuestOverhead
value to true. This parameter is disabled by default.
kind: VirtualMachine spec: template: domain: resources: overcommitGuestOverhead: true requests: memory: 1024M
NoteIf
overcommitGuestOverhead
is enabled, it adds the guest overhead to memory limits, if present.Create the virtual machine:
$ oc create -f <file_name>.yaml
8.15.7. Using huge pages with virtual machines
You can use huge pages as backing memory for virtual machines in your cluster.
8.15.7.1. Prerequisites
- Nodes must have pre-allocated huge pages configured.
8.15.7.2. What huge pages do
Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 262,144 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size.
A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP.
In OpenShift Virtualization, virtual machines can be configured to consume pre-allocated huge pages.
8.15.7.3. Configuring huge pages for virtual machines
You can configure virtual machines to use pre-allocated huge pages by including the memory.hugepages.pageSize
and resources.requests.memory
parameters in your virtual machine configuration.
The memory request must be divisible by the page size. For example, you cannot request 500Mi
memory with a page size of 1Gi
.
The memory layouts of the host and the guest OS are unrelated. Huge pages requested in the virtual machine manifest apply to QEMU. Huge pages inside the guest can only be configured based on the amount of available memory of the virtual machine instance.
If you edit a running virtual machine, the virtual machine must be rebooted for the changes to take effect.
Prerequisites
- Nodes must have pre-allocated huge pages configured.
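Before proceeding, you can confirm that a node advertises pre-allocated huge pages as allocatable resources. A minimal check, assuming a node named <node_name>:

$ oc describe node <node_name> | grep hugepages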
Procedure
In your virtual machine configuration, add the
resources.requests.memory
andmemory.hugepages.pageSize
parameters to thespec.domain
. The following configuration snippet is for a virtual machine that requests a total of4Gi
memory with a page size of1Gi
:kind: VirtualMachine ... spec: domain: resources: requests: memory: "4Gi" 1 memory: hugepages: pageSize: "1Gi" 2 ...
Apply the virtual machine configuration:
$ oc apply -f <virtual_machine>.yaml
8.15.8. Enabling dedicated resources for virtual machines
To improve performance, you can dedicate node resources, such as CPU, to a virtual machine.
8.15.8.1. About dedicated resources
When you enable dedicated resources for your virtual machine, your virtual machine’s workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions.
8.15.8.2. Prerequisites
-
The CPU Manager must be configured on the node. Verify that the node has the
cpumanager = true
label before scheduling virtual machine workloads. - The virtual machine must be powered off.
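One way to identify nodes that already carry the label mentioned above is a label selector query:

$ oc get nodes -l cpumanager=true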
8.15.8.3. Enabling dedicated resources for a virtual machine
You can enable dedicated resources for a virtual machine in the Details tab. Virtual machines that were created by using a Red Hat template or the wizard can be enabled with dedicated resources.
Procedure
- Click Workloads → Virtual Machines from the side menu.
- Select a virtual machine to open the Virtual Machine tab.
- Click the Details tab.
- Click the pencil icon to the right of the Dedicated Resources field to open the Dedicated Resources window.
- Select Schedule this workload with dedicated resources (guaranteed policy).
- Click Save.
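If you prefer to work with the virtual machine manifest directly instead of the web console, the same behavior corresponds to the dedicatedCpuPlacement field in the domain.cpu stanza. The following is a minimal sketch; the VM name and resource values are placeholders:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-dedicated-cpu    # hypothetical name
spec:
  template:
    spec:
      domain:
        cpu:
          cores: 2
          dedicatedCpuPlacement: true   # schedule the workload on dedicated CPUs (guaranteed policy)
        resources:
          requests:
            memory: 2Gi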
8.15.9. Scheduling virtual machines
You can schedule a virtual machine (VM) on a node by ensuring that the VM’s CPU model and policy attribute are matched for compatibility with the CPU models and policy attributes supported by the node.
8.15.9.1. Policy attributes
You can schedule a virtual machine (VM) by specifying a policy attribute and a CPU feature that is matched for compatibility when the VM is scheduled on a node. A policy attribute specified for a VM determines how that VM is scheduled on a node.
Policy attribute | Description |
---|---|
force | The VM is forced to be scheduled on a node. This is true even if the host CPU does not support the VM’s CPU. |
require | Default policy that applies to a VM if the VM is not configured with a specific CPU model and feature specification. If a node is not configured to support CPU node discovery with this default policy attribute or any one of the other policy attributes, VMs are not scheduled on that node. Either the host CPU must support the VM’s CPU or the hypervisor must be able to emulate the supported CPU model. |
optional | The VM is added to a node if that VM is supported by the host’s physical machine CPU. |
disable | The VM cannot be scheduled with CPU node discovery. |
forbid | The VM is not scheduled even if the feature is supported by the host CPU and CPU node discovery is enabled. |
8.15.9.2. Setting a policy attribute and CPU feature
You can set a policy attribute and CPU feature for each virtual machine (VM) to ensure that it is scheduled on a node according to policy and feature. The CPU feature that you set is verified to ensure that it is supported by the host CPU or emulated by the hypervisor.
Procedure
Edit the
domain
spec of your VM configuration file. The following example sets the CPU feature and therequire
policy for a virtual machine (VM):apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2
8.15.9.3. Scheduling virtual machines with the supported CPU model
You can configure a CPU model for a virtual machine (VM) to schedule it on a node where its CPU model is supported.
Procedure
Edit the
domain
spec of your virtual machine configuration file. The following example shows a specific CPU model defined for a VM:apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1
- 1
- CPU model for the VM.
8.15.9.4. Scheduling virtual machines with the host model
When the CPU model for a virtual machine (VM) is set to host-model
, the VM inherits the CPU model of the node where it is scheduled.
Procedure
Edit the
domain
spec of your VM configuration file. The following example shows host-model
being specified for the virtual machine: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1
- 1
- The VM inherits the CPU model of the node where it is scheduled.
8.15.10. Configuring PCI passthrough
The Peripheral Component Interconnect (PCI) passthrough feature enables you to access and manage hardware devices from a virtual machine. When PCI passthrough is configured, the PCI devices function as if they were physically attached to the guest operating system.
Cluster administrators can expose and manage host devices that are permitted to be used in the cluster by using the oc
command-line interface (CLI).
8.15.10.1. About preparing a host device for PCI passthrough
To prepare a host device for PCI passthrough by using the CLI, create a MachineConfig
object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU). Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the permittedHostDevices
field of the HyperConverged
custom resource (CR). The permittedHostDevices
list is empty when you first install the OpenShift Virtualization Operator.
To remove a PCI host device from the cluster by using the CLI, delete the PCI device information from the HyperConverged
CR.
8.15.10.1.1. Adding kernel arguments to enable the IOMMU driver
To enable the IOMMU (Input-Output Memory Management Unit) driver in the kernel, create the MachineConfig
object and add the kernel arguments.
Prerequisites
- Administrative privilege to a working OpenShift Container Platform cluster.
- Intel or AMD CPU hardware.
- Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS (Basic Input/Output System) is enabled.
Procedure
Create a
MachineConfig
object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU.apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3 ...
Create the new
MachineConfig
object:$ oc create -f 100-worker-kernel-arg-iommu.yaml
Verification
Verify that the new
MachineConfig
object was added.$ oc get MachineConfig
8.15.10.1.2. Binding PCI devices to the VFIO driver
To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values for vendor-ID
and device-ID
from each device and create a list with the values. Add this list to the MachineConfig
object. The MachineConfig
Operator generates the /etc/modprobe.d/vfio.conf
on the nodes with the PCI devices, and binds the PCI devices to the VFIO driver.
Prerequisites
- You added kernel arguments to enable IOMMU for the CPU.
Procedure
Run the
lspci
command to obtain thevendor-ID
and thedevice-ID
for the PCI device.$ lspci -nnv | grep -i nvidia
Example output
02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)
Create a Butane config file,
100-worker-vfiopci.bu
, binding the PCI device to the VFIO driver.NoteSee "Creating machine configs with Butane" for information about Butane.
Example
variant: openshift version: 4.10.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci
- 1
- Applies the new kernel argument only to worker nodes.
- 2
- Specify the previously determined
vendor-ID
value (10de
) and thedevice-ID
value (1eb8
) to bind a single device to the VFIO driver. You can add a list of multiple devices with their vendor and device information. - 3
- The file that loads the vfio-pci kernel module on the worker nodes.
Use Butane to generate a
MachineConfig
object file,100-worker-vfiopci.yaml
, containing the configuration to be delivered to the worker nodes:$ butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml
Apply the
MachineConfig
object to the worker nodes:$ oc apply -f 100-worker-vfiopci.yaml
Verify that the
MachineConfig
object was added.$ oc get MachineConfig
Example output
NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s
Verification
Verify that the VFIO driver is loaded.
$ lspci -nnk -d 10de:
The output confirms that the VFIO driver is being used.
Example output
04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1) Subsystem: NVIDIA Corporation Device [10de:1eb8] Kernel driver in use: vfio-pci Kernel modules: nouveau
8.15.10.1.3. Exposing PCI host devices in the cluster using the CLI
To expose PCI host devices in the cluster, add details about the PCI devices to the spec.permittedHostDevices.pciHostDevices
array of the HyperConverged
custom resource (CR).
Procedure
Edit the
HyperConverged
CR in your default editor by running the following command:$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add the PCI device information to the
spec.permittedHostDevices.pciHostDevices
array. For example:Example configuration file
apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: "10DE:1DB6" 3 resourceName: "nvidia.com/GV100GL_Tesla_V100" 4 - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" - pciDeviceSelector: "8086:6F54" resourceName: "intel.com/qat" externalResourceProvider: true 5 ...
- 1
- The host devices that are permitted to be used in the cluster.
- 2
- The list of PCI devices available on the node.
- 3
- The
vendor-ID
and thedevice-ID
required to identify the PCI device. - 4
- The name of a PCI host device.
- 5
- Optional: Setting this field to
true
indicates that the resource is provided by an external device plug-in. OpenShift Virtualization allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plug-in.
NoteThe above example snippet shows two PCI host devices that are named
nvidia.com/GV100GL_Tesla_V100
andnvidia.com/TU104GL_Tesla_T4
added to the list of permitted host devices in theHyperConverged
CR. These devices have been tested and verified to work with OpenShift Virtualization.- Save your changes and exit the editor.
Verification
Verify that the PCI host devices were added to the node by running the following command. The example output shows that there is one device each associated with the
nvidia.com/GV100GL_Tesla_V100
,nvidia.com/TU104GL_Tesla_T4
, andintel.com/qat
resource names.$ oc describe node <node_name>
Example output
Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250
8.15.10.1.4. Removing PCI host devices from the cluster using the CLI
To remove a PCI host device from the cluster, delete the information for that device from the HyperConverged
custom resource (CR).
Procedure
Edit the
HyperConverged
CR in your default editor by running the following command:$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Remove the PCI device information from the
spec.permittedHostDevices.pciHostDevices
array by deleting thepciDeviceSelector
,resourceName
andexternalResourceProvider
(if applicable) fields for the appropriate device. In this example, theintel.com/qat
resource has been deleted.Example configuration file
apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: "10DE:1DB6" resourceName: "nvidia.com/GV100GL_Tesla_V100" - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" ...
- Save your changes and exit the editor.
Verification
Verify that the PCI host device was removed from the node by running the following command. The example output shows that there are zero devices associated with the
intel.com/qat
resource name.$ oc describe node <node_name>
Example output
Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250
8.15.10.2. Configuring virtual machines for PCI passthrough
After the PCI devices have been added to the cluster, you can assign them to virtual machines. The PCI devices are now available as if they are physically connected to the virtual machines.
8.15.10.2.1. Assigning a PCI device to a virtual machine
When a PCI device is available in a cluster, you can assign it to a virtual machine and enable PCI passthrough.
Procedure
Assign the PCI device to a virtual machine as a host device.
Example
apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1
- 1
- The name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device.
Verification
Use the following command to verify that the host device is available from the virtual machine.
$ lspci -nnk | grep NVIDIA
Example output
02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)
8.15.10.3. Additional resources
8.15.11. Configuring vGPU passthrough
Your virtual machines can access virtual GPU (vGPU) hardware. Assigning a vGPU to your virtual machine allows you to do the following:
- Access a fraction of the underlying hardware’s GPU to achieve high performance benefits in your virtual machine.
- Streamline resource-intensive I/O operations.
vGPU passthrough can only be assigned to devices that are connected to clusters running in a bare metal environment.
8.15.11.1. Assigning vGPU passthrough to virtual machines
Use the OpenShift Container Platform web console to assign vGPU devices to your virtual machines.
Prerequisites
- Ensure your cluster and virtual machines are deployed in a bare metal environment. At this time, no other environments are supported.
Procedure
Assign a virtual GPU device to your virtual machine:
- In the OpenShift Container Platform web console, click Virtualization → Virtual Machines from the side menu.
- Select the virtual machine to which you want to assign the device.
Click the Details tab:
- The Hardware Devices field includes links to add or remove GPU devices and Host devices.
- Assigning a vGPU using GPU devices enables VNC console access for the attached virtual GPU. Assigning a vGPU using Host Devices does not enable VNC console access.
- Use the minus icon to remove an existing hardware device.
- You can only add or remove devices from your virtual machine when it is stopped.
- Click the pencil icon and use the pop-up windows to add or remove devices, selecting the appropriate hardware resource names.
- Click Save.
-
Click the YAML tab to verify that the new devices have been added to your cluster configuration in the
hostDevices
section.
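For orientation, a device added through Host devices typically appears in the VM YAML under spec.template.spec.domain.devices.hostDevices, while a device added through GPU devices appears under gpus. A minimal sketch, using a hypothetical resource name:

spec:
  template:
    spec:
      domain:
        devices:
          hostDevices:
          - deviceName: nvidia.com/GRID_T4-2Q   # hypothetical resource name from your cluster
            name: hostdevices1
          gpus:
          - deviceName: nvidia.com/GRID_T4-2Q
            name: gpu1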
You can add hardware devices in the OpenShift Container Platform web console when you create a virtual machine or when you create a virtual machine from a template that you customize. You cannot add devices to the pre-supplied boot source templates for specific operating systems, such as Windows 10 or RHEL 7.
To add or remove hardware devices in a custom template that you create, click the Advanced tab in the Create Virtual Machine wizard and click Hardware devices. Use the minus icon to remove an existing hardware device. You can only add or remove devices from your virtual machine when it is stopped.
To display resources that are connected to your cluster, click Compute → Hardware Devices from the side menu.
8.15.11.2. Additional resources
8.15.12. Configuring mediated devices
OpenShift Virtualization automatically creates mediated devices, such as virtual GPUs (vGPUs), if you provide a list of devices in the HyperConverged
custom resource (CR).
Declarative configuration of mediated devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
8.15.12.1. Prerequisites
If your hardware vendor provides drivers, you installed them on the nodes where you want to create mediated devices.
- If you use NVIDIA cards, you installed the NVIDIA GRID driver.
8.15.12.2. About using virtual GPUs with OpenShift Virtualization
Some graphics processing unit (GPU) cards support the creation of virtual GPUs (vGPUs). OpenShift Virtualization can automatically create vGPUs and other mediated devices if an administrator provides configuration details in the HyperConverged
custom resource (CR). This automation is especially useful for large clusters.
Refer to your hardware vendor’s documentation for functionality and support details.
- Mediated device
- A physical device that is divided into one or more virtual devices. A vGPU is a type of mediated device (mdev); the performance of the physical GPU is divided among the virtual devices. You can assign mediated devices to one or more virtual machines (VMs), but the number of guests must be compatible with your GPU. Some GPUs do not support multiple guests.
8.15.12.2.1. Configuration overview
When configuring mediated devices, an administrator must:
- Create the mediated devices.
- Expose the mediated devices to the cluster.
The HyperConverged
CR includes APIs that accomplish both tasks:
Creating mediated devices
... spec: mediatedDevicesConfiguration: mediatedDevicesTypes: <.> - <device_type> nodeMediatedDeviceTypes: <.> - mediatedDevicesTypes: <.> - <device_type> nodeSelector: <.> <node_selector_key>: <node_selector_value> ...
<.> Required: Configures global settings for the cluster. <.> Optional: Overrides the global configuration for a specific node or group of nodes. Must be used with the global mediatedDevicesTypes
configuration. <.> Required if you use nodeMediatedDeviceTypes
. Overrides the global mediatedDevicesTypes
configuration for select nodes. <.> Required if you use nodeMediatedDeviceTypes
. Must include a key:value
pair.
Exposing mediated devices to the cluster
... permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q <.> resourceName: nvidia.com/GRID_T4-2Q ...
<.> Exposes the mediated devices that map to this value on the host.
You can see the mediated device types that your device supports by viewing the contents of /sys/bus/pci/devices/<slot>:<bus>:<domain>.<function>/mdev_supported_types/<type>/name
, substituting the correct values for your system.
For example, the name file for the nvidia-231
type contains the selector string GRID T4-2Q
. Using GRID T4-2Q
as the mdevNameSelector
value allows nodes to use the nvidia-231
type.
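A quick way to list every supported mediated device type and its selector string on a node is to loop over the sysfs paths described above. This is a sketch to run directly on the node:

$ for f in /sys/bus/pci/devices/*/mdev_supported_types/*/name; do
    echo "$f: $(cat "$f")"
  done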
8.15.12.2.2. How vGPUs are assigned to nodes
For each physical device, OpenShift Virtualization configures:
- A single mdev type.
- The maximum number of instances of the selected mdev type.
The cluster architecture affects how devices are created and assigned to nodes.
- Large cluster with multiple cards per node
On nodes with multiple cards that can support similar vGPU types, the relevant device types are created in a round-robin manner. For example:
... mediatedDevicesConfiguration: mediatedDevicesTypes: - nvidia-222 - nvidia-228 - nvidia-105 - nvidia-108 ...
In this scenario, each node has two cards, both of which support the following vGPU types:
nvidia-105 ... nvidia-108 nvidia-217 nvidia-299 ...
On each node, OpenShift Virtualization creates:
- 16 vGPUs of type nvidia-105 on the first card.
- 2 vGPUs of type nvidia-108 on the second card.
- One node has a single card that supports more than one requested vGPU type
OpenShift Virtualization uses the supported type that comes first on the
mediatedDevicesTypes
list.For example, a node’s card supports
nvidia-223
andnvidia-224
. The followingmediatedDevicesTypes
list is configured:... mediatedDevicesConfiguration: mediatedDevicesTypes: - nvidia-22 - nvidia-223 - nvidia-224 ...
In this example, OpenShift Virtualization uses the
nvidia-223
type.
8.15.12.2.3. About changing and removing mediated devices
OpenShift Virtualization updates the cluster’s mediated device configuration if:
-
You edit the
HyperConverged
CR and change the contents of themediatedDevicesTypes
stanza. -
You change the node labels that match the
nodeMediatedDeviceTypes
node selector. You remove the device information from the
spec.mediatedDevicesConfiguration
andspec.permittedHostDevices
stanzas of theHyperConverged
CR.NoteIf you remove the device information from the
spec.permittedHostDevices
stanza without also removing it from thespec.mediatedDevicesConfiguration
stanza, you cannot create a new mediated device type on the same node. To properly remove mediated devices, remove the device information from both stanzas.
Depending on the specific changes, these actions cause OpenShift Virtualization to reconfigure mediated devices or remove them from the cluster nodes.
8.15.12.3. Preparing hosts for mediated devices
You must enable the IOMMU (Input-Output Memory Management Unit) driver before you can configure mediated devices.
8.15.12.3.1. Adding kernel arguments to enable the IOMMU driver
To enable the IOMMU (Input-Output Memory Management Unit) driver in the kernel, create the MachineConfig
object and add the kernel arguments.
Prerequisites
- Administrative privilege to a working OpenShift Container Platform cluster.
- Intel or AMD CPU hardware.
- Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS (Basic Input/Output System) is enabled.
Procedure
Create a
MachineConfig
object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU.apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3 ...
Create the new
MachineConfig
object:$ oc create -f 100-worker-kernel-arg-iommu.yaml
Verification
Verify that the new
MachineConfig
object was added.$ oc get MachineConfig
8.15.12.4. Adding and removing mediated devices
8.15.12.4.1. Creating and exposing mediated devices
You can expose and create mediated devices such as virtual GPUs (vGPUs) by editing the HyperConverged
custom resource (CR).
Prerequisites
- You enabled the IOMMU (Input-Output Memory Management Unit) driver.
Procedure
Edit the
HyperConverged
CR in your default editor by running the following command:$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add the mediated device information to the
HyperConverged
CRspec
, ensuring that you include themediatedDevicesConfiguration
andpermittedHostDevices
stanzas. For example:Example configuration file
apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: <.> mediatedDevicesTypes: <.> - nvidia-231 nodeMediatedDeviceTypes: <.> - mediatedDevicesTypes: <.> - nvidia-233 nodeSelector: kubernetes.io/hostname: node-11.redhat.com permittedHostDevices: <.> mediatedDevices: - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q - mdevNameSelector: GRID T4-8Q resourceName: nvidia.com/GRID_T4-8Q ...
<.> Creates mediated devices. <.> Required: Global
mediatedDevicesTypes
configuration. <.> Optional: Overrides the global configuration for specific nodes. <.> Required if you usenodeMediatedDeviceTypes
. <.> Exposes mediated devices to the cluster.- Save your changes and exit the editor.
Verification
You can verify that a device was added to a specific node by running the following command:
$ oc describe node <node_name>
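For example, to check whether the nvidia.com/GRID_T4-2Q resource from the example above is advertised by a node, you can filter the output; adjust the resource name to match your configuration:

$ oc describe node <node_name> | grep nvidia.com/GRID_T4-2Q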
8.15.12.4.2. Removing mediated devices from the cluster using the CLI
To remove a mediated device from the cluster, delete the information for that device from the HyperConverged
custom resource (CR).
Procedure
Edit the
HyperConverged
CR in your default editor by running the following command:$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Remove the device information from the
spec.mediatedDevicesConfiguration
andspec.permittedHostDevices
stanzas of theHyperConverged
CR. Removing both entries ensures that you can later create a new mediated device type on the same node. For example:Example configuration file
apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDevicesTypes: 1 - nvidia-231 permittedHostDevices: mediatedDevices: 2 - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q
- Save your changes and exit the editor.
8.15.12.5. Assigning a mediated device to a virtual machine
Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines.
Prerequisites
-
The mediated device is configured in the
HyperConverged
custom resource.
Procedure
Assign the mediated device to a virtual machine (VM) by editing the
spec.domain.devices.gpus
stanza of theVirtualMachine
manifest:Example virtual machine manifest
apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: gpus: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: gpu1 2 - deviceName: nvidia.com/GRID_T4-1Q name: gpu2
Verification
To verify that the device is available from the virtual machine, run the following command, substituting
<device_name>
with thedeviceName
value from theVirtualMachine
manifest:$ lspci -nnk | grep <device_name>
8.15.12.6. Additional resources
8.15.13. Configuring a watchdog
Expose a watchdog by configuring the virtual machine (VM) for a watchdog device, installing the watchdog, and starting the watchdog service.
8.15.13.1. Prerequisites
-
The virtual machine must have kernel support for an
i6300esb
watchdog device. Red Hat Enterprise Linux (RHEL) images supporti6300esb
.
8.15.13.2. Defining a watchdog device
Define how the watchdog proceeds when the operating system (OS) no longer responds.
Table 8.5. Available actions
Action | Description |
---|---|
poweroff | The virtual machine (VM) powers down immediately. If spec.running is set to true, or spec.runStrategy is not set to manual, the VM reboots. |
reset | The VM reboots in place and the guest OS cannot react. Because the length of time required for the guest OS to reboot can cause liveness probes to time out, use of this option is discouraged. This timeout can extend the time it takes the VM to reboot if cluster-level protections notice that the liveness probe failed and forcibly reschedule it. |
shutdown | The VM gracefully powers down by stopping all services. |
Procedure
Create a YAML file with the following contents:
apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: running: false template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: "poweroff" 1 ...
- 1
- Specify the
watchdog
action (poweroff
,reset
, orshutdown
).
The example above configures the
i6300esb
watchdog device on a RHEL8 VM with the poweroff action and exposes the device as/dev/watchdog
.This device can now be used by the watchdog binary.
Apply the YAML file to your cluster by running the following command:
$ oc apply -f <file_name>.yaml
This procedure is provided for testing watchdog functionality only and must not be run on production machines.
Run the following command to verify that the VM is connected to the watchdog device:
$ lspci | grep watchdog -i
Run one of the following commands to confirm the watchdog is active:
Trigger a kernel panic:
# echo c > /proc/sysrq-trigger
Terminate the watchdog service:
# pkill -9 watchdog
8.15.13.3. Installing a watchdog device
Install the watchdog
package on your virtual machine and start the watchdog service.
Procedure
As a root user, install the
watchdog
package and dependencies:# yum install watchdog
Uncomment the following line in the
/etc/watchdog.conf
file, and save the changes:#watchdog-device = /dev/watchdog
Enable the watchdog service to start on boot:
# systemctl enable --now watchdog.service
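To confirm that the service is running after you enable it, you can query systemd:

# systemctl is-active watchdog.service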
8.15.13.4. Additional resources
8.15.14. Automatic importing and updating of pre-defined boot sources
As of version 4.10, OpenShift Virtualization automatically imports and updates pre-defined boot sources, unless you manually opt out. If you upgrade to OpenShift Virtualization 4.10 from version 4.9 or earlier and have pre-defined boot sources from the earlier version, you must manually opt in to automatic imports and updates for those pre-defined boot sources.
8.15.14.1. Enabling automatic boot source updates
If you have pre-defined boot sources from OpenShift Virtualization 4.9, then you must manually opt them in to the automatic boot source updates. All pre-defined boot sources from OpenShift Virtualization 4.10 and later are automatically updated by default.
Procedure
Use the following command to apply the
dataImportCron
label to the data source:$ oc label --overwrite DataSource rhel8 -n openshift-virtualization-os-images cdi.kubevirt.io/dataImportCron=true
8.15.14.2. Disabling automatic boot source updates
You can reduce the number of logs on disconnected environments or reduce resource usage by disabling the automatic imports and updates of pre-defined boot sources. Set the spec.featureGates.enableCommonBootImageImport
field in the HyperConverged
custom resource (CR) to false
.
Custom boot sources are not affected by this setting.
Procedure
Use the following command to disable automatic updates:
$ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": false}]'
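To verify the current value of the feature gate after patching, you can read it back from the HyperConverged CR; this uses the same hco short name that appears in the command above:

$ oc get hco kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.featureGates.enableCommonBootImageImport}'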
8.15.14.3. Re-enabling automatic boot source updates
If you have previously disabled automatic boot source updates, you must manually re-enable the feature. Set the spec.featureGates.enableCommonBootImageImport
field in the HyperConverged
custom resource (CR) to true
.
Procedure
Use the following command to re-enable automatic updates:
$ oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": true}]'
8.15.14.4. Enabling automatic updates on custom boot sources
OpenShift Virtualization automatically updates pre-defined boot sources by default, but does not automatically update custom boot sources. You must manually enable automatic imports and updates on any custom boot sources by editing the HyperConverged
custom resource (CR).
Procedure
Use the following command to open the
HyperConverged
CR for editing:$ oc edit -n openshift-cnv HyperConverged
Edit the
HyperConverged
CR, specifying the appropriate template and boot source in thedataImportCronTemplates
section. For example:Example in CentOS 7
apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: centos7-image-cron annotations: cdi.kubevirt.io/storage.bind.immediate.requested: "true" 1 spec: schedule: "0 */12 * * *" 2 template: spec: source: registry: 3 url: docker://quay.io/containerdisks/centos:7-2009 storage: resources: requests: storage: 10Gi managedDataSource: centos7 4 retentionPolicy: "None" 5
- 1
- This annotation is required for storage classes with
volumeBindingMode
set toWaitForFirstConsumer
. - 2
- Schedule for the job specified in cron format.
- 3
- Use to create a data volume from a registry source. Use the default
pod
pullMethod
and notnode
pullMethod
, which is based on thenode
docker cache. Thenode
docker cache is useful when a registry image is available viaContainer.Image
, but the CDI importer is not authorized to access it. - 4
- For the custom image to be detected as an available boot source, the name of the image’s
managedDataSource
must match the name of the template’sDataSource
, which is found underspec.dataVolumeTemplates.spec.sourceRef.name
in the VM template YAML file. - 5
- Use
All
to retain data volumes and data sources when the cron job is deleted. UseNone
to delete data volumes and data sources when the cron job is deleted.
8.15.15. Enabling descheduler evictions on virtual machines
Descheduler eviction for virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
The descheduler can be used to evict a running pod to allow the pod to be rescheduled onto a more suitable node. You must install the descheduler by using the OpenShift Container Platform web console or OpenShift CLI (oc
) before you can enable it on your virtual machine (VM).
8.15.15.1. Descheduler profiles
Use the Technology Preview DevPreviewLongLifecycle
profile to enable the descheduler on a virtual machine. This is the only descheduler profile currently available for OpenShift Virtualization. To ensure proper scheduling, create VMs with CPU and memory requests for the expected load.
DevPreviewLongLifecycle
This profile balances resource usage between nodes and enables the following strategies:
-
RemovePodsHavingTooManyRestarts
: removes pods whose containers have been restarted too many times and pods where the sum of restarts over all containers (including Init Containers) is more than 100. Restarting the VM guest operating system does not increase this count. LowNodeUtilization
: evicts pods from overutilized nodes when there are any underutilized nodes. The destination node for the evicted pod will be determined by the scheduler.- A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods).
- A node is considered overutilized if its usage is above 50% for any of the thresholds (CPU, memory, and number of pods).
8.15.15.2. Installing the descheduler
The descheduler is not available by default. To enable the descheduler, you must install the Kube Descheduler Operator from OperatorHub and enable one or more descheduler profiles.
Prerequisites
- Cluster administrator privileges.
- Access to the OpenShift Container Platform web console.
Procedure
- Log in to the OpenShift Container Platform web console.
Create the required namespace for the Kube Descheduler Operator.
- Navigate to Administration → Namespaces and click Create Namespace.
-
Enter
openshift-kube-descheduler-operator
in the Name field, enteropenshift.io/cluster-monitoring=true
in the Labels field to enable descheduler metrics, and click Create.
Install the Kube Descheduler Operator.
- Navigate to Operators → OperatorHub.
- Type Kube Descheduler Operator into the filter box.
- Select the Kube Descheduler Operator and click Install.
- On the Install Operator page, select A specific namespace on the cluster. Select openshift-kube-descheduler-operator from the drop-down menu.
- Adjust the values for the Update Channel and Approval Strategy to the desired values.
- Click Install.
Create a descheduler instance.
- From the Operators → Installed Operators page, click the Kube Descheduler Operator.
- Select the Kube Descheduler tab and click Create KubeDescheduler.
Edit the settings as necessary.
Expand the Profiles section and select
DevPreviewLongLifecycle
. TheAffinityAndTaints
profile is enabled by default.ImportantThe only profile currently available for OpenShift Virtualization is
DevPreviewLongLifecycle
.
You can also configure the profiles and settings for the descheduler later using the OpenShift CLI (oc
).
8.15.15.3. Enabling descheduler evictions on a virtual machine (VM)
After the descheduler is installed, you can enable descheduler evictions on your VM by adding an annotation to the VirtualMachine
custom resource (CR).
Prerequisites
-
Install the descheduler in the OpenShift Container Platform web console or OpenShift CLI (
oc
). - Ensure that the VM is not running.
Procedure
Before starting the VM, add the
descheduler.alpha.kubernetes.io/evict
annotation to theVirtualMachine
CR:apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: metadata: annotations: descheduler.alpha.kubernetes.io/evict: "true"
If you did not already set the
DevPreviewLongLifecycle
profile in the web console during installation, specify theDevPreviewLongLifecycle
in thespec.profile
section of theKubeDescheduler
object:apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 profiles: - DevPreviewLongLifecycle
The descheduler is now enabled on the VM.
8.16. Importing virtual machines
8.16.1. TLS certificates for data volume imports
8.16.1.1. Adding TLS certificates for authenticating data volume imports
TLS certificates for registry or HTTPS endpoints must be added to a config map to import data from these sources. This config map must be present in the namespace of the destination data volume.
Create the config map by referencing the relative file path for the TLS certificate.
Procedure
Ensure you are in the correct namespace. The config map can only be referenced by data volumes if it is in the same namespace.
$ oc get ns
Create the config map:
$ oc create configmap <configmap-name> --from-file=</path/to/file/ca.pem>
8.16.1.2. Example: Config map created from a TLS certificate
The following example is of a config map created from ca.pem
TLS certificate.
apiVersion: v1 kind: ConfigMap metadata: name: tls-certs data: ca.pem: | -----BEGIN CERTIFICATE----- ... <base64 encoded cert> ... -----END CERTIFICATE-----
8.16.2. Importing virtual machine images with data volumes
Use the Containerized Data Importer (CDI) to import a virtual machine image into a persistent volume claim (PVC) by using a data volume. You can attach a data volume to a virtual machine for persistent storage.
The virtual machine image can be hosted at an HTTP or HTTPS endpoint, or built into a container disk and stored in a container registry.
When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded.
The resizing procedure varies based on the operating system installed on the virtual machine. See the operating system documentation for details.
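As an illustration only, on a Linux guest whose root file system is XFS on the first partition of /dev/vda, the expansion might look like the following. The device names and file system type are assumptions; consult your operating system documentation for the exact steps:

# growpart /dev/vda 1   # grow partition 1 to fill the enlarged disk (cloud-utils-growpart package)
# xfs_growfs /          # grow the XFS file system to the new partition size; use resize2fs for ext4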
8.16.2.1. Prerequisites
- If the endpoint requires a TLS certificate, the certificate must be included in a config map in the same namespace as the data volume and referenced in the data volume configuration.
To import a container disk:
- You might need to prepare a container disk from a virtual machine image and store it in your container registry before importing it.
-
If the container registry does not have TLS, you must add the registry to the
insecureRegistries
field of theHyperConverged
custom resource before you can import a container disk from it.
- You might need to define a storage class or prepare CDI scratch space for this operation to complete successfully.
8.16.2.2. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
CDI now uses the OpenShift Container Platform cluster-wide proxy configuration.
8.16.2.3. About data volumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
8.16.2.4. Importing a virtual machine image into a persistent volume claim by using a data volume
You can import a virtual machine image into a persistent volume claim (PVC) by using a data volume.
The virtual machine image can be hosted at an HTTP or HTTPS endpoint, or the image can be built into a container disk and stored in a container registry.
To create a virtual machine from an imported virtual machine image, specify the image or container disk endpoint in the VirtualMachine
configuration file before you create the virtual machine.
Prerequisites
-
You have installed the OpenShift CLI (
oc
). - Your cluster has at least one available persistent volume.
To import a virtual machine image you must have the following:
-
A virtual machine disk image in RAW, ISO, or QCOW2 format, optionally compressed by using
xz
orgz
. -
An HTTP endpoint where the image is hosted, along with any authentication credentials needed to access the data source. For example:
http://www.example.com/path/to/data
-
A virtual machine disk image in RAW, ISO, or QCOW2 format, optionally compressed by using
To import a container disk you must have the following:
-
A container disk built from a virtual machine image stored in your container image registry, along with any authentication credentials needed to access the data source. For example:
docker://registry.example.com/container-image
-
A container disk built from a virtual machine image stored in your container image registry, along with any authentication credentials needed to access the data source. For example:
Procedure
Optional: If your data source requires authentication credentials, edit the
endpoint-secret.yaml
file, and apply the updated configuration to the cluster:apiVersion: v1 kind: Secret metadata: name: <endpoint-secret> labels: app: containerized-data-importer type: Opaque data: accessKeyId: "" 1 secretKey: "" 2
$ oc apply -f endpoint-secret.yaml
Edit the virtual machine configuration file, specifying the data source for the virtual machine image you want to import. In this example, a Fedora image is imported from an
http
source:apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: local source: http: 1 url: "https://download.fedoraproject.org/pub/fedora/linux/releases/33/Cloud/x86_64/images/Fedora-Cloud-Base-33-1.2.x86_64.qcow2" 2 secretRef: "" 3 certConfigMap: "" 4 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: "" 5 resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {}
- 1
- The source type to import the image from. This example uses an HTTP endpoint. To import a container disk from a registry, replace
http
withregistry
. - 2
- The source of the virtual machine image you want to import. This example references a virtual machine image at an HTTP endpoint. An example of a container registry endpoint is
url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest"
. - 3
- The
secretRef
parameter is optional. - 4
- The
certConfigMap
is required for communicating with servers that use self-signed certificates or certificates not signed by the system CA bundle. The referenced config map must be in the same namespace as the data volume. - 5
- Specify
type: dataVolume
ortype: ""
. If you specify any other value fortype
, such aspersistentVolumeClaim
, a warning is displayed, and the virtual machine does not start.
Create the virtual machine:
$ oc create -f vm-<name>-datavolume.yaml
NoteThe
oc create
command creates the data volume and the virtual machine. The CDI controller creates an underlying PVC with the correct annotation, and the import process begins. When the import completes, the data volume status changes toSucceeded
, and the virtual machine is allowed to start.Data volume provisioning happens in the background, so there is no need to monitor it. You can start the virtual machine, and it will not run until the import is complete.
Verification
The importer pod downloads the virtual machine image or container disk from the specified URL and stores it on the provisioned PV. View the status of the importer pod by running the following command:
$ oc get pods
Monitor the data volume status until it shows
Succeeded
by running the following command:$ oc describe dv <datavolume-name> 1
- 1
- The name of the data volume as specified under
dataVolumeTemplates.metadata.name
in the virtual machine configuration file. In the example configuration above, this isfedora-dv
.
To verify that provisioning is complete and that the VMI has started, try accessing its serial console by running the following command:
$ virtctl console <vm-fedora-datavolume>
8.16.2.5. Additional resources
- Configure preallocation mode to improve write performance for data volume operations.
8.16.3. Importing virtual machine images to block storage with data volumes
You can import an existing virtual machine image into your OpenShift Container Platform cluster. OpenShift Virtualization uses data volumes to automate the import of data and the creation of an underlying persistent volume claim (PVC).
When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded.
The resizing procedure varies based on the operating system that is installed on the virtual machine. See the operating system documentation for details.
8.16.3.1. Prerequisites
- If you require scratch space according to the CDI supported operations matrix, you must first define a storage class or prepare CDI scratch space for this operation to complete successfully.
8.16.3.2. About data volumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
8.16.3.3. About block persistent volumes
A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead.
Raw block volumes are provisioned by specifying volumeMode: Block
in the PV and persistent volume claim (PVC) specification.
8.16.3.4. Creating a local block persistent volume
Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block
volume and use it as a block device for a virtual machine image.
Procedure
-
Log in as
root
to the node on which to create the local PV. This procedure usesnode01
for its examples. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file
loop10
with a size of 2 GB (twenty 100 MB blocks):
$ dd if=/dev/zero of=<loop10> bs=100M count=20
Mount the
loop10
file as a loop device.
$ losetup </dev/loop10> <loop10> 1 2
Create a
PersistentVolume
manifest that references the mounted loop device.kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4
Create the block PV.
# oc create -f <local-block-pv10.yaml> 1
- 1
- The file name of the persistent volume created in the previous step.
8.16.3.5. Importing a virtual machine image to a block persistent volume using data volumes
You can import an existing virtual machine image into your OpenShift Container Platform cluster. OpenShift Virtualization uses data volumes to automate the import of data and the creation of an underlying persistent volume claim (PVC). You can then reference the data volume in a virtual machine manifest.
Prerequisites
-
A virtual machine disk image, in RAW, ISO, or QCOW2 format, optionally compressed by using
xz
orgz
. -
An
HTTP
ors3
endpoint where the image is hosted, along with any authentication credentials needed to access the data source. - At least one available block PV.
Procedure
If your data source requires authentication credentials, edit the
endpoint-secret.yaml
file, and apply the updated configuration to the cluster.Edit the
endpoint-secret.yaml
file with your preferred text editor:apiVersion: v1 kind: Secret metadata: name: <endpoint-secret> labels: app: containerized-data-importer type: Opaque data: accessKeyId: "" 1 secretKey: "" 2
Update the secret by running the following command:
$ oc apply -f endpoint-secret.yaml
Create a
DataVolume
manifest that specifies the data source for the image you want to import andvolumeMode: Block
so that an available block PV is used.apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <import-pv-datavolume> 1 spec: storageClassName: local 2 source: http: url: <http://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2> 3 secretRef: <endpoint-secret> 4 pvc: volumeMode: Block 5 accessModes: - ReadWriteOnce resources: requests: storage: <2Gi>
Create the data volume to import the virtual machine image by running the following command:
$ oc create -f <import-pv-datavolume.yaml> 1
- 1
- The file name of the data volume that you created in the previous step.
8.16.3.6. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
CDI now uses the OpenShift Container Platform cluster-wide proxy configuration.
8.16.3.7. Additional resources
- Configure preallocation mode to improve write performance for data volume operations.
8.17. Cloning virtual machines
8.17.1. Enabling user permissions to clone data volumes across namespaces
The isolating nature of namespaces means that users cannot by default clone resources between namespaces.
To enable a user to clone a virtual machine to another namespace, a user with the cluster-admin
role must create a new cluster role. Bind this cluster role to a user to enable them to clone virtual machines to the destination namespace.
8.17.1.1. Prerequisites
-
Only a user with the
cluster-admin
role can create cluster roles.
8.17.1.2. About data volumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
8.17.1.3. Creating RBAC resources for cloning data volumes
Create a new cluster role that enables permissions for all actions for the datavolumes
resource.
Procedure
Create a ClusterRole manifest:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <datavolume-cloner> 1
rules:
- apiGroups: ["cdi.kubevirt.io"]
  resources: ["datavolumes/source"]
  verbs: ["*"]
- 1
- Unique name for the cluster role.
Create the cluster role in the cluster:
$ oc create -f <datavolume-cloner.yaml> 1
- 1
- The file name of the
ClusterRole
manifest created in the previous step.
Create a RoleBinding manifest that applies to both the source and destination namespaces and references the cluster role created in the previous step.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <allow-clone-to-user> 1
  namespace: <Source namespace> 2
subjects:
- kind: ServiceAccount
  name: default
  namespace: <Destination namespace> 3
roleRef:
  kind: ClusterRole
  name: datavolume-cloner 4
  apiGroup: rbac.authorization.k8s.io
Create the role binding in the cluster:
$ oc create -f <allow-clone-to-user.yaml> 1
- 1
- The file name of the
RoleBinding
manifest created in the previous step.
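As an optional check that is not part of the original procedure, confirm that both RBAC objects exist; the names are the example placeholders from the manifests above:
$ oc get clusterrole <datavolume-cloner>
$ oc get rolebinding <allow-clone-to-user> -n <Source namespace>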
8.17.2. Cloning a virtual machine disk into a new data volume
You can clone the persistent volume claim (PVC) of a virtual machine disk into a new data volume by referencing the source PVC in your data volume configuration file.
Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block
to a PV with volumeMode: Filesystem
.
However, you can only clone between different volume modes if both the source and the target use contentType: kubevirt.
When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes.
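For example, the following minimal sketch shows preallocation requested for a single data volume by setting spec.preallocation; the names and sizes are illustrative placeholders, and Using preallocation for data volumes describes the full options:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <preallocated-clone-datavolume>   # hypothetical example name
spec:
  preallocation: true                     # CDI preallocates disk space for this data volume during the clone
  source:
    pvc:
      namespace: "<source-namespace>"
      name: "<my-favorite-vm-disk>"
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: <2Gi>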
8.17.2.1. Prerequisites
- Users need additional permissions to clone the PVC of a virtual machine disk into another namespace.
8.17.2.2. About data volumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
8.17.2.3. Cloning the persistent volume claim of a virtual machine disk into a new data volume
You can clone a persistent volume claim (PVC) of an existing virtual machine disk into a new data volume. The new data volume can then be used for a new virtual machine.
When a data volume is created independently of a virtual machine, the lifecycle of the data volume is independent of the virtual machine. If the virtual machine is deleted, neither the data volume nor its associated PVC is deleted.
Prerequisites
- Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it.
-
Install the OpenShift CLI (
oc
).
Procedure
- Examine the virtual machine disk you want to clone to identify the name and namespace of the associated PVC.
Create a YAML file for a data volume that specifies the name of the new data volume, the name and namespace of the source PVC, and the size of the new data volume.
For example:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <cloner-datavolume> 1
spec:
  source:
    pvc:
      namespace: "<source-namespace>" 2
      name: "<my-favorite-vm-disk>" 3
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: <2Gi> 4
Start cloning the PVC by creating the data volume:
$ oc create -f <cloner-datavolume>.yaml
Note: Data volumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new data volume while the PVC clones.
8.17.2.4. Template: Data volume clone configuration file
example-clone-dv.yaml
apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: "example-clone-dv" spec: source: pvc: name: source-pvc namespace: example-ns pvc: accessModes: - ReadWriteOnce resources: requests: storage: "1G"
8.17.2.5. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
8.17.3. Cloning a virtual machine by using a data volume template
You can create a new virtual machine by cloning the persistent volume claim (PVC) of an existing VM. By including a dataVolumeTemplate
in your virtual machine configuration file, you create a new data volume from the original PVC.
Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block
to a PV with volumeMode: Filesystem
.
However, you can only clone between different volume modes if both the source and the target use contentType: kubevirt.
When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes.
8.17.3.1. Prerequisites
- Users need additional permissions to clone the PVC of a virtual machine disk into another namespace.
8.17.3.2. About data volumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
8.17.3.3. Creating a new virtual machine from a cloned persistent volume claim by using a data volume template
You can create a virtual machine that clones the persistent volume claim (PVC) of an existing virtual machine into a data volume. Reference a dataVolumeTemplate
in the virtual machine manifest and the source
PVC is cloned to a data volume, which is then automatically used for the creation of the virtual machine.
When a data volume is created as part of the data volume template of a virtual machine, the lifecycle of the data volume is then dependent on the virtual machine. If the virtual machine is deleted, the data volume and associated PVC are also deleted.
Prerequisites
- Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it.
-
Install the OpenShift CLI (
oc
).
Procedure
- Examine the virtual machine you want to clone to identify the name and namespace of the associated PVC.
Create a YAML file for a VirtualMachine object. The following virtual machine example clones my-favorite-vm-disk, which is located in the source-namespace namespace. The 2Gi data volume called favorite-clone is created from my-favorite-vm-disk.
For example:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm-dv-clone
  name: vm-dv-clone 1
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-dv-clone
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: root-disk
        resources:
          requests:
            memory: 64M
      volumes:
      - dataVolume:
          name: favorite-clone
        name: root-disk
  dataVolumeTemplates:
  - metadata:
      name: favorite-clone
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
      source:
        pvc:
          namespace: "source-namespace"
          name: "my-favorite-vm-disk"
- 1
- The virtual machine to create.
Create the virtual machine with the PVC-cloned data volume:
$ oc create -f <vm-clone-datavolumetemplate>.yaml
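Optionally, once the clone completes you can start and inspect the new virtual machine. The names come from the example manifest above, and the virtctl client must be installed; this step is not part of the original procedure:
$ virtctl start vm-dv-clone
$ oc get vmi vm-dv-clone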
8.17.3.4. Template: Data volume virtual machine configuration file
example-dv-vm.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
labels:
kubevirt.io/vm: example-vm
name: example-vm
spec:
dataVolumeTemplates:
- metadata:
name: example-dv
spec:
pvc:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1G
source:
http:
url: "" 1
running: false
template:
metadata:
labels:
kubevirt.io/vm: example-vm
spec:
domain:
cpu:
cores: 1
devices:
disks:
- disk:
bus: virtio
name: example-dv-disk
machine:
type: q35
resources:
requests:
memory: 1G
terminationGracePeriodSeconds: 180
volumes:
- dataVolume:
name: example-dv
name: example-dv-disk
- 1
- The
HTTP
source of the image you want to import, if applicable.
8.17.3.5. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
8.17.4. Cloning a virtual machine disk into a new block storage data volume
You can clone the persistent volume claim (PVC) of a virtual machine disk into a new block data volume by referencing the source PVC in your data volume configuration file.
Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block
to a PV with volumeMode: Filesystem
.
However, you can only clone between different volume modes if both the source and the target use contentType: kubevirt.
When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes.
8.17.4.1. Prerequisites
- Users need additional permissions to clone the PVC of a virtual machine disk into another namespace.
8.17.4.2. About data volumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
8.17.4.3. About block persistent volumes
A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead.
Raw block volumes are provisioned by specifying volumeMode: Block
in the PV and persistent volume claim (PVC) specification.
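For example, a minimal sketch of a PVC that requests a raw block volume; the name, storage class, and size are illustrative placeholders:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <block-pvc>
spec:
  volumeMode: Block        # request a raw block device instead of a file system
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: <2Gi>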
8.17.4.4. Creating a local block persistent volume
Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block
volume and use it as a block device for a virtual machine image.
Procedure
- Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples.
Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file loop10 with a size of 2 GB (20 blocks of 100 MB each):
$ dd if=/dev/zero of=<loop10> bs=100M count=20
Mount the loop10 file as a loop device.
$ losetup </dev/loop10> <loop10> 1 2
Create a PersistentVolume manifest that references the mounted loop device.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: <local-block-pv10>
  annotations:
spec:
  local:
    path: </dev/loop10> 1
  capacity:
    storage: <2Gi>
  volumeMode: Block 2
  storageClassName: local 3
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <node01> 4
Create the block PV.
# oc create -f <local-block-pv10.yaml> 1
- 1
- The file name of the persistent volume created in the previous step.
8.17.4.5. Cloning the persistent volume claim of a virtual machine disk into a new data volume
You can clone a persistent volume claim (PVC) of an existing virtual machine disk into a new data volume. The new data volume can then be used for a new virtual machine.
When a data volume is created independently of a virtual machine, the lifecycle of the data volume is independent of the virtual machine. If the virtual machine is deleted, neither the data volume nor its associated PVC is deleted.
Prerequisites
- Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it.
- Install the OpenShift CLI (oc).
- At least one available block persistent volume (PV) that is the same size as or larger than the source PVC.
Procedure
- Examine the virtual machine disk you want to clone to identify the name and namespace of the associated PVC.
Create a YAML file for a data volume that specifies the name of the new data volume, the name and namespace of the source PVC, volumeMode: Block so that an available block PV is used, and the size of the new data volume.
For example:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <cloner-datavolume> 1
spec:
  source:
    pvc:
      namespace: "<source-namespace>" 2
      name: "<my-favorite-vm-disk>" 3
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: <2Gi> 4
    volumeMode: Block 5
- 1
- The name of the new data volume.
- 2
- The namespace where the source PVC exists.
- 3
- The name of the source PVC.
- 4
- The size of the new data volume. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC.
- 5
- Specifies that the destination is a block PV.
Start cloning the PVC by creating the data volume:
$ oc create -f <cloner-datavolume>.yaml
Note: Data volumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new data volume while the PVC clones.
8.17.4.6. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
8.18. Virtual machine networking
8.18.1. Using the default pod network for virtual machines
You can use the default pod network with OpenShift Virtualization. To do so, you must use the masquerade
binding method. Do not use masquerade
mode with non-default networks.
For secondary networks, use the bridge
binding method.
8.18.1.1. Configuring masquerade mode from the command line
You can use masquerade mode to hide a virtual machine’s outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge.
Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file.
Prerequisites
- The virtual machine must be configured to use DHCP to acquire IPv4 addresses. The examples below are configured to use DHCP.
Procedure
Edit the interfaces spec of your virtual machine configuration file:
kind: VirtualMachine
spec:
  domain:
    devices:
      interfaces:
        - name: red
          masquerade: {} 1
          ports: 2
            - port: 80
  networks:
  - name: red
    pod: {}
- 1
- Connect using masquerade mode.
- 2
- Optional: List the ports that you want to expose from the virtual machine, each specified by the
port
field. Theport
value must be a number between 0 and 65536. When theports
array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port 80.
Note: Ports 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped.
Create the virtual machine:
$ oc create -f <vm-name>.yaml
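After the virtual machine boots, you can optionally confirm that its interface received an address on the pod network. This verification step is an addition and mirrors the command used later in the dual-stack verification:
$ oc get vmi <vm-name> -o jsonpath="{.status.interfaces[*].ipAddresses}"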
8.18.1.2. Configuring masquerade mode with dual-stack (IPv4 and IPv6)
You can configure a new virtual machine (VM) to use both IPv6 and IPv4 on the default pod network by using cloud-init.
The Network.pod.vmIPv6NetworkCIDR
field in the virtual machine instance configuration determines the static IPv6 address of the VM and the gateway IP address. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally. The Network.pod.vmIPv6NetworkCIDR
field specifies an IPv6 address block in Classless Inter-Domain Routing (CIDR) notation. The default value is fd10:0:2::2/120
. You can edit this value based on your network requirements.
When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine.
Prerequisites
- The OpenShift Container Platform cluster must use the OVN-Kubernetes Container Network Interface (CNI) network provider configured for dual-stack.
Procedure
In a new virtual machine configuration, include an interface with masquerade and configure the IPv6 address and default gateway by using cloud-init.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm-ipv6
...
          interfaces:
            - name: red
              masquerade: {} 1
              ports:
                - port: 80 2
      networks:
      - name: red
        pod: {}
      volumes:
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth0:
                dhcp4: true
                addresses: [ fd10:0:2::2/120 ] 3
                gateway6: fd10:0:2::1 4
- 1
- Connect using masquerade mode.
- 2
- Allows incoming traffic on port 80 to the virtual machine.
- 3
- The static IPv6 address as determined by the
Network.pod.vmIPv6NetworkCIDR
field in the virtual machine instance configuration. The default value isfd10:0:2::2/120
. - 4
- The gateway IP address as determined by the
Network.pod.vmIPv6NetworkCIDR
field in the virtual machine instance configuration. The default value isfd10:0:2::1
.
Create the virtual machine in the namespace:
$ oc create -f example-vm-ipv6.yaml
Verification
- To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address:
$ oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}"
8.18.1.3. Selecting binding method
If you create a virtual machine from the OpenShift Virtualization web console wizard, select the required binding method from the Networking screen.
8.18.1.3.1. Networking fields
Name | Description |
---|---|
Name | Name for the network interface controller. |
Model | Indicates the model of the network interface controller. Supported values are e1000e and virtio. |
Network | List of available network attachment definitions. |
Type | List of available binding methods. For the default pod network, masquerade is the only recommended binding method. |
MAC Address | MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. |
8.18.1.4. Virtual machine configuration examples for the default network
8.18.1.4.1. Template: Virtual machine configuration file
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: default
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - masquerade: {}
            name: default
        resources:
          requests:
            memory: 1024M
      networks:
      - name: default
        pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/fedora-cloud-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #!/bin/bash
              echo "fedora" | passwd fedora --stdin
8.18.1.4.2. Template: Windows virtual machine configuration file
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    special: vm-windows
  name: vm-windows
spec:
  template:
    metadata:
      labels:
        special: vm-windows
    spec:
      domain:
        clock:
          timer:
            hpet:
              present: false
            hyperv: {}
            pit:
              tickPolicy: delay
            rtc:
              tickPolicy: catchup
          utc: {}
        cpu:
          cores: 2
        devices:
          disks:
          - disk:
              bus: sata
            name: pvcdisk
          interfaces:
          - masquerade: {}
            model: e1000
            name: default
        features:
          acpi: {}
          apic: {}
          hyperv:
            relaxed: {}
            spinlocks:
              spinlocks: 8191
            vapic: {}
        firmware:
          uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223
        machine:
          type: q35
        resources:
          requests:
            memory: 2Gi
      networks:
      - name: default
        pod: {}
      terminationGracePeriodSeconds: 3600
      volumes:
      - name: pvcdisk
        persistentVolumeClaim:
          claimName: disk-windows
8.18.1.5. Creating a service from a virtual machine
Create a service from a running virtual machine by first creating a Service
object to expose the virtual machine.
If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the spec.ipFamilyPolicy
and the spec.ipFamilies
fields in the Service
object.
The spec.ipFamilyPolicy
field can be set to one of the following values:
-
SingleStack
: The control plane assigns a cluster IP address for the service based on the first configured service cluster IP range. -
PreferDualStack
: The control plane assigns both IPv4 and IPv6 cluster IP addresses for the service on clusters that have dual-stack configured. -
RequireDualStack
: This option fails for clusters that do not have dual-stack networking enabled. For clusters that have dual-stack configured, the behavior is the same as when the value is set toPreferDualStack
. The control plane allocates cluster IP addresses from both IPv4 and IPv6 address ranges.
You can define which IP family to use for single-stack or define the order of IP families for dual-stack by setting the spec.ipFamilies
field to one of the following array values (see the example Service manifest after this list):
-
[IPv4]
-
[IPv6]
-
[IPv4, IPv6]
-
[IPv6, IPv4]
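For example, a minimal sketch of a Service manifest that prefers dual-stack and lists IPv4 first; the name, selector, and ports are illustrative placeholders:
apiVersion: v1
kind: Service
metadata:
  name: <example-dual-stack-service>
spec:
  ipFamilyPolicy: PreferDualStack   # request both IPv4 and IPv6 cluster IP addresses where available
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    special: key
  ports:
  - port: 27017
    protocol: TCP
    targetPort: 22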
The ClusterIP
service type exposes the virtual machine internally, within the cluster. The NodePort
or LoadBalancer
service types expose the virtual machine externally, outside of the cluster.
This procedure presents an example of how to create, connect to, and expose a Service
object of type: ClusterIP
as a virtual machine-backed service.
ClusterIP
is the default service type
, if the service type
is not specified.
Procedure
Edit the virtual machine YAML as follows:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-ephemeral
  namespace: example-namespace
spec:
  running: false
  template:
    metadata:
      labels:
        special: key 1
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - masquerade: {}
            name: default
        resources:
          requests:
            memory: 1024M
      networks:
      - name: default
        pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/fedora-cloud-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #!/bin/bash
              echo "fedora" | passwd fedora --stdin
- 1
- Add the label
special: key
in thespec.template.metadata.labels
section.
Note: Labels on a virtual machine are passed through to the pod. The labels on the VirtualMachine configuration, for example special: key, must match the labels in the Service YAML selector attribute, which you create later in this procedure.
- Save the virtual machine YAML to apply your changes.
Edit the Service YAML to configure the settings necessary to create and expose the Service object:
apiVersion: v1
kind: Service
metadata:
  name: vmservice 1
  namespace: example-namespace 2
spec:
  ports:
  - port: 27017
    protocol: TCP
    targetPort: 22 3
  selector:
    special: key 4
  type: ClusterIP 5
- 1
- Specify the
name
of the service you are creating and exposing. - 2
- Specify
namespace
in themetadata
section of theService
YAML that corresponds to thenamespace
you specify in the virtual machine YAML. - 3
- Add
targetPort: 22
, exposing the service on SSH port22
. - 4
- In the
spec
section of theService
YAML, addspecial: key
to theselector
attribute, which corresponds to thelabels
you added in the virtual machine YAML configuration file. - 5
- In the
spec
section of theService
YAML, addtype: ClusterIP
for a ClusterIP service. To create and expose other types of services externally, outside of the cluster, such asNodePort
andLoadBalancer
, replacetype: ClusterIP
withtype: NodePort
ortype: LoadBalancer
, as appropriate.
- Save the Service YAML to store the service configuration.
Create the ClusterIP service:
$ oc create -f <service_name>.yaml
- Start the virtual machine. If the virtual machine is already running, restart it.
Query the Service object to verify it is available and is configured with type ClusterIP.
Verification
Run the oc get service command, specifying the namespace that you reference in the virtual machine and Service YAML files.
$ oc get service -n example-namespace
Example output
NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
vmservice   ClusterIP   172.30.3.149   <none>        27017/TCP   2m
- As shown from the output, vmservice is running.
- The TYPE displays as ClusterIP, as you specified in the Service YAML.
Establish a connection to the virtual machine that you want to use to back your service. Connect from an object inside the cluster, such as another virtual machine.
Edit the virtual machine YAML as follows:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-connect
  namespace: example-namespace
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - masquerade: {}
            name: default
        resources:
          requests:
            memory: 1024M
      networks:
      - name: default
        pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/fedora-cloud-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #!/bin/bash
              echo "fedora" | passwd fedora --stdin
Run the oc create command to create a second virtual machine, where file.yaml is the name of the virtual machine YAML:
$ oc create -f <file.yaml>
- Start the virtual machine.
Connect to the virtual machine by running the following virtctl command:
$ virtctl -n example-namespace console <new-vm-name>
Note: For service type LoadBalancer, use the vinagre client to connect to your virtual machine by using the public IP and port. External ports are dynamically allocated when using service type LoadBalancer.
Run the ssh command to authenticate the connection, where 172.30.3.149 is the ClusterIP of the service and fedora is the user name of the virtual machine:
$ ssh fedora@172.30.3.149 -p 27017
Verification
- You receive the command prompt of the virtual machine backing the service you want to expose. You now have a service backed by a running virtual machine.
Additional resources
8.18.2. Attaching a virtual machine to multiple networks
You can attach virtual machines to multiple networks by using Linux bridges. You can also import virtual machines with existing workloads that depend on access to multiple interfaces.
To attach a virtual machine to an additional network:
- Configure a Linux bridge.
Configure a bridge network attachment definition for a namespace in the web console or CLI.
Note: The network attachment definition must be in the same namespace as the pod or virtual machine.
Attach the virtual machine to the network attachment definition by using either the web console or the CLI:
- In the web console, create a NIC for a new or existing virtual machine.
- In the CLI, include the network information in the virtual machine configuration, as shown in the sketch after this list.
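For example, a minimal sketch of the network information in a virtual machine configuration that keeps the default pod network and adds a bridged secondary interface; <bridge-network> is a placeholder for an existing network attachment definition in the same namespace:
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
          - name: default
            masquerade: {}                  # default pod network
          - name: <bridge-network>
            bridge: {}                      # secondary interface uses the bridge binding method
      networks:
      - name: default
        pod: {}
      - name: <bridge-network>
        multus:
          networkName: <bridge-network>     # name of the network attachment definition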
There are multiple methods for configuring a VLAN, including network attachment definition and node network configuration policy. However, a network attachment definition provides a more efficient and more manageable configuration.
8.18.2.1. OpenShift Virtualization networking glossary
OpenShift Virtualization provides advanced networking functionality by using custom resources and plug-ins.
The following terms are used throughout OpenShift Virtualization documentation:
- Container Network Interface (CNI)
- a Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plug-ins to build upon the basic Kubernetes networking functionality.
- Multus
- a "meta" CNI plug-in that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
- Custom resource definition (CRD)
- a Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
- Network attachment definition (NAD)
- a CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtu