Chapter 7. Virtual machines

7.1. Creating virtual machines

Use one of these procedures to create a virtual machine:

Warning

Do not create virtual machines in openshift-* namespaces. Instead, create a new namespace or use an existing namespace without the openshift prefix.

7.1.1. Running the virtual machine wizard to create a virtual machine

The web console features an interactive wizard that guides you through General, Networking, Storage, Advanced, and Review steps to simplify the process of creating virtual machines. All required fields are marked by a *. When the required fields are completed, you can review and create your virtual machine.

Network Interface Cards (NICs) and storage disks can also be created and attached to a virtual machine after the virtual machine has been created.

Bootable Disk

If either URL or Container is selected as the Source in the General step, a rootdisk disk is created and attached to the virtual machine as the Bootable Disk. You can modify the rootdisk but you cannot remove it.

A Bootable Disk is not required for virtual machines provisioned from a PXE source if there are no disks attached to the virtual machine. If one or more disks are attached to the virtual machine, you must select one as the Bootable Disk.

Prerequisites

  • When you create your virtual machine using the wizard, your virtual machine’s storage medium must support ReadWriteMany (RWX) PVCs.

Procedure

  1. Click Workloads → Virtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Click Create Virtual Machine and select New with Wizard.
  4. Fill in all required fields in the General step. Selecting a Template automatically fills in these fields.
  5. Click Next to progress to the Networking step. A nic0 NIC is attached by default.

    1. Optional: Click Add Network Interface to create additional NICs.
    2. Optional: You can remove any or all NICs by clicking the Options menu ⋮ and selecting Delete. A NIC is not required to create a virtual machine; you can add NICs after the virtual machine is created.
  6. Click Next to progress to the Storage screen.

    1. Optional: Click Add Disk to create additional disks. These disks can be removed by clicking the Options menu ⋮ and selecting Delete.
    2. Optional: Click the Options menu ⋮ to edit the disk and save your changes.
  7. Click Review and Create. The Results screen displays the JSON configuration file for the virtual machine.

The virtual machine is listed in the Virtual Machines tab.

Refer to the virtual machine wizard fields section when running the web console wizard.

7.1.1.1. Virtual machine wizard fields

Name | Parameter | Description

Template

 

Template from which to create the virtual machine. Selecting a template will automatically complete other fields.

Source

PXE

Provision virtual machine from PXE menu. Requires a PXE-capable NIC in the cluster.

URL

Provision virtual machine from an image available from an HTTP or S3 endpoint.

Container

Provision virtual machine from a bootable operating system container located in a registry accessible from the cluster. Example: kubevirt/cirros-registry-disk-demo.

Disk

Provision virtual machine from a disk.

Operating System

 

The primary operating system that is selected for the virtual machine.

Flavor

small, medium, large, tiny, Custom

Presets that determine the amount of CPU and memory allocated to the virtual machine. The presets displayed for Flavor are determined by the operating system.

Memory

 

Size in GiB of the memory allocated to the virtual machine.

CPUs

 

The amount of CPU allocated to the virtual machine.

Workload Profile

High Performance

A virtual machine configuration that is optimized for high-performance workloads.

Server

A profile optimized to run server workloads.

Desktop

A virtual machine configuration for use on a desktop.

Name

 

The name can contain lowercase letters (a-z), numbers (0-9), and hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters.

Description

 

Optional description field.

Start virtual machine on creation

 

Select to automatically start the virtual machine upon creation.

7.1.1.2. Cloud-init fields

Name | Description

Hostname

Sets a specific host name for the virtual machine.

Authenticated SSH Keys

The user’s public key that is copied to ~/.ssh/authorized_keys on the virtual machine.

Custom script

Replaces other options with a field in which you paste a custom cloud-init script.
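
These fields are typically stored as cloud-init user data in a cloudInitNoCloud volume in the virtual machine configuration. The following sketch shows roughly how the wizard values map to YAML; the host name and key are placeholder examples:

volumes:
  - name: cloudinitdisk
    cloudInitNoCloud:
      userData: |
        #cloud-config
        hostname: example-vm
        ssh_authorized_keys:
          - ssh-rsa AAAAB3Nza...example user@example.com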

7.1.1.3. CD-ROM fields

Source | Description

Container

Specify the container path. For example: kubevirt/fedora-registry-disk:latest.

URL

Specify the URL path and size in GiB. Then, select the storage class for this URL from the drop-down list.

Attach Disk

Select the virtual machine disk that you want to attach.

7.1.1.4. Networking fields

Name | Description

Name

Name for the Network Interface Card.

Model

Indicates the model of the Network Interface Card. Supported values are e1000, e1000e, ne2k_pci, pcnet, rtl8139, and virtIO.

Network

List of available NetworkAttachmentDefinition objects.

Type

List of available binding methods. For the default Pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. The masquerade method is not supported for non-default networks.

MAC Address

MAC address for the Network Interface Card. If a MAC address is not specified, an ephemeral address is generated for the session.
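
In the virtual machine specification, the binding method appears as an entry under each interface, and each interface references a network. The following is a minimal sketch, assuming a secondary network backed by a hypothetical NetworkAttachmentDefinition named a-bridge-network:

spec:
  domain:
    devices:
      interfaces:
        - name: default
          masquerade: {}
        - name: bridge-net
          bridge: {}
  networks:
    - name: default
      pod: {}
    - name: bridge-net
      multus:
        networkName: a-bridge-network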

7.1.1.5. Storage fields

Name | Description

Source

Select a blank disk for the virtual machine or choose from the options available: URL, Container, Attach Cloned Disk, or Attach Disk. To select an existing disk and attach it to the virtual machine, choose Attach Cloned Disk or Attach Disk from a list of available PersistentVolumeClaims (PVCs).

Name

Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters.

Size (GiB)

Size, in GiB, of the disk.

Interface

Type of disk device. Supported interfaces are virtIO, SATA, and SCSI.

Storage Class

The StorageClass that is used to create the disk.

Advanced → Volume Mode

 

Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem.

Advanced → Access Mode

 

Access mode of the persistent volume. Supported access modes are Single User (RWO), Shared Access (RWX), and Read Only (ROX).

Advanced storage settings

The following advanced storage settings are available for Blank, Import via URL, and Clone existing PVC disks. These parameters are optional. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map.

Name | Parameter | Description

Volume Mode

Filesystem

Stores the virtual disk on a filesystem-based volume.

Block

Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it.

Access Mode

Single User (RWO)

The disk can be mounted as read/write by a single node.

Shared Access (RWX)

The disk can be mounted as read/write by many nodes.

Note

This is required for some features, such as live migration of virtual machines between nodes.

Read Only (ROX)

The disk can be mounted as read-only by many nodes.

For more information on the kubevirt-storage-class-defaults ConfigMap, see Storage defaults for DataVolumes.
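
If you want to review the current defaults before relying on them, you can inspect the ConfigMap directly. This sketch assumes OpenShift Virtualization is installed in its usual openshift-cnv namespace:

$ oc get configmap kubevirt-storage-class-defaults -n openshift-cnv -o yaml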

7.1.2. Pasting in a pre-configured YAML file to create a virtual machine

Create a virtual machine by writing or pasting a YAML configuration file. A valid example virtual machine configuration is provided by default whenever you open the YAML edit screen.

If your YAML configuration is invalid when you click Create, an error message indicates the parameter in which the error occurs. Only one error is shown at a time.

Note

Navigating away from the YAML screen while editing cancels any changes to the configuration you have made.

Procedure

  1. Click Workloads → Virtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Click Create Virtual Machine and select New from YAML.
  4. Write or paste your virtual machine configuration in the editable window.

    1. Alternatively, use the example virtual machine provided by default in the YAML screen.
  5. Optional: Click Download to download the YAML configuration file in its present state.
  6. Click Create to create the virtual machine.

The virtual machine is listed in the Virtual Machines tab.

7.1.3. Using the CLI to create a virtual machine

Procedure

The spec object of the VirtualMachine configuration file references the virtual machine settings, such as the number of cores and the amount of memory, the disk type, and the volumes to use.

  1. Attach the virtual machine disk to the virtual machine by referencing the relevant PVC claimName as a volume.
  2. To create a virtual machine with the OpenShift Container Platform client, run this command:

    $ oc create -f <vm.yaml>
  3. Because virtual machines are created in a Stopped state, start the virtual machine to run a virtual machine instance.
Note

A ReplicaSet’s purpose is often to guarantee the availability of a specified number of identical pods. ReplicaSets are not currently supported in OpenShift Virtualization.

Table 7.1. Domain settings

Setting | Description

Cores

The number of cores inside the virtual machine. Must be a value greater than or equal to 1.

Memory

The amount of RAM that is allocated to the virtual machine by the node. Specify a value in M for Megabyte or Gi for Gigabyte.

Disks

The name of the volume that is referenced. Must match the name of a volume.

Table 7.2. Volume settings

Setting | Description

Name

The name of the volume, which must be a DNS label and unique within the virtual machine.

PersistentVolumeClaim

The PVC to attach to the virtual machine. The claimName of the PVC must be in the same project as the virtual machine.
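
The following manifest sketch ties the procedure and the two tables together. It is illustrative only and assumes an existing PVC named example-pvc; adjust the names, cores, and memory for your environment:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  running: false
  template:
    spec:
      domain:
        cpu:
          cores: 1
        resources:
          requests:
            memory: 1Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: example-pvc

Save the manifest to a file such as vm.yaml, create the virtual machine with oc create -f vm.yaml, and then start it as described above.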

7.1.4. Virtual machine storage volume types

Storage volume type | Description

ephemeral

A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim. The ephemeral image is created when the virtual machine starts and stores all writes locally. The ephemeral image is discarded when the virtual machine is stopped, restarted, or deleted. The backing volume (PVC) is not mutated in any way.

persistentVolumeClaim

Attaches an available PV to a virtual machine. Attaching a PV allows for the virtual machine data to persist between sessions.

Importing an existing virtual machine disk into a PVC by using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into OpenShift Container Platform. There are some requirements for the disk to be used within a PVC.

dataVolume

DataVolumes build on the persistentVolumeClaim disk type by managing the process of preparing the virtual machine disk via an import, clone, or upload operation. VMs that use this volume type are guaranteed not to start until the volume is ready.

Specify type: dataVolume or type: "". If you specify any other value for type, such as persistentVolumeClaim, a warning is displayed, and the virtual machine does not start.

cloudInitNoCloud

Attaches a disk that contains the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A cloud-init installation is required inside the virtual machine disk.

containerDisk

References an image, such as a virtual machine disk, that is stored in the container image registry. The image is pulled from the registry and attached to the virtual machine as a disk when the virtual machine is launched.

A containerDisk volume is not limited to a single virtual machine and is useful for creating large numbers of virtual machine clones that do not require persistent storage.

Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size.

Note

A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. A containerDisk volume is useful for read-only filesystems such as CD-ROMs or for disposable virtual machines.

emptyDisk

Creates an additional sparse QCOW2 disk that is tied to the life-cycle of the virtual machine. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that otherwise exceeds the limited temporary file system of an ephemeral disk.

The disk capacity size must also be provided.
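
As an illustration of the dataVolume type, a disk backed by a DataVolume is referenced by name in the volumes list of the virtual machine specification. A minimal sketch, assuming a DataVolume named example-dv already exists or is defined in the dataVolumeTemplates section:

volumes:
  - name: datavolumedisk
    dataVolume:
      name: example-dv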

7.1.5. Additional resources

The VirtualMachineSpec definition in the KubeVirt v0.30.5 API Reference provides broader context for the parameters and hierarchy of the virtual machine specification.

Note

The KubeVirt API Reference is the upstream project reference and might contain parameters that are not supported in OpenShift Virtualization.

7.2. Editing virtual machines

You can update a virtual machine configuration using either the YAML editor in the web console or the OpenShift client on the command line. You can also update a subset of the parameters in the Virtual Machine Overview of the web console.

7.2.1. Editing a virtual machine in the web console

Edit select values of a virtual machine in the Virtual Machine Overview screen of the web console by clicking on the pencil icon next to the relevant field. Other values can be edited using the CLI.

Procedure

  1. Click Workloads → Virtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Select a virtual machine to open the Virtual Machine Overview screen.
  4. Click the Details tab.
  5. Click the pencil icon to make a field editable.
  6. Make the relevant changes and click Save.

If the virtual machine is running, changes will not take effect until you reboot the virtual machine.

7.2.2. Editing a virtual machine YAML configuration using the web console

Using the web console, edit the YAML configuration of a virtual machine.

Not all parameters can be updated. If you edit values that cannot be changed and click Save, an error message indicates the parameter that could not be updated.

The YAML configuration can be edited while the virtual machine is Running; however, the changes take effect only after the virtual machine has been stopped and started again.

Note

Navigating away from the YAML screen while editing cancels any changes to the configuration you have made.

Procedure

  1. Click Workloads → Virtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Select a virtual machine to open the Virtual Machine Overview screen.
  4. Click the YAML tab to display the editable configuration.
  5. Optional: You can click Download to download the YAML file locally in its current state.
  6. Edit the file and click Save.

A confirmation message shows that the modification has been successful and includes the updated version number for the object.

7.2.3. Editing a virtual machine YAML configuration using the CLI

Use this procedure to edit a virtual machine YAML configuration using the CLI.

Prerequisites

  • You configured a virtual machine with a YAML object configuration file.
  • You installed the oc CLI.

Procedure

  1. Run the following command to update the virtual machine configuration:

    $ oc edit <object_type> <object_ID>
  2. Open the object configuration.
  3. Edit the YAML.
  4. If you edit a running virtual machine, you need to do one of the following:

    • Restart the virtual machine.
    • Run the following command to apply the saved configuration file so that the new configuration takes effect:

      $ oc apply -f <config_file>.yaml

7.2.4. Adding a virtual disk to a virtual machine

Use this procedure to add a virtual disk to a virtual machine.

Procedure

  1. From the Virtual Machines tab, select your virtual machine.
  2. Select the Disks tab.
  3. Click Add Disks to open the Add Disk window.
  4. In the Add Disk window, specify Source, Name, Size, Interface, and Storage Class.

    1. Optional: In the Advanced list, specify the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults ConfigMap.
  5. Use the drop-down lists and check boxes to edit the disk configuration.
  6. Click OK.

For more information on the kubevirt-storage-class-defaults ConfigMap, see Storage defaults for DataVolumes.

7.2.4.1. Storage fields

Name | Description

Source

Select a blank disk for the virtual machine or choose from the options available: URL, Container, Attach Cloned Disk, or Attach Disk. To select an existing disk and attach it to the virtual machine, choose Attach Cloned Disk or Attach Disk from a list of available PersistentVolumeClaims (PVCs).

Name

Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters.

Size (GiB)

Size, in GiB, of the disk.

Interface

Type of disk device. Supported interfaces are virtIO, SATA, and SCSI.

Storage Class

The StorageClass that is used to create the disk.

Advanced → Volume Mode

 

Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem.

Advanced → Access Mode

 

Access mode of the persistent volume. Supported access modes are Single User (RWO), Shared Access (RWX), and Read Only (ROX).

Advanced storage settings

The following advanced storage settings are available for Blank, Import via URL, and Clone existing PVC disks. These parameters are optional. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map.

Name | Parameter | Description

Volume Mode

Filesystem

Stores the virtual disk on a filesystem-based volume.

Block

Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it.

Access Mode

Single User (RWO)

The disk can be mounted as read/write by a single node.

Shared Access (RWX)

The disk can be mounted as read/write by many nodes.

Note

This is required for some features, such as live migration of virtual machines between nodes.

Read Only (ROX)

The disk can be mounted as read-only by many nodes.

7.2.5. Adding a network interface to a virtual machine

Use this procedure to add a network interface to a virtual machine.

Procedure

  1. From the Virtual Machines tab, select the virtual machine.
  2. Select the Network Interfaces tab.
  3. Click Add Network Interface.
  4. In the Add Network Interface window, specify the Name, Model, Network, Type, and MAC Address of the network interface.
  5. Click Add to add the network interface.
  6. Restart the virtual machine to enable access.
  7. Edit the drop-down lists and check boxes to configure the network interface.
  8. Click Save Changes.
  9. Click OK.

The new network interface displays at the top of the Create Network Interface list until you restart the virtual machine.

The new network interface has a Pending VM restart Link State until you restart the virtual machine. Hover over the Link State to display more detailed information.

The Link State is set to Up by default when the network interface card is defined on the virtual machine and connected to the network.

7.2.5.1. Networking fields

Name | Description

Name

Name for the Network Interface Card.

Model

Indicates the model of the Network Interface Card. Supported values are e1000, e1000e, ne2k_pci, pcnet, rtl8139, and virtIO.

Network

List of available NetworkAttachmentDefinition objects.

Type

List of available binding methods. For the default Pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. The masquerade method is not supported for non-default networks.

MAC Address

MAC address for the Network Interface Card. If a MAC address is not specified, an ephemeral address is generated for the session.

7.2.6. Editing CD-ROMs for Virtual Machines

Use the following procedure to configure CD-ROMs for virtual machines.

Procedure

  1. From the Virtual Machines tab, select your virtual machine.
  2. Select the Overview tab.
  3. To add or edit a CD-ROM configuration, click the pencil icon to the right of the CD-ROMs label. The Edit CD-ROM window opens.

    • If CD-ROMs are unavailable for editing, the following message displays: The virtual machine doesn’t have any CD-ROMs attached.
    • If there are CD-ROMs available, you can remove a CD-ROM by clicking -.
  4. In the Edit CD-ROM window, do the following:

    1. Select the type of CD-ROM configuration from the drop-down list for Media Type. CD-ROM configuration types are Container, URL, and Persistent Volume Claim.
    2. Complete the required information for each Type.
    3. When all CD-ROMs are added, click Save.

7.3. Editing boot order

You can update the values for a boot order list by using the web console or the CLI.

With Boot Order in the Virtual Machine Overview page, you can:

  • Select a disk or Network Interface Card (NIC) and add it to the boot order list.
  • Edit the order of the disks or NICs in the boot order list.
  • Remove a disk or NIC from the boot order list, and return it back to the inventory of bootable sources.

7.3.1. Adding items to a boot order list in the web console

Add items to a boot order list by using the web console.

Procedure

  1. Click Workloads → Virtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Select a virtual machine to open the Virtual Machine Overview screen.
  4. Click the Details tab.
  5. Click the pencil icon that is located on the right side of Boot Order. If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot disks from YAML by order of appearance in YAML file. Please select a boot source.
  6. Click Add Source and select a bootable disk or Network Interface Card (NIC) for the virtual machine.
  7. Add any additional disks or NICs to the boot order list.
  8. Click Save.

7.3.2. Editing a boot order list in the web console

Edit the boot order list in the web console.

Procedure

  1. Click Workloads → Virtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Select a virtual machine to open the Virtual Machine Overview screen.
  4. Click the Details tab.
  5. Click the pencil icon that is located on the right side of Boot Order.
  6. Choose the appropriate method to move the item in the boot order list:

    • If you do not use a screen reader, hover over the arrow icon next to the item that you want to move, drag the item up or down, and drop it in a location of your choice.
    • If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice.
  7. Click Save.

7.3.3. Editing a boot order list in the YAML configuration file

Edit the boot order list in a YAML configuration file by using the CLI.

Procedure

  1. Open the YAML configuration file for the virtual machine by running the following command:

    $ oc edit vm example
  2. Edit the YAML file and modify the values for the boot order associated with a disk or Network Interface Card (NIC). For example:

    disks:
      - bootOrder: 1 1
        disk:
          bus: virtio
        name: containerdisk
      - disk:
          bus: virtio
        name: cloudinitdisk
      - cdrom:
          bus: virtio
        name: cd-drive-1
    interfaces:
      - bootOrder: 2 2
        macAddress: '02:96:c4:00:00'
        masquerade: {}
        name: default
    1
    The boot order value specified for the disk.
    2
    The boot order value specified for the Network Interface Card.
  3. Save the YAML file.
  4. Click reload the content to apply the updated boot order values from the YAML file to the boot order list in the web console.

7.3.4. Removing items from a boot order list in the web console

Remove items from a boot order list by using the web console.

Procedure

  1. Click Workloads → Virtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Select a virtual machine to open the Virtual Machine Overview screen.
  4. Click the Details tab.
  5. Click the pencil icon that is located on the right side of Boot Order.
  6. Click the Remove icon next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot disks from YAML by order of appearance in YAML file. Please select a boot source.

7.4. Deleting virtual machines

You can delete a virtual machine from the web console or by using the oc command-line interface.

7.4.1. Deleting a virtual machine using the web console

Deleting a virtual machine permanently removes it from the cluster.

Note

When you delete a virtual machine, the DataVolume it uses is automatically deleted.

Procedure

  1. In the OpenShift Virtualization console, click Workloads → Virtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Click the ⋮ button of the virtual machine that you want to delete and select Delete Virtual Machine.

    • Alternatively, click the virtual machine name to open the Virtual Machine Overview screen and click Actions → Delete Virtual Machine.
  4. In the confirmation pop-up window, click Delete to permanently delete the virtual machine.

7.4.2. Deleting a virtual machine by using the CLI

You can delete a virtual machine by using the oc command-line interface (CLI). The oc client enables you to perform actions on multiple virtual machines.

Note

When you delete a virtual machine, the DataVolume it uses is automatically deleted.

Prerequisites

  • Identify the name of the virtual machine that you want to delete.

Procedure

  • Delete the virtual machine by running the following command:

    $ oc delete vm <vm_name>
    Note

    This command only deletes objects that exist in the current project. Specify the -n <project_name> option if the object you want to delete is in a different project or namespace.
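
Because the oc client can act on more than one object at a time, you can also delete several virtual machines in one command or select them by label. The names and label below are placeholders:

$ oc delete vm <vm_name_1> <vm_name_2> -n <project_name>
$ oc delete vm -l <label_key>=<label_value>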

7.5. Managing virtual machine instances

If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or the command-line interface (CLI).

7.5.1. About virtual machine instances

A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc command-line interface (CLI).

A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs:

  • List standalone VMIs and their details.
  • Edit labels and annotations for a standalone VMI.
  • Delete a standalone VMI.

When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects.

Note

Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs.

7.5.2. Listing all virtual machine instances using the CLI

You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI).

Procedure

  • List all VMIs by running the following command:

    $ oc get vmis
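
By default, oc get returns objects only from the current project. To list VMIs across all namespaces, for example before uninstalling OpenShift Virtualization, you can add the standard all-namespaces flag, assuming you have permission to list them cluster-wide:

$ oc get vmis --all-namespaces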

7.5.3. Listing standalone virtual machine instances using the web console

Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs).

Note

VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI.

Procedure

  • Click Workloads → Virtualization from the side menu. A list of VMs and standalone VMIs displays. You can identify standalone VMIs by the dark colored badges that display next to the virtual machine instance names.

7.5.4. Editing a standalone virtual machine instance using the web console

You can edit annotations and labels for a standalone virtual machine instance (VMI) using the web console. Other items displayed in the Details page for a standalone VMI are not editable.

Procedure

  1. Click Workloads → Virtualization from the side menu. A list of virtual machines (VMs) and standalone VMIs displays.
  2. Click the name of a standalone VMI to open the Virtual Machine Instance Overview screen.
  3. Click the Details tab.
  4. Click the pencil icon that is located on the right side of Annotations.
  5. Make the relevant changes and click Save.
Note

To edit labels for a standalone VMI, click Actions and select Edit Labels. Make the relevant changes and click Save.

7.5.5. Deleting a standalone virtual machine instance using the CLI

You can delete a standalone virtual machine instance (VMI) by using the oc command-line interface (CLI).

Prerequisites

  • Identify the name of the VMI that you want to delete.

Procedure

  • Delete the VMI by running the following command:

    $ oc delete vmi <vmi_name>

7.5.6. Deleting a standalone virtual machine instance using the web console

Delete a standalone virtual machine instance (VMI) from the web console.

Procedure

  1. In the OpenShift Container Platform web console, click Workloads → Virtualization from the side menu.
  2. Click the ⋮ button of the standalone virtual machine instance (VMI) that you want to delete and select Delete Virtual Machine Instance.

    • Alternatively, click the name of the standalone VMI. The Virtual Machine Instance Overview page displays.
  3. Select Actions → Delete Virtual Machine Instance.
  4. In the confirmation pop-up window, click Delete to permanently delete the standalone VMI.

7.6. Controlling virtual machine states

You can stop, start, restart, and unpause virtual machines from the web console.

Note

To control virtual machines from the command-line interface (CLI), use the virtctl client.
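
For reference, a quick sketch of the corresponding virtctl commands follows; the virtual machine name is a placeholder:

$ virtctl start <vm_name>
$ virtctl restart <vm_name>
$ virtctl stop <vm_name>
$ virtctl pause vm <vm_name>
$ virtctl unpause vm <vm_name>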

7.6.1. Starting a virtual machine

You can start a virtual machine from the web console.

Procedure

  1. Click Workloads → Virtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Find the row that contains the virtual machine that you want to start.
  4. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. Click the Options menu ⋮ located at the far right end of the row.
    • To view comprehensive information about the selected virtual machine before you start it:

      1. Access the Virtual Machine Overview screen by clicking the name of the virtual machine.
      2. Click Actions.
  5. Select Start Virtual Machine.
  6. In the confirmation window, click Start to start the virtual machine.
Note

When you start a virtual machine that is provisioned from a URL source for the first time, the virtual machine has a status of Importing while OpenShift Virtualization imports the container from the URL endpoint. Depending on the size of the image, this process might take several minutes.

7.6.2. Restarting a virtual machine

You can restart a running virtual machine from the web console.

Important

To avoid errors, do not restart a virtual machine while it has a status of Importing.

Procedure

  1. Click Workloads → Virtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Find the row that contains the virtual machine that you want to restart.
  4. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. Click the Options menu ⋮ located at the far right end of the row.
    • To view comprehensive information about the selected virtual machine before you restart it:

      1. Access the Virtual Machine Overview screen by clicking the name of the virtual machine.
      2. Click Actions.
  5. Select Restart Virtual Machine.
  6. In the confirmation window, click Restart to restart the virtual machine.

7.6.3. Stopping a virtual machine

You can stop a virtual machine from the web console.

Procedure

  1. Click Workloads → Virtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Find the row that contains the virtual machine that you want to stop.
  4. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. Click the Options menu ⋮ located at the far right end of the row.
    • To view comprehensive information about the selected virtual machine before you stop it:

      1. Access the Virtual Machine Overview screen by clicking the name of the virtual machine.
      2. Click Actions.
  5. Select Stop Virtual Machine.
  6. In the confirmation window, click Stop to stop the virtual machine.

7.6.4. Unpausing a virtual machine

You can unpause a paused virtual machine from the web console.

Prerequisites

  • At least one of your virtual machines must have a status of Paused.

    Note

    You can pause virtual machines by using the virtctl client.

Procedure

  1. Click Workloads → Virtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Find the row that contains the virtual machine that you want to unpause.
  4. Navigate to the appropriate menu for your use case:

    • To stay on this page, where you can perform actions on multiple virtual machines:

      1. In the Status column, click Paused.
    • To view comprehensive information about the selected virtual machine before you unpause it:

      1. Access the Virtual Machine Overview screen by clicking the name of the virtual machine.
      2. Click the pencil icon that is located on the right side of Status.
  5. In the confirmation window, click Unpause to unpause the virtual machine.

7.7. Accessing virtual machine consoles

OpenShift Virtualization provides different virtual machine consoles that you can use to accomplish different product tasks. You can access these consoles through the web console and by using CLI commands.

7.7.1. Virtual machine console sessions

You can connect to the VNC and serial consoles of a running virtual machine from the Consoles tab in the Virtual Machine Details screen of the web console.

There are two consoles available: the graphical VNC Console and the Serial Console. The VNC Console opens by default whenever you navigate to the Consoles tab. You can switch between the consoles by using the VNC Console drop-down list.

Console sessions remain active in the background unless they are disconnected. When the Disconnect before switching checkbox is active and you switch consoles, the current console session is disconnected and a new session with the selected console connects to the virtual machine. This ensures only one console session is open at a time.

Options for the VNC Console

The Send Key button lists key combinations to send to the virtual machine.

Options for the Serial Console

Use the Disconnect button to manually disconnect the Serial Console session from the virtual machine.
Use the Reconnect button to manually open a Serial Console session to the virtual machine.

7.7.2. Connecting to the virtual machine with the web console

7.7.2.1. Connecting to the terminal

You can connect to a virtual machine by using the web console.

Procedure

  1. Ensure you are in the correct project. If not, click the Project list and select the appropriate project.
  2. Click Workloads → Virtualization from the side menu.
  3. Click the Virtual Machines tab.
  4. Select a virtual machine to open the Virtual Machine Overview screen.
  5. In the Details tab, click the virt-launcher-<vm-name> pod.
  6. Click the Terminal tab. If the terminal is blank, select the terminal and press any key to initiate connection.

7.7.2.2. Connecting to the serial console

Connect to the Serial Console of a running virtual machine from the Console tab in the Virtual Machine Overview screen of the web console.

Procedure

  1. In the OpenShift Virtualization console, click Workloads → Virtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Select a virtual machine to open the Virtual Machine Overview screen.
  4. Click Console. The VNC console opens by default.
  5. Click the VNC Console drop-down list and select Serial Console.

7.7.2.3. Connecting to the VNC console

Connect to the VNC console of a running virtual machine from the Console tab in the Virtual Machine Overview screen of the web console.

Procedure

  1. In the OpenShift Virtualization console, click Workloads → Virtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Select a virtual machine to open the Virtual Machine Overview screen.
  4. Click the Console tab. The VNC console opens by default.

7.7.2.4. Connecting to the RDP console

The desktop viewer console, which utilizes the Remote Desktop Protocol (RDP), provides a better console experience for connecting to Windows virtual machines.

To connect to a Windows virtual machine with RDP, download the console.rdp file for the virtual machine from the Consoles tab in the Virtual Machine Details screen of the web console and supply it to your preferred RDP client.

Prerequisites

  • A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent is included in the VirtIO drivers.
  • A layer-2 NIC attached to the virtual machine.
  • An RDP client installed on a machine on the same network as the Windows virtual machine.

Procedure

  1. In the OpenShift Virtualization console, click Workloads → Virtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Select a Windows virtual machine to open the Virtual Machine Overview screen.
  4. Click the Console tab.
  5. In the Console list, select Desktop Viewer.
  6. In the Network Interface list, select the layer-2 NIC.
  7. Click Launch Remote Desktop to download the console.rdp file.
  8. Open an RDP client and reference the console.rdp file. For example, using remmina:

    $ remmina --connect /path/to/console.rdp
  9. Enter the Administrator user name and password to connect to the Windows virtual machine.

7.7.3. Accessing virtual machine consoles by using CLI commands

7.7.3.1. Accessing a virtual machine instance via SSH

You can use SSH to access a virtual machine after you expose port 22 on it.

The virtctl expose command forwards a virtual machine instance port to a node port and creates a service to enable access. The following example creates the fedora-vm-ssh service, which forwards port 22 of the <fedora-vm> virtual machine to a port on the node:

Prerequisites

  • You must be in the same project as the virtual machine instance.
  • The virtual machine instance you want to access must be connected to the default Pod network by using the masquerade binding method.
  • The virtual machine instance you want to access must be running.
  • Install the OpenShift CLI (oc).

Procedure

  1. Run the following command to create the fedora-vm-ssh service:

    $ virtctl expose vm <fedora-vm> --port=20022 --target-port=22 --name=fedora-vm-ssh --type=NodePort 1
    1
    <fedora-vm> is the name of the virtual machine that you run the fedora-vm-ssh service on.
  2. Check the service to find out which port the service acquired:

    $ oc get svc

Example output

NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
fedora-vm-ssh   NodePort   127.0.0.1      <none>        20022:32551/TCP   6s

In this example, the service acquired port 32551.

  3. Log in to the virtual machine instance via SSH. Use the ipAddress of the node and the port that you found in the previous step:

    $ ssh username@<node_IP_address> -p 32551

7.7.3.2. Accessing the serial console of a virtual machine instance

The virtctl console command opens a serial console to the specified virtual machine instance.

Prerequisites

  • The virt-viewer package must be installed.
  • The virtual machine instance you want to access must be running.

Procedure

  • Connect to the serial console with virtctl:

    $ virtctl console <VMI>

7.7.3.3. Accessing the graphical console of a virtual machine instance with VNC

The virtctl client utility can use the remote-viewer function to open a graphical console to a running virtual machine instance. This capability is included in the virt-viewer package.

Prerequisites

  • The virt-viewer package must be installed.
  • The virtual machine instance you want to access must be running.
Note

If you use virtctl via SSH on a remote machine, you must forward the X session to your machine.

Procedure

  1. Connect to the graphical interface with the virtctl utility:

    $ virtctl vnc <VMI>
  2. If the command fails, try using the -v flag to collect troubleshooting information:

    $ virtctl vnc <VMI> -v 4

7.7.3.4. Connecting to a Windows virtual machine with an RDP console

The Remote Desktop Protocol (RDP) provides a better console experience for connecting to Windows virtual machines.

To connect to a Windows virtual machine with RDP, specify the IP address of the attached L2 NIC to your RDP client.

Prerequisites

  • A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent is included in the VirtIO drivers.
  • A layer 2 NIC attached to the virtual machine.
  • An RDP client installed on a machine on the same network as the Windows virtual machine.

Procedure

  1. Log in to the OpenShift Virtualization cluster through the oc CLI tool as a user with an access token.

    $ oc login -u <user> https://<cluster.example.com>:8443
  2. Use oc describe vmi to display the configuration of the running Windows virtual machine.

    $ oc describe vmi <windows-vmi-name>

    Example output

    ...
    spec:
      networks:
      - name: default
        pod: {}
      - multus:
          networkName: cnv-bridge
        name: bridge-net
    ...
    status:
      interfaces:
      - interfaceName: eth0
        ipAddress: 198.51.100.0/24
        ipAddresses:
          198.51.100.0/24
        mac: a0:36:9f:0f:b1:70
        name: default
      - interfaceName: eth1
        ipAddress: 192.0.2.0/24
        ipAddresses:
          192.0.2.0/24
          2001:db8::/32
        mac: 00:17:a4:77:77:25
        name: bridge-net
    ...

  3. Identify and copy the IP address of the layer 2 network interface. This is 192.0.2.0 in the above example, or 2001:db8:: if you prefer IPv6.
  4. Open an RDP client and use the IP address copied in the previous step for the connection.
  5. Enter the Administrator user name and password to connect to the Windows virtual machine.

7.8. Managing ConfigMaps, secrets, and service accounts in virtual machines

You can use secrets, ConfigMaps, and service accounts to pass configuration data to virtual machines. For example, you can:

  • Give a virtual machine access to a service that requires credentials by adding a secret to the virtual machine.
  • Store non-confidential configuration data in a ConfigMap so that a Pod or another object can consume the data.
  • Allow a component to access the API server by associating a service account with that component.
Note

OpenShift Virtualization exposes secrets, ConfigMaps, and service accounts as virtual machine disks so that you can use them across platforms without additional overhead.
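
As an illustration of this disk-based approach, a secret attached to a virtual machine appears in its specification roughly as follows; the name example-secret is a placeholder, and ConfigMaps and service accounts use the analogous configMap and serviceAccount volume types:

spec:
  domain:
    devices:
      disks:
        - name: app-secret-disk
          disk:
            bus: virtio
  volumes:
    - name: app-secret-disk
      secret:
        secretName: example-secret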

7.8.1. Adding a secret, ConfigMap, or service account to a virtual machine

Add a secret, ConfigMap, or service account to a virtual machine by using the OpenShift Container Platform web console.

Prerequisites

  • The secret, ConfigMap, or service account that you want to add must exist in the same namespace as the target virtual machine.
  • The virtual machine must be powered off.

Procedure

  1. From the side menu, click Virtualization.
  2. Click the Virtual Machines tab.
  3. Select a virtual machine to open its Virtual Machine Overview page.
  4. Click the Environment tab.
  5. Click Select a resource and select a secret, ConfigMap, or service account from the list.
  6. Click Save.
  7. Optional: Add another object by clicking Add Config Map, Secret or Service Account.
Note

You can reset the form to the last saved state by clicking Reload.

Verification

  1. From the Virtual Machine Overview page, click the Disks tab.
  2. Check to ensure that the secret, ConfigMap, or service account is included in the list of disks.
  3. Optional: Start the virtual machine by clicking Actions → Start Virtual Machine. You can now mount the secret, ConfigMap, or service account as you would mount any other disk.

7.8.2. Removing a secret, ConfigMap, or service account from a virtual machine

Remove a secret, ConfigMap, or service account from a virtual machine by using the OpenShift Container Platform web console.

Prerequisites

  • You must have at least one secret, ConfigMap, or service account that is attached to a virtual machine.
  • The virtual machine must be powered off.

Procedure

  1. From the side menu, click Virtualization.
  2. Click the Virtual Machines tab.
  3. Select a virtual machine to open its Virtual Machine Overview page.
  4. Click the Environment tab.
  5. Find the item that you want to delete in the list, and click the Delete button on the right side of the item.
  6. Click Save.
Note

You can reset the form to the last saved state by clicking Reload.

Verification

  1. From the Virtual Machine Overview page, click the Disks tab.
  2. Check to ensure that the secret, ConfigMap, or service account that you removed is no longer included in the list of disks.

7.8.3. Additional resources

7.9. Installing VirtIO driver on an existing Windows virtual machine

7.9.1. Understanding VirtIO drivers

VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in OpenShift Virtualization. The supported drivers are available in the container-native-virtualization/virtio-win container disk of the Red Hat Ecosystem Catalog.

The container-native-virtualization/virtio-win container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation on the virtual machine or add them to an existing Windows installation.

After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the virtual machine.

See also: Installing VirtIO driver on a new Windows virtual machine.

7.9.2. Supported VirtIO drivers for Microsoft Windows virtual machines

Table 7.3. Supported drivers

Driver name | Hardware ID | Description

viostor

VEN_1AF4&DEV_1001
VEN_1AF4&DEV_1042

The block driver. Sometimes displays as an SCSI Controller in the Other devices group.

viorng

VEN_1AF4&DEV_1005
VEN_1AF4&DEV_1044

The entropy source driver. Sometimes displays as a PCI Device in the Other devices group.

NetKVM

VEN_1AF4&DEV_1000
VEN_1AF4&DEV_1041

The network driver. Sometimes displays as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured.

7.9.3. Adding VirtIO drivers container disk to a virtual machine

OpenShift Virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Ecosystem Catalog. To install these drivers to a Windows virtual machine, attach the container-native-virtualization/virtio-win container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file.

Prerequisites

  • Download the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog. This is not mandatory, because the container disk will be downloaded from the Red Hat registry if it is not already present in the cluster, but it can reduce installation time.

Procedure

  1. Add the container-native-virtualization/virtio-win container disk as a cdrom disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster.

    spec:
      domain:
        devices:
          disks:
            - name: virtiocontainerdisk
              bootOrder: 2 1
              cdrom:
                bus: sata
    volumes:
      - containerDisk:
          image: container-native-virtualization/virtio-win
        name: virtiocontainerdisk
    1
    OpenShift Virtualization boots virtual machine disks in the order defined in the VirtualMachine configuration file. You can either define other disks for the virtual machine before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the virtual machine boots from the correct disk. If you specify the bootOrder for a disk, it must be specified for all disks in the configuration.
  2. The disk is available once the virtual machine has started:

    • If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect.
    • If the virtual machine is not running, use virtctl start <vm>.

After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive.

7.9.4. Installing VirtIO drivers on an existing Windows virtual machine

Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine.

Note

This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. Refer to the installation documentation for your version of Windows for specific installation steps.

Procedure

  1. Start the virtual machine and connect to a graphical console.
  2. Log in to a Windows user session.
  3. Open Device Manager and expand Other devices to list any Unknown device.

    1. Open the Device Properties to identify the unknown device. Right-click the device and select Properties.
    2. Click the Details tab and select Hardware Ids in the Property list.
    3. Compare the Value for the Hardware Ids with the supported VirtIO drivers.
  4. Right-click the device and select Update Driver Software.
  5. Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
  6. Click Next to install the driver.
  7. Repeat this process for all the necessary VirtIO drivers.
  8. After the driver installs, click Close to close the window.
  9. Reboot the virtual machine to complete the driver installation.

7.9.5. Removing the VirtIO container disk from a virtual machine

After installing all required VirtIO drivers to the virtual machine, the container-native-virtualization/virtio-win container disk no longer needs to be attached to the virtual machine. Remove the container-native-virtualization/virtio-win container disk from the virtual machine configuration file.

Procedure

  1. Edit the configuration file and remove the disk and the volume.

    $ oc edit vm <vm-name>
    spec:
      domain:
        devices:
          disks:
            - name: virtiocontainerdisk
              bootOrder: 2
              cdrom:
                bus: sata
    volumes:
      - containerDisk:
          image: container-native-virtualization/virtio-win
        name: virtiocontainerdisk
  2. Reboot the virtual machine for the changes to take effect.

7.10. Installing VirtIO driver on a new Windows virtual machine

7.10.1. Prerequisites

7.10.2. Understanding VirtIO drivers

VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in OpenShift Virtualization. The supported drivers are available in the container-native-virtualization/virtio-win container disk of the Red Hat Ecosystem Catalog.

The container-native-virtualization/virtio-win container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation on the virtual machine or add them to an existing Windows installation.

After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the virtual machine.

See also: Installing VirtIO driver on an existing Windows virtual machine.

7.10.3. Supported VirtIO drivers for Microsoft Windows virtual machines

Table 7.4. Supported drivers

Driver name | Hardware ID | Description

viostor

VEN_1AF4&DEV_1001
VEN_1AF4&DEV_1042

The block driver. Sometimes displays as an SCSI Controller in the Other devices group.

viorng

VEN_1AF4&DEV_1005
VEN_1AF4&DEV_1044

The entropy source driver. Sometimes displays as a PCI Device in the Other devices group.

NetKVM

VEN_1AF4&DEV_1000
VEN_1AF4&DEV_1041

The network driver. Sometimes displays as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured.

7.10.4. Adding VirtIO drivers container disk to a virtual machine

OpenShift Virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Ecosystem Catalog. To install these drivers to a Windows virtual machine, attach the container-native-virtualization/virtio-win container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file.

Prerequisites

  • Download the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog. This is not mandatory, because the container disk will be downloaded from the Red Hat registry if it is not already present in the cluster, but it can reduce installation time.

Procedure

  1. Add the container-native-virtualization/virtio-win container disk as a cdrom disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster.

    spec:
      domain:
        devices:
          disks:
            - name: virtiocontainerdisk
              bootOrder: 2 1
              cdrom:
                bus: sata
    volumes:
      - containerDisk:
          image: container-native-virtualization/virtio-win
        name: virtiocontainerdisk
    1
    OpenShift Virtualization boots virtual machine disks in the order defined in the VirtualMachine configuration file. You can either define other disks for the virtual machine before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the virtual machine boots from the correct disk. If you specify the bootOrder for a disk, it must be specified for all disks in the configuration.
  2. The disk is available once the virtual machine has started:

    • If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect.
    • If the virtual machine is not running, use virtctl start <vm>.

After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive.

7.10.5. Installing VirtIO drivers during Windows installation

Install the VirtIO drivers from the attached SATA CD drive during Windows installation.

Note

This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. Refer to the documentation for the version of Windows that you are installing.

Procedure

  1. Start the virtual machine and connect to a graphical console.
  2. Begin the Windows installation process.
  3. Select the Advanced installation.
  4. The storage destination will not be recognized until the driver is loaded. Click Load driver.
  5. The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
  6. Repeat the previous two steps for all required drivers.
  7. Complete the Windows installation.

7.10.6. Removing the VirtIO container disk from a virtual machine

After installing all required VirtIO drivers to the virtual machine, the container-native-virtualization/virtio-win container disk no longer needs to be attached to the virtual machine. Remove the container-native-virtualization/virtio-win container disk from the virtual machine configuration file.

Procedure

  1. Edit the configuration file and remove the disk and the volume.

    $ oc edit vm <vm-name>
    spec:
      domain:
        devices:
          disks:
            - name: virtiocontainerdisk
              bootOrder: 2
              cdrom:
                bus: sata
    volumes:
      - containerDisk:
          image: container-native-virtualization/virtio-win
        name: virtiocontainerdisk
  2. Reboot the virtual machine for the changes to take effect.
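
For example, you can trigger the reboot from the command line by using virtctl. The virtual machine name is a placeholder:

$ virtctl restart <vm-name>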

7.11. Advanced virtual machine management

7.11.1. Automating management tasks

You can automate OpenShift Virtualization management tasks by using Red Hat Ansible Automation Platform. Learn the basics by using an Ansible Playbook to create a new virtual machine.

7.11.1.1. About Red Hat Ansible Automation

Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. Ansible includes support for OpenShift Virtualization, and Ansible modules enable you to automate cluster management tasks such as template, persistent volume claim, and virtual machine operations.

Ansible provides a way to automate OpenShift Virtualization management, which you can also accomplish by using the oc CLI tool or APIs. Ansible is unique because it allows you to integrate KubeVirt modules with other Ansible modules.

7.11.1.2. Automating virtual machine creation

You can use the kubevirt_vm Ansible Playbook to create virtual machines in your OpenShift Container Platform cluster using Red Hat Ansible Automation Platform.

Prerequisites

Procedure

  1. Edit an Ansible Playbook YAML file so that it includes the kubevirt_vm task:

      kubevirt_vm:
        namespace:
        name:
        cpu_cores:
        memory:
        disks:
          - name:
            volume:
              containerDisk:
                image:
            disk:
              bus:
    Note

    This snippet only includes the kubevirt_vm portion of the playbook.

  2. Edit the values to reflect the virtual machine you want to create, including the namespace, the number of cpu_cores, the memory, and the disks. For example:

      kubevirt_vm:
        namespace: default
        name: vm1
        cpu_cores: 1
        memory: 64Mi
        disks:
          - name: containerdisk
            volume:
              containerDisk:
                image: kubevirt/cirros-container-disk-demo:latest
            disk:
              bus: virtio
  3. If you want the virtual machine to boot immediately after creation, add state: running to the YAML file. For example:

      kubevirt_vm:
        namespace: default
        name: vm1
        state: running 1
        cpu_cores: 1
    1
    Changing this value to state: absent deletes the virtual machine, if it already exists.
  4. Run the ansible-playbook command, using your playbook’s file name as the only argument:

    $ ansible-playbook create-vm.yaml
  5. Review the output to determine if the play was successful:

    Example output

    (...)
    TASK [Create my first VM] ************************************************************************
    changed: [localhost]
    
    PLAY RECAP ********************************************************************************************************
    localhost                  : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

  6. If you did not include state: running in your playbook file and you want to boot the VM now, edit the file so that it includes state: running and run the playbook again:

    $ ansible-playbook create-vm.yaml

To verify that the virtual machine was created, try to access the VM console.
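
For example, assuming the vm1 virtual machine from the playbook was created in the default namespace, you can check the resource and then open its serial console:

$ oc get vm vm1 -n default
$ virtctl console vm1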

7.11.1.3. Example: Ansible Playbook for creating virtual machines

You can use the kubevirt_vm Ansible Playbook to automate virtual machine creation.

The following YAML file is an example of the kubevirt_vm playbook. It includes sample values that you must replace with your own information if you run the playbook.

---
- name: Ansible Playbook 1
  hosts: localhost
  connection: local
  tasks:
    - name: Create my first VM
      kubevirt_vm:
        namespace: default
        name: vm1
        cpu_cores: 1
        memory: 64Mi
        disks:
          - name: containerdisk
            volume:
              containerDisk:
                image: kubevirt/cirros-container-disk-demo:latest
            disk:
              bus: virtio

7.11.2. Configuring PXE booting for virtual machines

PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host.

7.11.2.1. Prerequisites

  • A Linux bridge must be connected.
  • The PXE server must be connected to the same VLAN as the bridge.

7.11.2.2. OpenShift Virtualization networking glossary

OpenShift Virtualization provides advanced networking functionality by using custom resources and plug-ins.

The following terms are used throughout OpenShift Virtualization documentation:

Container Network Interface (CNI)
a Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plug-ins to build upon the basic Kubernetes networking functionality.
Multus
a "meta" CNI plug-in that allows multiple CNIs to exist so that a Pod or virtual machine can use the interfaces it needs.
Custom Resource Definition (CRD)
a Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
NetworkAttachmentDefinition
a CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
Preboot eXecution Environment (PXE)
an interface that enables an administrator to boot a client machine from a server over the network. Network booting allows you to remotely load operating systems and other software onto the client.

7.11.2.3. PXE booting with a specified MAC address

As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition object for your PXE network. Then, reference the NetworkAttachmentDefinition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server.

Prerequisites

  • A Linux bridge must be connected.
  • The PXE server must be connected to the same VLAN as the bridge.

Procedure

  1. Configure a PXE network on the cluster:

    1. Create the NetworkAttachmentDefinition file for PXE network pxe-net-conf:

      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: pxe-net-conf
      spec:
        config: '{
          "cniVersion": "0.3.1",
          "name": "pxe-net-conf",
          "plugins": [
            {
              "type": "cnv-bridge",
              "bridge": "br1",
              "vlan": 1 1
            },
            {
              "type": "cnv-tuning" 2
            }
          ]
        }'
      1
      Optional: The VLAN tag.
      2
      The cnv-tuning plug-in provides support for custom MAC addresses.
      Note

      The virtual machine instance will be attached to the bridge br1 through an access port with the requested VLAN.

  2. Create the NetworkAttachmentDefinition object by using the file you created in the previous step:

    $ oc create -f pxe-net-conf.yaml
  3. Edit the virtual machine instance configuration file to include the details of the interface and network.

    1. Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically. However, note that at this time, MAC addresses assigned automatically are not persistent.

      Ensure that bootOrder is set to 1 so that the interface boots first. In this example, the interface is connected to a network called <pxe-net>:

      interfaces:
      - masquerade: {}
        name: default
      - bridge: {}
        name: pxe-net
        macAddress: de:00:00:00:00:de
        bootOrder: 1
      Note

      Boot order is global for interfaces and disks.

    2. Assign a boot device number to the disk to ensure proper booting after operating system provisioning.

      Set the disk bootOrder value to 2:

      devices:
        disks:
        - disk:
            bus: virtio
          name: containerdisk
          bootOrder: 2
    3. Specify that the network is connected to the previously created NetworkAttachmentDefinition. In this scenario, <pxe-net> is connected to the NetworkAttachmentDefinition called <pxe-net-conf>:

      networks:
      - name: default
        pod: {}
      - name: pxe-net
        multus:
          networkName: pxe-net-conf
  4. Create the virtual machine instance:

    $ oc create -f vmi-pxe-boot.yaml

Example output

  virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created

  5. Wait for the virtual machine instance to run:

    $ oc get vmi vmi-pxe-boot -o yaml | grep -i phase
      phase: Running
  6. View the virtual machine instance using VNC:

    $ virtctl vnc vmi-pxe-boot
  7. Watch the boot screen to verify that the PXE boot is successful.
  8. Log in to the virtual machine instance:

    $ virtctl console vmi-pxe-boot
  9. Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, eth1 was used for the PXE boot, without an IP address. The other interface, eth0, got an IP address from OpenShift Container Platform.

    $ ip addr

Example output

...
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
   link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff

7.11.2.4. Template: virtual machine instance configuration file for PXE booting

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  creationTimestamp: null
  labels:
    special: vmi-pxe-boot
  name: vmi-pxe-boot
spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: containerdisk
        bootOrder: 2
      - disk:
          bus: virtio
        name: cloudinitdisk
      interfaces:
      - masquerade: {}
        name: default
      - bridge: {}
        name: pxe-net
        macAddress: de:00:00:00:00:de
        bootOrder: 1
    machine:
      type: ""
    resources:
      requests:
        memory: 1024M
  networks:
  - name: default
    pod: {}
  - multus:
      networkName: pxe-net-conf
    name: pxe-net
  terminationGracePeriodSeconds: 0
  volumes:
  - name: containerdisk
    containerDisk:
      image: kubevirt/fedora-cloud-container-disk-demo
  - cloudInitNoCloud:
      userData: |
        #!/bin/bash
        echo "fedora" | passwd fedora --stdin
    name: cloudinitdisk
status: {}

7.11.3. Managing guest memory

If you want to adjust guest memory settings to suit a specific use case, you can do so by editing the guest’s YAML configuration file. OpenShift Virtualization allows you to configure guest memory overcommitment and disable guest memory overhead accounting.

Warning

The following procedures increase the chance that virtual machine processes will be killed due to memory pressure. Proceed only if you understand the risks.

7.11.3.1. Configuring guest memory overcommitment

If your virtual workload requires more memory than available, you can use memory overcommitment to allocate all or most of the host’s memory to your virtual machine instances (VMIs). Enabling memory overcommitment means that you can maximize resources that are normally reserved for the host.

For example, if the host has 32 GB RAM, you can use memory overcommitment to fit 8 virtual machines (VMs) with 4 GB RAM each. This allocation works under the assumption that the virtual machines will not use all of their memory at the same time.

Important

Memory overcommitment increases the potential for virtual machine processes to be killed due to memory pressure (OOM killed).

The potential for a VM to be OOM killed varies based on your specific configuration, node memory, available swap space, virtual machine memory consumption, the use of kernel same-page merging (KSM), and other factors.

Procedure

  1. To explicitly tell the virtual machine instance that it has more memory available than was requested from the cluster, edit the virtual machine configuration file and set spec.domain.memory.guest to a higher value than spec.domain.resources.requests.memory. This process is called memory overcommitment.

    In this example, 1024M is requested from the cluster, but the virtual machine instance is told that it has 2048M available. As long as there is enough free memory available on the node, the virtual machine instance will consume up to 2048M.

    kind: VirtualMachine
    spec:
      template:
        spec:
          domain:
            resources:
              requests:
                memory: 1024M
            memory:
              guest: 2048M
    Note

    The same eviction rules as those for pods apply to the virtual machine instance if the node is under memory pressure.

  2. Create the virtual machine:

    $ oc create -f <file_name>.yaml
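
After the virtual machine starts, you can confirm that the overcommitted guest memory value was applied to the virtual machine instance. A minimal check, with the virtual machine instance name as a placeholder:

$ oc get vmi <vmi_name> -o jsonpath='{.spec.domain.memory.guest}'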

7.11.3.2. Disabling guest memory overhead accounting

A small amount of memory is requested by each virtual machine instance in addition to the amount that you request. This additional memory is used for the infrastructure that wraps each VirtualMachineInstance process.

Though it is not usually advisable, it is possible to increase the virtual machine instance density on the node by disabling guest memory overhead accounting.

Important

Disabling guest memory overhead accounting increases the potential for virtual machine processes to be killed due to memory pressure (OOM killed).

The potential for a VM to be OOM killed varies based on your specific configuration, node memory, available swap space, virtual machine memory consumption, the use of kernel same-page merging (KSM), and other factors.

Procedure

  1. To disable guest memory overhead accounting, edit the YAML configuration file and set the overcommitGuestOverhead value to true. This parameter is disabled by default.

    kind: VirtualMachine
    spec:
      template:
        spec:
          domain:
            resources:
              overcommitGuestOverhead: true
              requests:
                memory: 1024M
    Note

    If overcommitGuestOverhead is enabled, it adds the guest overhead to memory limits, if present.

  2. Create the virtual machine:

    $ oc create -f <file_name>.yaml

7.11.4. Using huge pages with virtual machines

You can use huge pages as backing memory for virtual machines in your cluster.

7.11.4.1. Prerequisites

7.11.4.2. What huge pages do

Memory is managed in blocks known as pages. On most systems, a page is 4Ki, so 1Mi of memory is 256 pages and 1Gi of memory is 262,144 pages. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Because the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size.

A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, applications must be written to be aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation because of the defragmenting efforts of THP, which can lock memory pages. For this reason, some applications are designed to use, or recommend using, pre-allocated huge pages instead of THP.

In OpenShift Virtualization, virtual machines can be configured to consume pre-allocated huge pages.

7.11.4.3. Configuring huge pages for virtual machines

You can configure virtual machines to use pre-allocated huge pages by including the memory.hugepages.pageSize and resources.requests.memory parameters in your virtual machine configuration.

The memory request must be divisible by the page size. For example, you cannot request 500Mi memory with a page size of 1Gi.

Note

The memory layouts of the host and the guest OS are unrelated. Huge pages requested in the virtual machine manifest apply to QEMU. Huge pages inside the guest can only be configured based on the amount of available memory of the virtual machine instance.

If you edit a running virtual machine, the virtual machine must be rebooted for the changes to take effect.

Prerequisites

  • Nodes must have pre-allocated huge pages configured.

Procedure

  1. In your virtual machine configuration, add the resources.requests.memory and memory.hugepages.pageSize parameters to the spec.domain. The following configuration snippet is for a virtual machine that requests a total of 4Gi memory with a page size of 1Gi:

    kind: VirtualMachine
    ...
    spec:
      domain:
        resources:
          requests:
            memory: "4Gi" 1
        memory:
          hugepages:
            pageSize: "1Gi" 2
    ...
    1
    The total amount of memory requested for the virtual machine. This value must be divisible by the page size.
    2
    The size of each huge page. Valid values for x86_64 architecture are 1Gi and 2Mi. The page size must be smaller than the requested memory.
  2. Apply the virtual machine configuration:

    $ oc apply -f <virtual_machine>.yaml
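
You can also confirm that the target node exposes pre-allocated huge pages of the requested size. A quick check, with the node name as a placeholder:

$ oc describe node <node_name> | grep -i hugepages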

7.11.5. Enabling dedicated resources for virtual machines

Virtual machines can have resources of a node, such as CPU, dedicated to them in order to improve performance.

7.11.5.1. About dedicated resources

When you enable dedicated resources for your virtual machine, your virtual machine’s workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions.

7.11.5.2. Prerequisites

  • The CPU Manager must be configured on the node. Verify that the node has the cpumanager=true label before scheduling virtual machine workloads, as shown in the example that follows.
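
For example, you can list the nodes that carry this label:

$ oc get nodes -l cpumanager=true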

7.11.5.3. Enabling dedicated resources for a virtual machine

You can enable dedicated resources for a virtual machine in the Virtual Machine Overview page of the web console.

Procedure

  1. Click Workloads → Virtual Machines from the side menu.
  2. Select a virtual machine to open the Virtual Machine Overview page.
  3. Click the Details tab.
  4. Click the pencil icon to the right of the Dedicated Resources field to open the Dedicated Resources window.
  5. Select Schedule this workload with dedicated resources (guaranteed policy).
  6. Click Save.
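
If you manage the virtual machine as YAML instead of through the web console, the same behavior corresponds to the dedicatedCpuPlacement setting in the KubeVirt CPU configuration. The following is a minimal sketch that shows only the relevant fields:

kind: VirtualMachine
spec:
  template:
    spec:
      domain:
        cpu:
          dedicatedCpuPlacement: true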

7.12. Importing virtual machines

7.12.1. TLS certificates for DataVolume imports

7.12.1.1. Adding TLS certificates for authenticating DataVolume imports

TLS certificates for registry or HTTPS endpoints must be added to a ConfigMap in order to import data from these sources. This ConfigMap must be present in the namespace of the destination DataVolume.

Create the ConfigMap by referencing the relative file path for the TLS certificate.

Procedure

  1. Ensure you are in the correct namespace. The ConfigMap can only be referenced by DataVolumes if it is in the same namespace.

    $ oc get ns
  2. Create the ConfigMap:

    $ oc create configmap <configmap-name> --from-file=</path/to/file/ca.pem>

7.12.1.2. Example: ConfigMap created from a TLS certificate

The following example is of a ConfigMap created from ca.pem TLS certificate.

apiVersion: v1
kind: ConfigMap
metadata:
  name: tls-certs
data:
  ca.pem: |
    -----BEGIN CERTIFICATE-----
    ... <base64 encoded cert> ...
    -----END CERTIFICATE-----
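
A DataVolume in the same namespace can then reference this ConfigMap by name in the certConfigMap field of its source so that the import trusts the certificate. A minimal sketch of the relevant section, with the endpoint URL as a placeholder:

source:
  http:
    url: "<https_endpoint>"
    certConfigMap: tls-certs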

7.12.2. Importing virtual machine images with DataVolumes

Use the Containerized Data Importer (CDI) to import a virtual machine image into a PersistentVolumeClaim (PVC) by using a DataVolume. You can attach a DataVolume to a virtual machine for persistent storage.

The virtual machine image can be hosted at an HTTP or HTTPS endpoint, or built into a container disk and stored in a container registry.

Important

When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded.

The resizing procedure varies based on the operating system installed on the virtual machine. Refer to the operating system documentation for details.

7.12.2.1. Prerequisites

7.12.2.2. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

Content types       HTTP            HTTPS           HTTP basic auth   Registry        Upload

KubeVirt (QCOW2)    ✓ QCOW2         ✓ QCOW2**       ✓ QCOW2           ✓ QCOW2*        ✓ QCOW2*
                    ✓ GZ*           ✓ GZ*           ✓ GZ*             □ GZ            ✓ GZ*
                    ✓ XZ*           ✓ XZ*           ✓ XZ*             □ XZ            ✓ XZ*

KubeVirt (RAW)      ✓ RAW           ✓ RAW           ✓ RAW             ✓ RAW*          ✓ RAW*
                    ✓ GZ            ✓ GZ            ✓ GZ              □ GZ            ✓ GZ*
                    ✓ XZ            ✓ XZ            ✓ XZ              □ XZ            ✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

7.12.2.3. About DataVolumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
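
For illustration, a minimal DataVolume that imports an image from an HTTP endpoint into a 10Gi PVC might look like the following sketch. The name and URL are placeholders; a complete import example follows later in this section:

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: <datavolume_name>
spec:
  source:
    http:
      url: "<image_url>"
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi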

7.12.2.4. Importing a virtual machine image into a PersistentVolumeClaim by using a DataVolume

You can import a virtual machine image into a PersistentVolumeClaim (PVC) by using a DataVolume.

The virtual machine image can be hosted at an HTTP or HTTPS endpoint, or the image can be built into a container disk and stored in a container registry.

To create a virtual machine from an imported virtual machine image, specify the image or container disk endpoint in the VirtualMachine configuration file before you create the virtual machine.

Prerequisites

  • You have installed the OpenShift CLI (oc).
  • Your cluster has at least one available PersistentVolume.
  • To import a virtual machine image you must have the following:

    • A virtual machine disk image in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
    • An HTTP endpoint where the image is hosted, along with any authentication credentials needed to access the data source. For example: http://www.example.com/path/to/data
  • To import a container disk you must have the following:

    • A container disk built from a virtual machine image stored in your container image registry, along with any authentication credentials needed to access the data source. For example: docker://registry.example.com/container-image

Procedure

  1. Optional: If your data source requires authentication credentials, edit the endpoint-secret.yaml file, and apply the updated configuration to the cluster:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <endpoint-secret>
      labels:
        app: containerized-data-importer
    type: Opaque
    data:
      accessKeyId: "" 1
      secretKey:   "" 2
    1
    Optional: your key or user name, base64 encoded
    2
    Optional: your secret or password, base64 encoded
    $ oc apply -f endpoint-secret.yaml
  2. Edit the virtual machine configuration file, specifying the data source for the virtual machine image you want to import. In this example, a Fedora image is imported from an http source:

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachine
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vm: vm-fedora-datavolume
      name: vm-fedora-datavolume
    spec:
      dataVolumeTemplates:
      - metadata:
          creationTimestamp: null
          name: fedora-dv
        spec:
          pvc:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
            storageClassName: local
          source:
            http: 1
              url: "https://download.fedoraproject.org/pub/fedora/linux/releases/33/Cloud/x86_64/images/Fedora-Cloud-Base-33-1.2.x86_64.qcow2" 2
              secretRef: "" 3
              certConfigMap: "" 4
        status: {}
      running: true
      template:
        metadata:
          creationTimestamp: null
          labels:
            kubevirt.io/vm: vm-fedora-datavolume
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: datavolumedisk1
            machine:
              type: "" 5
            resources:
              requests:
                memory: 1.5Gi
          terminationGracePeriodSeconds: 60
          volumes:
          - dataVolume:
              name: fedora-dv
            name: datavolumedisk1
    status: {}
    1
    The source type to import the image from. This example uses an HTTP endpoint. To import a container disk from a registry, replace http with registry.
    2
    The source of the virtual machine image you want to import. This example references a virtual machine image at an HTTP endpoint. An example of a container registry endpoint is url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest".
    3
    The secretRef parameter is optional.
    4
    The certConfigMap is required for communicating with servers that use self-signed certificates or certificates not signed by the system CA bundle. The referenced ConfigMap must be in the same namespace as the DataVolume.
    5
    Specify type: dataVolume or type: "". If you specify any other value for type, such as persistentVolumeClaim, a warning is displayed, and the virtual machine does not start.
  3. Create the virtual machine:

    $ oc create -f vm-<name>-datavolume.yaml
    Note

    The oc create command creates the DataVolume and the virtual machine. The CDI controller creates an underlying PVC with the correct annotation, and the import process begins. When the import completes, the DataVolume status changes to Succeeded, and the virtual machine is allowed to start.

    DataVolume provisioning happens in the background, so there is no need to monitor it. You can start the virtual machine, and it will not run until the import is complete.

Verification

  1. The importer Pod downloads the virtual machine image or container disk from the specified URL and stores it on the provisioned PV. View the status of the importer Pod by running the following command:

    $ oc get pods
  2. Monitor the DataVolume status until it shows Succeeded by running the following command:

    $ oc describe dv <datavolume-name> 1
    1
    The name of the DataVolume as specified under dataVolumeTemplates.metadata.name in the virtual machine configuration file. In the example configuration above, this is fedora-dv.
  3. To verify that provisioning is complete and that the VMI has started, try accessing its serial console by running the following command:

    $ virtctl console <vm-fedora-datavolume>

7.12.3. Importing virtual machine images to block storage with DataVolumes

You can import an existing virtual machine image into your OpenShift Container Platform cluster. OpenShift Virtualization uses DataVolumes to automate the import of data and the creation of an underlying PersistentVolumeClaim (PVC).

Important

When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded.

The resizing procedure varies based on the operating system that is installed on the virtual machine. Refer to the operating system documentation for details.

7.12.3.1. Prerequisites

7.12.3.2. About DataVolumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.

7.12.3.3. About block PersistentVolumes

A block PersistentVolume (PV) is a PV that is backed by a raw block device. These volumes do not have a filesystem and can provide performance benefits for virtual machines by reducing overhead.

Raw block volumes are provisioned by specifying volumeMode: Block in the PV and PersistentVolumeClaim (PVC) specification.
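
For example, a PVC that requests a raw block volume sets volumeMode: Block alongside the usual access mode and size. A minimal sketch, with the name and size as placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <block_pvc_name>
spec:
  volumeMode: Block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: <2Gi>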

7.12.3.4. Creating a local block PersistentVolume

Create a local block PersistentVolume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV configuration as a Block volume and use it as a block device for a virtual machine image.

Procedure

  1. Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples.
  2. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file, loop10, with a size of 2 GB (twenty 100 MB blocks):

    $ dd if=/dev/zero of=<loop10> bs=100M count=20
  3. Mount the loop10 file as a loop device.

    $ losetup </dev/loop10> <loop10> 1 2
    1
    File path where the loop device is mounted.
    2
    The file created in the previous step to be mounted as the loop device.
  4. Create a PersistentVolume configuration that references the mounted loop device.

    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: <local-block-pv10>
      annotations:
    spec:
      local:
        path: </dev/loop10> 1
      capacity:
        storage: <2Gi>
      volumeMode: Block 2
      storageClassName: local 3
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - <node01> 4
    1
    The path of the loop device on the node.
    2
    Specifies it is a block PV.
    3
    Optional: Set a StorageClass for the PV. If you omit it, the cluster default is used.
    4
    The node on which the block device was mounted.
  5. Create the block PV.

    # oc create -f <local-block-pv10.yaml> 1
    1
    The filename of the PersistentVolume created in the previous step.
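
After the PV is created, you can verify that it was registered and is available for claims. The PV name is a placeholder:

$ oc get pv <local-block-pv10>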

7.12.3.5. Importing a virtual machine image to a block PersistentVolume using DataVolumes

You can import an existing virtual machine image into your OpenShift Container Platform cluster. OpenShift Virtualization uses DataVolumes to automate the import of data and the creation of an underlying PersistentVolumeClaim (PVC). You can then reference the DataVolume in a virtual machine configuration.

Prerequisites

  • A virtual machine disk image, in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
  • An HTTP or S3 endpoint where the image is hosted, along with any authentication credentials needed to access the data source.
  • At least one available block PV.

Procedure

  1. If your data source requires authentication credentials, edit the endpoint-secret.yaml file, and apply the updated configuration to the cluster.

    1. Edit the endpoint-secret.yaml file with your preferred text editor:

      apiVersion: v1
      kind: Secret
      metadata:
        name: <endpoint-secret>
        labels:
          app: containerized-data-importer
      type: Opaque
      data:
        accessKeyId: "" 1
        secretKey:   "" 2
      1
      Optional: your key or user name, base64 encoded
      2
      Optional: your secret or password, base64 encoded
    2. Update the secret by running the following command:

      $ oc apply -f endpoint-secret.yaml
  2. Create a DataVolume configuration that specifies the data source for the image you want to import and volumeMode: Block so that an available block PV is used.

    apiVersion: cdi.kubevirt.io/v1alpha1
    kind: DataVolume
    metadata:
      name: <import-pv-datavolume> 1
    spec:
      storageClassName: local 2
      source:
          http:
             url: <http://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2> 3
             secretRef: <endpoint-secret> 4
      pvc:
        volumeMode: Block 5
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: <2Gi>
    1
    The name of the DataVolume.
    2
    Optional: Set the storage class or omit it to accept the cluster default.
    3
    The HTTP source of the image to import.
    4
    Only required if the data source requires authentication.
    5
    Required for importing to a block PV.
  3. Create the DataVolume to import the virtual machine image by running the following command:

    $ oc create -f <import-pv-datavolume.yaml> 1
    1
    The file name of the DataVolume that you created in the previous step.
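
As with other imports, you can monitor the DataVolume until its status reports Succeeded. The name is a placeholder:

$ oc describe dv <import-pv-datavolume>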

7.12.3.6. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

Content types       HTTP            HTTPS           HTTP basic auth   Registry        Upload

KubeVirt (QCOW2)    ✓ QCOW2         ✓ QCOW2**       ✓ QCOW2           ✓ QCOW2*        ✓ QCOW2*
                    ✓ GZ*           ✓ GZ*           ✓ GZ*             □ GZ            ✓ GZ*
                    ✓ XZ*           ✓ XZ*           ✓ XZ*             □ XZ            ✓ XZ*

KubeVirt (RAW)      ✓ RAW           ✓ RAW           ✓ RAW             ✓ RAW*          ✓ RAW*
                    ✓ GZ            ✓ GZ            ✓ GZ              □ GZ            ✓ GZ*
                    ✓ XZ            ✓ XZ            ✓ XZ              □ XZ            ✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

7.12.4. Importing a single Red Hat Virtualization virtual machine

You can import a single Red Hat Virtualization (RHV) virtual machine into your OpenShift Container Platform cluster by using the virtual machine wizard or the CLI.

7.12.4.1. OpenShift Virtualization storage feature matrix

The following table describes local and shared persistent storage that support VM import.

Table 7.5. OpenShift Virtualization storage feature matrix

                                                      RHV VM import

OpenShift Container Storage: RBD block-mode volumes   Yes
OpenShift Virtualization hostpath provisioner         No
Other multi-node writable storage                     Yes [1]
Other single-node writable storage                    Yes [2]

  1. PVCs must request a ReadWriteMany access mode.
  2. PVCs must request a ReadWriteOnce access mode.

7.12.4.2. Prerequisites for importing a virtual machine

Importing a virtual machine into OpenShift Virtualization has the following prerequisites:

  • You must have admin user privileges.
  • Storage:

    • The OpenShift Virtualization local and shared persistent storage classes must support VM import.
    • If you are using Ceph RBD block-mode volumes, the storage must be large enough to accommodate the virtual disk. If the disk is too large for the available storage, the import process fails and the PV that is used to copy the virtual disk is not released.
  • Networks:

    • The source and target networks must either have the same name or be mapped to each other.
    • The source network interface must be e1000, rtl8139, or virtio.
  • VM disks:

    • The disk interface must be sata, virtio_scsi, or virtio.
    • The disk must not be configured as a direct LUN.
    • The disk status must not be illegal or locked.
    • The storage type must be image.
    • SCSI reservation must be disabled.
    • ScsiGenericIO must be disabled.
  • VM configuration:

    • If the VM uses GPU resources, the nodes providing the GPUs must be configured.
    • The VM must not be configured for vGPU resources.
    • The VM must not have snapshots with disks in an illegal state.
    • The VM must not have been created with OpenShift Container Platform and subsequently added to RHV.
    • The VM must not be configured for USB devices.
    • The watchdog model must not be diag288.

7.12.4.3. Checking the default storage class

You must check the default storage class to ensure that it is NFS.

Cinder, which might be configured as the default storage class, does not support VM import.

7.12.4.3.1. Checking the default storage class in the OpenShift Container Platform console

You can check the default storage class in the OpenShift Container Platform console. If the default storage class is not NFS, you can change the default storage class so that it is no longer the default and change the NFS storage class so that it is the default.

If more than one default storage class is defined, the VirtualMachineImport CR uses the default storage class that is first in alphabetical order.

Procedure

  1. Navigate to Storage → Storage Classes.
  2. Check the default storage class in the Storage Classes list.
  3. If the default storage class is not NFS, edit the default storage class so that it is no longer the default:

    1. Click the Options menu kebab of the default storage class and select Edit Storage Class.
    2. In the Details tab, click the Edit button beside Annotations.
    3. Click the Delete button delete on the right side of the storageclass.kubernetes.io/is-default-class annotation and then click Save.
  4. Change an existing NFS storage class to be the default:

    1. Click the Options menu kebab of an existing NFS storage class and select Edit Storage Class.
    2. In the Details tab, click the Edit button beside Annotations.
    3. Enter storageclass.kubernetes.io/is-default-class in the Key field and true in the Value field and then click Save.
  5. Navigate to Storage → Storage Classes to verify that the NFS storage class is the only default storage class.
7.12.4.3.2. Checking the default storage class from the CLI

You can check the default storage class from the CLI.

If the default storage class is not NFS, you must change the default storage class to NFS and change the existing default storage class so that it is not the default. If more than one default storage class is defined, the VirtualMachineImport CR uses the default storage class that is first in alphabetical order.

Procedure

  • Get the storage classes by entering the following command:

    $ oc get sc

The default storage class is displayed in the output:

Example output

NAME                PROVISIONER           RECLAIMPOLICY  VOLUMEBINDINGMODE     ALLOWVOLUMEEXPANS
...
standard (default)  kubernetes.io/cinder  Delete         WaitForFirstConsumer  true

Changing the default storage class

If you are using AWS, use the following process to change the default storage class. This process assumes you have two storage classes defined, gp2 and standard, and you want to change the default storage class from gp2 to standard.

  1. List the storage class:

    $ oc get storageclass

    Example output

    NAME                 TYPE
    gp2 (default)        kubernetes.io/aws-ebs 1
    standard             kubernetes.io/aws-ebs

    1
    (default) denotes the default storage class.
  2. Change the value of the annotation storageclass.kubernetes.io/is-default-class to false for the default storage class:

    $ oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
  3. Make another storage class the default by adding or modifying the annotation as storageclass.kubernetes.io/is-default-class=true.

    $ oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
  4. Verify the changes:

    $ oc get storageclass

    Example output

    NAME                 TYPE
    gp2                  kubernetes.io/aws-ebs
    standard (default)   kubernetes.io/aws-ebs

7.12.4.4. Creating a ConfigMap for importing a Red Hat Virtualization virtual machine

You can create a ConfigMap to map the Red Hat Virtualization (RHV) virtual machine operating system to an OpenShift Virtualization template if you want to override the default vm-import-controller mapping or to add additional mappings.

The default vm-import-controller ConfigMap contains the following RHV operating systems and their corresponding common OpenShift Virtualization templates.

Table 7.6. Operating system and template mapping

RHV VM operating system    OpenShift Virtualization template

rhel_6_9_plus_ppc64        rhel6.9
rhel_6_ppc64               rhel6.9
rhel_6                     rhel6.9
rhel_6x64                  rhel6.9
rhel_7_ppc64               rhel7.7
rhel_7_s390x               rhel7.7
rhel_7x64                  rhel7.7
rhel_8x64                  rhel8.1
sles_11_ppc64              opensuse15.0
sles_11                    opensuse15.0
sles_12_s390x              opensuse15.0
ubuntu_12_04               ubuntu18.04
ubuntu_12_10               ubuntu18.04
ubuntu_13_04               ubuntu18.04
ubuntu_13_10               ubuntu18.04
ubuntu_14_04_ppc64         ubuntu18.04
ubuntu_14_04               ubuntu18.04
ubuntu_16_04_s390x         ubuntu18.04
windows_10                 win10
windows_10x64              win10
windows_2003               win10
windows_2003x64            win10
windows_2008R2x64          win2k8
windows_2008               win2k8
windows_2008x64            win2k8
windows_2012R2x64          win2k12r2
windows_2012x64            win2k12r2
windows_2016x64            win2k16
windows_2019x64            win2k19
windows_7                  win10
windows_7x64               win10
windows_8                  win10
windows_8x64               win10
windows_xp                 win10

Procedure

  1. In a web browser, identify the REST API name of the RHV VM operating system by navigating to http://<RHV_Manager_FQDN>/ovirt-engine/api/vms/<VM_ID>. The operating system name appears in the <os> section of the XML output, as shown in the following example:

    ...
    <os>
    ...
    <type>rhel_8x64</type>
    </os>
  2. View a list of the available OpenShift Virtualization templates:

    $ oc get templates -n openshift --show-labels | tr ',' '\n' | grep os.template.kubevirt.io | sed -r 's#os.template.kubevirt.io/(.*)=.*#\1#g' | sort -u

    Example output

    fedora31
    fedora32
    ...
    rhel8.1
    rhel8.2
    ...

  3. If an OpenShift Virtualization template that matches the RHV VM operating system does not appear in the list of available templates, create a template with the OpenShift Virtualization web console.
  4. Create a ConfigMap to map the RHV VM operating system to the OpenShift Virtualization template:

    $ cat <<EOF | oc create -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: os-configmap
      namespace: default 1
    data:
      guestos2common: |
        "Red Hat Enterprise Linux Server": "rhel"
        "CentOS Linux": "centos"
        "Fedora": "fedora"
        "Ubuntu": "ubuntu"
        "openSUSE": "opensuse"
      osinfo2common: |
        "<rhv-operating-system>": "<vm-template>" 2
    EOF
    1
    Optional: You can change the value of the namespace parameter.
    2
    Specify the REST API name of the RHV operating system and its corresponding VM template as shown in the following example.

    ConfigMap example

    $ cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: os-configmap
      namespace: default
    data:
      osinfo2common: |
        "other_linux": "fedora31"
    EOF

  5. Verify that the custom ConfigMap was created:

    $ oc get cm -n default os-configmap -o yaml
  6. Edit the kubevirt-hyperconverged-operator.v2.4.9.yaml file:

    $ oc edit clusterserviceversion -n openshift-cnv kubevirt-hyperconverged-operator.v2.4.9
  7. Update the following parameters of the vm-import-operator deployment manifest:

    ...
    spec:
      containers:
      - env:
        ...
        - name: OS_CONFIGMAP_NAME
          value: os-configmap 1
        - name: OS_CONFIGMAP_NAMESPACE
          value: default 2
    1
    Add value: os-configmap to the name: OS_CONFIGMAP_NAME parameter.
    2
    Optional: You can add this value if you changed the namespace in the ConfigMap.
  8. Save the kubevirt-hyperconverged-operator.v2.4.9.yaml file.

    Updating the vm-import-operator deployment updates the vm-import-controller ConfigMap.

  9. Verify that the template appears in the OpenShift Virtualization web console:

    1. Click Workloads → Virtualization from the side menu.
    2. Click the Virtual Machine Templates tab and find the template in the list.

7.12.4.5. Importing a virtual machine with the VM Import wizard

You can import a single virtual machine with the VM Import wizard.

Procedure

  1. In the web console, click Workloads → Virtual Machines.
  2. Click Create Virtual Machine and select Import with Wizard.
  3. Select Red Hat Virtualization (RHV) from the Provider list.
  4. Select Connect to New Instance or a saved RHV instance.

    • If you select Connect to New Instance, fill in the following fields:

      • API URL: For example, https://<RHV_Manager_FQDN>/ovirt-engine/api
      • CA certificate: Click Browse to upload the RHV Manager CA certificate or paste the CA certificate into the field.

        View the CA certificate by running the following command:

        $ openssl s_client -connect <RHV_Manager_FQDN>:443 -showcerts < /dev/null

        The CA certificate is the second certificate in the output.

      • Username: RHV Manager user name, for example, admin@internal
      • Password: RHV Manager password
    • If you select a saved RHV instance, the wizard connects to the RHV instance using the saved credentials.
  5. Click Check and Save and wait for the connection to complete.

    Note

    The connection details are stored in a secret. If you add a provider with an incorrect URL, user name, or password, click Workloads → Secrets and delete the provider secret.

  6. Select a cluster and a virtual machine.
  7. Click Next.
  8. In the Review screen, review your settings.
  9. Optional: You can select Start virtual machine on creation.
  10. Click Edit to update the following settings:

    • General → Name: The VM name is limited to 63 characters.
    • General → Description: Optional description of the VM.
    • Storage Class: Select NFS or ocs-storagecluster-ceph-rbd.

      If you select ocs-storagecluster-ceph-rbd, you must set the Volume Mode of the disk to Block.

    • Advanced → Volume Mode: Select Block.
    • Networking → Network: You can select a network from a list of available network attachment definition objects.
  11. Click Import or Review and Import, if you have edited the import settings.

    A Successfully created virtual machine message and a list of resources created for the virtual machine are displayed. The virtual machine appears in Workloads → Virtual Machines.

Virtual machine wizard fields
Name | Parameter | Description

Template

 

Template from which to create the virtual machine. Selecting a template will automatically complete other fields.

Source

PXE

Provision virtual machine from PXE menu. Requires a PXE-capable NIC in the cluster.

URL

Provision virtual machine from an image available from an HTTP or S3 endpoint.

Container

Provision virtual machine from a bootable operating system container located in a registry accessible from the cluster. Example: kubevirt/cirros-registry-disk-demo.

Disk

Provision virtual machine from a disk.

Operating System

 

The primary operating system that is selected for the virtual machine.

Flavor

small, medium, large, tiny, Custom

Presets that determine the amount of CPU and memory allocated to the virtual machine. The presets displayed for Flavor are determined by the operating system.

Memory

 

Size in GiB of the memory allocated to the virtual machine.

CPUs

 

The amount of CPU allocated to the virtual machine.

Workload Profile

High Performance

A virtual machine configuration that is optimized for high-performance workloads.

Server

A profile optimized to run server workloads.

Desktop

A virtual machine configuration for use on a desktop.

Name

 

The name can contain lowercase letters (a-z), numbers (0-9), and hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters.

Description

 

Optional description field.

Start virtual machine on creation

 

Select to automatically start the virtual machine upon creation.

Networking fields
Name | Description

Name

Name for the Network Interface Card.

Model

Indicates the model of the Network Interface Card. Supported values are e1000, e1000e, ne2k_pci, pcnet, rtl8139, and virtIO.

Network

List of available NetworkAttachmentDefinition objects.

Type

List of available binding methods. For the default Pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. The masquerade method is not supported for non-default networks.

MAC Address

MAC address for the Network Interface Card. If a MAC address is not specified, an ephemeral address is generated for the session.

Storage fields
Name | Description

Source

Select a blank disk for the virtual machine or choose from the options available: URL, Container, Attach Cloned Disk, or Attach Disk. To select an existing disk and attach it to the virtual machine, choose Attach Cloned Disk or Attach Disk from a list of available PersistentVolumeClaims (PVCs).

Name

Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters.

Size (GiB)

Size, in GiB, of the disk.

Interface

Type of disk device. Supported interfaces are virtIO, SATA, and SCSI.

Storage Class

The StorageClass that is used to create the disk.

Advanced → Volume Mode

 
Advanced storage settings
Name | Parameter | Description

Volume Mode

Filesystem

Stores the virtual disk on a filesystem-based volume.

Block

Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it.

Access Mode [1]

Single User (RWO)

The disk can be mounted as read/write by a single node.

Shared Access (RWX)

The disk can be mounted as read/write by many nodes.

Read Only (ROX)

The disk can be mounted as read-only by many nodes.

  1. You can change the access mode by using the command line interface.

7.12.4.6. Importing a Red Hat Virtualization virtual machine with the CLI

You can import a Red Hat Virtualization (RHV) virtual machine with the CLI by creating the Secret and VirtualMachineImport Custom Resources (CRs). The Secret CR stores the RHV Manager credentials and CA certificate. The VirtualMachineImport CR defines the parameters of the VM import process.

Optional: You can create a ResourceMapping CR that is separate from the VirtualMachineImport CR. A ResourceMapping CR provides greater flexibility, for example, if you import additional RHV VMs.

Important

The default target storage class must be NFS. Cinder does not support RHV VM import.

Procedure

  1. Create the Secret CR by running the following command:

    $ cat <<EOF | oc create -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: rhv-credentials
      namespace: default 1
    type: Opaque
    stringData:
      ovirt: |
        apiUrl: <api_endpoint> 2
        username: admin@internal
        password: 3
        caCert: |
          -----BEGIN CERTIFICATE-----
          4
          -----END CERTIFICATE-----
    EOF
    1
    Optional. You can specify a different namespace in all the CRs.
    2
    Specify the API endpoint of the RHV Manager, for example, "https://www.example.com:8443/ovirt-engine/api".
    3
    Specify the password for admin@internal.
    4
    Specify the RHV Manager CA certificate. You can obtain the CA certificate by running the following command:
    $ openssl s_client -connect <RHV_Manager_FQDN>:443 -showcerts < /dev/null
  2. Optional: Create a ResourceMapping CR if you want to separate the resource mapping from the VirtualMachineImport CR by running the following command:

    $ cat <<EOF | kubectl create -f -
    apiVersion: v2v.kubevirt.io/v1alpha1
    kind: ResourceMapping
    metadata:
      name: resourcemapping-example
      namespace: default
    spec:
      ovirt:
        networkMappings:
          - source:
              name: <rhv_logical_network>/<vnic_profile> 1
            target:
              name: <target_network> 2
            type: pod
        storageMappings: 3
          - source:
              name: <rhv_storage_domain> 4
            target:
              name: <target_storage_class> 5
            volumeMode: <volume_mode> 6
    EOF
    1
    Specify the RHV logical network and vNIC profile.
    2
    Specify the OpenShift Virtualization network.
    3
    If storage mappings are specified in both the ResourceMapping and the VirtualMachineImport CRs, the VirtualMachineImport CR takes precedence.
    4
    Specify the RHV storage domain.
    5
    Specify nfs or ocs-storagecluster-ceph-rbd.
    6
    If you specified the ocs-storagecluster-ceph-rbd storage class, you must specify Block as the volume mode.
  3. Create the VirtualMachineImport CR by running the following command:

    $ cat <<EOF | oc create -f -
    apiVersion: v2v.kubevirt.io/v1alpha1
    kind: VirtualMachineImport
    metadata:
      name: vm-import
      namespace: default
    spec:
      providerCredentialsSecret:
        name: rhv-credentials
        namespace: default
    # resourceMapping: 1
    #   name: resourcemapping-example
    #   namespace: default
      targetVmName: vm_example 2
      startVm: true
      source:
        ovirt:
          vm:
            id: <source_vm_id> 3
            name: <source_vm_name> 4
          cluster:
            name: <source_cluster_name> 5
          mappings: 6
            networkMappings:
              - source:
                  name: <source_logical_network>/<vnic_profile> 7
                target:
                  name: <target_network> 8
                type: pod
            storageMappings: 9
              - source:
                  name: <source_storage_domain> 10
                target:
                  name: <target_storage_class> 11
                accessMode: <volume_access_mode> 12
            diskMappings:
              - source:
                  id: <source_vm_disk_id> 13
                target:
                  name: <target_storage_class> 14
    EOF
    1
    If you create a ResourceMapping CR, uncomment the resourceMapping section.
    2
    Specify the target VM name.
    3
    Specify the source VM ID, for example, 80554327-0569-496b-bdeb-fcbbf52b827b. You can obtain the VM ID by entering https://www.example.com/ovirt-engine/api/vms/ in a web browser on the Manager machine to list all VMs. Locate the VM you want to import and its corresponding VM ID. You do not need to specify a VM name or cluster name.
    4
    If you specify the source VM name, you must also specify the source cluster. Do not specify the source VM ID.
    5
    If you specify the source cluster, you must also specify the source VM name. Do not specify the source VM ID.
    6
    If you create a ResourceMapping CR, comment out the mappings section.
    7
    Specify the logical network and vNIC profile of the source VM.
    8
    Specify the OpenShift Virtualization network.
    9
    If storage mappings are specified in both the ResourceMapping and the VirtualMachineImport CRs, the VirtualMachineImport CR takes precedence.
    10
    Specify the source storage domain.
    11
    Specify the target storage class.
    12
    Specify ReadWriteOnce, ReadWriteMany, or ReadOnlyMany. If no access mode is specified, OpenShift Virtualization determines the correct volume access mode based on the HostMigration mode setting of the RHV VM or on the virtual disk access mode:
    • If the RHV VM migration mode is Allow manual and automatic migration, the default access mode is ReadWriteMany.
    • If the RHV virtual disk access mode is ReadOnly, the default access mode is ReadOnlyMany.
    • For all other settings, the default access mode is ReadWriteOnce.
    13
    Specify the source VM disk ID, for example, 8181ecc1-5db8-4193-9c92-3ddab3be7b05. You can obtain the disk ID by entering https://www.example.com/ovirt-engine/api/vms/vm23 in a web browser on the Manager machine and reviewing the VM details.
    14
    Specify the target storage class.
  4. Follow the progress of the virtual machine import to verify that the import was successful:

    $ oc get vmimports vm-import -n default

    The output indicating a successful import resembles the following example:

    Example output

    ...
    status:
      conditions:
      - lastHeartbeatTime: "2020-07-22T08:58:52Z"
        lastTransitionTime: "2020-07-22T08:58:52Z"
        message: Validation completed successfully
        reason: ValidationCompleted
        status: "True"
        type: Valid
      - lastHeartbeatTime: "2020-07-22T08:58:52Z"
        lastTransitionTime: "2020-07-22T08:58:52Z"
        message: 'VM specifies IO Threads: 1, VM has NUMA tune mode specified: interleave'
        reason: MappingRulesVerificationReportedWarnings
        status: "True"
        type: MappingRulesVerified
      - lastHeartbeatTime: "2020-07-22T08:58:56Z"
        lastTransitionTime: "2020-07-22T08:58:52Z"
        message: Copying virtual machine disks
        reason: CopyingDisks
        status: "True"
        type: Processing
      dataVolumes:
      - name: fedora32-b870c429-11e0-4630-b3df-21da551a48c0
      targetVmName: fedora32

7.12.4.7. Canceling a virtual machine import

You can cancel a virtual machine import in progress by using the web console.

Procedure

  1. Click WorkloadsVirtual Machines.
  2. Click the Options menu kebab of the virtual machine you are importing and select Delete Virtual Machine.
  3. In the Delete Virtual Machine window, click Delete.

    The virtual machine is removed from the list of virtual machines.

7.12.4.8. Troubleshooting a virtual machine import

7.12.4.8.1. Logs

You can check the VM Import Controller Pod log for errors.

Procedure

  1. View the VM Import Controller Pod name by running the following command:

    $ oc get pods -n <namespace> | grep import 1
    1
    Specify the namespace of your imported virtual machine.

    Example output

    vm-import-controller-f66f7d-zqkz7            1/1     Running     0          4h49m

  2. View the VM Import Controller Pod log by running the following command:

    $ oc logs <vm-import-controller-f66f7d-zqkz7> -f -n <namespace> 1
    1
    Specify the VM Import Controller Pod name and the namespace.
7.12.4.8.2. Error messages

The following error message might appear:

  • If the OpenShift Virtualization storage PV is not suitable, the progress bar stops at 10% and the following error message is displayed in the VM Import Controller Pod log:

    Failed to bind volumes: provisioning failed for PVC

    You must use a compatible storage class. The Cinder storage class is not supported.
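
    To investigate further, you can inspect the PersistentVolumeClaims that the import created. The following is a minimal sketch, assuming the import runs in the <namespace> namespace and <pvc_name> is the claim reported in the log; the Events section of the describe output usually shows why provisioning or binding failed:

    $ oc get pvc -n <namespace>
    $ oc describe pvc <pvc_name> -n <namespace>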

7.12.5. Importing a single VMware virtual machine or template

You can import a VMware vSphere 6.5, 6.7, or 7.0 VM or VM template into OpenShift Virtualization by using the VM Import wizard.

If you import a VM template, OpenShift Virtualization creates a virtual machine based on the template.

7.12.5.1. OpenShift Virtualization storage feature matrix

The following table describes the local and shared persistent storage types that support VM import.

Table 7.7. OpenShift Virtualization storage feature matrix

 VMware VM import

OpenShift Container Storage: RBD block-mode volumes

Yes

OpenShift Virtualization hostpath provisioner

Yes

Other multi-node writable storage

Yes [1]

Other single-node writable storage

Yes [2]

  1. PVCs must request a ReadWriteMany access mode.
  2. PVCs must request a ReadWriteOnce access mode.

7.12.5.2. Preparing a VDDK image

The import process uses the VMware Virtual Disk Development Kit (VDDK) to copy the VMware virtual disk.

You can download the VDDK SDK, create a VDDK image, upload the image to an image registry, and add it to the v2v-vmware ConfigMap.

You can configure either an internal OpenShift Container Platform image registry or a secure external image registry for the VDDK image. The registry must be accessible to your OpenShift Virtualization environment.

Note

Storing the VDDK image in a public registry might violate the terms of the VMware license.

7.12.5.2.1. Configuring an internal image registry

You can configure the internal OpenShift Container Platform image registry on bare metal by updating the Image Registry Operator configuration.

You can access the registry directly, from within the OpenShift Container Platform cluster, or externally, by exposing the registry with a route.

Changing the image registry’s management state

To start the image registry, you must change the Image Registry Operator configuration’s managementState from Removed to Managed.

Procedure

  • Change the managementState in the Image Registry Operator configuration from Removed to Managed. For example:

    $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'
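
    To confirm that the change was applied, you can check the current value. A minimal verification sketch:

    $ oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.managementState}'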
Configuring registry storage for bare metal

As a cluster administrator, following installation you must configure your registry to use storage.

Prerequisites

  • Cluster administrator permissions.
  • A cluster on bare metal.
  • Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage.

    Important

    OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required.

  • The persistent storage must have 100Gi capacity.

Procedure

  1. To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. An example patch command is shown after this procedure.

    Note

    When using shared storage, review your security settings to prevent outside access.

  2. Verify that you do not have a registry pod:

    $ oc get pod -n openshift-image-registry
    Note

    If the storage type is emptyDir, the replica number cannot be greater than 1.

  3. Check the registry configuration:

    $ oc edit configs.imageregistry.operator.openshift.io

    Example output

    storage:
      pvc:
        claim:

    Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC.

  4. Check the clusteroperator status:

    $ oc get clusteroperator image-registry
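
To set spec.storage.pvc from the command line, as referenced in step 1, you can apply a patch. This is a sketch only: it leaves the claim field blank so that the image-registry-storage PVC is created automatically, and you should adjust it if you want to bind an existing PVC instead:

    $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
        --patch '{"spec":{"storage":{"pvc":{"claim":""}}}}'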
Accessing registry directly from the cluster

You can access the registry from inside the cluster.

Procedure

Access the registry from the cluster by using internal routes:

  1. Access the node by getting the node’s address:

    $ oc get nodes
    $ oc debug nodes/<node_address>
  2. To enable access to tools such as oc and podman on the node, run the following command:

    sh-4.2# chroot /host
  3. Log in to the container image registry by using your access token:

    sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443
    sh-4.2# podman login -u kubeadmin -p $(oc whoami -t) image-registry.openshift-image-registry.svc:5000

    You should see a message confirming login, such as:

    Login Succeeded!
    Note

    You can pass any value for the user name; the token contains all necessary information. Passing a user name that contains colons will result in a login failure.

    Since the Image Registry Operator creates the route, it will likely be similar to default-route-openshift-image-registry.<cluster_name>.

  4. Perform podman pull and podman push operations against your registry:

    Important

    You can pull arbitrary images, but if you have the system:registry role added, you can only push images to the registry in your project.

    In the following examples, use:

    ComponentValue

    <registry_ip>

    172.30.124.220

    <port>

    5000

    <project>

    openshift

    <image>

    image

    <tag>

    omitted (defaults to latest)

    1. Pull an arbitrary image:

      $ podman pull name.io/image
    2. Tag the new image with the form <registry_ip>:<port>/<project>/<image>. The project name must appear in this pull specification for OpenShift Container Platform to correctly place and later access the image in the registry:

      $ podman tag name.io/image image-registry.openshift-image-registry.svc:5000/openshift/image
      Note

      You must have the system:image-builder role for the specified project, which allows the user to write or push an image. Otherwise, the podman push in the next step will fail. To test, you can create a new project to push the image.

    3. Push the newly tagged image to your registry:

      $ podman push image-registry.openshift-image-registry.svc:5000/openshift/image
Exposing a secure registry manually

Instead of logging in to the OpenShift Container Platform registry from within the cluster, you can gain external access to it by exposing it with a route. This allows you to log in to the registry from outside the cluster using the route address, and to tag and push images using the route host.

Prerequisites:

  • The following prerequisites are deployed automatically:

    • The Registry Operator.
    • The Ingress Operator.

Procedure

You can expose the route by using the DefaultRoute parameter in the configs.imageregistry.operator.openshift.io resource or by using custom routes.

To expose the registry using DefaultRoute:

  1. Set DefaultRoute to True:

    $ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
  2. Log in with podman:

    $ HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
    $ podman login -u kubeadmin -p $(oc whoami -t) --tls-verify=false $HOST 1
    1
    --tls-verify=false is needed if the cluster’s default certificate for routes is untrusted. You can set a custom, trusted certificate as the default certificate with the Ingress Operator.
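
    After you log in, you can tag and push an image by using the route host stored in $HOST. The following is a minimal sketch, assuming a locally available image and the required push permissions in the openshift project; --tls-verify=false is only needed if the route certificate is untrusted, as above:

    $ podman tag name.io/image $HOST/openshift/image
    $ podman push --tls-verify=false $HOST/openshift/image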

To expose the registry using custom routes:

  1. Create a secret with your route’s TLS keys:

    $ oc create secret tls public-route-tls \
        -n openshift-image-registry \
        --cert=</path/to/tls.crt> \
        --key=</path/to/tls.key>

    This step is optional. If you do not create a secret, the route uses the default TLS configuration from the Ingress Operator.

  2. Add the route definition to the Image Registry Operator configuration's spec:

    spec:
      routes:
        - name: public-routes
          hostname: myregistry.mycorp.organization
          secretName: public-route-tls
    ...
    Note

    Only set secretName if you are providing a custom TLS configuration for the registry’s route.

7.12.5.2.2. Configuring an external image registry

If you use an external image registry for the VDDK image, you can add the external image registry’s certificate authorities to the OpenShift Container Platform cluster.

Optionally, you can create a pull secret from your Docker credentials and add it to your service account.

Adding certificate authorities to the cluster

You can add certificate authorities (CA) to the cluster for use when pushing and pulling images with the following procedure.

Prerequisites

  • You must have cluster administrator privileges.
  • You must have access to the public certificates of the registry, usually a hostname/ca.crt file located in the /etc/docker/certs.d/ directory.

Procedure

  1. Create a ConfigMap in the openshift-config namespace containing the trusted certificates for the registries that use self-signed certificates. For each CA file, ensure the key in the ConfigMap is the hostname of the registry in the hostname[..port] format:

    $ oc create configmap registry-cas -n openshift-config \
    --from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt \
    --from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt
  2. Update the cluster image configuration:

    $ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}' --type=merge
Allowing pods to reference images from other secured registries

The .dockercfg or $HOME/.docker/config.json file for Docker clients is a Docker credentials file that stores your authentication information if you have previously logged in to a secured or insecure registry.

To pull a secured container image that is not from OpenShift Container Platform’s internal registry, you must create a pull secret from your Docker credentials and add it to your service account.

Procedure

  • If you already have a .dockercfg file for the secured registry, you can create a secret from that file by running:

    $ oc create secret generic <pull_secret_name> \
        --from-file=.dockercfg=<path/to/.dockercfg> \
        --type=kubernetes.io/dockercfg
  • Or if you have a $HOME/.docker/config.json file:

    $ oc create secret generic <pull_secret_name> \
        --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
        --type=kubernetes.io/dockerconfigjson
  • If you do not already have a Docker credentials file for the secured registry, you can create a secret by running:

    $ oc create secret docker-registry <pull_secret_name> \
        --docker-server=<registry_server> \
        --docker-username=<user_name> \
        --docker-password=<password> \
        --docker-email=<email>
  • To use a secret for pulling images for pods, you must add the secret to your service account. The name of the service account in this example should match the name of the service account the pod uses. The default service account is default:

    $ oc secrets link default <pull_secret_name> --for=pull
7.12.5.2.3. Creating and using a VDDK image

You can download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry. You then add the VDDK image to the v2v-vmware ConfigMap.

Prerequisites

  • You must have access to an OpenShift Container Platform internal image registry or a secure external registry.

Procedure

  1. Create and navigate to a temporary directory:

    $ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
  2. In a browser, navigate to VMware code and click SDKs.
  3. Under Compute Virtualization, click Virtual Disk Development Kit (VDDK).
  4. Select the VDDK version that corresponds to your VMware vSphere version, for example, VDDK 7.0 for vSphere 7.0, click Download, and then save the VDDK archive in the temporary directory.
  5. Extract the VDDK archive:

    $ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
  6. Create a Dockerfile:

    $ cat > Dockerfile <<EOF
    FROM busybox:latest
    COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
    RUN mkdir -p /opt
    ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
    EOF
  7. Build the image:

    $ podman build . -t <registry_route_or_server_path>/vddk:<tag> 1
    1
    Specify your image registry:
    • For an internal OpenShift Container Platform registry, use the internal registry route, for example, image-registry.openshift-image-registry.svc:5000/openshift/vddk:<tag>.
    • For an external registry, specify the server name, path, and tag, for example, server.example.com:5000/vddk:<tag>.
  8. Push the image to the registry:

    $ podman push <registry_route_or_server_path>/vddk:<tag>
  9. Ensure that the image is accessible to your OpenShift Virtualization environment.
  10. Edit the v2v-vmware ConfigMap in the openshift-cnv project:

    $ oc edit configmap v2v-vmware -n openshift-cnv
  11. Add the vddk-init-image parameter to the data stanza:

    ...
    data:
      vddk-init-image: <registry_route_or_server_path>/vddk:<tag>

7.12.5.3. Importing a virtual machine with the VM Import wizard

You can import a single virtual machine with the VM Import wizard.

You can also import a VM template. If you import a VM template, OpenShift Virtualization creates a virtual machine based on the template.

Prerequisites

  • You must have admin user privileges.
  • The VMware Virtual Disk Development Kit (VDDK) image must be in an image registry that is accessible to your OpenShift Virtualization environment.
  • The VDDK image must be added to the v2v-vmware ConfigMap.
  • The VM must be powered off.
  • Virtual disks must be connected to IDE or SCSI controllers. If virtual disks are connected to a SATA controller, you can change them to IDE controllers and then migrate the VM.
  • The OpenShift Virtualization local and shared persistent storage classes must support VM import.
  • The OpenShift Virtualization storage must be large enough to accommodate the virtual disk.

    Warning

    If you try to import a virtual machine with a disk that is larger than the available storage space, the operation cannot complete. You will not be able to import another virtual machine or to clean up the storage because there are insufficient resources to support object deletion. To resolve this situation, you must add more object storage devices to the storage back end.

  • The OpenShift Virtualization egress network policy must allow the following traffic:

    DestinationProtocolPort

    VMware ESXi hosts

    TCP

    443

    VMware ESXi hosts

    TCP

    902

    VMware vCenter

    TCP

    5840
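
    If egress traffic from the namespace is restricted by network policies, the policy must permit these destinations. The following NetworkPolicy is a minimal sketch only; the policy name, namespace, and CIDR are placeholders that you must replace with values for your environment:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-vmware-import-egress
      namespace: <namespace>
    spec:
      podSelector: {}
      policyTypes:
      - Egress
      egress:
      - to:
        - ipBlock:
            cidr: <esxi_and_vcenter_cidr>
        ports:
        - protocol: TCP
          port: 443
        - protocol: TCP
          port: 902
        - protocol: TCP
          port: 5840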

Procedure

  1. In the web console, click WorkloadsVirtual Machines.
  2. Click Create Virtual Machine and select Import with Wizard.
  3. Select VMware from the Provider list.
  4. Select Connect to New Instance or a saved vCenter instance.

    • If you select Connect to New Instance, enter the vCenter hostname, Username, and Password.
    • If you select a saved vCenter instance, the wizard connects to the vCenter instance using the saved credentials.
  5. Click Check and Save and wait for the connection to complete.

    Note

    The connection details are stored in a secret. If you add a provider with an incorrect host name, user name, or password, click WorkloadsSecrets and delete the provider secret.

  6. Select a virtual machine or a template.
  7. Click Next.
  8. In the Review screen, review your settings.
  9. Click Edit to update the following settings:

    • General:

      • Description
      • Operating System
      • Flavor
      • Memory
      • CPUs
      • Workload Profile
    • Networking:

      • Name
      • Model
      • Network
      • Type: You must select the masquerade binding method.
      • MAC Address
    • Storage: Click the Options menu kebab of the VM disk and select Edit to update the following fields:

      • Name
      • Source: For example, Import Disk.
      • Size
      • Interface
      • Storage Class: Select NFS or ocs-storagecluster-ceph-rbd (ceph-rbd).

        If you select ocs-storagecluster-ceph-rbd, you must set the Volume Mode of the disk to Block.

        Other storage classes might work, but they are not officially supported.

      • AdvancedVolume Mode: Select Block.
      • AdvancedAccess Mode
    • AdvancedCloud-init:

      • Form: Enter the Hostname and Authenticated SSH Keys.
      • Custom script: Enter the cloud-init script in the text field.
    • AdvancedVirtual Hardware: You can attach a virtual CD-ROM to the imported virtual machine.
  10. Click Import or Review and Import, if you have edited the import settings.

    A Successfully created virtual machine message and a list of resources created for the virtual machine are displayed. The virtual machine appears in WorkloadsVirtual Machines.

Virtual machine wizard fields
NameParameterDescription

Template

 

Template from which to create the virtual machine. Selecting a template will automatically complete other fields.

Source

PXE

Provision virtual machine from PXE menu. Requires a PXE-capable NIC in the cluster.

URL

Provision virtual machine from an image available from an HTTP or S3 endpoint.

Container

Provision virtual machine from a bootable operating system container located in a registry accessible from the cluster. Example: kubevirt/cirros-registry-disk-demo.

Disk

Provision virtual machine from a disk.

Operating System

 

The primary operating system that is selected for the virtual machine.

Flavor

small, medium, large, tiny, Custom

Presets that determine the amount of CPU and memory allocated to the virtual machine. The presets displayed for Flavor are determined by the operating system.

Memory

 

Size in GiB of the memory allocated to the virtual machine.

CPUs

 

The amount of CPU allocated to the virtual machine.

Workload Profile

High Performance

A virtual machine configuration that is optimized for high-performance workloads.

Server

A profile optimized to run server workloads.

Desktop

A virtual machine configuration for use on a desktop.

Name

 

The name can contain lowercase letters (a-z), numbers (0-9), and hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters.

Description

 

Optional description field.

Start virtual machine on creation

 

Select to automatically start the virtual machine upon creation.

Cloud-init fields
NameDescription

Hostname

Sets a specific host name for the virtual machine.

Authenticated SSH Keys

The user’s public key that is copied to ~/.ssh/authorized_keys on the virtual machine.

Custom script

Replaces other options with a field in which you paste a custom cloud-init script.

Networking fields
NameDescription

Name

Name for the Network Interface Card.

Model

Indicates the model of the Network Interface Card. Supported values are e1000, e1000e, ne2k_pci, pcnet, rtl8139, and virtIO.

Network

List of available NetworkAttachmentDefinition objects.

Type

List of available binding methods. For the default Pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. The masquerade method is not supported for non-default networks.

MAC Address

MAC address for the Network Interface Card. If a MAC address is not specified, an ephemeral address is generated for the session.

Storage fields
NameDescription

Source

Select a blank disk for the virtual machine or choose from the options available: URL, Container, Attach Cloned Disk, or Attach Disk. To select an existing disk and attach it to the virtual machine, choose Attach Cloned Disk or Attach Disk from a list of available PersistentVolumeClaims (PVCs).

Name

Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), hyphens (-), and periods (.), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters.

Size (GiB)

Size, in GiB, of the disk.

Interface

Type of disk device. Supported interfaces are virtIO, SATA, and SCSI.

Storage Class

The StorageClass that is used to create the disk.

Advanced → Volume Mode

 

Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem.

Advanced → Access Mode

 

Access mode of the persistent volume. Supported access modes are Single User (RWO), Shared Access (RWX), and Read Only (ROX).

Advanced storage settings

The following advanced storage settings are available for Blank, Import via URL, and Clone existing PVC disks. These parameters are optional. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map.

NameParameterDescription

Volume Mode

Filesystem

Stores the virtual disk on a filesystem-based volume.

Block

Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it.

Access Mode

Single User (RWO)

The disk can be mounted as read/write by a single node.

Shared Access (RWX)

The disk can be mounted as read/write by many nodes.

Note

This is required for some features, such as live migration of virtual machines between nodes.

Read Only (ROX)

The disk can be mounted as read-only by many nodes.
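
To see which default values apply when you omit these parameters, you can inspect the kubevirt-storage-class-defaults config map mentioned above. A minimal sketch, assuming the config map resides in the openshift-cnv namespace:

    $ oc get configmap kubevirt-storage-class-defaults -n openshift-cnv -o yaml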

7.12.5.3.1. Updating the NIC name of an imported virtual machine

You must update the NIC name of a virtual machine that you imported from VMware to conform to OpenShift Virtualization naming conventions.

Procedure

  1. Log in to the virtual machine.
  2. Navigate to the /etc/sysconfig/network-scripts directory.
  3. Rename the network configuration file:

    $ mv vmnic0 ifcfg-eth0 1
    1
    The first network configuration file is named ifcfg-eth0. Additional network configuration files are numbered sequentially, for example, ifcfg-eth1, ifcfg-eth2.
  4. Update the NAME and DEVICE parameters in the network configuration file:

    NAME=eth0
    DEVICE=eth0
  5. Restart the network:

    $ systemctl restart network
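
    Optionally, confirm that the interface is up with the new name; a quick check:

    $ ip addr show eth0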

7.12.5.4. Troubleshooting a virtual machine import

7.12.5.4.1. Logs

You can check the V2V Conversion Pod log for errors.

Procedure

  1. View the V2V Conversion Pod name by running the following command:

    $ oc get pods -n <namespace> | grep v2v 1
    1
    Specify the namespace of your imported virtual machine.

    Example output

    kubevirt-v2v-conversion-f66f7d-zqkz7            1/1     Running     0          4h49m

  2. View the V2V Conversion Pod log by running the following command:

    $ oc logs <kubevirt-v2v-conversion-f66f7d-zqkz7> -f -n <namespace> 1
    1
    Specify the V2V Conversion Pod name and the namespace.
7.12.5.4.2. Error messages

The following error messages might appear:

  • If the VMware VM is not shut down before import, the imported virtual machine displays the error message, Readiness probe failed in the OpenShift Container Platform console and the V2V Conversion Pod log displays the following error message:

    INFO - have error: ('virt-v2v error: internal error: invalid argument: libvirt domain ‘v2v_migration_vm_1’ is running or paused. It must be shut down in order to perform virt-v2v conversion',)"
  • The following error message is displayed in the OpenShift Container Platform console if a non-admin user tries to import a VM:

    Could not load ConfigMap vmware-to-kubevirt-os in kube-public namespace
    Restricted Access: configmaps "vmware-to-kubevirt-os" is forbidden: User cannot get resource "configmaps" in API group "" in the namespace "kube-public"

    Only an admin user can import a VM.

7.13. Cloning virtual machines

7.13.1. Enabling user permissions to clone DataVolumes across namespaces

The isolating nature of namespaces means that users cannot by default clone resources between namespaces.

To enable a user to clone a virtual machine to another namespace, a user with the cluster-admin role must create a new ClusterRole. Bind this ClusterRole to a user to enable them to clone virtual machines to the destination namespace.

7.13.1.1. Prerequisites

  • Only a user with the cluster-admin role can create ClusterRoles.

7.13.1.2. About DataVolumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.

7.13.1.3. Creating RBAC resources for cloning DataVolumes

Create a new ClusterRole that enables permissions for all actions for the datavolumes resource.

Procedure

  1. Create a ClusterRole manifest:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: <datavolume-cloner> 1
    rules:
    - apiGroups: ["cdi.kubevirt.io"]
      resources: ["datavolumes/source"]
      verbs: ["*"]
    1
    Unique name for the ClusterRole.
  2. Create the ClusterRole in the cluster:

    $ oc create -f <datavolume-cloner.yaml> 1
    1
    The file name of the ClusterRole manifest created in the previous step.
  3. Create a RoleBinding manifest that applies to both the source and destination namespaces and references the ClusterRole created in the previous step.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: <allow-clone-to-user> 1
      namespace: <Source namespace> 2
    subjects:
    - kind: ServiceAccount
      name: default
      namespace: <Destination namespace> 3
    roleRef:
      kind: ClusterRole
      name: datavolume-cloner 4
      apiGroup: rbac.authorization.k8s.io
    1
    Unique name for the RoleBinding.
    2
    The namespace for the source DataVolume.
    3
    The namespace to which the DataVolume is cloned.
    4
    The name of the ClusterRole created in the previous step.
  4. Create the RoleBinding in the cluster:

    $ oc create -f <allow-clone-to-user.yaml> 1
    1
    The file name of the RoleBinding manifest created in the previous step.
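
You can verify that the RBAC resources are in place before a user attempts a cross-namespace clone. A minimal verification sketch, assuming the names from the examples above:

    $ oc get clusterrole <datavolume-cloner>
    $ oc get rolebinding <allow-clone-to-user> -n <source_namespace>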

7.13.2. Cloning a virtual machine disk into a new DataVolume

You can clone the PersistentVolumeClaim (PVC) of a virtual machine disk into a new DataVolume by referencing the source PVC in your DataVolume configuration file.

Warning

Cloning operations between different volume modes are not supported. The volumeMode values must match in both the source and target specifications.

For example, if you attempt to clone from a PersistentVolume (PV) with volumeMode: Block to a PV with volumeMode: Filesystem, the operation fails with an error message.

7.13.2.1. Prerequisites

7.13.2.2. About DataVolumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.

7.13.2.3. Cloning the PersistentVolumeClaim of a virtual machine disk into a new DataVolume

You can clone a PersistentVolumeClaim (PVC) of an existing virtual machine disk into a new DataVolume. The new DataVolume can then be used for a new virtual machine.

Note

When a DataVolume is created independently of a virtual machine, the lifecycle of the DataVolume is independent of the virtual machine. If the virtual machine is deleted, neither the DataVolume nor its associated PVC is deleted.

Prerequisites

  • Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it.
  • Install the OpenShift CLI (oc).

Procedure

  1. Examine the virtual machine disk you want to clone to identify the name and namespace of the associated PVC.
  2. Create a YAML file for a DataVolume object that specifies the name of the new DataVolume, the name and namespace of the source PVC, and the size of the new DataVolume.

    For example:

    apiVersion: cdi.kubevirt.io/v1alpha1
    kind: DataVolume
    metadata:
      name: <cloner-datavolume> 1
    spec:
      source:
        pvc:
          namespace: "<source-namespace>" 2
          name: "<my-favorite-vm-disk>" 3
      pvc:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: <2Gi> 4
    1
    The name of the new DataVolume.
    2
    The namespace where the source PVC exists.
    3
    The name of the source PVC.
    4
    The size of the new DataVolume. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC.
  3. Start cloning the PVC by creating the DataVolume:

    $ oc create -f <cloner-datavolume>.yaml
    Note

    DataVolumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new DataVolume while the PVC clones.
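
    You can monitor the clone by watching the DataVolume; its phase changes to Succeeded when the clone is complete. A minimal sketch:

    $ oc get datavolume <cloner-datavolume> -n <namespace> -w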

7.13.2.4. Template: DataVolume clone configuration file

example-clone-dv.yaml

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: "example-clone-dv"
spec:
  source:
      pvc:
        name: source-pvc
        namespace: example-ns
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: "1G"

7.13.2.5. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

Content typesHTTPHTTPSHTTP basic authRegistryUpload

KubeVirt(QCOW2)

✓ QCOW2
✓ GZ*
✓ XZ*

✓ QCOW2**
✓ GZ*
✓ XZ*

✓ QCOW2
✓ GZ*
✓ XZ*

✓ QCOW2*
□ GZ
□ XZ

✓ QCOW2*
✓ GZ*
✓ XZ*

KubeVirt (RAW)

✓ RAW
✓ GZ
✓ XZ

✓ RAW
✓ GZ
✓ XZ

✓ RAW
✓ GZ
✓ XZ

✓ RAW*
□ GZ
□ XZ

✓ RAW*
✓ GZ*
✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

7.13.3. Cloning a virtual machine by using a DataVolumeTemplate

You can create a new virtual machine by cloning the PersistentVolumeClaim (PVC) of an existing VM. By including a dataVolumeTemplate in your virtual machine configuration file, you create a new DataVolume from the original PVC.

Warning

Cloning operations between different volume modes are not supported. The volumeMode values must match in both the source and target specifications.

For example, if you attempt to clone from a PersistentVolume (PV) with volumeMode: Block to a PV with volumeMode: Filesystem, the operation fails with an error message.

7.13.3.1. Prerequisites

7.13.3.2. About DataVolumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.

7.13.3.3. Creating a new virtual machine from a cloned PersistentVolumeClaim by using a DataVolumeTemplate

You can create a virtual machine that clones the PersistentVolumeClaim (PVC) of an existing virtual machine into a DataVolume. By referencing a dataVolumeTemplate in the virtual machine spec, the source PVC is cloned to a DataVolume, which is then automatically used for the creation of the virtual machine.

Note

When a DataVolume is created as part of the DataVolumeTemplate of a virtual machine, the lifecycle of the DataVolume is then dependent on the virtual machine. If the virtual machine is deleted, the DataVolume and associated PVC are also deleted.

Prerequisites

  • Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it.
  • Install the OpenShift CLI (oc).

Procedure

  1. Examine the virtual machine you want to clone to identify the name and namespace of the associated PVC.
  2. Create a YAML file for a VirtualMachine object. The following virtual machine example clones my-favorite-vm-disk, which is located in the source-namespace namespace. The 2Gi DataVolume called favorite-clone is created from my-favorite-vm-disk.

    For example:

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: vm-dv-clone
      name: vm-dv-clone 1
    spec:
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/vm: vm-dv-clone
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: root-disk
            resources:
              requests:
                memory: 64M
          volumes:
          - dataVolume:
              name: favorite-clone
            name: root-disk
      dataVolumeTemplates:
      - metadata:
          name: favorite-clone
        spec:
          pvc:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 2Gi
          source:
            pvc:
              namespace: "source-namespace"
              name: "my-favorite-vm-disk"
    1
    The virtual machine to create.
  3. Create the virtual machine with the PVC-cloned DataVolume:

    $ oc create -f <vm-clone-datavolumetemplate>.yaml

7.13.3.4. Template: DataVolume virtual machine configuration file

example-dv-vm.yaml

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: example-vm
  name: example-vm
spec:
  dataVolumeTemplates:
  - metadata:
      name: example-dv
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1G
      source:
          http:
             url: "" 1
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: example-vm
    spec:
      domain:
        cpu:
          cores: 1
        devices:
          disks:
          - disk:
              bus: virtio
            name: example-dv-disk
        machine:
          type: q35
        resources:
          requests:
            memory: 1G
      terminationGracePeriodSeconds: 0
      volumes:
      - dataVolume:
          name: example-dv
        name: example-dv-disk
1
The HTTP source of the image you want to import, if applicable.

7.13.3.5. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

Content typesHTTPHTTPSHTTP basic authRegistryUpload

KubeVirt(QCOW2)

✓ QCOW2
✓ GZ*
✓ XZ*

✓ QCOW2**
✓ GZ*
✓ XZ*

✓ QCOW2
✓ GZ*
✓ XZ*

✓ QCOW2*
□ GZ
□ XZ

✓ QCOW2*
✓ GZ*
✓ XZ*

KubeVirt (RAW)

✓ RAW
✓ GZ
✓ XZ

✓ RAW
✓ GZ
✓ XZ

✓ RAW
✓ GZ
✓ XZ

✓ RAW*
□ GZ
□ XZ

✓ RAW*
✓ GZ*
✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

7.13.4. Cloning a virtual machine disk into a new block storage DataVolume

You can clone the PersistentVolumeClaim (PVC) of a virtual machine disk into a new block DataVolume by referencing the source PVC in your DataVolume configuration file.

Warning

Cloning operations between different volume modes are not supported. The volumeMode values must match in both the source and target specifications.

For example, if you attempt to clone from a PersistentVolume (PV) with volumeMode: Block to a PV with volumeMode: Filesystem, the operation fails with an error message.

7.13.4.1. Prerequisites

7.13.4.2. About DataVolumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.

7.13.4.3. About block PersistentVolumes

A block PersistentVolume (PV) is a PV that is backed by a raw block device. These volumes do not have a filesystem and can provide performance benefits for virtual machines by reducing overhead.

Raw block volumes are provisioned by specifying volumeMode: Block in the PV and PersistentVolumeClaim (PVC) specification.
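
For example, a PersistentVolumeClaim that requests a raw block volume looks like the following sketch. The claim name and size are placeholders, and storageClassName is assumed to be local to match the PV example in the next section:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <block-pvc>
spec:
  storageClassName: local
  volumeMode: Block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi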

7.13.4.4. Creating a local block PersistentVolume

Create a local block PersistentVolume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV configuration as a Block volume and use it as a block device for a virtual machine image.

Procedure

  1. Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples.
  2. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file, loop10, with a size of 2 GB (twenty 100 MB blocks):

    $ dd if=/dev/zero of=<loop10> bs=100M count=20
  3. Mount the loop10 file as a loop device.

    $ losetup </dev/loop10> <loop10> 1 2
    1
    File path where the loop device is mounted.
    2
    The file created in the previous step to be mounted as the loop device.
  4. Create a PersistentVolume configuration that references the mounted loop device.

    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: <local-block-pv10>
      annotations:
    spec:
      local:
        path: </dev/loop10> 1
      capacity:
        storage: <2Gi>
      volumeMode: Block 2
      storageClassName: local 3
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - <node01> 4
    1
    The path of the loop device on the node.
    2
    Specifies it is a block PV.
    3
    Optional: Set a StorageClass for the PV. If you omit it, the cluster default is used.
    4
    The node on which the block device was mounted.
  5. Create the block PV.

    # oc create -f <local-block-pv10.yaml> 1
    1
    The filename of the PersistentVolume created in the previous step.

7.13.4.5. Cloning the PersistentVolumeClaim of a virtual machine disk into a new DataVolume

You can clone a PersistentVolumeClaim (PVC) of an existing virtual machine disk into a new DataVolume. The new DataVolume can then be used for a new virtual machine.

Note

When a DataVolume is created independently of a virtual machine, the lifecycle of the DataVolume is independent of the virtual machine. If the virtual machine is deleted, neither the DataVolume nor its associated PVC is deleted.

Prerequisites

  • Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it.
  • Install the OpenShift CLI (oc).
  • At least one available block PersistentVolume (PV) that is the same size as or larger than the source PVC.

Procedure

  1. Examine the virtual machine disk you want to clone to identify the name and namespace of the associated PVC.
  2. Create a YAML file for a DataVolume object that specifies the name of the new DataVolume, the name and namespace of the source PVC, volumeMode: Block so that an available block PV is used, and the size of the new DataVolume.

    For example:

    apiVersion: cdi.kubevirt.io/v1alpha1
    kind: DataVolume
    metadata:
      name: <cloner-datavolume> 1
    spec:
      source:
        pvc:
          namespace: "<source-namespace>" 2
          name: "<my-favorite-vm-disk>" 3
      pvc:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: <2Gi> 4
        volumeMode: Block 5
    1
    The name of the new DataVolume.
    2
    The namespace where the source PVC exists.
    3
    The name of the source PVC.
    4
    The size of the new DataVolume. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC.
    5
    Specifies that the destination is a block PV.
  3. Start cloning the PVC by creating the DataVolume:

    $ oc create -f <cloner-datavolume>.yaml
    Note

    DataVolumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new DataVolume while the PVC clones.

7.13.4.6. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

Content typesHTTPHTTPSHTTP basic authRegistryUpload

KubeVirt(QCOW2)

✓ QCOW2
✓ GZ*
✓ XZ*

✓ QCOW2**
✓ GZ*
✓ XZ*

✓ QCOW2
✓ GZ*
✓ XZ*

✓ QCOW2*
□ GZ
□ XZ

✓ QCOW2*
✓ GZ*
✓ XZ*

KubeVirt (RAW)

✓ RAW
✓ GZ
✓ XZ

✓ RAW
✓ GZ
✓ XZ

✓ RAW
✓ GZ
✓ XZ

✓ RAW*
□ GZ
□ XZ

✓ RAW*
✓ GZ*
✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

7.14. Virtual machine networking

7.14.1. Using the default Pod network for virtual machines

You can use the default Pod network with OpenShift Virtualization. To do so, you must use the masquerade binding method. It is the only recommended binding method for use with the default Pod network. Do not use masquerade mode with non-default networks.

For secondary networks, use the bridge binding method.

Note

The KubeMacPool component provides a MAC address pool service for virtual machine NICs in designated namespaces. It is not enabled by default. Enable a MAC address pool in a namespace by applying the KubeMacPool label to that namespace.
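
For example, in recent releases the label is applied as follows; treat the exact label key as an assumption and check the KubeMacPool documentation for your version:

    $ oc label namespace <namespace> mutatevirtualmachines.kubemacpool.io=allocate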

7.14.1.1. Configuring masquerade mode from the command line

You can use masquerade mode to hide a virtual machine’s outgoing traffic behind the Pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the Pod network backend through a Linux bridge.

Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file.

Prerequisites

  • The virtual machine must be configured to use DHCP to acquire IPv4 addresses. The examples below are configured to use DHCP.

Procedure

  1. Edit the interfaces spec of your virtual machine configuration file:

    kind: VirtualMachine
    spec:
      domain:
        devices:
          interfaces:
            - name: red
              masquerade: {} 1
              ports:
                - port: 80 2
      networks:
      - name: red
        pod: {}
    1
    Connect using masquerade mode.
    2
    Allow incoming traffic on port 80.
  2. Create the virtual machine:

    $ oc create -f <vm-name>.yaml

7.14.1.2. Selecting binding method

If you create a virtual machine from the OpenShift Virtualization web console wizard, select the required binding method from the Networking screen.

7.14.1.2.1. Networking fields
NameDescription

Name

Name for the Network Interface Card.

Model

Indicates the model of the Network Interface Card. Supported values are e1000, e1000e, ne2k_pci, pcnet, rtl8139, and virtIO.

Network

List of available NetworkAttachmentDefinition objects.

Type

List of available binding methods. For the default Pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. The masquerade method is not supported for non-default networks.

MAC Address

MAC address for the Network Interface Card. If a MAC address is not specified, an ephemeral address is generated for the session.

7.14.1.3. Virtual machine configuration examples for the default network

7.14.1.3.1. Template: virtual machine configuration file
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: default
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - masquerade: {}
            name: default
        resources:
          requests:
            memory: 1024M
      networks:
        - name: default
          pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/fedora-cloud-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #!/bin/bash
              echo "fedora" | passwd fedora --stdin
7.14.1.3.2. Template: Windows virtual machine instance configuration file
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  labels:
    special: vmi-windows
  name: vmi-windows
spec:
  domain:
    clock:
      timer:
        hpet:
          present: false
        hyperv: {}
        pit:
          tickPolicy: delay
        rtc:
          tickPolicy: catchup
      utc: {}
    cpu:
      cores: 2
    devices:
      disks:
      - disk:
          bus: sata
        name: pvcdisk
      interfaces:
      - masquerade: {}
        model: e1000
        name: default
    features:
      acpi: {}
      apic: {}
      hyperv:
        relaxed: {}
        spinlocks:
          spinlocks: 8191
        vapic: {}
    firmware:
      uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223
    machine:
      type: q35
    resources:
      requests:
        memory: 2Gi
  networks:
  - name: default
    pod: {}
  terminationGracePeriodSeconds: 0
  volumes:
  - name: pvcdisk
    persistentVolumeClaim:
      claimName: disk-windows

7.14.1.4. Creating a service from a virtual machine

Create a service from a running virtual machine by first creating a Service object to expose the virtual machine.

The ClusterIP service type exposes the virtual machine internally, within the cluster. The NodePort or LoadBalancer service types expose the virtual machine externally, outside of the cluster.

This procedure presents an example of how to create, connect to, and expose a Service object of type: ClusterIP as a virtual machine-backed service.

Note

ClusterIP is the default service type, if the service type is not specified.

Procedure

  1. Edit the virtual machine YAML as follows:

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachine
    metadata:
      name: vm-ephemeral
      namespace: example-namespace
    spec:
      running: false
      template:
        metadata:
          labels:
            special: key 1
        spec:
          domain:
            devices:
              disks:
                - name: containerdisk
                  disk:
                    bus: virtio
                - name: cloudinitdisk
                  disk:
                    bus: virtio
              interfaces:
              - masquerade: {}
                name: default
            resources:
              requests:
                memory: 1024M
          networks:
            - name: default
              pod: {}
          volumes:
            - name: containerdisk
              containerDisk:
                image: kubevirt/fedora-cloud-container-disk-demo
            - name: cloudinitdisk
              cloudInitNoCloud:
                userData: |
                  #!/bin/bash
                  echo "fedora" | passwd fedora --stdin
    1
    Add the label special: key in the spec.template.metadata.labels section.
    Note

    Labels on a virtual machine are passed through to the pod. The labels on the VirtualMachine, for example special: key, must match the labels in the Service YAML selector attribute, which you create later in this procedure.

  2. Save the virtual machine YAML to apply your changes.
  3. Edit the Service YAML to configure the settings necessary to create and expose the Service object:

    apiVersion: v1
    kind: Service
    metadata:
      name: vmservice 1
      namespace: example-namespace 2
    spec:
      ports:
      - port: 27017
        protocol: TCP
        targetPort: 22 3
      selector:
        special: key 4
      type: ClusterIP 5
    1
    Specify the name of the service you are creating and exposing.
    2
    Specify namespace in the metadata section of the Service YAML that corresponds to the namespace you specify in the virtual machine YAML.
    3
    Add targetPort: 22, exposing the service on SSH port 22.
    4
    In the spec section of the Service YAML, add special: key to the selector attribute, which corresponds to the labels you added in the virtual machine YAML configuration file.
    5
    In the spec section of the Service YAML, add type: ClusterIP for a ClusterIP service. To create and expose other types of services externally, outside of the cluster, such as NodePort and LoadBalancer, replace type: ClusterIP with type: NodePort or type: LoadBalancer, as appropriate.
  4. Save the Service YAML to store the service configuration.
  5. Create the ClusterIP service:

    $ oc create -f <service_name>.yaml
  6. Start the virtual machine. If the virtual machine is already running, restart it.
  7. Query the Service object to verify it is available and is configured with type ClusterIP.

    Verification

    • Run the oc get service command, specifying the namespace that you reference in the virtual machine and Service YAML files.

      $ oc get service -n example-namespace

      Example output

      NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
      vmservice   ClusterIP   172.30.3.149   <none>        27017/TCP   2m

      • As shown from the output, vmservice is running.
      • The TYPE displays as ClusterIP, as you specified in the Service YAML.
  8. Establish a connection to the virtual machine that you want to use to back your service. Connect from an object inside the cluster, such as another virtual machine.

    1. Edit the virtual machine YAML as follows:

      apiVersion: kubevirt.io/v1alpha3
      kind: VirtualMachine
      metadata:
        name: vm-connect
        namespace: example-namespace
      spec:
        running: false
        template:
          spec:
            domain:
              devices:
                disks:
                  - name: containerdisk
                    disk:
                      bus: virtio
                  - name: cloudinitdisk
                    disk:
                      bus: virtio
                interfaces:
                - masquerade: {}
                  name: default
              resources:
                requests:
                  memory: 1024M
            networks:
              - name: default
                pod: {}
            volumes:
              - name: containerdisk
                containerDisk:
                  image: kubevirt/fedora-cloud-container-disk-demo
              - name: cloudinitdisk
                cloudInitNoCloud:
                  userData: |
                    #!/bin/bash
                    echo "fedora" | passwd fedora --stdin
    2. Run the oc create command to create a second virtual machine, where file.yaml is the name of the virtual machine YAML:

      $ oc create -f <file.yaml>
    3. Start the virtual machine.
    4. Connect to the virtual machine by running the following virtctl command:

      $ virtctl -n example-namespace console <new-vm-name>
      Note

      For service type LoadBalancer, use the vinagre client to connect to your virtual machine by using the public IP and port. External ports are dynamically allocated when you use the LoadBalancer service type.

    5. Run the ssh command to authenticate the connection, where 172.30.3.149 is the ClusterIP of the service and fedora is the user name of the virtual machine:

      $ ssh fedora@172.30.3.149 -p 27017

      Verification

      • You receive the command prompt of the virtual machine backing the service you want to expose. You now have a service backed by a running virtual machine.

Additional resources

7.14.2. Attaching a virtual machine to multiple networks

OpenShift Virtualization provides layer-2 networking capabilities that allow you to connect virtual machines to multiple networks. You can import virtual machines with existing workloads that depend on access to multiple interfaces. You can also configure a PXE network so that you can boot machines over the network.

To get started, a network administrator configures a bridge NetworkAttachmentDefinition for a namespace in the web console or CLI. Users can then create a NIC to attach Pods and virtual machines in that namespace to the bridge network.

Note

The KubeMacPool component provides a MAC address pool service for virtual machine NICs in designated namespaces. It is not enabled by default. Enable a MAC address pool in a namespace by applying the KubeMacPool label to that namespace.

7.14.2.1. OpenShift Virtualization networking glossary

OpenShift Virtualization provides advanced networking functionality by using custom resources and plug-ins.

The following terms are used throughout OpenShift Virtualization documentation:

Container Network Interface (CNI)
a Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plug-ins to build upon the basic Kubernetes networking functionality.
Multus
a "meta" CNI plug-in that allows multiple CNIs to exist so that a Pod or virtual machine can use the interfaces it needs.
Custom Resource Definition (CRD)
a Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
NetworkAttachmentDefinition
a CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
Preboot eXecution Environment (PXE)
an interface that enables an administrator to boot a client machine from a server over the network. Network booting allows you to remotely load operating systems and other software onto the client.

7.14.2.2. Creating a NetworkAttachmentDefinition

7.14.2.3. Prerequisites

  • A Linux bridge must be configured and attached on every node. See the node networking section for more information.
Warning

Configuring ipam in a NetworkAttachmentDefinition for virtual machines is not supported.

7.14.2.3.1. Creating a Linux bridge NetworkAttachmentDefinition in the web console

The NetworkAttachmentDefinition is a custom resource that exposes layer-2 devices to a specific namespace in your OpenShift Virtualization cluster.

Network administrators can create NetworkAttachmentDefinitions to provide existing layer-2 networking to pods and virtual machines.

Procedure

  1. In the web console, click NetworkingNetwork Attachment Definitions.
  2. Click Create Network Attachment Definition.
  3. Enter a unique Name and optional Description.
  4. Click the Network Type list and select CNV Linux bridge.
  5. Enter the name of the bridge in the Bridge Name field.
  6. Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field.
  7. Click Create.
7.14.2.3.2. Creating a Linux bridge NetworkAttachmentDefinition in the CLI

As a network administrator, you can configure a NetworkAttachmentDefinition of type cnv-bridge to provide Layer-2 networking to pods and virtual machines.

Note

The NetworkAttachmentDefinition must be in the same namespace as the Pod or virtual machine.

Procedure

  1. Create a new file for the NetworkAttachmentDefinition in any local directory. The file must have the following contents, modified to match your configuration:

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: a-bridge-network
      annotations:
        k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br0 1
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "a-bridge-network", 2
        "plugins": [
          {
            "type": "cnv-bridge", 3
            "bridge": "br0", 4
            "vlan": 1 5
          },
          {
            "type": "cnv-tuning" 6
          }
        ]
      }'
    1
    If you add this annotation to your NetworkAttachmentDefinition, your virtual machine instances will only run on nodes that have the br0 bridge connected.
    2
    Required. A name for the configuration. It is recommended to match the configuration name to the name value of the NetworkAttachmentDefinition.
    3
    The actual name of the Container Network Interface (CNI) plug-in that provides the network for this NetworkAttachmentDefinition. Do not change this field unless you want to use a different CNI.
    4
    You must substitute the actual name of the bridge, if it is not br0.
    5
    Optional: The VLAN tag.
    6
    Required. This allows the MAC pool manager to assign a unique MAC address to the connection.
    $ oc create -f <resource_spec.yaml>
  2. Edit the configuration file of a virtual machine or virtual machine instance that you want to connect to the bridge network, for example:

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachine
    metadata:
      name: example-vm
    spec:
      domain:
        devices:
          interfaces:
            - masquerade: {}
              name: default
            - bridge: {}
              name: bridge-net 1
      ...
    
      networks:
        - name: default
          pod: {}
        - name: bridge-net 2
          multus:
            networkName: a-bridge-network 3
    ...
    1 2
    The name value for the bridge interface and network must be the same.
    3
    You must substitute the actual name value from the NetworkAttachmentDefinition.
    Note

    The virtual machine instance will be connected to both the default Pod network and bridge-net, which is defined by a NetworkAttachmentDefinition named a-bridge-network.

  3. Apply the configuration file to the resource:

    $ oc create -f <local/path/to/network-attachment-definition.yaml>
Note

When defining the NIC in the next section, ensure that the NETWORK value is the bridge network name from the NetworkAttachmentDefinition you created in the previous section.

7.14.2.4. Creating a NIC for a virtual machine

Create and attach additional NICs to a virtual machine from the web console.

Procedure

  1. In the correct project in the OpenShift Virtualization console, click WorkloadsVirtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Select a virtual machine to open the Virtual Machine Overview screen.
  4. Click Network Interfaces to display the NICs already attached to the virtual machine.
  5. Click Add Network Interface to create a new slot in the list.
  6. Fill in the Name, Model, Network, Type, and MAC Address for the new NIC.
  7. Click the Add button to save and attach the NIC to the virtual machine.

7.14.2.5. Networking fields

Name          Description

Name

Name for the Network Interface Card.

Model

Indicates the model of the Network Interface Card. Supported values are e1000, e1000e, ne2k_pci, pcnet, rtl8139, and virtIO.

Network

List of available NetworkAttachmentDefinition objects.

Type

List of available binding methods. For the default Pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. The masquerade method is not supported for non-default networks.

MAC Address

MAC address for the Network Interface Card. If a MAC address is not specified, an ephemeral address is generated for the session.

Install the optional QEMU guest agent on the virtual machine so that the host can display relevant information about the additional networks.

7.14.3. Configuring an SR-IOV network device for virtual machines

You can configure a Single Root I/O Virtualization (SR-IOV) device for virtual machines in your cluster. This process is similar but not identical to configuring an SR-IOV device for OpenShift Container Platform.

7.14.3.1. Prerequisites

7.14.3.2. Automated discovery of SR-IOV network devices

The SR-IOV Network Operator searches your cluster for SR-IOV capable network devices on worker nodes. The Operator creates and updates a SriovNetworkNodeState custom resource (CR) for each worker node that provides a compatible SR-IOV network device.

The CR is assigned the same name as the worker node. The status.interfaces list provides information about the network devices on a node.
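
For example, assuming the Operator is installed in the openshift-sriov-network-operator namespace, you can list the discovered node states and inspect a specific node with commands such as the following:

$ oc get sriovnetworknodestates -n openshift-sriov-network-operator

$ oc get sriovnetworknodestates <node_name> -n openshift-sriov-network-operator -o yaml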

Important

Do not modify a SriovNetworkNodeState object. The Operator creates and manages these resources automatically.

7.14.3.2.1. Example SriovNetworkNodeState object

The following YAML is an example of a SriovNetworkNodeState object created by the SR-IOV Network Operator:

An SriovNetworkNodeState object

apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodeState
metadata:
  name: node-25 1
  namespace: openshift-sriov-network-operator
  ownerReferences:
  - apiVersion: sriovnetwork.openshift.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: SriovNetworkNodePolicy
    name: default
spec:
  dpConfigVersion: "39824"
status:
  interfaces: 2
  - deviceID: "1017"
    driver: mlx5_core
    mtu: 1500
    name: ens785f0
    pciAddress: "0000:18:00.0"
    totalvfs: 8
    vendor: 15b3
  - deviceID: "1017"
    driver: mlx5_core
    mtu: 1500
    name: ens785f1
    pciAddress: "0000:18:00.1"
    totalvfs: 8
    vendor: 15b3
  - deviceID: 158b
    driver: i40e
    mtu: 1500
    name: ens817f0
    pciAddress: 0000:81:00.0
    totalvfs: 64
    vendor: "8086"
  - deviceID: 158b
    driver: i40e
    mtu: 1500
    name: ens817f1
    pciAddress: 0000:81:00.1
    totalvfs: 64
    vendor: "8086"
  - deviceID: 158b
    driver: i40e
    mtu: 1500
    name: ens803f0
    pciAddress: 0000:86:00.0
    totalvfs: 64
    vendor: "8086"
  syncStatus: Succeeded

1
The value of the name field is the same as the name of the worker node.
2
The interfaces stanza includes a list of all of the SR-IOV devices discovered by the Operator on the worker node.

7.14.3.3. Configuring SR-IOV network devices

The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR).

Note

When applying the configuration specified in a SriovNetworkNodePolicy object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes.

It might take several minutes for a configuration change to apply.

Prerequisites

  • You installed the OpenShift CLI (oc).
  • You have access to the cluster as a user with the cluster-admin role.
  • You have installed the SR-IOV Network Operator.
  • You have enough available nodes in your cluster to handle the evicted workload from drained nodes.
  • You have not selected any control plane nodes for SR-IOV network device configuration.

Procedure

  1. Create an SriovNetworkNodePolicy object, and then save the YAML in the <name>-sriov-node-network.yaml file. Replace <name> with the name for this configuration.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: <name> 1
  namespace: openshift-sriov-network-operator 2
spec:
  resourceName: <sriov_resource_name> 3
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true" 4
  priority: <priority> 5
  mtu: <mtu> 6
  numVfs: <num> 7
  nicSelector: 8
    vendor: "<vendor_code>" 9
    deviceID: "<device_id>" 10
    pfNames: ["<pf_name>", ...] 11
    rootDevices: ["<pci_bus_id>", "..."] 12
  deviceType: vfio-pci 13
  isRdma: false 14
1
Specify a name for the CR object.
2
Specify the namespace where the SR-IOV Operator is installed.
3
Specify the resource name of the SR-IOV device plug-in. You can create multiple SriovNetworkNodePolicy objects for a resource name.
4
Specify the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plug-in and device plug-in are deployed only on selected nodes.
5
Optional: Specify an integer value between 0 and 99. A smaller number gets higher priority, so a priority of 10 is higher than a priority of 99. The default value is 99.
6
Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models.
7
Specify the number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel Network Interface Card (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than 128.
8
The nicSelector mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specify rootDevices, you must also specify a value for vendor, deviceID, or pfNames. If you specify both pfNames and rootDevices at the same time, ensure that they point to an identical device.
9
Optional: Specify the vendor hex code of the SR-IOV network device. The only allowed values are 8086 and 15b3.
10
Optional: Specify the device hex code of the SR-IOV network device. The only allowed values are 158b, 1015, and 1017.
11
Optional: The parameter accepts an array of one or more physical function (PF) names for the Ethernet device.
12
The parameter accepts an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: 0000:02:00.1.
13
The vfio-pci driver type is required for virtual functions in OpenShift Virtualization.
14
Optional: Specify whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set isRdma to false. The default value is false.
Note

If the isRdma flag is set to true, you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode.

  2. Create the SriovNetworkNodePolicy object:

    $ oc create -f <name>-sriov-node-network.yaml

    where <name> specifies the name for this configuration.

    After applying the configuration update, all the pods in the openshift-sriov-network-operator namespace transition to the Running status. You can watch this transition, as shown in the example after this procedure.

  3. To verify that the SR-IOV network device is configured, enter the following command. Replace <node_name> with the name of a node with the SR-IOV network device that you just configured.

    $ oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'
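
For example, assuming the default Operator namespace, you can watch the Operator pods return to the Running status while the configuration is applied:

$ oc get pods -n openshift-sriov-network-operator -w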

7.14.3.4. Next steps

7.14.4. Defining an SR-IOV network

You can create a network attachment for a Single Root I/O Virtualization (SR-IOV) device for virtual machines.

After the network is defined, you can attach virtual machines to the SR-IOV network.

7.14.4.1. Prerequisites

7.14.4.2. Configuring SR-IOV additional network

You can configure an additional network that uses SR-IOV hardware by creating a SriovNetwork object. When you create a SriovNetwork object, the SR-IOV Operator automatically creates a NetworkAttachmentDefinition object.

Users can then attach virtual machines to the SR-IOV network by specifying the network in the virtual machine configurations.

Note

Do not modify or delete a SriovNetwork object if it is attached to any pods or virtual machines in the running state.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create the following SriovNetwork object, and then save the YAML in the <name>-sriov-network.yaml file. Replace <name> with a name for this additional network.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: <name> 1
  namespace: openshift-sriov-network-operator 2
spec:
  resourceName: <sriov_resource_name> 3
  networkNamespace: <target_namespace> 4
  vlan: <vlan> 5
  spoofChk: "<spoof_check>" 6
  linkState: <link_state> 7
  maxTxRate: <max_tx_rate> 8
  minTxRate: <min_tx_rate> 9
  vlanQoS: <vlan_qos> 10
  trust: "<trust_vf>" 11
  capabilities: <capabilities> 12
1
Replace <name> with a name for the object. The SR-IOV Network Operator creates a NetworkAttachmentDefinition object with the same name.
2
Specify the namespace where the SR-IOV Network Operator is installed.
3
Replace <sriov_resource_name> with the value for the .spec.resourceName parameter from the SriovNetworkNodePolicy object that defines the SR-IOV hardware for this additional network.
4
Replace <target_namespace> with the target namespace for the SriovNetwork. Only pods or virtual machines in the target namespace can attach to the SriovNetwork.
5
Optional: Replace <vlan> with a Virtual LAN (VLAN) ID for the additional network. The integer value must be from 0 to 4095. The default value is 0.
6
Optional: Replace <spoof_check> with the spoof check mode of the VF. The allowed values are the strings "on" and "off".
Important

You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator.

7
Optional: Replace <link_state> with the link state of the virtual function (VF). Allowed values are enable, disable, and auto.
8
Optional: Replace <max_tx_rate> with a maximum transmission rate, in Mbps, for the VF.
9
Optional: Replace <min_tx_rate> with a minimum transmission rate, in Mbps, for the VF. This value should always be less than or equal to the maximum transmission rate.
Note

Intel NICs do not support the minTxRate parameter. For more information, see BZ#1772847.

10
Optional: Replace <vlan_qos> with an IEEE 802.1p priority level for the VF. The default value is 0.
11
Optional: Replace <trust_vf> with the trust mode of the VF. The allowed values are the strings "on" and "off".
Important

You must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator.

12
Optional: Replace <capabilities> with the capabilities to configure for this network.
  2. To create the object, enter the following command. Replace <name> with a name for this additional network.

    $ oc create -f <name>-sriov-network.yaml
  3. Optional: To confirm that the NetworkAttachmentDefinition object associated with the SriovNetwork object that you created in the previous step exists, enter the following command. Replace <namespace> with the namespace you specified in the SriovNetwork object.

    $ oc get net-attach-def -n <namespace>

7.14.4.3. Next steps

7.14.5. Attaching a virtual machine to an SR-IOV network

You can attach a virtual machine to use a Single Root I/O Virtualization (SR-IOV) network as a secondary network.

7.14.5.1. Prerequisites

7.14.5.2. Attaching a virtual machine to an SR-IOV network

You can attach the virtual machine to the SR-IOV network by including the network details in the virtual machine configuration.

Procedure

  1. Include the SR-IOV network details in the spec.domain.devices.interfaces and spec.networks of the virtual machine configuration:

    kind: VirtualMachine
    ...
    spec:
      domain:
        devices:
          interfaces:
          - name: <default> 1
            masquerade: {} 2
          - name: <nic1> 3
            sriov: {}
      networks:
      - name: <default> 4
        pod: {}
      - name: <nic1> 5
        multus:
            networkName: <sriov-network> 6
    ...
    1
    A unique name for the interface that is connected to the Pod network.
    2
    The masquerade binding to the default Pod network.
    3
    A unique name for the SR-IOV interface.
    4
    The name of the Pod network interface. This must be the same as the interfaces.name that you defined earlier.
    5
    The name of the SR-IOV interface. This must be the same as the interfaces.name that you defined earlier.
    6
    The name of the SR-IOV network attachment definition.
  2. Apply the virtual machine configuration:

    $ oc apply -f <vm-sriov.yaml> 1
    1
    The name of the virtual machine YAML file.

7.14.6. Installing the QEMU guest agent on virtual machines

The QEMU guest agent is a daemon that runs on the virtual machine. The agent passes network information about the virtual machine, notably the IP addresses of additional networks, to the host.

7.14.6.1. Installing QEMU guest agent on a Linux virtual machine

The qemu-guest-agent is widely available and is included by default in Red Hat virtual machines. Install the agent and start the service.

Procedure

  1. Access the virtual machine command line through one of the consoles or by SSH.
  2. Install the QEMU guest agent on the virtual machine:

    $ yum install -y qemu-guest-agent
  3. Start the QEMU guest agent service:

    $ systemctl start qemu-guest-agent
  4. Ensure the service is persistent:

    $ systemctl enable qemu-guest-agent

You can also install and start the QEMU guest agent by using the custom script field in the cloud-init section of the wizard when creating either virtual machines or virtual machines templates in the web console.
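
For example, a minimal cloud-init sketch that installs and enables the agent on a cloud-init-enabled Linux guest (package availability depends on the guest operating system):

#cloud-config
packages:
  - qemu-guest-agent
runcmd:
  - [systemctl, enable, --now, qemu-guest-agent]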

7.14.6.2. Installing QEMU guest agent on a Windows virtual machine

For Windows virtual machines, the QEMU guest agent is included in the VirtIO drivers, which can be installed using one of the following procedures:

7.14.6.2.1. Installing VirtIO drivers on an existing Windows virtual machine

Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine.

Note

This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. Refer to the installation documentation for your version of Windows for specific installation steps.

Procedure

  1. Start the virtual machine and connect to a graphical console.
  2. Log in to a Windows user session.
  3. Open Device Manager and expand Other devices to list any Unknown device.

    1. Open the Device Properties to identify the unknown device. Right-click the device and select Properties.
    2. Click the Details tab and select Hardware Ids in the Property list.
    3. Compare the Value for the Hardware Ids with the supported VirtIO drivers.
  4. Right-click the device and select Update Driver Software.
  5. Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
  6. Click Next to install the driver.
  7. Repeat this process for all the necessary VirtIO drivers.
  8. After the driver installs, click Close to close the window.
  9. Reboot the virtual machine to complete the driver installation.
7.14.6.2.2. Installing VirtIO drivers during Windows installation

Install the VirtIO drivers from the attached SATA CD drive during Windows installation.

Note

This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. Refer to the documentation for the version of Windows that you are installing.

Procedure

  1. Start the virtual machine and connect to a graphical console.
  2. Begin the Windows installation process.
  3. Select the Advanced installation.
  4. The storage destination will not be recognized until the driver is loaded. Click Load driver.
  5. The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
  6. Repeat the previous two steps for all required drivers.
  7. Complete the Windows installation.

7.14.7. Viewing the IP address of NICs on a virtual machine

The QEMU guest agent runs on the virtual machine and passes the IP addresses of attached NICs to the host, allowing you to view them from both the web console and the oc client.

7.14.7.1. Prerequisites

  1. Verify that the guest agent is installed and running by entering the following command:

    $ systemctl status qemu-guest-agent
  2. If the guest agent is not installed and running, install and run the guest agent on the virtual machine.

7.14.7.2. Viewing the IP address of a virtual machine interface in the CLI

The network interface configuration is included in the oc describe vmi <vmi_name> command.

You can also view the IP address information by running ip addr on the virtual machine, or by running oc get vmi <vmi_name> -o yaml.
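
For example, to print only the interface list that is reported in the virtual machine instance status, you can use a jsonpath query such as:

$ oc get vmi <vmi_name> -o jsonpath='{.status.interfaces}'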

Procedure

  • Use the oc describe command to display the virtual machine interface configuration:

    $ oc describe vmi <vmi_name>

    Example output

    ...
    Interfaces:
       Interface Name:  eth0
       Ip Address:      10.244.0.37/24
       Ip Addresses:
         10.244.0.37/24
         fe80::858:aff:fef4:25/64
       Mac:             0a:58:0a:f4:00:25
       Name:            default
       Interface Name:  v2
       Ip Address:      1.1.1.7/24
       Ip Addresses:
         1.1.1.7/24
         fe80::f4d9:70ff:fe13:9089/64
       Mac:             f6:d9:70:13:90:89
       Interface Name:  v1
       Ip Address:      1.1.1.1/24
       Ip Addresses:
         1.1.1.1/24
         1.1.1.2/24
         1.1.1.4/24
         2001:de7:0:f101::1/64
         2001:db8:0:f101::1/64
         fe80::1420:84ff:fe10:17aa/64
       Mac:             16:20:84:10:17:aa

7.14.7.3. Viewing the IP address of a virtual machine interface in the web console

The IP information displays in the Virtual Machine Overview screen for the virtual machine.

Procedure

  1. In the OpenShift Virtualization console, click WorkloadsVirtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Select a virtual machine name to open the Virtual Machine Overview screen.

The information for each attached NIC is displayed under IP Address.

7.14.8. Using a MAC address pool for virtual machines

The KubeMacPool component provides a MAC address pool service for virtual machine NICs in designated namespaces. Enable a MAC address pool in a namespace by applying the KubeMacPool label to that namespace.

7.14.8.1. About KubeMacPool

If you enable the KubeMacPool component for a namespace, virtual machine NICs in that namespace are allocated MAC addresses from a MAC address pool. This ensures that the NIC is assigned a unique MAC address that does not conflict with the MAC address of another virtual machine.

Virtual machine instances created from that virtual machine retain the assigned MAC address across reboots.

Note

KubeMacPool does not handle virtual machine instances created independently from a virtual machine.

KubeMacPool is disabled by default. Enable a MAC address pool for a namespace by applying the KubeMacPool label to the namespace.

7.14.8.2. Enabling a MAC address pool for a namespace in the CLI

Enable a MAC address pool for virtual machines in a namespace by applying the mutatevirtualmachines.kubemacpool.io=allocate label to the namespace.

Procedure

  • Add the KubeMacPool label to the namespace. The following example adds the KubeMacPool label to two namespaces, <namespace1> and <namespace2>:

    $ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=allocate
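
To confirm that the label was applied, you can display the namespace labels, for example:

$ oc get namespace <namespace1> --show-labels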

7.14.8.3. Disabling a MAC address pool for a namespace in the CLI

Disable a MAC address pool for virtual machines in a namespace by removing the mutatevirtualmachines.kubemacpool.io label.

Procedure

  • Remove the KubeMacPool label from the namespace. The following example removes the KubeMacPool label from two namespaces, <namespace1> and <namespace2>:

    $ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-

7.15. Virtual machine disks

7.15.1. Storage features

Use the following table to determine feature availability for local and shared persistent storage in OpenShift Virtualization.

7.15.1.1. OpenShift Virtualization storage feature matrix

Table 7.8. OpenShift Virtualization storage feature matrix

                                     Virtual machine   Host-assisted          Storage-assisted       Virtual machine
                                     live migration    virtual machine disk   virtual machine disk   snapshots
                                                       cloning                cloning

OpenShift Container Storage:         Yes               Yes                    Yes                    Yes
RBD block-mode volumes

OpenShift Virtualization             No                Yes                    No                     No
hostpath provisioner

Other multi-node writable storage    Yes [1]           Yes                    Yes [2]                Yes [2]

Other single-node writable storage   No                Yes                    Yes [2]                Yes [2]

  1. PVCs must request a ReadWriteMany access mode.
  2. The storage provider must support both the Kubernetes and CSI snapshot APIs.
Note

You cannot live migrate virtual machines that use:

  • A storage class with ReadWriteOnce (RWO) access mode
  • Passthrough features such as SR-IOV and GPU

Do not set the evictionStrategy field to LiveMigrate for these virtual machines.
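
For reference, the following minimal sketch shows where the evictionStrategy field appears in a VirtualMachine manifest (the name is illustrative). Omit or remove the field for virtual machines that cannot be live migrated:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: <example-vm>    # illustrative name
spec:
  template:
    spec:
      evictionStrategy: LiveMigrate    # do not set this for VMs that use RWO storage or passthrough devices
      domain:
        devices: {}
        resources:
          requests:
            memory: 1024M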

7.15.2. Configuring local storage for virtual machines

You can configure local storage for your virtual machines by using the hostpath provisioner feature.

7.15.2.1. About the hostpath provisioner

The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.

When you install the OpenShift Virtualization Operator, the hostpath provisioner Operator is automatically installed. To use it, you must:

  • Configure SELinux:

    • If you use Red Hat Enterprise Linux CoreOS 8 workers, you must create a MachineConfig object on each node.
    • Otherwise, apply the SELinux label container_file_t to the PersistentVolume (PV) backing directory on each node.
  • Create a HostPathProvisioner custom resource.
  • Create a StorageClass object for the hostpath provisioner.

The hostpath provisioner Operator deploys the provisioner as a DaemonSet on each node when you create its custom resource. In the custom resource file, you specify the backing directory for the PersistentVolumes that the hostpath provisioner creates.

7.15.2.2. Configuring SELinux for the hostpath provisioner on Red Hat Enterprise Linux CoreOS 8

You must configure SELinux before you create the HostPathProvisioner custom resource. To configure SELinux on Red Hat Enterprise Linux CoreOS 8 workers, you must create a MachineConfig object on each node.

Note

If you do not use Red Hat Enterprise Linux CoreOS workers, skip this procedure.

Prerequisites

  • Create a backing directory on each node for the PersistentVolumes (PVs) that the hostpath provisioner creates.

Procedure

  1. Create the MachineConfig file. For example:

    $ touch machineconfig.yaml
  2. Edit the file, ensuring that you include the directory where you want the hostpath provisioner to create PVs. For example:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 50-set-selinux-for-hostpath-provisioner
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 2.2.0
        systemd:
          units:
            - contents: |
                [Unit]
                Description=Set SELinux chcon for hostpath provisioner
                Before=kubelet.service
    
                [Service]
                ExecStart=/usr/bin/chcon -Rt container_file_t <path/to/backing/directory> 1
    
                [Install]
                WantedBy=multi-user.target
              enabled: true
              name: hostpath-provisioner.service
    1
    Specify the backing directory where you want the provisioner to create PVs.
  3. Create the MachineConfig object:

    $ oc create -f machineconfig.yaml -n <namespace>

7.15.2.3. Using the hostpath provisioner to enable local storage

To deploy the hostpath provisioner and enable your virtual machines to use local storage, first create a HostPathProvisioner custom resource.

Prerequisites

  • Create a backing directory on each node for the PersistentVolumes (PVs) that the hostpath provisioner creates.
  • Apply the SELinux context container_file_t to the PV backing directory on each node. For example:

    $ sudo chcon -t container_file_t -R </path/to/backing/directory>
    Note

    If you use Red Hat Enterprise Linux CoreOS 8 workers, you must configure SELinux by using a MachineConfig manifest instead.

Procedure

  1. Create the HostPathProvisioner custom resource file. For example:

    $ touch hostpathprovisioner_cr.yaml
  2. Edit the file, ensuring that the spec.pathConfig.path value is the directory where you want the hostpath provisioner to create PVs. For example:

    apiVersion: hostpathprovisioner.kubevirt.io/v1alpha1
    kind: HostPathProvisioner
    metadata:
      name: hostpath-provisioner
    spec:
      imagePullPolicy: IfNotPresent
      pathConfig:
        path: "</path/to/backing/directory>" 1
        useNamingPrefix: "false" 2
    1
    Specify the backing directory where you want the provisioner to create PVs.
    2
    Change this value to true if you want to use the name of the PersistentVolumeClaim (PVC) that is bound to the created PV as the prefix of the directory name.
    Note

    If you did not create the backing directory, the provisioner attempts to create it for you. If you did not apply the container_file_t SELinux context, this can cause Permission denied errors.

  3. Create the custom resource in the openshift-cnv namespace:

    $ oc create -f hostpathprovisioner_cr.yaml -n openshift-cnv

7.15.2.4. Creating a StorageClass object

When you create a StorageClass object, you set parameters that affect the dynamic provisioning of PersistentVolumes (PVs) that belong to that storage class.

Note

You cannot update a StorageClass object’s parameters after you create it.

Procedure

  1. Create a YAML file for defining the storage class. For example:

    $ touch storageclass.yaml
  2. Edit the file. For example:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: hostpath-provisioner 1
    provisioner: kubevirt.io/hostpath-provisioner
    reclaimPolicy: Delete 2
    volumeBindingMode: WaitForFirstConsumer 3
    1
    You can optionally rename the storage class by changing this value.
    2
    The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the storage class defaults to Delete.
    3
    The volumeBindingMode value determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a PV until after a Pod that uses the PersistentVolumeClaim (PVC) is created. This ensures that the PV meets the Pod’s scheduling requirements.
Note

Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.

To solve this problem, use the Kubernetes pod scheduler to bind the PVC to a PV on the correct node. By using StorageClass with volumeBindingMode set to WaitForFirstConsumer, the binding and provisioning of the PV is delayed until a Pod is created using the PVC.

  3. Create the StorageClass object:

    $ oc create -f storageclass.yaml


7.15.3. Configuring CDI to work with namespaces that have a compute resource quota

You can use the Containerized Data Importer (CDI) to import, upload, and clone virtual machine disks into namespaces that are subject to CPU and memory resource restrictions.

7.15.3.1. About CPU and memory quotas in a namespace

A resource quota, defined by the ResourceQuota object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace.

The CDIConfig object defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values for the CDIConfig object are set to a default value of 0. This ensures that pods created by CDI that make no compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota.
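
For example, a minimal sketch of a ResourceQuota that restricts compute resources in a namespace (the name, namespace, and values are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: <compute-quota>        # illustrative name
  namespace: <namespace>       # namespace where CDI runs import, upload, or clone pods
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi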

7.15.3.2. Editing the CDIConfig object to override CPU and memory defaults

Modify the default settings for CPU and memory requests and limits for your use case by editing the spec attribute of the CDIConfig object.

Prerequisites

  • Install the OpenShift CLI (oc).

Procedure

  1. Edit the cdiconfig/config by running the following command:

    $ oc edit cdiconfig/config
  2. Change the default CPU and memory requests and limits by editing the spec: podResourceRequirements property of the CDIConfig object:

    apiVersion: cdi.kubevirt.io/v1alpha1
    kind: CDIConfig
    metadata:
      labels:
        app: containerized-data-importer
        cdi.kubevirt.io: ""
      name: config
    spec:
      podResourceRequirements:
        limits:
          cpu: "4"
          memory: "1Gi"
        requests:
          cpu: "1"
          memory: "250Mi"
    ...
  3. Save and exit the editor to update the CDIConfig object.

Verification

  • View the CDIConfig status and verify your changes by running the following command:

    $ oc get cdiconfig config -o yaml

7.15.3.3. Additional resources

7.15.4. Uploading local disk images by using the virtctl tool

You can upload a locally stored disk image to a new or existing DataVolume by using the virtctl command-line utility.

7.15.4.1. Prerequisites

7.15.4.2. About DataVolumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.

7.15.4.3. Creating an upload DataVolume

You can manually create a DataVolume with an upload data source to use for uploading local disk images.

Procedure

  1. Create a DataVolume configuration that specifies spec: source: upload{}:

    apiVersion: cdi.kubevirt.io/v1alpha1
    kind: DataVolume
    metadata:
      name: <upload-datavolume> 1
    spec:
      source:
          upload: {}
      pvc:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: <2Gi> 2
    1
    The name of the DataVolume.
    2
    The size of the DataVolume. Ensure that this value is greater than or equal to the size of the disk that you upload.
  2. Create the DataVolume by running the following command:

    $ oc create -f <upload-datavolume>.yaml

7.15.4.4. Uploading a local disk image to a DataVolume

You can use the virtctl CLI utility to upload a local disk image from a client machine to a DataVolume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure.

Note

After you upload a local disk image, you can add it to a virtual machine.

Prerequisites

  • A virtual machine disk image, in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
  • The kubevirt-virtctl package must be installed on the client machine.
  • The client machine must be configured to trust the OpenShift Container Platform router’s certificate.

Procedure

  1. Identify the following items:

    • The name of the upload DataVolume that you want to use. If this DataVolume does not exist, it is created automatically.
    • The size of the DataVolume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image.
    • The file location of the virtual machine disk image that you want to upload.
  2. Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the previous step. For example:

    $ virtctl image-upload dv <datavolume_name> \ 1
    --size=<datavolume_size> \ 2
    --image-path=</path/to/image> 3
    1
    The name of the DataVolume.
    2
    The size of the DataVolume. For example: --size=500Mi, --size=1G
    3
    The file path of the virtual machine disk image.
    Note
    • If you do not want to create a new DataVolume, omit the --size parameter and include the --no-create flag.
    • When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk.
    • To allow insecure server connections when using HTTPS, use the --insecure parameter. Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified.
  3. Optional. To verify that a DataVolume was created, view all DataVolume objects by running the following command:

    $ oc get dvs

7.15.4.5. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

Content types      HTTP          HTTPS         HTTP basic auth   Registry      Upload

KubeVirt (QCOW2)   ✓ QCOW2       ✓ QCOW2**     ✓ QCOW2           ✓ QCOW2*      ✓ QCOW2*
                   ✓ GZ*         ✓ GZ*         ✓ GZ*             □ GZ          ✓ GZ*
                   ✓ XZ*         ✓ XZ*         ✓ XZ*             □ XZ          ✓ XZ*

KubeVirt (RAW)     ✓ RAW         ✓ RAW         ✓ RAW             ✓ RAW*        ✓ RAW*
                   ✓ GZ          ✓ GZ          ✓ GZ              □ GZ          ✓ GZ*
                   ✓ XZ          ✓ XZ          ✓ XZ              □ XZ          ✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

7.15.5. Uploading a local disk image to a block storage DataVolume

You can upload a local disk image into a block DataVolume by using the virtctl command-line utility.

In this workflow, you create a local block device to use as a PersistentVolume, associate this block volume with an upload DataVolume, and use virtctl to upload the local disk image into the DataVolume.

7.15.5.1. Prerequisites

7.15.5.2. About DataVolumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.

7.15.5.3. About block PersistentVolumes

A block PersistentVolume (PV) is a PV that is backed by a raw block device. These volumes do not have a filesystem and can provide performance benefits for virtual machines by reducing overhead.

Raw block volumes are provisioned by specifying volumeMode: Block in the PV and PersistentVolumeClaim (PVC) specification.
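
For example, a minimal sketch of a PersistentVolumeClaim that requests a raw block volume (the name and storage class are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <block-pvc>              # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block              # request a raw block device instead of a filesystem
  storageClassName: local        # illustrative storage class
  resources:
    requests:
      storage: 2Gi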

7.15.5.4. Creating a local block PersistentVolume

Create a local block PersistentVolume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV configuration as a Block volume and use it as a block device for a virtual machine image.

Procedure

  1. Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples.
  2. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file named loop10 with a size of 2 GB (20 blocks of 100 MB):

    $ dd if=/dev/zero of=<loop10> bs=100M count=20
  3. Mount the loop10 file as a loop device.

    $ losetup </dev/loop10> <loop10> 1 2
    1
    File path where the loop device is mounted.
    2
    The file created in the previous step to be mounted as the loop device.
  4. Create a PersistentVolume configuration that references the mounted loop device.

    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: <local-block-pv10>
      annotations:
    spec:
      local:
        path: </dev/loop10> 1
      capacity:
        storage: <2Gi>
      volumeMode: Block 2
      storageClassName: local 3
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - <node01> 4
    1
    The path of the loop device on the node.
    2
    Specifies it is a block PV.
    3
    Optional: Set a StorageClass for the PV. If you omit it, the cluster default is used.
    4
    The node on which the block device was mounted.
  5. Create the block PV.

    # oc create -f <local-block-pv10.yaml> 1
    1
    The filename of the PersistentVolume created in the previous step.

7.15.5.5. Creating an upload DataVolume

You can manually create a DataVolume with an upload data source to use for uploading local disk images.

Procedure

  1. Create a DataVolume configuration that specifies spec: source: upload{}:

    apiVersion: cdi.kubevirt.io/v1alpha1
    kind: DataVolume
    metadata:
      name: <upload-datavolume> 1
    spec:
      source:
          upload: {}
      pvc:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: <2Gi> 2
    1
    The name of the DataVolume.
    2
    The size of the DataVolume. Ensure that this value is greater than or equal to the size of the disk that you upload.
  2. Create the DataVolume by running the following command:

    $ oc create -f <upload-datavolume>.yaml

7.15.5.6. Uploading a local disk image to a DataVolume

You can use the virtctl CLI utility to upload a local disk image from a client machine to a DataVolume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure.

Note

After you upload a local disk image, you can add it to a virtual machine.

Prerequisites

  • A virtual machine disk image, in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
  • The kubevirt-virtctl package must be installed on the client machine.
  • The client machine must be configured to trust the OpenShift Container Platform router’s certificate.

Procedure

  1. Identify the following items:

    • The name of the upload DataVolume that you want to use. If this DataVolume does not exist, it is created automatically.
    • The size of the DataVolume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image.
    • The file location of the virtual machine disk image that you want to upload.
  2. Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the previous step. For example:

    $ virtctl image-upload dv <datavolume_name> \ 1
    --size=<datavolume_size> \ 2
    --image-path=</path/to/image> 3
    1
    The name of the DataVolume.
    2
    The size of the DataVolume. For example: --size=500Mi, --size=1G
    3
    The file path of the virtual machine disk image.
    Note
    • If you do not want to create a new DataVolume, omit the --size parameter and include the --no-create flag.
    • When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk.
    • To allow insecure server connections when using HTTPS, use the --insecure parameter. Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified.
  3. Optional. To verify that a DataVolume was created, view all DataVolume objects by running the following command:

    $ oc get dvs

7.15.5.7. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

Content types      HTTP          HTTPS         HTTP basic auth   Registry      Upload

KubeVirt (QCOW2)   ✓ QCOW2       ✓ QCOW2**     ✓ QCOW2           ✓ QCOW2*      ✓ QCOW2*
                   ✓ GZ*         ✓ GZ*         ✓ GZ*             □ GZ          ✓ GZ*
                   ✓ XZ*         ✓ XZ*         ✓ XZ*             □ XZ          ✓ XZ*

KubeVirt (RAW)     ✓ RAW         ✓ RAW         ✓ RAW             ✓ RAW*        ✓ RAW*
                   ✓ GZ          ✓ GZ          ✓ GZ              □ GZ          ✓ GZ*
                   ✓ XZ          ✓ XZ          ✓ XZ              □ XZ          ✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

7.15.6. Moving a local virtual machine disk to a different node

Virtual machines that use local volume storage can be moved so that they run on a specific node.

You might want to move the virtual machine to a specific node for the following reasons:

  • The current node has limitations to the local storage configuration.
  • The new node is better optimized for the workload of that virtual machine.

To move a virtual machine that uses local storage, you must clone the underlying volume by using a DataVolume. After the cloning operation is complete, you can edit the virtual machine configuration so that it uses the new DataVolume, or add the new DataVolume to another virtual machine.

Note

Users without the cluster-admin role require additional user permissions in order to clone volumes across namespaces.

7.15.6.1. Cloning a local volume to another node

You can move a virtual machine disk so that it runs on a specific node by cloning the underlying PersistentVolumeClaim (PVC).

To ensure the virtual machine disk is cloned to the correct node, you must either create a new PersistentVolume (PV) or identify one on the correct node. Apply a unique label to the PV so that it can be referenced by the DataVolume.

Note

The destination PV must be the same size or larger than the source PVC. If the destination PV is smaller than the source PVC, the cloning operation fails.

Prerequisites

  • The virtual machine must not be running. Power down the virtual machine before cloning the virtual machine disk.

Procedure

  1. Either create a new local PV on the node, or identify a local PV already on the node:

    • Create a local PV that includes the nodeAffinity.nodeSelectorTerms parameters. The following manifest creates a 10Gi local PV on node01.

      kind: PersistentVolume
      apiVersion: v1
      metadata:
        name: <destination-pv> 1
        annotations:
      spec:
        accessModes:
        - ReadWriteOnce
        capacity:
          storage: 10Gi 2
        local:
          path: /mnt/local-storage/local/disk1 3
        nodeAffinity:
          required:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - node01 4
        persistentVolumeReclaimPolicy: Delete
        storageClassName: local
        volumeMode: Filesystem
      1
      The name of the PV.
      2
      The size of the PV. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC.
      3
      The mount path on the node.
      4
      The name of the node where you want to create the PV.
    • Identify a PV that already exists on the target node. You can identify the node where a PV is provisioned by viewing the nodeAffinity field in its configuration:

      $ oc get pv <destination-pv> -o yaml

      The following snippet shows that the PV is on node01:

      Example output

      ...
      spec:
        nodeAffinity:
          required:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname 1
                operator: In
                values:
                - node01 2
      ...

      1
      The kubernetes.io/hostname key uses the node host name to select a node.
      2
      The host name of the node.
  2. Add a unique label to the PV:

    $ oc label pv <destination-pv> node=node01
  3. Create a DataVolume manifest that references the following:

    • The PVC name and namespace of the virtual machine.
    • The label you applied to the PV in the previous step.
    • The size of the destination PV.

      apiVersion: cdi.kubevirt.io/v1alpha1
      kind: DataVolume
      metadata:
        name: <clone-datavolume> 1
      spec:
        source:
          pvc:
            name: "<source-vm-disk>" 2
            namespace: "<source-namespace>" 3
        pvc:
          accessModes:
            - ReadWriteOnce
          selector:
            matchLabels:
              node: node01 4
          resources:
            requests:
              storage: <10Gi> 5
      1
      The name of the new DataVolume.
      2
      The name of the source PVC. If you do not know the PVC name, you can find it in the virtual machine configuration: spec.volumes.persistentVolumeClaim.claimName.
      3
      The namespace where the source PVC exists.
      4
      The label that you applied to the PV in the previous step.
      5
      The size of the destination PV.
  4. Start the cloning operation by applying the DataVolume manifest to your cluster:

    $ oc apply -f <clone-datavolume.yaml>

The DataVolume clones the PVC of the virtual machine into the PV on the specific node.
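
To use the cloned disk, update the virtual machine configuration so that one of its volumes references the new DataVolume. The following minimal sketch assumes a disk named rootdisk; the disk and volume names are illustrative:

kind: VirtualMachine
...
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk               # illustrative disk name
            disk:
              bus: virtio
      volumes:
      - name: rootdisk
        dataVolume:
          name: <clone-datavolume>       # the DataVolume created by the clone operation
...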

7.15.7. Expanding virtual storage by adding blank disk images

You can increase your storage capacity or create new data partitions by adding blank disk images to OpenShift Virtualization.

7.15.7.1. About DataVolumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.

7.15.7.2. Creating a blank disk image with DataVolumes

You can create a new blank disk image in a PersistentVolumeClaim by customizing and deploying a DataVolume configuration file.

Prerequisites

  • At least one available PersistentVolume.
  • Install the OpenShift CLI (oc).

Procedure

  1. Edit the DataVolume configuration file:

    apiVersion: cdi.kubevirt.io/v1alpha1
    kind: DataVolume
    metadata:
      name: blank-image-datavolume
    spec:
      source:
          blank: {}
      pvc:
        # Optional: Set the storage class or omit to accept the default
        # storageClassName: "hostpath"
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi
  2. Create the blank disk image by running the following command:

    $ oc create -f <blank-image-datavolume>.yaml

7.15.7.3. Template: DataVolume configuration file for blank disk images

blank-image-datavolume.yaml

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: blank-image-datavolume
spec:
  source:
      blank: {}
  pvc:
    # Optional: Set the storage class or omit to accept the default
    # storageClassName: "hostpath"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 500Mi

7.15.8. Storage defaults for DataVolumes

The kubevirt-storage-class-defaults ConfigMap provides access mode and volume mode defaults for DataVolumes. You can edit or add storage class defaults to the ConfigMap in order to create DataVolumes in the web console that better match the underlying storage.

7.15.8.1. About storage settings for DataVolumes

DataVolumes require a defined access mode and volume mode to be created in the web console. These storage settings are configured by default with a ReadWriteOnce access mode and Filesystem volume mode.

You can modify these settings by editing the kubevirt-storage-class-defaults ConfigMap in the openshift-cnv namespace. You can also add settings for other storage classes in order to create DataVolumes in the web console for different storage types.

Note

You must configure storage settings that are supported by the underlying storage.

All DataVolumes that you create in the web console use the default storage settings unless you specify a storage class that is also defined in the ConfigMap.

7.15.8.1.1. Access modes

DataVolumes support the following access modes:

  • ReadWriteOnce: The volume can be mounted as read-write by a single node. ReadWriteOnce has greater versatility and is the default setting.
  • ReadWriteMany: The volume can be mounted as read-write by many nodes. ReadWriteMany is required for some features, such as live migration of virtual machines between nodes.

ReadWriteMany is recommended if the underlying storage supports it.

7.15.8.1.2. Volume modes

The volume mode defines if a volume is intended to be used with a formatted filesystem or to remain in raw block state. DataVolumes support the following volume modes:

  • Filesystem: Creates a filesystem on the DataVolume. This is the default setting.
  • Block: Creates a block DataVolume. Only use Block if the underlying storage supports it.

7.15.8.2. Editing the kubevirt-storage-class-defaults config map in the web console

Modify the storage settings for DataVolumes by editing the kubevirt-storage-class-defaults ConfigMap in the openshift-cnv namespace. You can also add settings for other storage classes in order to create DataVolumes in the web console for different storage types.

Note

You must configure storage settings that are supported by the underlying storage.

Procedure

  1. Click WorkloadsConfig Maps from the side menu.
  2. In the Project list, select openshift-cnv.
  3. Click kubevirt-storage-class-defaults to open the Config Map Overview.
  4. Click the YAML tab to display the editable configuration.
  5. Update the data values with the storage configuration that is appropriate for your underlying storage:

    ...
    data:
      accessMode: ReadWriteOnce 1
      volumeMode: Filesystem 2
      <new>.accessMode: ReadWriteMany 3
      <new>.volumeMode: Block 4
    1
    The default accessMode is ReadWriteOnce.
    2
    The default volumeMode is Filesystem.
    3
    If you add an access mode for a storage class, replace the <new> part of the parameter with the storage class name.
    4
    If you add a volume mode for a storage class, replace the <new> part of the parameter with the storage class name.
  6. Click Save to update the config map.

7.15.8.3. Editing the kubevirt-storage-class-defaults config map in the CLI

Modify the storage settings for DataVolumes by editing the kubevirt-storage-class-defaults ConfigMap in the openshift-cnv namespace. You can also add settings for other storage classes in order to create DataVolumes in the web console for different storage types.

Note

You must configure storage settings that are supported by the underlying storage.

Procedure

  1. Edit the ConfigMap by running the following command:

    $ oc edit configmap kubevirt-storage-class-defaults -n openshift-cnv
  2. Update the data values of the ConfigMap:

    ...
    data:
      accessMode: ReadWriteOnce 1
      volumeMode: Filesystem 2
      <new>.accessMode: ReadWriteMany 3
      <new>.volumeMode: Block 4
    1
    The default accessMode is ReadWriteOnce.
    2
    The default volumeMode is Filesystem.
    3
    If you add an access mode for storage class, replace the <new> part of the parameter with the storage class name.
    4
    If you add a volume mode for a storage class, replace the <new> part of the parameter with the storage class name.
  3. Save and exit the editor to update the config map.

7.15.8.4. Example of multiple storage class defaults

The following YAML file is an example of a kubevirt-storage-class-defaults ConfigMap that has storage settings configured for two storage classes, nfs-sc and block-sc.

Ensure that all settings are supported by your underlying storage before you update the ConfigMap.

kind: ConfigMap
apiVersion: v1
metadata:
  name: kubevirt-storage-class-defaults
  namespace: openshift-cnv
...
data:
  accessMode: ReadWriteOnce
  volumeMode: Filesystem
  nfs-sc.accessMode: ReadWriteMany
  nfs-sc.volumeMode: Filesystem
  block-sc.accessMode: ReadWriteMany
  block-sc.volumeMode: Block
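
If you edit the ConfigMap from the CLI instead of the web console, you can verify the resulting settings with a command such as the following:

$ oc get configmap kubevirt-storage-class-defaults -n openshift-cnv -o yaml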

7.15.9. Using container disks with virtual machines

You can build a virtual machine image into a container disk and store it in your container registry. You can then import the container disk into persistent storage for a virtual machine or attach it directly to the virtual machine for ephemeral storage.

7.15.9.1. About container disks

A container disk is a virtual machine image that is stored as a container image in a container image registry. You can use container disks to deliver the same disk images to multiple virtual machines and to create large numbers of virtual machine clones.

A container disk can either be imported into a persistent volume claim (PVC) by using a DataVolume that is attached to a virtual machine, or attached directly to a virtual machine as an ephemeral containerDisk volume.

7.15.9.1.1. Importing a container disk into a PVC by using a DataVolume

Use the Containerized Data Importer (CDI) to import the container disk into a PVC by using a DataVolume. You can then attach the DataVolume to a virtual machine for persistent storage.
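
For illustration, a minimal DataVolume sketch that imports a container disk from a registry into a PVC; the name, image, and size are placeholders:

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: containerdisk-import-dv            # placeholder name
spec:
  source:
    registry:
      url: "docker://kubevirt/cirros-registry-disk-demo"   # container disk image; registry sources use the docker:// prefix
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi                        # placeholder size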

7.15.9.1.2. Attaching a container disk to a virtual machine as a containerDisk volume

A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. When a virtual machine with a containerDisk volume starts, the container image is pulled from the registry and hosted on the node that is hosting the virtual machine.

Use containerDisk volumes for read-only filesystems such as CD-ROMs or for disposable virtual machines.

Important

Using containerDisk volumes for read-write filesystems is not recommended because the data is temporarily written to local storage on the hosting node. This slows live migration of the virtual machine, such as in the case of node maintenance, because the data must be migrated to the destination node. Additionally, all data is lost if the node loses power or otherwise shuts down unexpectedly.
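
For reference, a minimal sketch of how a virtual machine declares an ephemeral container disk; the disk name and image are illustrative and must be adapted to your registry:

# Fragment of a VirtualMachine spec (illustrative values)
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - name: containerdisk            # disk presented to the guest
            disk:
              bus: virtio
      volumes:
      - name: containerdisk                # must match the disk name above
        containerDisk:
          image: kubevirt/cirros-container-disk-demo   # container disk image pulled from the registry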

7.15.9.2. Preparing a container disk for virtual machines

You must build a container disk with a virtual machine image and push it to a container registry before it can be used with a virtual machine. You can then either import the container disk into a PVC by using a DataVolume and attach it to a virtual machine, or attach the container disk directly to a virtual machine as an ephemeral containerDisk volume.

Prerequisites

  • Install podman if it is not already installed.
  • The virtual machine image must be in either QCOW2 or RAW format.

Procedure

  1. Create a Dockerfile to build the virtual machine image into a container image. The virtual machine image must be owned by QEMU, which has a UID of 107, and placed in the /disk/ directory inside the container. Permissions for the files in the /disk/ directory must then be set to 0440.

    The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal scratch image in the second stage to store the result:

    $ cat > Dockerfile << EOF
    FROM registry.access.redhat.com/ubi8/ubi:latest AS builder
    ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1
    RUN chmod 0440 /disk/*
    
    FROM scratch
    COPY --from=builder /disk/* /disk/
    EOF
    1
    Where <vm_image> is the virtual machine image in either QCOW2 or RAW format.
    To use a remote virtual machine image, replace <vm_image>.qcow2 with the complete URL of the remote image.
  2. Build and tag the container:

    $ podman build -t <registry>/<container_disk_name>:latest .
  3. Push the container image to the registry:

    $ podman push <registry>/<container_disk_name>:latest

If your container registry does not have TLS, you must add it as an insecure registry before you can import container disks into persistent storage.

7.15.9.3. Disabling TLS for a container registry to use as insecure registry

You can disable TLS (Transport Layer Security) for a container registry by adding the registry to the cdi-insecure-registries ConfigMap.

Prerequisites

  • Log in to the cluster as a user with the cluster-admin role.

Procedure

  • Add the registry to the cdi-insecure-registries ConfigMap in the cdi namespace.

    $ oc patch configmap cdi-insecure-registries -n cdi \
      --type merge -p '{"data":{"mykey": "<insecure-registry-host>:5000"}}' 1
    1
    Replace <insecure-registry-host> with the registry hostname.
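
To confirm that the registry was added, you can inspect the ConfigMap:

$ oc get configmap cdi-insecure-registries -n cdi -o yaml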

7.15.9.4. Next steps

7.15.10. Preparing CDI scratch space

7.15.10.1. About DataVolumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.

7.15.10.2. Understanding scratch space

The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images. During this process, the CDI provisions a scratch space PVC equal to the size of the PVC backing the destination DataVolume (DV). The scratch space PVC is deleted after the operation completes or aborts.

The CDIConfig object allows you to define which StorageClass is used to bind the scratch space PVC by setting scratchSpaceStorageClass in its spec section.

If the defined StorageClass does not match a StorageClass in the cluster, then the default StorageClass defined for the cluster is used. If there is no default StorageClass defined in the cluster, the StorageClass used to provision the original DV or PVC is used.

Note

The CDI requests scratch space with the Filesystem volume mode, regardless of the volume mode of the PVC that backs the origin DataVolume. If the origin PVC uses the Block volume mode, you must define a StorageClass that can provision Filesystem volume mode PVCs.

Manual provisioning

If there are no storage classes, the CDI uses any PVCs in the project that match the size requirements for the image. If no PVCs match these requirements, the CDI import pod remains in a Pending state until an appropriate PVC is made available or until a timeout function kills the pod.

7.15.10.3. CDI operations that require scratch space

  • Registry imports: The CDI must download the image to a scratch space and extract the layers to find the image file. The image file is then passed to QEMU-IMG for conversion to a raw disk.
  • Upload image: QEMU-IMG does not accept input from STDIN. Instead, the image to upload is saved in scratch space before it can be passed to QEMU-IMG for conversion.
  • HTTP imports of archived images: QEMU-IMG does not know how to handle the archive formats CDI supports. Instead, the image is unarchived and saved into scratch space before it is passed to QEMU-IMG.
  • HTTP imports of authenticated images: QEMU-IMG inadequately handles authentication. Instead, the image is saved to scratch space and authenticated before it is passed to QEMU-IMG.
  • HTTP imports of custom certificates: QEMU-IMG inadequately handles custom certificates of HTTPS endpoints. Instead, the CDI downloads the image to scratch space before passing the file to QEMU-IMG.

7.15.10.4. Defining a StorageClass in the CDI configuration

Define a StorageClass in the CDI configuration to dynamically provision scratch space for CDI operations.

Procedure

  • Use the oc client to edit cdiconfig/config and set spec.scratchSpaceStorageClass to a StorageClass that exists in the cluster.

    $ oc edit cdiconfig/config
    apiVersion: cdi.kubevirt.io/v1alpha1
    kind: CDIConfig
    metadata:
      name: config
    ...
    spec:
      scratchSpaceStorageClass: "<storage_class>"
    ...
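
If you prefer a non-interactive change, a merge patch along the following lines should have the same effect; replace <storage_class> with a StorageClass that exists in your cluster:

$ oc patch cdiconfig/config --type merge \
  -p '{"spec":{"scratchSpaceStorageClass":"<storage_class>"}}'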

7.15.10.5. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations require scratch space.

KubeVirt (QCOW2)

  • HTTP: ✓ QCOW2, ✓ GZ*, ✓ XZ*
  • HTTPS: ✓ QCOW2**, ✓ GZ*, ✓ XZ*
  • HTTP basic auth: ✓ QCOW2, ✓ GZ*, ✓ XZ*
  • Registry: ✓ QCOW2*, □ GZ, □ XZ
  • Upload: ✓ QCOW2*, ✓ GZ*, ✓ XZ*

KubeVirt (RAW)

  • HTTP: ✓ RAW, ✓ GZ, ✓ XZ
  • HTTPS: ✓ RAW, ✓ GZ, ✓ XZ
  • HTTP basic auth: ✓ RAW, ✓ GZ, ✓ XZ
  • Registry: ✓ RAW*, □ GZ, □ XZ
  • Upload: ✓ RAW*, ✓ GZ*, ✓ XZ*

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

Additional resources

  • See the Dynamic provisioning section for more information on StorageClasses and how these are defined in the cluster.

7.15.11. Re-using persistent volumes

In order to re-use a statically provisioned persistent volume (PV), you must first reclaim the volume. This involves deleting the PV so that the storage configuration can be re-used.

7.15.11.1. About reclaiming statically provisioned persistent volumes

When you reclaim a persistent volume (PV), you unbind the PV from a persistent volume claim (PVC) and delete the PV. Depending on the underlying storage, you might need to manually delete the shared storage.

You can then re-use the PV configuration to create a PV with a different name.

Statically provisioned PVs must have a reclaim policy of Retain to be reclaimed. Otherwise, the PV enters a Failed state when the PVC is unbound from the PV.

Important

The Recycle reclaim policy is deprecated in OpenShift Container Platform 4.

7.15.11.2. Reclaiming statically provisioned persistent volumes

Reclaim a statically provisioned persistent volume (PV) by unbinding the persistent volume claim (PVC) and deleting the PV. You might also need to manually delete the shared storage.

Reclaiming a statically provisioned PV is dependent on the underlying storage. This procedure provides a general approach that might need to be customized depending on your storage.

Procedure

  1. Ensure that the reclaim policy of the PV is set to Retain:

    1. Check the reclaim policy of the PV:

      $ oc get pv <pv_name> -o yaml | grep 'persistentVolumeReclaimPolicy'
    2. If the persistentVolumeReclaimPolicy is not set to Retain, edit the reclaim policy with the following command:

      $ oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
  2. Ensure that no resources are using the PV:

    $ oc describe pvc <pvc_name> | grep 'Mounted By:'

    Remove any resources that use the PVC before continuing.

  3. Delete the PVC to release the PV:

    $ oc delete pvc <pvc_name>
  4. Optional: Export the PV configuration to a YAML file. If you manually remove the shared storage later in this procedure, you can refer to this configuration. You can also use spec parameters in this file as the basis to create a new PV with the same storage configuration after you reclaim the PV:

    $ oc get pv <pv_name> -o yaml > <file_name>.yaml
  5. Delete the PV:

    $ oc delete pv <pv_name>
  6. Optional: Depending on the storage type, you might need to remove the contents of the shared storage folder:

    $ rm -rf <path_to_shared_storage>
  7. Optional: Create a PV that uses the same storage configuration as the deleted PV. If you exported the reclaimed PV configuration earlier, you can use the spec parameters of that file as the basis for a new PV manifest:

    Note

    To avoid possible conflict, it is good practice to give the new PV object a different name than the one that you deleted.

    $ oc create -f <new_pv_name>.yaml
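
For reference, a minimal sketch of such a replacement PV manifest, assuming NFS-backed shared storage; all values are placeholders that you adjust to match your exported configuration:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: <new_pv_name>
spec:
  capacity:
    storage: 10Gi                        # placeholder size
  accessModes:
    - ReadWriteMany                      # depends on the underlying storage
  persistentVolumeReclaimPolicy: Retain
  nfs:                                   # example backend; copy the backend spec from your exported YAML
    server: <nfs_server>
    path: <path_to_shared_storage>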

Additional resources

7.15.12. Deleting DataVolumes

You can manually delete a DataVolume by using the oc command-line interface.

Note

When you delete a virtual machine, the DataVolume it uses is automatically deleted.

7.15.12.1. About DataVolumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.

7.15.12.2. Listing all DataVolumes

You can list the DataVolumes in your cluster by using the oc command-line interface.

Procedure

  • List all DataVolumes by running the following command:

    $ oc get dvs

7.15.12.3. Deleting a DataVolume

You can delete a DataVolume by using the oc command-line interface (CLI).

Prerequisites

  • Identify the name of the DataVolume that you want to delete.

Procedure

  • Delete the DataVolume by running the following command:

    $ oc delete dv <datavolume_name>
    Note

    This command only deletes objects that exist in the current project. Specify the -n <project_name> option if the object you want to delete is in a different project or namespace.
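
    For example, to delete a DataVolume in a different project (both names are placeholders):

    $ oc delete dv <datavolume_name> -n <project_name>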