Chapter 14. Logging, events, and monitoring

14.1. Virtualization Overview page

The Virtualization Overview page provides a comprehensive view of virtualization resources, details, status, and top consumers:

  • The Overview tab displays Getting started resources, details, inventory, alerts, and other information about your OpenShift Virtualization environment.
  • The Top consumers tab displays high utilization of a specific resource by projects, virtual machines, or nodes.
  • The Migrations tab displays the status of live migrations.
  • The Settings tab displays cluster-wide settings, including live migration settings and user permissions.

By gaining insight into the overall health of OpenShift Virtualization, you can determine whether intervention is required to resolve specific issues that you identify by examining the data.

14.1.1. Reviewing top consumers

You can view the top consumers of resources for a selected project, virtual machine, or node on the Top consumers tab of the Virtualization Overview page.

Prerequisites

  • You must have access to the cluster as a user with the cluster-admin role.
  • To use the vCPU wait metric on the Top consumers tab, you must apply the schedstats=enable kernel argument to the MachineConfig object.
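
If the schedstats=enable argument is not yet applied, the following minimal MachineConfig sketch shows one way to add it as a kernel argument. The object name and the worker role label are assumptions; adjust them for your cluster, and note that applying a MachineConfig triggers a rolling reboot of the affected nodes.

# Sketch: adds the schedstats=enable kernel argument to worker nodes.
# The object name and role label are assumptions; adjust for your cluster.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-enable-schedstats
spec:
  kernelArguments:
    - schedstats=enable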

Procedure

  1. In the Administrator perspective in the OpenShift Container Platform web console, navigate to Virtualization → Overview.
  2. Click the Top consumers tab.
  3. Optional: You can filter the results by selecting a time period or by selecting the 5 or 10 top consumers.

14.1.2. Additional resources

14.2. Viewing OpenShift Virtualization logs

You can view logs for OpenShift Virtualization components and virtual machines by using the web console or the oc CLI. You can retrieve virtual machine logs from the virt-launcher pod. To control log verbosity, edit the HyperConverged custom resource.

14.2.1. Viewing OpenShift Virtualization logs with the CLI

Configure log verbosity for OpenShift Virtualization components by editing the HyperConverged custom resource (CR). Then, view logs for the component pods by using the oc CLI tool.

Procedure

  1. To set log verbosity for specific components, open the HyperConverged CR in your default text editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
  2. Set the log level for one or more components by editing the spec.logVerbosityConfig stanza. For example:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      logVerbosityConfig:
        kubevirt:
          virtAPI: 5 1
          virtController: 4
          virtHandler: 3
          virtLauncher: 2
          virtOperator: 6
    1
    The log verbosity value must be an integer in the range 1–9, where a higher number indicates a more detailed log. In this example, the virtAPI component logs are exposed if their priority level is 5 or higher.
  3. Apply your changes by saving and exiting the editor.
  4. View a list of pods in the OpenShift Virtualization namespace by running the following command:

    $ oc get pods -n openshift-cnv

    Example 14.1. Example output

    NAME                               READY   STATUS    RESTARTS   AGE
    disks-images-provider-7gqbc        1/1     Running   0          32m
    disks-images-provider-vg4kx        1/1     Running   0          32m
    virt-api-57fcc4497b-7qfmc          1/1     Running   0          31m
    virt-api-57fcc4497b-tx9nc          1/1     Running   0          31m
    virt-controller-76c784655f-7fp6m   1/1     Running   0          30m
    virt-controller-76c784655f-f4pbd   1/1     Running   0          30m
    virt-handler-2m86x                 1/1     Running   0          30m
    virt-handler-9qs6z                 1/1     Running   0          30m
    virt-operator-7ccfdbf65f-q5snk     1/1     Running   0          32m
    virt-operator-7ccfdbf65f-vllz8     1/1     Running   0          32m
  5. To view logs for a component pod, run the following command:

    $ oc logs -n openshift-cnv <pod_name>

    For example:

    $ oc logs -n openshift-cnv virt-handler-2m86x
    Note

    If a pod fails to start, you can use the --previous option to view logs from the last attempt.

    To monitor log output in real time, use the -f option.
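
    For example, both options can be combined with the pod names shown above; these commands are illustrative:

    $ oc logs -n openshift-cnv --previous virt-handler-2m86x
    $ oc logs -n openshift-cnv -f virt-handler-2m86x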

    Example 14.2. Example output

    {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-04-17T08:58:37.373695Z"}
    {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-04-17T08:58:37.373726Z"}
    {"component":"virt-handler","level":"info","msg":"setting rate limiter to 5 QPS and 10 Burst","pos":"virt-handler.go:462","timestamp":"2022-04-17T08:58:37.373782Z"}
    {"component":"virt-handler","level":"info","msg":"CPU features of a minimum baseline CPU model: map[apic:true clflush:true cmov:true cx16:true cx8:true de:true fpu:true fxsr:true lahf_lm:true lm:true mca:true mce:true mmx:true msr:true mtrr:true nx:true pae:true pat:true pge:true pni:true pse:true pse36:true sep:true sse:true sse2:true sse4.1:true ssse3:true syscall:true tsc:true]","pos":"cpu_plugin.go:96","timestamp":"2022-04-17T08:58:37.390221Z"}
    {"component":"virt-handler","level":"warning","msg":"host model mode is expected to contain only one model","pos":"cpu_plugin.go:103","timestamp":"2022-04-17T08:58:37.390263Z"}
    {"component":"virt-handler","level":"info","msg":"node-labeller is running","pos":"node_labeller.go:94","timestamp":"2022-04-17T08:58:37.391011Z"}

14.2.2. Viewing virtual machine logs in the web console

Get virtual machine logs from the associated virtual machine launcher pod.

Procedure

  1. In the OpenShift Container Platform console, click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Details tab.
  4. Click the virt-launcher-<name> pod in the Pod section to open the Pod details page.
  5. Click the Logs tab to view the pod logs.

14.2.3. Common error messages

The following error messages might appear in OpenShift Virtualization logs:

ErrImagePull or ImagePullBackOff
Indicates an incorrect deployment configuration or problems with the images that are referenced.
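
To see the underlying pull error, you can inspect the Events section of the affected pod; the pod name is a placeholder:

$ oc describe pod <pod_name> -n openshift-cnv

The Events output typically identifies the cause, such as a missing image tag or a registry authentication failure.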

14.3. Viewing events

14.3.1. About virtual machine events

OpenShift Container Platform events are records of important life-cycle information in a namespace and are useful for monitoring and troubleshooting resource scheduling, creation, and deletion issues.

OpenShift Virtualization adds events for virtual machines and virtual machine instances. These can be viewed from either the web console or the CLI.

See also: Viewing system event information in an OpenShift Container Platform cluster.

14.3.2. Viewing the events for a virtual machine in the web console

You can view streaming events for a running virtual machine on the VirtualMachine details page of the web console.

Procedure

  1. Click Virtualization → VirtualMachines from the side menu.
  2. Select a virtual machine to open the VirtualMachine details page.
  3. Click the Events tab to view streaming events for the virtual machine.

    • The ▮▮ button pauses the events stream.
    • The ▶ button resumes a paused events stream.

14.3.3. Viewing namespace events in the CLI

Use the OpenShift Container Platform client to get the events for a namespace.

Procedure

  • In the namespace, use the oc get command:

    $ oc get events
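
    For example, to sort the events by timestamp or to watch them as they stream, you can use standard oc get options; the namespace flag is shown only for clarity:

    $ oc get events -n <namespace> --sort-by='.lastTimestamp'
    $ oc get events -n <namespace> -w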

14.3.4. Viewing resource events in the CLI

Events are included in the resource description, which you can get using the OpenShift Container Platform client.

Procedure

  • In the namespace, use the oc describe command. The following example shows how to get the events for a virtual machine, a virtual machine instance, and the virt-launcher pod for a virtual machine:

    $ oc describe vm <vm>
    $ oc describe vmi <vmi>
    $ oc describe pod virt-launcher-<name>

14.4. Monitoring live migration

You can monitor the progress of live migration from either the web console or the CLI.

14.4.1. Monitoring live migration by using the web console

You can monitor the progress of all live migrations on the Overview → Migrations tab in the web console.

You can view the migration metrics of a virtual machine on the VirtualMachine details → Metrics tab in the web console.

14.4.2. Monitoring live migration of a virtual machine instance in the CLI

The status of the virtual machine migration is stored in the Status component of the VirtualMachineInstance configuration.

Procedure

  • Use the oc describe command on the migrating virtual machine instance:

    $ oc describe vmi vmi-fedora

    Example output

    ...
    Status:
      Conditions:
        Last Probe Time:       <nil>
        Last Transition Time:  <nil>
        Status:                True
        Type:                  LiveMigratable
      Migration Method:  LiveMigration
      Migration State:
        Completed:                    true
        End Timestamp:                2018-12-24T06:19:42Z
        Migration UID:                d78c8962-0743-11e9-a540-fa163e0c69f1
        Source Node:                  node2.example.com
        Start Timestamp:              2018-12-24T06:19:35Z
        Target Node:                  node1.example.com
        Target Node Address:          10.9.0.18:43891
        Target Node Domain Detected:  true

14.4.3. Metrics

You can use Prometheus queries to monitor live migration.

14.4.3.1. Live migration metrics

The following metrics can be queried to show live migration status:

kubevirt_migrate_vmi_data_processed_bytes
The amount of guest operating system (OS) data that has migrated to the new virtual machine (VM). Type: Gauge.
kubevirt_migrate_vmi_data_remaining_bytes
The amount of guest OS data that remains to be migrated. Type: Gauge.
kubevirt_migrate_vmi_dirty_memory_rate_bytes
The rate at which memory is becoming dirty in the guest OS. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
kubevirt_migrate_vmi_pending_count
The number of pending migrations. Type: Gauge.
kubevirt_migrate_vmi_scheduling_count
The number of scheduling migrations. Type: Gauge.
kubevirt_migrate_vmi_running_count
The number of running migrations. Type: Gauge.
kubevirt_migrate_vmi_succeeded
The number of successfully completed migrations. Type: Gauge.
kubevirt_migrate_vmi_failed
The number of failed migrations. Type: Gauge.
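
For example, the following PromQL queries are a minimal sketch for watching migration activity; the exact label set on these metrics can vary between versions:

sum(kubevirt_migrate_vmi_running_count)
kubevirt_migrate_vmi_data_remaining_bytes > 0

The first query returns the number of migrations currently running, and the second lists in-flight migrations that still have guest data to transfer.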

14.5. Diagnosing data volumes using events and conditions

Use the oc describe command to analyze and help resolve issues with data volumes.

14.5.1. About conditions and events

Diagnose data volume issues by examining the output of the Conditions and Events sections generated by the command:

$ oc describe dv <DataVolume>

The Conditions section displays three Types:

  • Bound
  • Running
  • Ready

The Events section provides the following additional information:

  • Type of event
  • Reason for logging
  • Source of the event
  • Message containing additional diagnostic information.

The output from oc describe does not always contain Events.

An event is generated when either Status, Reason, or Message changes. Both conditions and events react to changes in the state of the data volume.

For example, if you misspell the URL during an import operation, the import generates a 404 message. That message change generates an event with a reason. The output in the Conditions section is updated as well.
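
To review only the conditions without the full describe output, a jsonpath query such as the following can be useful; the data volume name is a placeholder:

$ oc get dv <datavolume_name> -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}'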

14.5.2. Analyzing data volumes using conditions and events

By inspecting the Conditions and Events sections generated by the describe command, you can determine the state of the data volume in relation to persistent volume claims (PVCs), and whether an operation is actively running or has completed. You might also receive messages that offer specific details about the status of the data volume, and how it came to be in its current state.

There are many different combinations of conditions. Each must be evaluated in its unique context.

Examples of various combinations follow.

  • Bound – A successfully bound PVC displays in this example.

    Note that the Type is Bound, so the Status is True. If the PVC is not bound, the Status is False.

    When the PVC is bound, an event is generated stating that the PVC is bound. In this case, the Reason is Bound and Status is True. The Message indicates which PVC owns the data volume.

    Message, in the Events section, provides further details including how long the PVC has been bound (Age) and by what resource (From), in this case datavolume-controller:

    Example output

    Status:
    	Conditions:
    		Last Heart Beat Time:  2020-07-15T03:58:24Z
    		Last Transition Time:  2020-07-15T03:58:24Z
    		Message:               PVC win10-rootdisk Bound
    		Reason:                Bound
    		Status:                True
    		Type:                  Bound
    
    	Events:
    		Type     Reason     Age    From                   Message
    		----     ------     ----   ----                   -------
    		Normal   Bound      24s    datavolume-controller  PVC win10-rootdisk Bound

  • Running – In this case, note that Type is Running and Status is False, indicating that an event has occurred that caused an attempted operation to fail, changing the Status from True to False.

    However, note that Reason is Completed and the Message field indicates Import Complete.

    In the Events section, the Reason and Message contain additional troubleshooting information about the failed operation. In this example, the Message displays an inability to connect due to a 404, listed in the Events section’s first Warning.

    From this information, you conclude that an import operation was running, creating contention for other operations that are attempting to access the data volume:

    Example output

    Status:
    	 Conditions:
    		 Last Heart Beat Time:  2020-07-15T04:31:39Z
    		 Last Transition Time:  2020-07-15T04:31:39Z
    		 Message:               Import Complete
    		 Reason:                Completed
    		 Status:                False
    		 Type:                  Running
    
    	Events:
    		Type     Reason           Age                From                   Message
    		----     ------           ----               ----                   -------
    		Warning  Error            12s (x2 over 14s)  datavolume-controller  Unable to connect
    		to http data source: expected status code 200, got 404. Status: 404 Not Found

  • Ready – If Type is Ready and Status is True, then the data volume is ready to be used, as in the following example. If the data volume is not ready to be used, the Status is False:

    Example output

    Status:
    	 Conditions:
    		 Last Heart Beat Time: 2020-07-15T04:31:39Z
    		 Last Transition Time:  2020-07-15T04:31:39Z
    		 Status:                True
    		 Type:                  Ready

14.6. Viewing information about virtual machine workloads

You can view high-level information about your virtual machines by using the Virtual Machines dashboard in the OpenShift Container Platform web console.

14.6.1. The Virtual Machines dashboard

Access virtual machines (VMs) from the OpenShift Container Platform web console by navigating to the Virtualization → VirtualMachines page and clicking a VM to view the VirtualMachine details page.

The Overview tab displays the following cards:

  • Details provides identifying information about the virtual machine, including:

    • Name
    • Status
    • Date of creation
    • Operating system
    • CPU and memory
    • Hostname
    • Template

    If the VM is running, there is an active VNC preview window and a link to open the VNC web console. The Options menu kebab on the Details card provides options to stop or pause the VM, and to copy the ssh over nodeport command for SSH tunneling.

  • Alerts lists VM alerts with three severity levels:

    • Critical
    • Warning
    • Info
  • Snapshots provides information about VM snapshots and the ability to take a snapshot. For each snapshot listed, the Snapshots card includes:

    • A visual indicator of the status of the snapshot, showing whether it was successfully created, is still in progress, or has failed.
    • An Options menu kebab with options to restore or delete the snapshot
  • Network interfaces provides information about the network interfaces of the VM, including:

    • Name (Network and Type)
    • IP address, with the ability to copy the IP address to the clipboard
  • Disks lists VM disks details, including:

    • Name
    • Drive
    • Size
  • Utilization includes charts that display usage data for:

    • CPU
    • Memory
    • Storage
    • Network transfer
    Note

    Use the drop-down list to choose a duration for the utilization data. The available options are 5 minutes, 1 hour, 6 hours, and 24 hours.

  • Hardware Devices provides information about GPU and host devices, including:

    • Resource name
    • Hardware device name

14.7. Monitoring virtual machine health

A virtual machine instance (VMI) can become unhealthy due to transient issues such as connectivity loss, deadlocks, or problems with external dependencies. A health check periodically performs diagnostics on a VMI by using any combination of the readiness and liveness probes.

14.7.1. About readiness and liveness probes

Use readiness and liveness probes to detect and handle unhealthy virtual machine instances (VMIs). You can include one or more probes in the specification of the VMI to ensure that traffic does not reach a VMI that is not ready for it and that a new instance is created when a VMI becomes unresponsive.

A readiness probe determines whether a VMI is ready to accept service requests. If the probe fails, the VMI is removed from the list of available endpoints until the VMI is ready.

A liveness probe determines whether a VMI is responsive. If the probe fails, the VMI is deleted and a new instance is created to restore responsiveness.

You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachineInstance object. These fields support the following tests:

HTTP GET
The probe determines the health of the VMI by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized.
TCP socket
The probe attempts to open a socket to the VMI. The VMI is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete.
Guest agent ping
The probe uses the guest-ping command to determine if the QEMU guest agent is running on the virtual machine.

14.7.2. Defining an HTTP readiness probe

Define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the virtual machine instance (VMI) configuration.

Procedure

  1. Include details of the readiness probe in the VMI configuration file.

    Sample readiness probe with an HTTP GET test

    # ...
    spec:
      readinessProbe:
        httpGet: 1
          port: 1500 2
          path: /healthz 3
          httpHeaders:
          - name: Custom-Header
            value: Awesome
        initialDelaySeconds: 120 4
        periodSeconds: 20 5
        timeoutSeconds: 10 6
        failureThreshold: 3 7
        successThreshold: 3 8
    # ...

    1
    The HTTP GET request to perform to connect to the VMI.
    2
    The port of the VMI that the probe queries. In the above example, the probe queries port 1500.
    3
    The path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VMI is considered to be healthy. If the handler returns a failure code, the VMI is removed from the list of available endpoints.
    4
    The time, in seconds, after the VMI starts before the readiness probe is initiated.
    5
    The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
    6
    The number of seconds of inactivity after which the probe times out and the VMI is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
    7
    The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready.
    8
    The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
  2. Create the VMI by running the following command:

    $ oc create -f <file_name>.yaml
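
    To confirm that the probe eventually reports the VMI as ready, you can wait on its Ready condition; the VMI name and timeout are placeholders:

    $ oc wait vmi <vmi_name> --for=condition=Ready --timeout=5m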

14.7.3. Defining a TCP readiness probe

Define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the virtual machine instance (VMI) configuration.

Procedure

  1. Include details of the TCP readiness probe in the VMI configuration file.

    Sample readiness probe with a TCP socket test

    ...
    spec:
      readinessProbe:
        initialDelaySeconds: 120 1
        periodSeconds: 20 2
        tcpSocket: 3
          port: 1500 4
        timeoutSeconds: 10 5
    ...

    1
    The time, in seconds, after the VMI starts before the readiness probe is initiated.
    2
    The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
    3
    The TCP action to perform.
    4
    The port of the VMI that the probe queries.
    5
    The number of seconds of inactivity after which the probe times out and the VMI is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
  2. Create the VMI by running the following command:

    $ oc create -f <file_name>.yaml

14.7.4. Defining an HTTP liveness probe

Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet field of the virtual machine instance (VMI) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test.

Procedure

  1. Include details of the HTTP liveness probe in the VMI configuration file.

    Sample liveness probe with an HTTP GET test

    # ...
    spec:
      livenessProbe:
        initialDelaySeconds: 120 1
        periodSeconds: 20 2
        httpGet: 3
          port: 1500 4
          path: /healthz 5
          httpHeaders:
          - name: Custom-Header
            value: Awesome
        timeoutSeconds: 10 6
    # ...

    1
    The time, in seconds, after the VMI starts before the liveness probe is initiated.
    2
    The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
    3
    The HTTP GET request to perform to connect to the VMI.
    4
    The port of the VMI that the probe queries. In the above example, the probe queries port 1500. The VMI installs and runs a minimal HTTP server on port 1500 via cloud-init.
    5
    The path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VMI is considered to be healthy. If the handler returns a failure code, the VMI is deleted and a new instance is created.
    6
    The number of seconds of inactivity after which the probe times out and the VMI is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
  2. Create the VMI by running the following command:

    $ oc create -f <file_name>.yaml

14.7.5. Defining a guest agent ping probe

Define a guest agent ping probe by setting the spec.readinessProbe.guestAgentPing field of the virtual machine instance (VMI) configuration.

Important

The guest agent ping probe is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

  • The QEMU guest agent must be installed and enabled on the virtual machine.
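
One way to confirm that the guest agent is connected before relying on the probe is to check the AgentConnected condition on the VMI; the VMI name is a placeholder and the jsonpath expression is an illustrative sketch:

$ oc get vmi <vmi_name> -o jsonpath='{.status.conditions[?(@.type=="AgentConnected")].status}'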

Procedure

  1. Include details of the guest agent ping probe in the VMI configuration file. For example:

    Sample guest agent ping probe

    # ...
    spec:
      readinessProbe:
        guestAgentPing: {} 1
        initialDelaySeconds: 120 2
        periodSeconds: 20 3
        timeoutSeconds: 10 4
        failureThreshold: 3 5
        successThreshold: 3 6
    # ...

    1
    The guest agent ping probe to connect to the VMI.
    2
    Optional: The time, in seconds, after the VMI starts before the guest agent probe is initiated.
    3
    Optional: The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
    4
    Optional: The number of seconds of inactivity after which the probe times out and the VMI is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
    5
    Optional: The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready.
    6
    Optional: The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
  2. Create the VMI by running the following command:

    $ oc create -f <file_name>.yaml

14.7.6. Template: Virtual machine configuration file for defining health checks

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    special: vm-fedora
  name: vm-fedora
spec:
  template:
    metadata:
      labels:
        special: vm-fedora
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
        resources:
          requests:
            memory: 1024M
      readinessProbe:
        httpGet:
          port: 1500
        initialDelaySeconds: 120
        periodSeconds: 20
        timeoutSeconds: 10
        failureThreshold: 3
        successThreshold: 3
      terminationGracePeriodSeconds: 180
      volumes:
      - name: containerdisk
        containerDisk:
          image: kubevirt/fedora-cloud-registry-disk-demo
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            password: fedora
            chpasswd: { expire: False }
            bootcmd:
              - setenforce 0
              - dnf install -y nmap-ncat
              - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\n\\nHello World!'
        name: cloudinitdisk
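
To try this template, you can save it to a file, create the virtual machine, start it, and then check the VMI; the file name is illustrative, and virtctl is the OpenShift Virtualization command-line tool:

$ oc create -f vm-fedora-healthcheck.yaml
$ virtctl start vm-fedora
$ oc get vmi vm-fedora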

14.7.7. Additional resources

14.8. Using the OpenShift Container Platform dashboard to get cluster information

Access the OpenShift Container Platform dashboard, which captures high-level information about the cluster, by clicking Home → Overview from the OpenShift Container Platform web console.

The OpenShift Container Platform dashboard provides various cluster information, captured in individual dashboard cards.

14.8.1. About the OpenShift Container Platform dashboards page

Access the OpenShift Container Platform dashboard, which captures high-level information about the cluster, by navigating to Home → Overview from the OpenShift Container Platform web console.

The OpenShift Container Platform dashboard provides various cluster information, captured in individual dashboard cards.

The OpenShift Container Platform dashboard consists of the following cards:

  • Details provides a brief overview of informational cluster details.

    Statuses include ok, error, warning, in progress, and unknown. Resources can add custom status names.

    • Cluster ID
    • Provider
    • Version
  • Cluster Inventory details the number of resources and associated statuses. It is helpful when intervention is required to resolve problems and includes information about:

    • Number of nodes
    • Number of pods
    • Persistent storage volume claims
    • Virtual machines (available if OpenShift Virtualization is installed)
    • Bare metal hosts in the cluster, listed according to their state (only available in metal3 environment).
  • Cluster Health summarizes the current health of the cluster as a whole, including relevant alerts and descriptions. If OpenShift Virtualization is installed, the overall health of OpenShift Virtualization is diagnosed as well. If more than one subsystem is present, click See All to view the status of each subsystem.

    • Bare metal hosts in the cluster, listed according to their state (only available in metal3 environment)
  • Status helps administrators understand how cluster resources are consumed. Click on a resource to jump to a detailed page listing pods and nodes that consume the largest amount of the specified cluster resource (CPU, memory, or storage).
  • Cluster Utilization shows the capacity of various resources over a specified period of time, to help administrators understand the scale and frequency of high resource consumption, including information about:

    • CPU time
    • Memory allocation
    • Storage consumed
    • Network resources consumed
    • Pod count
  • Activity lists messages related to recent activity in the cluster, such as pod creation or virtual machine migration to another host.

14.9. Reviewing resource usage by virtual machines

Dashboards in the OpenShift Container Platform web console provide visual representations of cluster metrics to help you to quickly understand the state of your cluster. Dashboards belong to the Monitoring overview that provides monitoring for core platform components.

The OpenShift Virtualization dashboard provides data on resource consumption for virtual machines and associated pods. The visualization metrics displayed in the OpenShift Virtualization dashboard are based on Prometheus Query Language (PromQL) queries.

A monitoring role is required to monitor user-defined namespaces in the OpenShift Virtualization dashboard.

You can view resource usage for a specific virtual machine on the VirtualMachine details page → Metrics tab in the web console.

14.9.1. About reviewing top consumers

In the OpenShift Virtualization dashboard, you can select a specific time period and view the top consumers of resources within that time period. Top consumers are virtual machines or virt-launcher pods that are consuming the highest amount of resources.

The following table shows resources monitored in the dashboard and describes the metrics associated with each resource for top consumers.

Memory swap traffic
Virtual machines experiencing the most memory pressure when swapping memory.
vCPU wait
Virtual machines experiencing the maximum wait time (in seconds) for their vCPUs.
CPU usage by pod
The virt-launcher pods that are using the most CPU.
Network traffic
Virtual machines that are saturating the network by receiving the most network traffic (in bytes).
Storage traffic
Virtual machines with the highest amount (in bytes) of storage-related traffic.
Storage IOPS
Virtual machines with the highest number of I/O operations per second over a time period.
Memory usage
The virt-launcher pods that are using the most memory (in bytes).

Note

Viewing the maximum resource consumption is limited to the top five consumers.

14.9.2. Reviewing top consumers

In the Administrator perspective, you can view the OpenShift Virtualization dashboard where top consumers of resources are displayed.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

Procedure

  1. In the Administrator perspective in the OpenShift Container Platform web console, navigate to Observe → Dashboards.
  2. Select the KubeVirt/Infrastructure Resources/Top Consumers dashboard from the Dashboard list.
  3. Select a predefined time period from the drop-down menu for Period. You can review the data for top consumers in the tables.
  4. Optional: Click Inspect to view or edit the Prometheus Query Language (PromQL) query associated with the top consumers for a table.

14.9.3. Additional resources

14.10. OpenShift Container Platform cluster monitoring, logging, and Telemetry

OpenShift Container Platform provides various resources for monitoring at the cluster level.

14.10.1. About OpenShift Container Platform monitoring

OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. OpenShift Container Platform delivers monitoring best practices out of the box. A set of alerts are included by default that immediately notify cluster administrators about issues with a cluster. Default dashboards in the OpenShift Container Platform web console include visual representations of cluster metrics to help you to quickly understand the state of your cluster.

After installing OpenShift Container Platform 4.12, cluster administrators can optionally enable monitoring for user-defined projects. By using this feature, cluster administrators, developers, and other users can specify how services and pods are monitored in their own projects. You can then query metrics, review dashboards, and manage alerting rules and silences for your own projects in the OpenShift Container Platform web console.

Note

Cluster administrators can grant developers and other users permission to monitor their own projects. Privileges are granted by assigning one of the predefined monitoring roles.
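
For example, a cluster administrator might grant a user the monitoring-edit role for a single namespace with a command such as the following; the user and namespace are placeholders, and the other predefined roles, such as monitoring-rules-view and monitoring-rules-edit, are assigned the same way:

$ oc policy add-role-to-user monitoring-edit <user> -n <namespace>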

14.10.2. Logging architecture

The major components of logging are:

Collector

The collector is a daemonset that deploys pods to each OpenShift Container Platform node. It collects log data from each node, transforms the data, and forwards it to configured outputs. You can use the Vector collector or the legacy Fluentd collector.

Note

Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead.

Log store

The log store stores log data for analysis and is the default output for the log forwarder. You can use the default LokiStack log store, the legacy Elasticsearch log store, or forward logs to additional external log stores.

Note

The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators.

Visualization

You can use a UI component to view a visual representation of your log data. The UI provides a graphical interface to search, query, and view stored logs. The OpenShift Container Platform web console UI is provided by enabling the OpenShift Container Platform console plugin.

Note

The Kibana web console is now deprecated and is planned to be removed in a future logging release.

Logging collects container logs and node logs. These are categorized into types:

Application logs
Container logs generated by user applications running in the cluster, except infrastructure container applications.
Infrastructure logs
Container logs generated by infrastructure namespaces: openshift*, kube*, or default, as well as journald messages from nodes.
Audit logs
Logs generated by auditd, the node audit system, which are stored in the /var/log/audit/audit.log file, and logs from the auditd, kube-apiserver, and openshift-apiserver services, as well as the ovn project if enabled.

For more information on OpenShift Logging, see the OpenShift Logging documentation.

14.10.3. About Telemetry

Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. The Telemeter Client fetches the metrics values every four minutes and thirty seconds and uploads the data to Red Hat. These metrics are described in this document.

This stream of data is used by Red Hat to monitor the clusters in real time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Container Platform upgrades to customers to minimize service impact and continuously improve the upgrade experience.

This debugging information is available to Red Hat Support and Engineering teams with the same restrictions as accessing data reported through support cases. All connected cluster information is used by Red Hat to help make OpenShift Container Platform better and more intuitive to use.

14.10.3.1. Information collected by Telemetry

The following information is collected by Telemetry:

14.10.3.1.1. System information
  • Version information, including the OpenShift Container Platform cluster version and installed update details that are used to determine update version availability
  • Update information, including the number of updates available per cluster, the channel and image repository used for an update, update progress information, and the number of errors that occur in an update
  • The unique random identifier that is generated during an installation
  • Configuration details that help Red Hat Support to provide beneficial support for customers, including node configuration at the cloud infrastructure level, hostnames, IP addresses, Kubernetes pod names, namespaces, and services
  • The OpenShift Container Platform framework components installed in a cluster and their condition and status
  • Events for all namespaces listed as "related objects" for a degraded Operator
  • Information about degraded software
  • Information about the validity of certificates
  • The name of the provider platform that OpenShift Container Platform is deployed on and the data center location
14.10.3.1.2. Sizing Information
  • Sizing information about clusters, machine types, and machines, including the number of CPU cores and the amount of RAM used for each
  • The number of running virtual machine instances in a cluster
  • The number of etcd members and the number of objects stored in the etcd cluster
  • Number of application builds by build strategy type
14.10.3.1.3. Usage information
  • Usage information about components, features, and extensions
  • Usage details about Technology Previews and unsupported configurations

Telemetry does not collect identifying information such as usernames or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the Red Hat Privacy Statement for more information about Red Hat’s privacy practices.

14.10.4. CLI troubleshooting and debugging commands

For a list of the oc client troubleshooting and debugging commands, see the OpenShift Container Platform CLI tools documentation.

14.11. Running cluster checkups

OpenShift Virtualization includes predefined checkups that can be used for cluster maintenance and troubleshooting.

Important

The OpenShift Container Platform cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

14.11.1. About the OpenShift Container Platform cluster checkup framework

A checkup is an automated test workload that allows you to verify if a specific cluster functionality works as expected. The cluster checkup framework uses native Kubernetes resources to configure and execute the checkup.

By using predefined checkups, cluster administrators and developers can improve cluster maintainability, troubleshoot unexpected behavior, minimize errors, and save time. They can also review the results of the checkup and share them with experts for further analysis. Vendors can write and publish checkups for features or services that they provide and verify that their customer environments are configured correctly.

Running a predefined checkup in an existing namespace involves setting up a service account for the checkup, creating the Role and RoleBinding objects for the service account, enabling permissions for the checkup, and creating the input config map and the checkup job. You can run a checkup multiple times.

Important

You must always:

  • Verify that the checkup image is from a trustworthy source before applying it.
  • Review the checkup permissions before creating the Role and RoleBinding objects.

14.11.2. Checking network connectivity and latency for virtual machines on a secondary network

You use a predefined checkup to verify network connectivity and measure latency between two virtual machines (VMs) that are attached to a secondary network interface.

To run a checkup for the first time, follow the steps in the procedure.

If you have previously run a checkup, skip to step 5 of the procedure because the steps to install the framework and enable permissions for the checkup are not required.

Prerequisites

  • You installed the OpenShift CLI (oc).
  • The cluster has at least two worker nodes.
  • The Multus Container Network Interface (CNI) plugin is installed on the cluster.
  • You configured a network attachment definition for a namespace.

Procedure

  1. Create a manifest file that contains the ServiceAccount, Role, and RoleBinding objects with permissions that the checkup requires for cluster access:

    Example 14.3. Example role manifest file

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: vm-latency-checkup-sa
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kubevirt-vm-latency-checker
    rules:
    - apiGroups: ["kubevirt.io"]
      resources: ["virtualmachineinstances"]
      verbs: ["get", "create", "delete"]
    - apiGroups: ["subresources.kubevirt.io"]
      resources: ["virtualmachineinstances/console"]
      verbs: ["get"]
    - apiGroups: ["k8s.cni.cncf.io"]
      resources: ["network-attachment-definitions"]
      verbs: ["get"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubevirt-vm-latency-checker
    subjects:
    - kind: ServiceAccount
      name: vm-latency-checkup-sa
    roleRef:
      kind: Role
      name: kubevirt-vm-latency-checker
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kiagnose-configmap-access
    rules:
    - apiGroups: [ "" ]
      resources: [ "configmaps" ]
      verbs: ["get", "update"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kiagnose-configmap-access
    subjects:
    - kind: ServiceAccount
      name: vm-latency-checkup-sa
    roleRef:
      kind: Role
      name: kiagnose-configmap-access
      apiGroup: rbac.authorization.k8s.io
  2. Apply the checkup roles manifest:

    $ oc apply -n <target_namespace> -f <latency_roles>.yaml 1
    1
    <target_namespace> is the namespace where the checkup is to be run. This must be an existing namespace where the NetworkAttachmentDefinition object resides.
  3. Create a ConfigMap manifest that contains the input parameters for the checkup. The config map provides the input for the framework to run the checkup and also stores the results of the checkup.

    Example input config map

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-vm-latency-checkup-config
    data:
      spec.timeout: 5m
      spec.param.network_attachment_definition_namespace: <target_namespace>
      spec.param.network_attachment_definition_name: "blue-network" 1
      spec.param.max_desired_latency_milliseconds: "10" 2
      spec.param.sample_duration_seconds: "5" 3
      spec.param.source_node: "worker1" 4
      spec.param.target_node: "worker2" 5

    1
    The name of the NetworkAttachmentDefinition object.
    2
    Optional: The maximum desired latency, in milliseconds, between the virtual machines. If the measured latency exceeds this value, the checkup fails.
    3
    Optional: The duration of the latency check, in seconds.
    4
    Optional: When specified, latency is measured from this node to the target node. If the source node is specified, the spec.param.target_node field cannot be empty.
    5
    Optional: When specified, latency is measured from the source node to this node.
  4. Apply the config map manifest in the target namespace:

    $ oc apply -n <target_namespace> -f <latency_config_map>.yaml
  5. Create a Job object to run the checkup:

    Example job manifest

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: kubevirt-vm-latency-checkup
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccountName: vm-latency-checkup-sa
          restartPolicy: Never
          containers:
            - name: vm-latency-checkup
              image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup:v4.12.0
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop: ["ALL"]
                runAsNonRoot: true
                seccompProfile:
                  type: "RuntimeDefault"
              env:
                - name: CONFIGMAP_NAMESPACE
                  value: <target_namespace>
                - name: CONFIGMAP_NAME
                  value: kubevirt-vm-latency-checkup-config

  6. Apply the Job manifest. The checkup uses the ping utility to verify connectivity and measure latency.

    $ oc apply -n <target_namespace> -f <latency_job>.yaml
  7. Wait for the job to complete:

    $ oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m
  8. Review the results of the latency checkup by running the following command. If the maximum measured latency is greater than the value of the spec.param.max_desired_latency_milliseconds attribute, the checkup fails and returns an error.

    $ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml

    Example output config map (success)

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kubevirt-vm-latency-checkup-config
      namespace: <target_namespace>
    data:
      spec.timeout: 5m
      spec.param.network_attachment_definition_namespace: <target_namespace>
      spec.param.network_attachment_definition_name: "blue-network"
      spec.param.max_desired_latency_milliseconds: "10"
      spec.param.sample_duration_seconds: "5"
      spec.param.source_node: "worker1"
      spec.param.target_node: "worker2"
      status.succeeded: "true"
      status.failureReason: ""
      status.completionTimestamp: "2022-01-01T09:00:07Z"
      status.startTimestamp: "2022-01-01T09:00:00Z"
      status.result.avgLatencyNanoSec: "177000"
      status.result.maxLatencyNanoSec: "244000" 1
      status.result.measurementDurationSec: "5"
      status.result.minLatencyNanoSec: "135000"
      status.result.sourceNode: "worker1"
      status.result.targetNode: "worker2"

    1
    The maximum measured latency in nanoseconds.
  9. Optional: To view the detailed job log in case of checkup failure, use the following command:

    $ oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>
  10. Delete the job and config map resources that you previously created by running the following commands:

    $ oc delete job -n <target_namespace> kubevirt-vm-latency-checkup
    $ oc delete configmap -n <target_namespace> kubevirt-vm-latency-checkup-config
  11. Optional: If you do not plan to run another checkup, delete the checkup role and framework manifest files.

    $ oc delete -f <file_name>.yaml

14.11.3. Additional resources

14.12. Prometheus queries for virtual resources

OpenShift Virtualization provides metrics that you can use to monitor the consumption of cluster infrastructure resources, including vCPU, network, storage, and guest memory swapping. You can also use metrics to query live migration status.

Use the OpenShift Container Platform monitoring dashboard to query virtualization metrics.

14.12.1. Prerequisites

  • To use the vCPU metric, the schedstats=enable kernel argument must be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. See the OpenShift Container Platform machine configuration tasks documentation for more information on applying a kernel argument.
  • For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests.

14.12.2. About querying metrics

The OpenShift Container Platform monitoring dashboard enables you to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring.

As a cluster administrator, you can query metrics for all core OpenShift Container Platform and user-defined projects.

As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project.

14.12.2.1. Querying metrics for all projects as a cluster administrator

As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default OpenShift Container Platform and user-defined projects in the Metrics UI.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. From the Administrator perspective of the OpenShift Container Platform web console, go to Observe → Metrics.
  2. To add one or more queries, perform any of the following actions:

    Create a custom query
    Add your Prometheus Query Language (PromQL) query to the Expression field.

    As you type a PromQL expression, autocomplete suggestions are displayed in a list. These suggestions include functions, metrics, labels, and time tokens. You can use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. You can also move your mouse pointer over a suggested item to view a brief description of that item.

    Add multiple queries
    Click Add query.

    Duplicate an existing query
    Click the Options menu kebab next to the query and select Duplicate query.

    Delete a query
    Click the Options menu kebab next to the query and select Delete query.

    Disable a query from being run
    Click the Options menu kebab next to the query and select Disable query.

  3. To run queries that you created, click Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.

    Note

    Queries that operate on large amounts of data might time out or overload the browser when drawing time series graphs. To avoid this, click Hide graph and calibrate your query by using the metrics table. After finding a feasible query, enable the plot to draw the graphs.

  4. Optional: The page URL now contains the queries you ran. To use this set of queries again in the future, save this URL.

14.12.2.2. Querying metrics for user-defined projects as a developer

You can access metrics for a user-defined project as a developer or as a user with view permissions for the project.

In the Developer perspective, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can also run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet and application metrics for the project.

Note

Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time in the Observe → Metrics page in the web console for your user-defined project.

Prerequisites

  • You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
  • You have enabled monitoring for user-defined projects.
  • You have deployed a service in a user-defined project.
  • You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored.
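
As a reference for these prerequisites, monitoring for user-defined projects is typically enabled through the cluster-monitoring-config config map, and a minimal ServiceMonitor might look like the following sketch. The project name, application label, and port name are placeholders; adjust them to match your service.

# Sketch: enables monitoring for user-defined projects.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
---
# Sketch: a minimal ServiceMonitor; selector labels and port name are placeholders.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app-monitor
  namespace: <project_name>
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web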

Procedure

  1. Select the Developer perspective in the OpenShift Container Platform web console.
  2. Select Observe → Metrics.
  3. Select the project that you want to view metrics for in the Project: list.
  4. Select a query from the Select query list, or create a custom PromQL query based on the selected query by selecting Show PromQL.
  5. Optional: Select Custom query from the Select query list to enter a new query. As you type, autocomplete suggestions appear in a drop-down list. These suggestions include functions and metrics. Click a suggested item to select it.

    Note

    In the Developer perspective, you can only run one query at a time.

14.12.3. Virtualization metrics

The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions.

Note

The following examples use topk queries that specify a time period. If virtual machines are deleted during that time period, they can still appear in the query output.

14.12.3.1. vCPU metrics

The following query can identify virtual machines that are waiting for Input/Output (I/O):

kubevirt_vmi_vcpu_wait_seconds
Returns the wait time (in seconds) for a virtual machine’s vCPU. Type: Counter.

A value above '0' means that the vCPU wants to run, but the host scheduler cannot run it yet. This inability to run indicates that there is an issue with I/O.

Note

To query the vCPU metric, the schedstats=enable kernel argument must first be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler.

Example vCPU wait time query

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds[6m]))) > 0 1

1
This query returns the top 3 VMs waiting for I/O at every given moment over a six-minute time period.

14.12.3.2. Network metrics

The following queries can identify virtual machines that are saturating the network:

kubevirt_vmi_network_receive_bytes_total
Returns the total amount of traffic received (in bytes) on the virtual machine’s network. Type: Counter.
kubevirt_vmi_network_transmit_bytes_total
Returns the total amount of traffic transmitted (in bytes) on the virtual machine’s network. Type: Counter.

Example network traffic query

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1

1
This query returns the top 3 VMs transmitting the most network traffic at every given moment over a six-minute time period.

14.12.3.3. Storage metrics

14.12.3.3.1. Storage-related traffic

The following queries can identify VMs that are writing large amounts of data:

kubevirt_vmi_storage_read_traffic_bytes_total
Returns the total amount of storage reads (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.
kubevirt_vmi_storage_write_traffic_bytes_total
Returns the total amount of storage writes (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.

Example storage-related traffic query

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1

1
This query returns the top 3 VMs performing the most storage traffic at any given moment over a six-minute time period.

14.12.3.3.2. Storage snapshot data

kubevirt_vmsnapshot_disks_restored_from_source_total
Returns the total number of virtual machine disks restored from the source virtual machine. Type: Gauge.
kubevirt_vmsnapshot_disks_restored_from_source_bytes
Returns the amount of space in bytes restored from the source virtual machine. Type: Gauge.

Examples of storage snapshot data queries

kubevirt_vmsnapshot_disks_restored_from_source_total{vm_name="simple-vm", vm_namespace="default"} 1

1
This query returns the total number of virtual machine disks restored from the source virtual machine.

kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name="simple-vm", vm_namespace="default"} 1

1
This query returns the amount of space in bytes restored from the source virtual machine.

14.12.3.3.3. I/O performance

The following queries can determine the I/O performance of storage devices:

kubevirt_vmi_storage_iops_read_total
Returns the amount of read I/O operations the virtual machine is performing per second. Type: Counter.
kubevirt_vmi_storage_iops_write_total
Returns the amount of write I/O operations the virtual machine is performing per second. Type: Counter.

Example I/O performance query

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1

1
This query returns the top 3 VMs performing the most I/O operations per second at any given moment over a six-minute time period.

14.12.3.4. Guest memory swapping metrics

The following queries can identify which swap-enabled guests are performing the most memory swapping:

kubevirt_vmi_memory_swap_in_traffic_bytes_total
Returns the total amount (in bytes) of memory the virtual guest is swapping in. Type: Gauge.
kubevirt_vmi_memory_swap_out_traffic_bytes_total
Returns the total amount (in bytes) of memory the virtual guest is swapping out. Type: Gauge.

Example memory swapping query

topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes_total[6m]))) > 0 1

1
This query returns the top 3 VMs where the guest is performing the most memory swapping at any given moment over a six-minute time period.

Note

Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue.

14.12.4. Live migration metrics

The following metrics can be queried to show live migration status:

kubevirt_migrate_vmi_data_processed_bytes
The amount of guest operating system (OS) data that has migrated to the new virtual machine (VM). Type: Gauge.
kubevirt_migrate_vmi_data_remaining_bytes
The amount of guest OS data that remains to be migrated. Type: Gauge.
kubevirt_migrate_vmi_dirty_memory_rate_bytes
The rate at which memory is becoming dirty in the guest OS. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
kubevirt_migrate_vmi_pending_count
The number of pending migrations. Type: Gauge.
kubevirt_migrate_vmi_scheduling_count
The number of scheduling migrations. Type: Gauge.
kubevirt_migrate_vmi_running_count
The number of running migrations. Type: Gauge.
kubevirt_migrate_vmi_succeeded
The number of successfully completed migrations. Type: Gauge.
kubevirt_migrate_vmi_failed
The number of failed migrations. Type: Gauge.
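
For example, the following illustrative query (not part of the metric definitions above; the topk value of 3 and the > 0 filter are arbitrary choices) returns the three live migrations with the most guest OS data remaining to be transferred:

topk(3, kubevirt_migrate_vmi_data_remaining_bytes) > 0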

14.12.5. Additional resources

14.13. Exposing custom metrics for virtual machines

OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. This monitoring stack is based on the Prometheus monitoring system. Prometheus is a time-series database and a rule evaluation engine for metrics.

In addition to using the OpenShift Container Platform monitoring stack, you can enable monitoring for user-defined projects by using the CLI and query custom metrics that are exposed for virtual machines through the node-exporter service.
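
For reference, monitoring for user-defined projects is typically enabled by setting enableUserWorkload: true in the cluster-monitoring-config ConfigMap object. The following is a minimal sketch; if the ConfigMap already exists in your cluster, edit it instead of overwriting it:

    $ cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        enableUserWorkload: true
    EOF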

14.13.1. Configuring the node exporter service

The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines.

Prerequisites

  • Install the OpenShift Container Platform CLI oc.
  • Log in to the cluster as a user with cluster-admin privileges.
  • Create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project.
  • Enable monitoring for user-defined projects by setting enableUserWorkload to true in the cluster-monitoring-config ConfigMap object. Optionally, configure the monitoring components for user-defined projects by editing the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project.

Procedure

  1. Create the Service YAML file. In the following example, the file is called node-exporter-service.yaml.

    kind: Service
    apiVersion: v1
    metadata:
      name: node-exporter-service 1
      namespace: dynamation 2
      labels:
        servicetype: metrics 3
    spec:
      ports:
        - name: exmet 4
          protocol: TCP
          port: 9100 5
          targetPort: 9100 6
      type: ClusterIP
      selector:
        monitor: metrics 7
    1
    The node-exporter service that exposes the metrics from the virtual machines.
    2
    The namespace where the service is created.
    3
    The label for the service. The ServiceMonitor uses this label to match this service.
    4
    The name given to the port that exposes metrics on port 9100 for the ClusterIP service.
    5
    The port on which the ClusterIP service exposes the metrics.
    6
    The TCP port number of the virtual machine that is configured with the monitor label.
    7
    The label used to match the virtual machine’s pods. In this example, any virtual machine’s pod with the label monitor and a value of metrics will be matched.
  2. Create the node-exporter service:

    $ oc create -f node-exporter-service.yaml
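
    Optionally, verify that the service was created and has a cluster IP address. The namespace and service name below are the example values from the previous step:

    $ oc get service -n dynamation node-exporter-service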

14.13.2. Configuring a virtual machine with the node exporter service

Download the node-exporter file on to the virtual machine. Then, create a systemd service that runs the node-exporter service when the virtual machine boots.

Prerequisites

  • The pods for the component are running in the openshift-user-workload-monitoring project.
  • Grant the monitoring-edit role to users who need to monitor this user-defined project.

Procedure

  1. Log on to the virtual machine.
  2. Download the node-exporter archive to the virtual machine by using the download URL that corresponds to the version of node-exporter that you want to install:

    $ wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz
  3. Extract the executable and place it in the /usr/bin directory.

    $ sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz \
        --directory /usr/bin --strip 1 "*/node_exporter"
  4. Create a node_exporter.service file in the /etc/systemd/system directory. This systemd unit file runs the node-exporter service when the virtual machine boots.

    [Unit]
    Description=Prometheus Metrics Exporter
    After=network.target
    StartLimitIntervalSec=0
    
    [Service]
    Type=simple
    Restart=always
    RestartSec=1
    User=root
    ExecStart=/usr/bin/node_exporter
    
    [Install]
    WantedBy=multi-user.target
  5. Enable and start the systemd service.

    $ sudo systemctl enable node_exporter.service
    $ sudo systemctl start node_exporter.service
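
    Optionally, confirm that the service is active before running the verification step:

    $ sudo systemctl status node_exporter.service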

Verification

  • Verify that the node-exporter agent is reporting metrics from the virtual machine.

    $ curl http://localhost:9100/metrics

    Example output

    go_gc_duration_seconds{quantile="0"} 1.5244e-05
    go_gc_duration_seconds{quantile="0.25"} 3.0449e-05
    go_gc_duration_seconds{quantile="0.5"} 3.7913e-05

14.13.3. Creating a custom monitoring label for virtual machines

To enable queries to multiple virtual machines from a single service, add a custom label in the virtual machine’s YAML file.

Prerequisites

  • Install the OpenShift Container Platform CLI oc.
  • Log in as a user with cluster-admin privileges.
  • You have access to the web console so that you can stop and restart the virtual machine.

Procedure

  1. Edit the template spec of your virtual machine configuration file. In this example, the label monitor has the value metrics.

    spec:
      template:
        metadata:
          labels:
            monitor: metrics
  2. Stop and restart the virtual machine to create a new virtual machine pod that carries the monitor label with the value that you assigned.
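
    After the virtual machine restarts, you can confirm that the new pod carries the label. The label selector matches the example above; replace <namespace> with the project of the virtual machine:

    $ oc get pods -n <namespace> -l monitor=metrics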

14.13.3.1. Querying the node-exporter service for metrics

Metrics are exposed for virtual machines through an HTTP service endpoint under the /metrics canonical name. When you query for metrics, Prometheus directly scrapes the metrics from the metrics endpoint exposed by the virtual machines and presents these metrics for viewing.

Prerequisites

  • You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
  • You have enabled monitoring for the user-defined project by configuring the node-exporter service.

Procedure

  1. Obtain the HTTP service endpoint by specifying the namespace for the service:

    $ oc get service -n <namespace> <node-exporter-service>
  2. To list all available metrics for the node-exporter service, query the metrics resource.

    $ curl http://172.30.226.162:9100/metrics | grep -vE "^#|^$"

    Example output

    node_arp_entries{device="eth0"} 1
    node_boot_time_seconds 1.643153218e+09
    node_context_switches_total 4.4938158e+07
    node_cooling_device_cur_state{name="0",type="Processor"} 0
    node_cooling_device_max_state{name="0",type="Processor"} 0
    node_cpu_guest_seconds_total{cpu="0",mode="nice"} 0
    node_cpu_guest_seconds_total{cpu="0",mode="user"} 0
    node_cpu_seconds_total{cpu="0",mode="idle"} 1.10586485e+06
    node_cpu_seconds_total{cpu="0",mode="iowait"} 37.61
    node_cpu_seconds_total{cpu="0",mode="irq"} 233.91
    node_cpu_seconds_total{cpu="0",mode="nice"} 551.47
    node_cpu_seconds_total{cpu="0",mode="softirq"} 87.3
    node_cpu_seconds_total{cpu="0",mode="steal"} 86.12
    node_cpu_seconds_total{cpu="0",mode="system"} 464.15
    node_cpu_seconds_total{cpu="0",mode="user"} 1075.2
    node_disk_discard_time_seconds_total{device="vda"} 0
    node_disk_discard_time_seconds_total{device="vdb"} 0
    node_disk_discarded_sectors_total{device="vda"} 0
    node_disk_discarded_sectors_total{device="vdb"} 0
    node_disk_discards_completed_total{device="vda"} 0
    node_disk_discards_completed_total{device="vdb"} 0
    node_disk_discards_merged_total{device="vda"} 0
    node_disk_discards_merged_total{device="vdb"} 0
    node_disk_info{device="vda",major="252",minor="0"} 1
    node_disk_info{device="vdb",major="252",minor="16"} 1
    node_disk_io_now{device="vda"} 0
    node_disk_io_now{device="vdb"} 0
    node_disk_io_time_seconds_total{device="vda"} 174
    node_disk_io_time_seconds_total{device="vdb"} 0.054
    node_disk_io_time_weighted_seconds_total{device="vda"} 259.79200000000003
    node_disk_io_time_weighted_seconds_total{device="vdb"} 0.039
    node_disk_read_bytes_total{device="vda"} 3.71867136e+08
    node_disk_read_bytes_total{device="vdb"} 366592
    node_disk_read_time_seconds_total{device="vda"} 19.128
    node_disk_read_time_seconds_total{device="vdb"} 0.039
    node_disk_reads_completed_total{device="vda"} 5619
    node_disk_reads_completed_total{device="vdb"} 96
    node_disk_reads_merged_total{device="vda"} 5
    node_disk_reads_merged_total{device="vdb"} 0
    node_disk_write_time_seconds_total{device="vda"} 240.66400000000002
    node_disk_write_time_seconds_total{device="vdb"} 0
    node_disk_writes_completed_total{device="vda"} 71584
    node_disk_writes_completed_total{device="vdb"} 0
    node_disk_writes_merged_total{device="vda"} 19761
    node_disk_writes_merged_total{device="vdb"} 0
    node_disk_written_bytes_total{device="vda"} 2.007924224e+09
    node_disk_written_bytes_total{device="vdb"} 0

14.13.4. Creating a ServiceMonitor resource for the node exporter service

You can use a Prometheus client library and scrape metrics from the /metrics endpoint to access and view the metrics exposed by the node-exporter service. Use a ServiceMonitor custom resource definition (CRD) to monitor the node exporter service.

Prerequisites

  • You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
  • You have enabled monitoring for the user-defined project by configuring the node-exporter service.

Procedure

  1. Create a YAML file for the ServiceMonitor resource configuration. In this example, the service monitor matches any service with the label metrics and queries the exmet port every 30 seconds.

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      labels:
        k8s-app: node-exporter-metrics-monitor
      name: node-exporter-metrics-monitor 1
      namespace: dynamation 2
    spec:
      endpoints:
      - interval: 30s 3
        port: exmet 4
        scheme: http
      selector:
        matchLabels:
          servicetype: metrics
    1
    The name of the ServiceMonitor.
    2
    The namespace where the ServiceMonitor is created.
    3
    The interval at which the port will be queried.
    4
    The name of the port that is queried every 30 seconds.
  2. Create the ServiceMonitor configuration for the node-exporter service.

    $ oc create -f node-exporter-metrics-monitor.yaml
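
    Optionally, verify that the ServiceMonitor resource was created in the example namespace:

    $ oc get servicemonitor -n dynamation node-exporter-metrics-monitor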

14.13.4.1. Accessing the node exporter service outside the cluster

You can access the node-exporter service outside the cluster and view the exposed metrics.

Prerequisites

  • You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
  • You have enabled monitoring for the user-defined project by configuring the node-exporter service.

Procedure

  1. Expose the node-exporter service.

    $ oc expose service -n <namespace> <node_exporter_service_name>
  2. Obtain the FQDN (Fully Qualified Domain Name) for the route.

    $ oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host

    Example output

    NAME                    DNS
    node-exporter-service   node-exporter-service-dynamation.apps.cluster.example.org

  3. Use the curl command to display metrics for the node-exporter service.

    $ curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics

    Example output

    go_gc_duration_seconds{quantile="0"} 1.5382e-05
    go_gc_duration_seconds{quantile="0.25"} 3.1163e-05
    go_gc_duration_seconds{quantile="0.5"} 3.8546e-05
    go_gc_duration_seconds{quantile="0.75"} 4.9139e-05
    go_gc_duration_seconds{quantile="1"} 0.000189423

14.13.5. Additional resources

14.14. OpenShift Virtualization runbooks

You can use the procedures in these runbooks to diagnose and resolve issues that trigger OpenShift Virtualization alerts.

OpenShift Virtualization alerts are displayed on the Virtualization > Overview page.

14.14.1. CDIDataImportCronOutdated

Meaning

This alert fires when DataImportCron cannot poll or import the latest disk image versions.

DataImportCron polls disk images, checking for the latest versions, and imports the images as persistent volume claims (PVCs). This process ensures that PVCs are updated to the latest version so that they can be used as reliable clone sources or golden images for virtual machines (VMs).

For golden images, latest refers to the latest operating system of the distribution. For other disk images, latest refers to the latest hash of the image that is available.

Impact

VMs might be created from outdated disk images.

VMs might fail to start because no source PVC is available for cloning.

Diagnosis
  1. Check the cluster for a default storage class:

    $ oc get sc

    The output displays the storage classes with (default) beside the name of the default storage class. You must set a default storage class, either on the cluster or in the DataImportCron specification, in order for the DataImportCron to poll and import golden images. If no storage class is defined, the DataVolume controller fails to create PVCs and the following event is displayed: DataVolume.storage spec is missing accessMode and no storageClass to choose profile.

  2. Obtain the DataImportCron namespace and name:

    $ oc get dataimportcron -A -o json | jq -r '.items[] | \
      select(.status.conditions[] | select(.type == "UpToDate" and \
      .status == "False")) | .metadata.namespace + "/" + .metadata.name'
  3. If a default storage class is not defined on the cluster, check the DataImportCron specification for a default storage class:

    $ oc get dataimportcron <dataimportcron> -o yaml | \
      grep -B 5 storageClassName

    Example output

          url: docker://.../cdi-func-test-tinycore
        storage:
          resources:
            requests:
              storage: 5Gi
        storageClassName: rook-ceph-block

  4. Obtain the name of the DataVolume associated with the DataImportCron object:

    $ oc -n <namespace> get dataimportcron <dataimportcron> -o json | \
      jq .status.lastImportedPVC.name
  5. Check the DataVolume status conditions for error messages:

    $ oc -n <namespace> get dv <datavolume> -o yaml
  6. Set the CDI_NAMESPACE environment variable:

    $ export CDI_NAMESPACE="$(oc get deployment -A | \
      grep cdi-operator | awk '{print $1}')"
  7. Check the cdi-deployment log for error messages:

    $ oc logs -n $CDI_NAMESPACE deployment/cdi-deployment
Mitigation
  1. Set a default storage class, either on the cluster or in the DataImportCron specification, so that golden images can be polled and imported. After you set the storage class, the Containerized Data Importer (CDI) resolves the issue within a few seconds. An example command for setting a cluster default storage class follows this list.
  2. If the issue does not resolve itself, delete the data volumes associated with the affected DataImportCron objects. The CDI will recreate the data volumes with the default storage class.
  3. If your cluster is installed in a restricted network environment, disable the enableCommonBootImageImport feature gate in order to opt out of automatic updates:

    $ oc patch hco kubevirt-hyperconverged -n $CDI_NAMESPACE --type json \
      -p '[{"op": "replace", "path": \
      "/spec/featureGates/enableCommonBootImageImport", "value": false}]'

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.2. CDIDataVolumeUnusualRestartCount

Meaning

This alert fires when a DataVolume object restarts more than three times.

Impact

Data volumes are responsible for importing and creating a virtual machine disk on a persistent volume claim. If a data volume restarts more than three times, these operations are unlikely to succeed. You must diagnose and resolve the issue.

Diagnosis
  1. Obtain the name and namespace of the data volume:

    $ oc get dv -A -o json | jq -r '.items[] | \
      select(.status.restartCount>3)' | jq '.metadata.name, .metadata.namespace'
  2. Check the status of the pods associated with the data volume:

    $ oc get pods -n <namespace> -o json | jq -r '.items[] | \
      select(.metadata.ownerReferences[] | \
      select(.name=="<dv_name>")).metadata.name'
  3. Obtain the details of the pods:

    $ oc -n <namespace> describe pods <pod>
  4. Check the pod logs for error messages:

    $ oc -n <namespace> logs <pod>
Mitigation

Delete the data volume, resolve the issue, and create a new data volume.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.3. CDINotReady

Meaning

This alert fires when the Containerized Data Importer (CDI) is in a degraded state:

  • Not progressing
  • Not available to use
Impact

CDI is not usable, so users cannot build virtual machine disks on persistent volume claims (PVCs) using CDI’s data volumes. CDI components are not ready and they stopped progressing towards a ready state.

Diagnosis
  1. Set the CDI_NAMESPACE environment variable:

    $ export CDI_NAMESPACE="$(oc get deployment -A | \
      grep cdi-operator | awk '{print $1}')"
  2. Check the CDI deployment for components that are not ready:

    $ oc -n $CDI_NAMESPACE get deploy -l cdi.kubevirt.io
  3. Check the details of the failing pod:

    $ oc -n $CDI_NAMESPACE describe pods <pod>
  4. Check the logs of the failing pod:

    $ oc -n $CDI_NAMESPACE logs <pod>
Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.4. CDIOperatorDown

Meaning

This alert fires when the Containerized Data Importer (CDI) Operator is down. The CDI Operator deploys and manages the CDI infrastructure components, such as data volume and persistent volume claim (PVC) controllers. These controllers help users build virtual machine disks on PVCs.

Impact

The CDI components might fail to deploy or to stay in a required state. The CDI installation might not function correctly.

Diagnosis
  1. Set the CDI_NAMESPACE environment variable:

    $ export CDI_NAMESPACE="$(oc get deployment -A | grep cdi-operator | \
      awk '{print $1}')"
  2. Check whether the cdi-operator pod is currently running:

    $ oc -n $CDI_NAMESPACE get pods -l name=cdi-operator
  3. Obtain the details of the cdi-operator pod:

    $ oc -n $CDI_NAMESPACE describe pods -l name=cdi-operator
  4. Check the log of the cdi-operator pod for errors:

    $ oc -n $CDI_NAMESPACE logs -l name=cdi-operator
Mitigation

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.5. CDIStorageProfilesIncomplete

Meaning

This alert fires when a Containerized Data Importer (CDI) storage profile is incomplete.

If a storage profile is incomplete, the CDI cannot infer persistent volume claim (PVC) fields, such as volumeMode and accessModes, which are required to create a virtual machine (VM) disk.

Impact

The CDI cannot create a VM disk on the PVC.

Diagnosis
  • Identify the incomplete storage profile:

    $ oc get storageprofile <storage_class>
Mitigation
  • Add the missing storage profile information as in the following example:

    $ oc patch storageprofile local --type=merge -p '{"spec": \
      {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], \
      "volumeMode": "Filesystem"}]}}'

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.6. CnaoDown

Meaning

This alert fires when the Cluster Network Addons Operator (CNAO) is down. The CNAO deploys additional networking components on top of the cluster.

Impact

If the CNAO is not running, the cluster cannot reconcile changes to virtual machine components. As a result, the changes might fail to take effect.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get deployment -A | \
      grep cluster-network-addons-operator | awk '{print $1}')"
  2. Check the status of the cluster-network-addons-operator pod:

    $ oc -n $NAMESPACE get pods -l name=cluster-network-addons-operator
  3. Check the cluster-network-addons-operator logs for error messages:

    $ oc -n $NAMESPACE logs -l name=cluster-network-addons-operator
  4. Obtain the details of the cluster-network-addons-operator pods:

    $ oc -n $NAMESPACE describe pods -l name=cluster-network-addons-operator
Mitigation

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.7. HPPNotReady

Meaning

This alert fires when a hostpath provisioner (HPP) installation is in a degraded state.

The HPP dynamically provisions hostpath volumes to provide storage for persistent volume claims (PVCs).

Impact

HPP is not usable. Its components are not ready and they are not progressing towards a ready state.

Diagnosis
  1. Set the HPP_NAMESPACE environment variable:

    $ export HPP_NAMESPACE="$(oc get deployment -A | \
      grep hostpath-provisioner-operator | awk '{print $1}')"
  2. Check for HPP components that are currently not ready:

    $ oc -n $HPP_NAMESPACE get all -l k8s-app=hostpath-provisioner
  3. Obtain the details of the failing pod:

    $ oc -n $HPP_NAMESPACE describe pods <pod>
  4. Check the logs of the failing pod:

    $ oc -n $HPP_NAMESPACE logs <pod>
Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.8. HPPOperatorDown

Meaning

This alert fires when the hostpath provisioner (HPP) Operator is down.

The HPP Operator deploys and manages the HPP infrastructure components, such as the daemon set that provisions hostpath volumes.

Impact

The HPP components might fail to deploy or to remain in the required state. As a result, the HPP installation might not work correctly in the cluster.

Diagnosis
  1. Configure the HPP_NAMESPACE environment variable:

    $ HPP_NAMESPACE="$(oc get deployment -A | grep \
      hostpath-provisioner-operator | awk '{print $1}')"
  2. Check whether the hostpath-provisioner-operator pod is currently running:

    $ oc -n $HPP_NAMESPACE get pods -l name=hostpath-provisioner-operator
  3. Obtain the details of the hostpath-provisioner-operator pod:

    $ oc -n $HPP_NAMESPACE describe pods -l name=hostpath-provisioner-operator
  4. Check the log of the hostpath-provisioner-operator pod for errors:

    $ oc -n $HPP_NAMESPACE logs -l name=hostpath-provisioner-operator
Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.9. HPPSharingPoolPathWithOS

Meaning

This alert fires when the hostpath provisioner (HPP) shares a file system with other critical components, such as kubelet or the operating system (OS).

HPP dynamically provisions hostpath volumes to provide storage for persistent volume claims (PVCs).

Impact

A shared hostpath pool puts pressure on the node’s disks. The node might have degraded performance and stability.

Diagnosis
  1. Configure the HPP_NAMESPACE environment variable:

    $ export HPP_NAMESPACE="$(oc get deployment -A | \
      grep hostpath-provisioner-operator | awk '{print $1}')"
  2. Obtain the status of the hostpath-provisioner-csi daemon set pods:

    $ oc -n $HPP_NAMESPACE get pods | grep hostpath-provisioner-csi
  3. Check the hostpath-provisioner-csi logs to identify the shared pool and path:

    $ oc -n $HPP_NAMESPACE logs <csi_daemonset> -c hostpath-provisioner

    Example output

    I0208 15:21:03.769731       1 utils.go:221] pool (<legacy, csi-data-dir>/csi),
    shares path with OS which can lead to node disk pressure

Mitigation

Using the data obtained in the Diagnosis section, try to prevent the pool path from being shared with the OS. The specific steps vary based on the node and other circumstances.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.10. KubeMacPoolDown

Meaning

KubeMacPool is down. KubeMacPool is responsible for allocating MAC addresses and preventing MAC address conflicts.

Impact

If KubeMacPool is down, VirtualMachine objects cannot be created.

Diagnosis
  1. Set the KMP_NAMESPACE environment variable:

    $ export KMP_NAMESPACE="$(oc get pod -A --no-headers -l \
      control-plane=mac-controller-manager | awk '{print $1}')"
  2. Set the KMP_NAME environment variable:

    $ export KMP_NAME="$(oc get pod -A --no-headers -l \
      control-plane=mac-controller-manager | awk '{print $2}')"
  3. Obtain the KubeMacPool-manager pod details:

    $ oc describe pod -n $KMP_NAMESPACE $KMP_NAME
  4. Check the KubeMacPool-manager logs for error messages:

    $ oc logs -n $KMP_NAMESPACE $KMP_NAME
Mitigation

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.11. KubeMacPoolDuplicateMacsFound

Meaning

This alert fires when KubeMacPool detects duplicate MAC addresses.

KubeMacPool is responsible for allocating MAC addresses and preventing MAC address conflicts. When KubeMacPool starts, it scans the cluster for the MAC addresses of virtual machines (VMs) in managed namespaces.

Impact

Duplicate MAC addresses on the same LAN might cause network issues.

Diagnosis
  1. Obtain the namespace and the name of the kubemacpool-mac-controller pod:

    $ oc get pod -A -l control-plane=mac-controller-manager --no-headers \
      -o custom-columns=":metadata.namespace,:metadata.name"
  2. Obtain the duplicate MAC addresses from the kubemacpool-mac-controller logs:

    $ oc logs -n <namespace> <kubemacpool_mac_controller> | \
      grep "already allocated"

    Example output

    mac address 02:00:ff:ff:ff:ff already allocated to
    vm/kubemacpool-test/testvm, br1,
    conflict with: vm/kubemacpool-test/testvm2, br1

Mitigation
  1. Update the VMs to remove the duplicate MAC addresses.
  2. Restart the kubemacpool-mac-controller pod:

    $ oc delete pod -n <namespace> <kubemacpool_mac_controller>

14.14.12. KubeVirtComponentExceedsRequestedCPU

Meaning

This alert fires when a component’s CPU usage exceeds the requested limit.

Impact

Usage of CPU resources is not optimal and the node might be overloaded.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the component’s CPU request limit:

    $ oc -n $NAMESPACE get deployment <component> -o yaml | grep requests: -A 2
  3. Check the actual CPU usage by using a PromQL query:

    node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate
    {namespace="$NAMESPACE",container="<component>"}

See the Prometheus documentation for more information.

Mitigation

Update the CPU request limit in the HCO custom resource.

14.14.13. KubeVirtComponentExceedsRequestedMemory

Meaning

This alert fires when a component’s memory usage exceeds the requested limit.

Impact

Usage of memory resources is not optimal and the node might be overloaded.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the component’s memory request limit:

    $ oc -n $NAMESPACE get deployment <component> -o yaml | \
      grep requests: -A 2
  3. Check the actual memory usage by using a PromQL query:

    container_memory_usage_bytes{namespace="$NAMESPACE",container="<component>"}

See the Prometheus documentation for more information.

Mitigation

Update the memory request limit in the HCO custom resource.

14.14.14. KubevirtHyperconvergedClusterOperatorCRModification

Meaning

This alert fires when an operand of the HyperConverged Cluster Operator (HCO) is changed by someone or something other than HCO.

HCO configures OpenShift Virtualization and its supporting operators in an opinionated way and overwrites its operands when there is an unexpected change to them. Users must not modify the operands directly. The HyperConverged custom resource is the source of truth for the configuration.

Impact

Changing the operands manually causes the cluster configuration to fluctuate and might lead to instability.

Diagnosis
  • Check the component_name value in the alert details to determine the operand kind (kubevirt) and the operand name (kubevirt-kubevirt-hyperconverged) that are being changed:

    Labels
      alertname=KubevirtHyperconvergedClusterOperatorCRModification
      component_name=kubevirt/kubevirt-kubevirt-hyperconverged
      severity=warning
Mitigation

Do not change the HCO operands directly. Use HyperConverged objects to configure the cluster.

The alert resolves itself after 10 minutes if the operands are not changed manually.

14.14.15. KubevirtHyperconvergedClusterOperatorInstallationNotCompletedAlert

Meaning

This alert fires when the HyperConverged Cluster Operator (HCO) runs for more than an hour without a HyperConverged custom resource (CR).

This alert has the following causes:

  • During the installation process, you installed the HCO but you did not create the HyperConverged CR.
  • During the uninstall process, you removed the HyperConverged CR before uninstalling the HCO and the HCO is still running.
Mitigation

The mitigation depends on whether you are installing or uninstalling the HCO:

  • Complete the installation by creating a HyperConverged CR with its default values:

    $ cat <<EOF | oc apply -f -
    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: kubevirt-hyperconverged
    spec: {}
    EOF
  • Uninstall the HCO. If the uninstall process continues to run, you must resolve that issue in order to cancel the alert.

14.14.16. KubevirtHyperconvergedClusterOperatorUSModification

Meaning

This alert fires when a JSON Patch annotation is used to change an operand of the HyperConverged Cluster Operator (HCO).

HCO configures OpenShift Virtualization and its supporting operators in an opinionated way and overwrites its operands when there is an unexpected change to them. Users must not modify the operands directly.

However, if a change is required and it is not supported by the HCO API, you can force HCO to set a change in an operator by using JSON Patch annotations. These changes are not reverted by HCO during its reconciliation process.
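
For illustration only, a JSON Patch annotation of this kind can be set on the HyperConverged CR with a command such as the following. The specific patch shown, which enables post-copy live migration on the KubeVirt operand, is an assumption used as an example rather than a recommendation:

    $ oc annotate --overwrite -n openshift-cnv hco kubevirt-hyperconverged \
      kubevirt.kubevirt.io/jsonpatch='[{"op": "add", "path": "/spec/configuration/migrations", "value": {"allowPostCopy": true}}]'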

Impact

Incorrect use of JSON Patch annotations might lead to unexpected results or an unstable environment.

Upgrading a system with JSON Patch annotations is dangerous because the structure of the component custom resources might change.

Diagnosis
  • Check the annotation_name in the alert details to identify the JSON Patch annotation:

    Labels
      alertname=KubevirtHyperconvergedClusterOperatorUSModification
      annotation_name=kubevirt.kubevirt.io/jsonpatch
      severity=info
Mitigation

It is best to use the HCO API to change an operand. However, if the change can only be done with a JSON Patch annotation, proceed with caution.

Remove JSON Patch annotations before upgrade to avoid potential issues.

14.14.17. KubevirtVmHighMemoryUsage

Meaning

This alert fires when a container hosting a virtual machine (VM) has less than 20 MB free memory.

Impact

The virtual machine running inside the container is terminated by the runtime if the container’s memory limit is exceeded.

Diagnosis
  1. Obtain the virt-launcher pod details:

    $ oc get pod <virt-launcher> -o yaml
  2. Identify compute container processes with high memory usage in the virt-launcher pod:

    $ oc exec -it <virt-launcher> -c compute -- top
Mitigation
  • Increase the memory limit in the VirtualMachine specification as in the following example:

    spec:
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/vm: vm-name
        spec:
          domain:
            resources:
              limits:
                memory: 200Mi
              requests:
                memory: 128Mi

14.14.18. KubeVirtVMIExcessiveMigrations

Meaning

This alert fires when a virtual machine instance (VMI) live migrates more than 12 times over a period of 24 hours.

This migration rate is abnormally high, even during an upgrade. This alert might indicate a problem in the cluster infrastructure, such as network disruptions or insufficient resources.

Impact

A virtual machine (VM) that migrates too frequently might experience degraded performance because memory page faults occur during the transition.

Diagnosis
  1. Verify that the worker node has sufficient resources:

    $ oc get nodes -l node-role.kubernetes.io/worker= -o json | \
      jq .items[].status.allocatable

    Example output

    {
      "cpu": "3500m",
      "devices.kubevirt.io/kvm": "1k",
      "devices.kubevirt.io/sev": "0",
      "devices.kubevirt.io/tun": "1k",
      "devices.kubevirt.io/vhost-net": "1k",
      "ephemeral-storage": "38161122446",
      "hugepages-1Gi": "0",
      "hugepages-2Mi": "0",
      "memory": "7000128Ki",
      "pods": "250"
    }

  2. Check the status of the worker node:

    $ oc get nodes -l node-role.kubernetes.io/worker= -o json | \
      jq .items[].status.conditions

    Example output

    {
      "lastHeartbeatTime": "2022-05-26T07:36:01Z",
      "lastTransitionTime": "2022-05-23T08:12:02Z",
      "message": "kubelet has sufficient memory available",
      "reason": "KubeletHasSufficientMemory",
      "status": "False",
      "type": "MemoryPressure"
    },
    {
      "lastHeartbeatTime": "2022-05-26T07:36:01Z",
      "lastTransitionTime": "2022-05-23T08:12:02Z",
      "message": "kubelet has no disk pressure",
      "reason": "KubeletHasNoDiskPressure",
      "status": "False",
      "type": "DiskPressure"
    },
    {
      "lastHeartbeatTime": "2022-05-26T07:36:01Z",
      "lastTransitionTime": "2022-05-23T08:12:02Z",
      "message": "kubelet has sufficient PID available",
      "reason": "KubeletHasSufficientPID",
      "status": "False",
      "type": "PIDPressure"
    },
    {
      "lastHeartbeatTime": "2022-05-26T07:36:01Z",
      "lastTransitionTime": "2022-05-23T08:24:15Z",
      "message": "kubelet is posting ready status",
      "reason": "KubeletReady",
      "status": "True",
      "type": "Ready"
    }

  3. Log in to the worker node and verify that the kubelet service is running:

    $ systemctl status kubelet
  4. Check the kubelet journal log for error messages:

    $ journalctl -r -u kubelet
Mitigation

Ensure that the worker nodes have sufficient resources (CPU, memory, disk) to run VM workloads without interruption.
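
For example, you can get a quick view of current node CPU and memory utilization with the following command, provided that cluster metrics are available:

    $ oc adm top nodes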

If the problem persists, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.19. KubeVirtVMStuckInErrorState

Meaning

This alert fires when a virtual machine (VM) is in an error state for more than 5 minutes.

Error states:

  • CrashLoopBackOff
  • Unknown
  • Unschedulable
  • ErrImagePull
  • ImagePullBackOff
  • PvcNotFound
  • DataVolumeError

This alert might indicate an issue with the VM configuration, such as a missing persistent volume claim, or a problem in the cluster infrastructure, such as network disruptions or insufficient node resources.

Impact

There is no immediate impact. However, if this alert persists, you must investigate the root cause and resolve the issue.

Diagnosis
  1. Check the virtual machine instance (VMI) details:

    $ oc describe vmi <vmi> -n <namespace>

    Example output

    Name:          testvmi-hxghp
    Namespace:     kubevirt-test-default1
    Labels:        name=testvmi-hxghp
    Annotations:   kubevirt.io/latest-observed-api-version: v1
                   kubevirt.io/storage-observed-api-version: v1alpha3
    API Version:   kubevirt.io/v1
    Kind:          VirtualMachineInstance
    ...
    Spec:
      Domain:
    ...
        Resources:
          Requests:
            Cpu:     5000000Gi
            Memory:  5130000240Mi
    ...
    Status:
    ...
      Conditions:
        Last Probe Time:       2022-10-03T11:11:07Z
        Last Transition Time:  2022-10-03T11:11:07Z
        Message:               Guest VM is not reported as running
        Reason:                GuestNotRunning
        Status:                False
        Type:                  Ready
        Last Probe Time:       <nil>
        Last Transition Time:  2022-10-03T11:11:07Z
        Message:               0/2 nodes are available: 2 Insufficient cpu, 2
          Insufficient memory.
        Reason:                Unschedulable
        Status:                False
        Type:                  PodScheduled
      Guest OS Info:
      Phase:  Scheduling
      Phase Transition Timestamps:
        Phase:                        Pending
        Phase Transition Timestamp:   2022-10-03T11:11:07Z
        Phase:                        Scheduling
        Phase Transition Timestamp:   2022-10-03T11:11:07Z
      Qos Class:                       Burstable
      Runtime User:                    0
      Virtual Machine Revision Name:   revision-start-vm-3503e2dc-27c0-46ef-9167-7ae2e7d93e6e-1
    Events:
      Type    Reason            Age   From                       Message
      ----    ------            ----  ----                       -------
      Normal  SuccessfulCreate  27s   virtualmachine-controller  Created virtual
        machine pod virt-launcher-testvmi-hxghp-xh9qn

  2. Check the node resources:

    $ oc get nodes -l node-role.kubernetes.io/worker= -o json | jq '.items | \
      .[].status.allocatable'

    Example output

    {
      "cpu": "5",
      "devices.kubevirt.io/kvm": "1k",
      "devices.kubevirt.io/sev": "0",
      "devices.kubevirt.io/tun": "1k",
      "devices.kubevirt.io/vhost-net": "1k",
      "ephemeral-storage": "33812468066",
      "hugepages-1Gi": "0",
      "hugepages-2Mi": "128Mi",
      "memory": "3783496Ki",
      "pods": "110"
    }

  3. Check the node for error conditions:

    $ oc get nodes -l node-role.kubernetes.io/worker= -o json | jq '.items | \
      .[].status.conditions'

    Example output

    [
      {
        "lastHeartbeatTime": "2022-10-03T11:13:34Z",
        "lastTransitionTime": "2022-10-03T10:14:20Z",
        "message": "kubelet has sufficient memory available",
        "reason": "KubeletHasSufficientMemory",
        "status": "False",
        "type": "MemoryPressure"
      },
      {
        "lastHeartbeatTime": "2022-10-03T11:13:34Z",
        "lastTransitionTime": "2022-10-03T10:14:20Z",
        "message": "kubelet has no disk pressure",
        "reason": "KubeletHasNoDiskPressure",
        "status": "False",
        "type": "DiskPressure"
      },
      {
        "lastHeartbeatTime": "2022-10-03T11:13:34Z",
        "lastTransitionTime": "2022-10-03T10:14:20Z",
        "message": "kubelet has sufficient PID available",
        "reason": "KubeletHasSufficientPID",
        "status": "False",
        "type": "PIDPressure"
      },
      {
        "lastHeartbeatTime": "2022-10-03T11:13:34Z",
        "lastTransitionTime": "2022-10-03T10:14:30Z",
        "message": "kubelet is posting ready status",
        "reason": "KubeletReady",
        "status": "True",
        "type": "Ready"
      }
    ]

Mitigation

Try to identify and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.20. KubeVirtVMStuckInMigratingState

Meaning

This alert fires when a virtual machine (VM) is in a migrating state for more than 5 minutes.

This alert might indicate a problem in the cluster infrastructure, such as network disruptions or insufficient node resources.

Impact

There is no immediate impact. However, if this alert persists, you must investigate the root cause and resolve the issue.

Diagnosis
  1. Check the node resources:

    $ oc get nodes -l node-role.kubernetes.io/worker= -o json | jq '.items | \
      .[].status.allocatable'

    Example output

    {
       "cpu": "5",
       "devices.kubevirt.io/kvm": "1k",
       "devices.kubevirt.io/sev": "0",
       "devices.kubevirt.io/tun": "1k",
       "devices.kubevirt.io/vhost-net": "1k",
       "ephemeral-storage": "33812468066",
       "hugepages-1Gi": "0",
       "hugepages-2Mi": "128Mi",
       "memory": "3783496Ki",
       "pods": "110"
    }

  2. Check the node status conditions:

    $ oc get nodes -l node-role.kubernetes.io/worker= -o json | jq '.items | \
      .[].status.conditions'

    Example output

    [
      {
        "lastHeartbeatTime": "2022-10-03T11:13:34Z",
        "lastTransitionTime": "2022-10-03T10:14:20Z",
        "message": "kubelet has sufficient memory available",
        "reason": "KubeletHasSufficientMemory",
        "status": "False",
        "type": "MemoryPressure"
      },
      {
        "lastHeartbeatTime": "2022-10-03T11:13:34Z",
        "lastTransitionTime": "2022-10-03T10:14:20Z",
        "message": "kubelet has no disk pressure",
        "reason": "KubeletHasNoDiskPressure",
        "status": "False",
        "type": "DiskPressure"
      },
      {
        "lastHeartbeatTime": "2022-10-03T11:13:34Z",
        "lastTransitionTime": "2022-10-03T10:14:20Z",
        "message": "kubelet has sufficient PID available",
        "reason": "KubeletHasSufficientPID",
        "status": "False",
        "type": "PIDPressure"
      },
      {
        "lastHeartbeatTime": "2022-10-03T11:13:34Z",
        "lastTransitionTime": "2022-10-03T10:14:30Z",
        "message": "kubelet is posting ready status",
        "reason": "KubeletReady",
        "status": "True",
        "type": "Ready"
      }
    ]

Mitigation

Check the migration configuration of the virtual machine to ensure that it is appropriate for the workload.

You set a cluster-wide migration configuration by editing the MigrationConfiguration stanza of the KubeVirt custom resource.

You set a migration configuration for a specific scope by creating a migration policy.

You can determine whether a VM is bound to a migration policy by viewing its vm.Status.MigrationState.MigrationPolicyName parameter.
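
For example, assuming the usual camelCase JSON field names for the VMI status, you might inspect a running virtual machine instance with a command such as:

    $ oc get vmi <vmi_name> -n <namespace> \
      -o jsonpath='{.status.migrationState.migrationPolicyName}'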

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.21. KubeVirtVMStuckInStartingState

Meaning

This alert fires when a virtual machine (VM) is in a starting state for more than 5 minutes.

This alert might indicate an issue in the VM configuration, such as a misconfigured priority class or a missing network device.

Impact

There is no immediate impact. However, if this alert persists, you must investigate the root cause and resolve the issue.

Diagnosis
  • Check the virtual machine instance (VMI) details for error conditions:

    $ oc describe vmi <vmi> -n <namespace>

    Example output

    Name:          testvmi-ldgrw
    Namespace:     kubevirt-test-default1
    Labels:        name=testvmi-ldgrw
    Annotations:   kubevirt.io/latest-observed-api-version: v1
                   kubevirt.io/storage-observed-api-version: v1alpha3
    API Version:   kubevirt.io/v1
    Kind:          VirtualMachineInstance
    ...
    Spec:
    ...
      Networks:
        Name:  default
        Pod:
      Priority Class Name:               non-preemtible
      Termination Grace Period Seconds:  0
    Status:
      Conditions:
        Last Probe Time:       2022-10-03T11:08:30Z
        Last Transition Time:  2022-10-03T11:08:30Z
        Message:               virt-launcher pod has not yet been scheduled
        Reason:                PodNotExists
        Status:                False
        Type:                  Ready
        Last Probe Time:       <nil>
        Last Transition Time:  2022-10-03T11:08:30Z
        Message:               failed to create virtual machine pod: pods
        "virt-launcher-testvmi-ldgrw-" is forbidden: no PriorityClass with name
        non-preemtible was found
        Reason:                FailedCreate
        Status:                False
        Type:                  Synchronized
      Guest OS Info:
      Phase:  Pending
      Phase Transition Timestamps:
        Phase:                        Pending
        Phase Transition Timestamp:   2022-10-03T11:08:30Z
      Runtime User:                    0
      Virtual Machine Revision Name:
        revision-start-vm-6f01a94b-3260-4c5a-bbe5-dc98d13e6bea-1
    Events:
      Type     Reason        Age                From                       Message
      ----     ------        ----               ----                       -------
      Warning  FailedCreate  8s (x13 over 28s)  virtualmachine-controller  Error
      creating pod: pods "virt-launcher-testvmi-ldgrw-" is forbidden: no
      PriorityClass with name non-preemtible was found

Mitigation

Ensure that the VM is configured correctly and has the required resources.

A Pending state indicates that the VM has not yet been scheduled. Check the following possible causes (example commands for these checks follow the list):

  • The virt-launcher pod is not scheduled.
  • Topology hints for the VMI are not up to date.
  • Data volume is not provisioned or ready.
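
For example, the following commands (with <namespace> as a placeholder for the project of the VM) check whether a virt-launcher pod exists for the VM and whether its data volumes are ready:

    $ oc get pods -n <namespace> | grep virt-launcher
    $ oc get dv -n <namespace>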

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.22. LowKVMNodesCount

Meaning

This alert fires when fewer than two nodes in the cluster have KVM resources.

Impact

The cluster must have at least two nodes with KVM resources for live migration.

Virtual machines cannot be scheduled or run if no nodes have KVM resources.

Diagnosis
  • Identify the nodes with KVM resources:

    $ oc get nodes -o jsonpath='{.items[*].status.allocatable}' | \
      grep devices.kubevirt.io/kvm
Mitigation

Install KVM on the nodes without KVM resources.

14.14.23. LowReadyVirtControllersCount

Meaning

This alert fires when one or more virt-controller pods are running, but none of these pods has been in the Ready state for the past 5 minutes.

A virt-controller device monitors the custom resource definitions (CRDs) of a virtual machine instance (VMI) and manages the associated pods. The device creates pods for VMIs and manages their lifecycle. The device is critical for cluster-wide virtualization functionality.

Impact

This alert indicates that a cluster-level failure might occur. Actions related to VM lifecycle management, such as launching a new VMI or shutting down an existing VMI, will fail.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Verify a virt-controller device is available:

    $ oc get deployment -n $NAMESPACE virt-controller \
      -o jsonpath='{.status.readyReplicas}'
  3. Check the status of the virt-controller deployment:

    $ oc -n $NAMESPACE get deploy virt-controller -o yaml
  4. Obtain the details of the virt-controller deployment to check for status conditions, such as crashing pods or failures to pull images:

    $ oc -n $NAMESPACE describe deploy virt-controller
  5. Check if any problems occurred with the nodes. For example, they might be in a NotReady state:

    $ oc get nodes
Mitigation

This alert can have multiple causes, including the following:

  • The cluster has insufficient memory.
  • The nodes are down.
  • The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.
  • There are network issues.

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.24. LowReadyVirtOperatorsCount

Meaning

This alert fires when one or more virt-operator pods are running, but none of these pods has been in a Ready state for the last 10 minutes.

The virt-operator is the first Operator to start in a cluster. The virt-operator deployment has a default replica of two virt-operator pods.

Its primary responsibilities include the following:

  • Installing, live-updating, and live-upgrading a cluster
  • Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, virt-launcher, and managing their reconciliation
  • Certain cluster-wide tasks, such as certificate rotation and infrastructure management
Impact

A cluster-level failure might occur. Critical cluster-wide management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might become unavailable. Such a state also triggers the NoReadyVirtOperator alert.

The virt-operator is not directly responsible for virtual machines (VMs) in the cluster. Therefore, its temporary unavailability does not significantly affect VM workloads.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Obtain the name of the virt-operator deployment:

    $ oc -n $NAMESPACE get deploy virt-operator -o yaml
  3. Obtain the details of the virt-operator deployment:

    $ oc -n $NAMESPACE describe deploy virt-operator
  4. Check for node issues, such as a NotReady state:

    $ oc get nodes
Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.25. LowVirtAPICount

Meaning

This alert fires when only one available virt-api pod is detected during a 60-minute period, although at least two nodes are available for scheduling.

Impact

An API call outage might occur during node eviction because the virt-api pod becomes a single point of failure.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the number of available virt-api pods:

    $ oc get deployment -n $NAMESPACE virt-api \
      -o jsonpath='{.status.readyReplicas}'
  3. Check the status of the virt-api deployment for error conditions:

    $ oc -n $NAMESPACE get deploy virt-api -o yaml
  4. Check the nodes for issues such as nodes in a NotReady state:

    $ oc get nodes
Mitigation

Try to identify the root cause and to resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.26. LowVirtControllersCount

Meaning

This alert fires when a low number of virt-controller pods is detected. At least one virt-controller pod must be available in order to ensure high availability. The default number of replicas is 2.

A virt-controller device monitors the custom resource definitions (CRDs) of a virtual machine instance (VMI) and manages the associated pods. The device creates pods for VMIs and manages the lifecycle of the pods. The device is critical for cluster-wide virtualization functionality.

Impact

The responsiveness of OpenShift Virtualization might become negatively affected. For example, certain requests might be missed.

In addition, if another virt-controller instance terminates unexpectedly, OpenShift Virtualization might become completely unresponsive.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Verify that running virt-controller pods are available:

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-controller
  3. Check the virt-controller logs for error messages:

    $ oc -n $NAMESPACE logs <virt-controller>
  4. Obtain the details of the virt-controller pod to check for status conditions such as unexpected termination or a NotReady state:

    $ oc -n $NAMESPACE describe pod/<virt-controller>
Mitigation

This alert can have a variety of causes, including:

  • Not enough memory on the cluster
  • Nodes are down
  • The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.
  • Networking issues

Identify the root cause and fix it, if possible.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.27. LowVirtOperatorCount

Meaning

This alert fires when only one virt-operator pod in a Ready state has been running for the last 60 minutes.

The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:

  • Installing, live-updating, and live-upgrading a cluster
  • Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, virt-launcher, and managing their reconciliation
  • Certain cluster-wide tasks, such as certificate rotation and infrastructure management
Impact

The virt-operator cannot provide high availability (HA) for the deployment. HA requires two or more virt-operator pods in a Ready state. The default deployment is two pods.

The virt-operator is not directly responsible for virtual machines (VMs) in the cluster. Therefore, its decreased availability does not significantly affect VM workloads.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the states of the virt-operator pods:

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
  3. Review the logs of the affected virt-operator pods:

    $ oc -n $NAMESPACE logs <virt-operator>
  4. Obtain the details of the affected virt-operator pods:

    $ oc -n $NAMESPACE describe pod <virt-operator>
Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.28. NetworkAddonsConfigNotReady

Meaning

This alert fires when the NetworkAddonsConfig custom resource (CR) of the Cluster Network Addons Operator (CNAO) is not ready.

CNAO deploys additional networking components on the cluster. This alert indicates that one of the deployed components is not ready.

Impact

Network functionality is affected.

Diagnosis
  1. Check the status conditions of the NetworkAddonsConfig CR to identify the deployment or daemon set that is not ready:

    $ oc get networkaddonsconfig \
      -o custom-columns="":.status.conditions[*].message

    Example output

    DaemonSet "cluster-network-addons/macvtap-cni" update is being processed...

  2. Check the configuration of the component’s daemon set for errors:

    $ oc -n cluster-network-addons get daemonset <daemonset> -o yaml
  3. Check the component’s logs:

    $ oc -n cluster-network-addons logs <pod>
  4. Check the component’s details for error conditions:

    $ oc -n cluster-network-addons describe pod <pod>
Mitigation

Try to identify the root cause and resolve the issue.
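
For example, listing all workloads in the CNAO namespace gives a quick overview of which component is not fully rolled out. A minimal sketch, using the cluster-network-addons namespace from the diagnosis steps:

    $ oc -n cluster-network-addons get daemonsets,deployments

Compare the ready and desired counts to spot the component that matches the message in the NetworkAddonsConfig status conditions.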

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.29. NoLeadingVirtOperator

Meaning

This alert fires when no virt-operator pod with a leader lease has been detected for 10 minutes, although the virt-operator pods are in a Ready state. The alert indicates that no leader pod is available.

The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:

  • Installing, live-updating, and live-upgrading a cluster
  • Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, virt-launcher, and managing their reconciliation
  • Certain cluster-wide tasks, such as certificate rotation and infrastructure management

The virt-operator deployment runs two pods by default, with one pod holding a leader lease.

Impact

This alert indicates a failure at the cluster level. As a result, critical cluster-wide management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might not be available.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A -o \
      custom-columns="":.metadata.namespace)"
  2. Obtain the status of the virt-operator pods:

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
  3. Check the virt-operator pod logs to determine the leader status:

    $ oc -n $NAMESPACE logs <virt-operator> | grep lead

    Leader pod example:

    {"component":"virt-operator","level":"info","msg":"Attempting to acquire
    leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:18.635387Z"}
    I1130 12:15:18.635452       1 leaderelection.go:243] attempting to acquire
    leader lease <namespace>/virt-operator...
    I1130 12:15:19.216582       1 leaderelection.go:253] successfully acquired
    lease <namespace>/virt-operator
    {"component":"virt-operator","level":"info","msg":"Started leading",
    "pos":"application.go:385","timestamp":"2021-11-30T12:15:19.216836Z"}

    Non-leader pod example:

    {"component":"virt-operator","level":"info","msg":"Attempting to acquire
    leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:20.533696Z"}
    I1130 12:15:20.533792       1 leaderelection.go:243] attempting to acquire
    leader lease <namespace>/virt-operator...
  4. Obtain the details of the affected virt-operator pods:

    $ oc -n $NAMESPACE describe pod <virt-operator>
Mitigation

Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.
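
If it is not clear which pod holds the leader lease, you can check all virt-operator pods in one pass. A minimal sketch, reusing the NAMESPACE variable and the kubevirt.io=virt-operator label from the diagnosis steps:

    $ for pod in $(oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator \
        -o jsonpath='{.items[*].metadata.name}'); do
        echo "== $pod =="
        oc -n $NAMESPACE logs $pod | grep -i lead
      done

A pod that logs "Started leading" holds the lease; a pod that only logs "attempting to acquire leader lease" does not.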

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.30. NoReadyVirtController

Meaning

This alert fires when no available virt-controller devices have been detected for 5 minutes.

The virt-controller devices monitor the custom resource definitions of virtual machine instances (VMIs) and manage the associated pods. The devices create pods for VMIs and manage the lifecycle of the pods.

Therefore, virt-controller devices are critical for all cluster-wide virtualization functionality.

Impact

Any actions related to VM lifecycle management fail. This notably includes launching a new VMI or shutting down an existing VMI.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Verify the number of virt-controller devices:

    $ oc get deployment -n $NAMESPACE virt-controller \
      -o jsonpath='{.status.readyReplicas}'
  3. Check the status of the virt-controller deployment:

    $ oc -n $NAMESPACE get deploy virt-controller -o yaml
  4. Obtain the details of the virt-controller deployment to check for status conditions such as crashing pods or failure to pull images:

    $ oc -n $NAMESPACE describe deploy virt-controller
  5. Obtain the details of the virt-controller pods:

    $ oc get pods -n $NAMESPACE | grep virt-controller
  6. Check the logs of the virt-controller pods for error messages:

    $ oc logs -n $NAMESPACE <virt-controller>
  7. Check the nodes for problems, such as a NotReady state:

    $ oc get nodes
Mitigation

Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.31. NoReadyVirtOperator

Meaning

This alert fires when no virt-operator pod in a Ready state has been detected for 10 minutes.

The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:

  • Installing, live-updating, and live-upgrading a cluster
  • Monitoring the life cycle of top-level controllers, such as virt-controller, virt-handler, virt-launcher, and managing their reconciliation
  • Certain cluster-wide tasks, such as certificate rotation and infrastructure management

The default deployment is two virt-operator pods.

Impact

This alert indicates a cluster-level failure. Critical cluster management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might not be available.

The virt-operator is not directly responsible for virtual machines in the cluster. Therefore, its temporary unavailability does not significantly affect workloads.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the configuration and status of the virt-operator deployment:

    $ oc -n $NAMESPACE get deploy virt-operator -o yaml
  3. Generate the description of the virt-operator deployment:

    $ oc -n $NAMESPACE describe deploy virt-operator
  4. Check for node issues, such as a NotReady state:

    $ oc get nodes
Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.32. OrphanedVirtualMachineInstances

Meaning

This alert fires when a virtual machine instance (VMI), or virt-launcher pod, runs on a node that does not have a running virt-handler pod. Such a VMI is called orphaned.

Impact

Orphaned VMIs cannot be managed.

Diagnosis
  1. Check the status of the virt-handler pods to view the nodes on which they are running:

    $ oc get pods --all-namespaces -o wide -l kubevirt.io=virt-handler
  2. Check the status of the VMIs to identify VMIs running on nodes that do not have a running virt-handler pod:

    $ oc get vmis --all-namespaces
  3. Check the status of the virt-handler daemon:

    $ oc get daemonset virt-handler --all-namespaces

    Example output

    NAME          DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE ...
    virt-handler  2        2        2      2           2         ...

    The daemon set is considered healthy if the Desired, Ready, and Available columns contain the same value.

  4. If the virt-handler daemon set is not healthy, check the virt-handler daemon set for pod deployment issues:

    $ oc get daemonset virt-handler --all-namespaces -o yaml | jq .status
  5. Check the nodes for issues such as a NotReady status:

    $ oc get nodes
  6. Check the spec.workloads stanza of the KubeVirt custom resource (CR) for a workloads placement policy:

    $ oc get kubevirt kubevirt --all-namespaces -o yaml
Mitigation

If a workloads placement policy is configured, add the node with the VMI to the policy.
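
The exact shape of the policy depends on your cluster, but a workloads placement policy commonly uses a node selector. The following sketch shows the general form only; the example-label/virt label and the node name are hypothetical:

    apiVersion: kubevirt.io/v1
    kind: KubeVirt
    metadata:
      name: kubevirt
    spec:
      workloads:
        nodePlacement:
          nodeSelector:
            example-label/virt: "true"

To add the node that runs the VMI to such a policy, label the node so that it matches the selector:

    $ oc label node <node_name> example-label/virt=true

In OpenShift Virtualization, this stanza is typically managed through the HyperConverged CR, so make the change wherever the policy is actually defined in your cluster.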

Possible causes for the removal of a virt-handler pod from a node include changes to the node’s taints and tolerations or to a pod’s scheduling rules.

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.33. OutdatedVirtualMachineInstanceWorkloads

Meaning

This alert fires when running virtual machine instances (VMIs) in outdated virt-launcher pods are detected 24 hours after the OpenShift Virtualization control plane has been updated.

Impact

Outdated VMIs might not have access to new OpenShift Virtualization features.

Outdated VMIs will not receive the security fixes associated with the virt-launcher pod update.

Diagnosis
  1. Identify the outdated VMIs:

    $ oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces
  2. Check the KubeVirt custom resource (CR) to determine whether workloadUpdateMethods is configured in the workloadUpdateStrategy stanza:

    $ oc get kubevirt kubevirt --all-namespaces -o yaml
  3. Check each outdated VMI to determine whether it is live-migratable:

    $ oc get vmi <vmi> -o yaml

    Example output

    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstance
    ...
      status:
        conditions:
        - lastProbeTime: null
          lastTransitionTime: null
          message: cannot migrate VMI which does not use masquerade
          to connect to the pod network
          reason: InterfaceNotLiveMigratable
          status: "False"
          type: LiveMigratable

Mitigation
Configuring automated workload updates

Update the HyperConverged CR to enable automatic workload updates.
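
A minimal sketch of the relevant stanza; the workloadUpdateMethods values are based on the upstream KubeVirt API, so confirm them against the HyperConverged CR in your cluster before applying:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      workloadUpdateStrategy:
        workloadUpdateMethods:
        - LiveMigrate
        - Evict

With both methods enabled, live-migratable VMIs are moved into updated virt-launcher pods automatically, and non-migratable VMIs are evicted and restarted according to the run strategy of the VM.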

Stopping a VM associated with a non-live-migratable VMI
  • If a VMI is not live-migratable and if runStrategy: Always is set in the corresponding VirtualMachine object, you can update the VMI by manually stopping the virtual machine (VM):

    $ virtctl stop --namespace <namespace> <vm>

A new VMI spins up immediately in an updated virt-launcher pod to replace the stopped VMI. This is the equivalent of a restart action.

Note

Manually stopping a live-migratable VM is destructive and not recommended because it interrupts the workload.

Migrating a live-migratable VMI

If a VMI is live-migratable, you can update it by creating a VirtualMachineInstanceMigration object that targets a specific running VMI. The VMI is migrated into an updated virt-launcher pod.

  1. Create a VirtualMachineInstanceMigration manifest and save it as migration.yaml:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstanceMigration
    metadata:
      name: <migration_name>
      namespace: <namespace>
    spec:
      vmiName: <vmi_name>
  2. Create a VirtualMachineInstanceMigration object to trigger the migration (a sketch for checking its progress follows this procedure):

    $ oc create -f migration.yaml
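
You can then watch the progress of the migration. A minimal sketch; the status.phase field is taken from the upstream KubeVirt API:

    $ oc get virtualmachineinstancemigration <migration_name> -n <namespace> \
      -o jsonpath='{.status.phase}{"\n"}'

When the phase reports Succeeded, the VMI is running in an updated virt-launcher pod.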

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.34. SSPCommonTemplatesModificationReverted

Meaning

This alert fires when the Scheduling, Scale, and Performance (SSP) Operator reverts changes to common templates as part of its reconciliation procedure.

The SSP Operator deploys and reconciles the common templates and the Template Validator. If a user or script changes a common template, the changes are reverted by the SSP Operator.

Impact

Changes to common templates are overwritten.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
      awk '{print $1}')"
  2. Check the ssp-operator logs for templates with reverted changes:

    $ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator | \
      grep 'common template' -C 3
Mitigation

Try to identify and resolve the cause of the changes.

Ensure that changes are made only to copies of templates, and not to the templates themselves.
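
For example, instead of editing a common template in place, export it, rename it, and create the copy in your own project. A sketch, assuming the common templates are in the openshift namespace:

    $ oc get template <template_name> -n openshift -o yaml > my-template.yaml
    # Edit my-template.yaml: change metadata.name and metadata.namespace, and
    # remove server-generated fields such as resourceVersion and uid.
    $ oc create -f my-template.yaml

The SSP Operator reverts only the original templates, so the copy keeps your changes.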

14.14.35. SSPFailingToReconcile

Meaning

This alert fires when the reconcile cycle of the Scheduling, Scale and Performance (SSP) Operator fails repeatedly, although the SSP Operator is running.

The SSP Operator is responsible for deploying and reconciling the common templates and the Template Validator.

Impact

Dependent components might not be deployed. Changes in the components might not be reconciled. As a result, the common templates or the Template Validator might not be updated or reset if they fail.

Diagnosis
  1. Export the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
      awk '{print $1}')"
  2. Obtain the details of the ssp-operator pods:

    $ oc -n $NAMESPACE describe pods -l control-plane=ssp-operator
  3. Check the ssp-operator logs for errors:

    $ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator
  4. Obtain the status of the virt-template-validator pods:

    $ oc -n $NAMESPACE get pods -l name=virt-template-validator
  5. Obtain the details of the virt-template-validator pods:

    $ oc -n $NAMESPACE describe pods -l name=virt-template-validator
  6. Check the virt-template-validator logs for errors:

    $ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator
Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.36. SSPHighRateRejectedVms

Meaning

This alert fires when a user or script attempts to create or modify a large number of virtual machines (VMs) by using an invalid configuration.

Impact

The VMs are not created or modified. As a result, the environment might not behave as expected.

Diagnosis
  1. Export the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
      awk '{print $1}')"
  2. Check the virt-template-validator logs for errors that might indicate the cause:

    $ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator

    Example output

    {"component":"kubevirt-template-validator","level":"info","msg":"evalution
    summary for ubuntu-3166wmdbbfkroku0:\nminimal-required-memory applied: FAIL,
    value 1073741824 is lower than minimum [2147483648]\n\nsucceeded=false",
    "pos":"admission.go:25","timestamp":"2021-09-28T17:59:10.934470Z"}

Mitigation

Try to identify the root cause and resolve the issue.
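
For example, the log entry above shows a VM rejected because its requested memory is below the template's minimum. A sketch for checking the memory request of a suspect VM; the exact path can differ depending on how the VM is defined:

    $ oc get vm <vm> -n <namespace> \
      -o jsonpath='{.spec.template.spec.domain.resources.requests.memory}{"\n"}'

Adjust the VM configuration, or the script that generates it, so that the values satisfy the template validation rules.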

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.37. SSPOperatorDown

Meaning

This alert fires when all the Scheduling, Scale and Performance (SSP) Operator pods are down.

The SSP Operator is responsible for deploying and reconciling the common templates and the Template Validator.

Impact

Dependent components might not be deployed. Changes in the components might not be reconciled. As a result, the common templates or the Template Validator might not be updated or reset if they fail.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
      awk '{print $1}')"
  2. Check the status of the ssp-operator pods.

    $ oc -n $NAMESPACE get pods -l control-plane=ssp-operator
  3. Obtain the details of the ssp-operator pods:

    $ oc -n $NAMESPACE describe pods -l control-plane=ssp-operator
  4. Check the ssp-operator logs for error messages:

    $ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator
Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.38. SSPTemplateValidatorDown

Meaning

This alert fires when all the Template Validator pods are down.

The Template Validator checks virtual machines (VMs) to ensure that they do not violate their templates.

Impact

VMs are not validated against their templates. As a result, VMs might be created with specifications that do not match their respective workloads.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
      awk '{print $1}')"
  2. Obtain the status of the virt-template-validator pods:

    $ oc -n $NAMESPACE get pods -l name=virt-template-validator
  3. Obtain the details of the virt-template-validator pods:

    $ oc -n $NAMESPACE describe pods -l name=virt-template-validator
  4. Check the virt-template-validator logs for error messages:

    $ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator
Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.39. VirtAPIDown

Meaning

This alert fires when all the OpenShift Virtualization API server (virt-api) pods are down.

Impact

OpenShift Virtualization objects cannot send API calls.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-api pods:

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
  3. Check the status of the virt-api deployment:

    $ oc -n $NAMESPACE get deploy virt-api -o yaml
  4. Check the virt-api deployment details for issues such as crashing pods or image pull failures:

    $ oc -n $NAMESPACE describe deploy virt-api
  5. Check for issues such as nodes in a NotReady state:

    $ oc get nodes
Mitigation

Try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.40. VirtApiRESTErrorsBurst

Meaning

More than 80% of REST calls have failed in the virt-api pods in the last 5 minutes.

Impact

A very high rate of failed REST calls to virt-api might lead to slow response and execution of API calls, and potentially to API calls being completely dismissed.

However, currently running virtual machine workloads are not likely to be affected.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Obtain the list of virt-api pods on your deployment:

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
  3. Check the virt-api logs for error messages:

    $ oc logs -n $NAMESPACE <virt-api>
  4. Obtain the details of the virt-api pods:

    $ oc describe -n $NAMESPACE <virt-api>
  5. Check if any problems occurred with the nodes. For example, they might be in a NotReady state:

    $ oc get nodes
  6. Check the status of the virt-api deployment:

    $ oc -n $NAMESPACE get deploy virt-api -o yaml
  7. Obtain the details of the virt-api deployment:

    $ oc -n $NAMESPACE describe deploy virt-api
Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.41. VirtApiRESTErrorsHigh

Meaning

More than 5% of REST calls have failed in the virt-api pods in the last 60 minutes.

Impact

A high rate of failed REST calls to virt-api might lead to slow response and execution of API calls.

However, currently running virtual machine workloads are not likely to be affected.

Diagnosis
  1. Set the NAMESPACE environment variable as follows:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-api pods:

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
  3. Check the virt-api logs:

    $ oc logs -n $NAMESPACE <virt-api>
  4. Obtain the details of the virt-api pods:

    $ oc describe -n $NAMESPACE <virt-api>
  5. Check if any problems occurred with the nodes. For example, they might be in a NotReady state:

    $ oc get nodes
  6. Check the status of the virt-api deployment:

    $ oc -n $NAMESPACE get deploy virt-api -o yaml
  7. Obtain the details of the virt-api deployment:

    $ oc -n $NAMESPACE describe deploy virt-api
Mitigation

Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.42. VirtControllerDown

Meaning

No running virt-controller pod has been detected for 5 minutes.

Impact

Any actions related to virtual machine (VM) lifecycle management fail. This notably includes launching a new virtual machine instance (VMI) or shutting down an existing VMI.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-controller deployment:

    $ oc get deployment -n $NAMESPACE virt-controller -o yaml
  3. Review the logs of the virt-controller pod:

    $ oc logs -n $NAMESPACE <virt-controller>
Mitigation

This alert can have a variety of causes, including the following:

  • Node resource exhaustion
  • Not enough memory on the cluster
  • Nodes are down
  • The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.
  • Networking issues

Identify the root cause and fix it, if possible.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.43. VirtControllerRESTErrorsBurst

Meaning

More than 80% of REST calls in virt-controller pods failed in the last 5 minutes.

The virt-controller has likely fully lost the connection to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
  • The virt-controller pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact

Status updates are not propagated and actions like migrations cannot take place. However, running workloads are not impacted.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. List the available virt-controller pods:

    $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-controller
  3. Check the virt-controller logs for error messages when connecting to the API server:

    $ oc logs -n $NAMESPACE <virt-controller>
Mitigation
  • If the virt-controller pod cannot connect to the API server, delete the pod to force a restart:

    $ oc delete -n $NAMESPACE <virt-controller>
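
If you suspect that the API server itself is overloaded or unreachable (the first cause listed above), a quick health probe can help before you dig into metrics. A minimal sketch using the readiness endpoint of the API server:

    $ oc get --raw='/readyz?verbose'

Slow or failing responses point to an API server problem rather than a virt-controller problem.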

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.44. VirtControllerRESTErrorsHigh

Meaning

More than 5% of REST calls failed in virt-controller in the last 60 minutes.

This is most likely because virt-controller has partially lost connection to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
  • The virt-controller pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact

Node-related actions, such as starting, migrating, and scheduling virtual machines, are delayed. Running workloads are not affected, but reporting their current status might be delayed.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. List the available virt-controller pods:

    $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-controller
  3. Check the virt-controller logs for error messages when connecting to the API server:

    $ oc logs -n $NAMESPACE <virt-controller>
Mitigation
  • If the virt-controller pod cannot connect to the API server, delete the pod to force a restart:

    $ oc delete -n $NAMESPACE <virt-controller>

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.45. VirtHandlerDaemonSetRolloutFailing

Meaning

The virt-handler daemon set has failed to deploy on one or more worker nodes after 15 minutes.

Impact

This alert is a warning. It does not indicate that the virt-handler pods have failed to deploy on all nodes. Therefore, the normal lifecycle of virtual machines is not affected unless the cluster is overloaded.

Diagnosis

Identify worker nodes that do not have a running virt-handler pod:

  1. Export the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-handler pods to identify pods that have not deployed:

    $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
  3. Obtain the name of the worker node of the virt-handler pod:

    $ oc -n $NAMESPACE get pod <virt-handler> -o jsonpath='{.spec.nodeName}'
Mitigation

If the virt-handler pods failed to deploy because of insufficient resources, you can delete other pods on the affected worker node.
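
Before deleting anything, you can list what else is scheduled on the affected worker node and how much it requests. A minimal sketch using the node name obtained in the diagnosis steps:

    $ oc get pods -A --field-selector spec.nodeName=<node_name> -o wide
    $ oc describe node <node_name> | grep -A 10 'Allocated resources'

Free capacity by deleting or rescheduling non-critical pods; the daemon set controller then schedules the virt-handler pod automatically.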

14.14.46. VirtHandlerRESTErrorsBurst

Meaning

More than 80% of REST calls failed in virt-handler in the last 5 minutes. This alert usually indicates that the virt-handler pods cannot connect to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
  • The virt-handler pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact

Status updates are not propagated and node-related actions, such as migrations, fail. However, running workloads on the affected node are not impacted.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-handler pod:

    $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
  3. Check the virt-handler logs for error messages when connecting to the API server:

    $ oc logs -n $NAMESPACE <virt-handler>
Mitigation
  • If the virt-handler cannot connect to the API server, delete the pod to force a restart:

    $ oc delete -n $NAMESPACE <virt-handler>

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.47. VirtHandlerRESTErrorsHigh

Meaning

More than 5% of REST calls failed in virt-handler in the last 60 minutes. This alert usually indicates that the virt-handler pods have partially lost connection to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
  • The virt-handler pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact

Node-related actions, such as starting and migrating workloads, are delayed on the node that virt-handler is running on. Running workloads are not affected, but reporting their current status might be delayed.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-handler pod:

    $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
  3. Check the virt-handler logs for error messages when connecting to the API server:

    $ oc logs -n $NAMESPACE <virt-handler>
Mitigation
  • If the virt-handler cannot connect to the API server, delete the pod to force a restart:

    $ oc delete -n $NAMESPACE <virt-handler>

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.48. VirtOperatorDown

Meaning

This alert fires when no virt-operator pod in the Running state has been detected for 10 minutes.

The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:

  • Installing, live-updating, and live-upgrading a cluster
  • Monitoring the life cycle of top-level controllers, such as virt-controller, virt-handler, virt-launcher, and managing their reconciliation
  • Certain cluster-wide tasks, such as certificate rotation and infrastructure management

The virt-operator deployment runs two pods by default.

Impact

This alert indicates a failure at the cluster level. Critical cluster-wide management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might not be available.

The virt-operator is not directly responsible for virtual machines (VMs) in the cluster. Therefore, its temporary unavailability does not significantly affect VM workloads.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-operator deployment:

    $ oc -n $NAMESPACE get deploy virt-operator -o yaml
  3. Obtain the details of the virt-operator deployment:

    $ oc -n $NAMESPACE describe deploy virt-operator
  4. Check the status of the virt-operator pods:

    $ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-operator
  5. Check for node issues, such as a NotReady state:

    $ oc get nodes
Mitigation

Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.49. VirtOperatorRESTErrorsBurst

Meaning

This alert fires when more than 80% of the REST calls in the virt-operator pods failed in the last 5 minutes. This usually indicates that the virt-operator pods cannot connect to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
  • The virt-operator pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact

Cluster-level actions, such as upgrading and controller reconciliation, might not be available.

However, workloads such as virtual machines (VMs) and VM instances (VMIs) are not likely to be affected.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-operator pods:

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
  3. Check the virt-operator logs for error messages when connecting to the API server:

    $ oc -n $NAMESPACE logs <virt-operator>
  4. Obtain the details of the virt-operator pod:

    $ oc -n $NAMESPACE describe pod <virt-operator>
Mitigation
  • If the virt-operator pod cannot connect to the API server, delete the pod to force a restart:

    $ oc delete -n $NAMESPACE <virt-operator>

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.50. VirtOperatorRESTErrorsHigh

Meaning

This alert fires when more than 5% of the REST calls in virt-operator pods failed in the last 60 minutes. This usually indicates the virt-operator pods cannot connect to the API server.

This error is frequently caused by one of the following problems:

  • The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
  • The virt-operator pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact

Cluster-level actions, such as upgrading and controller reconciliation, might be delayed.

However, workloads such as virtual machines (VMs) and VM instances (VMIs) are not likely to be affected.

Diagnosis
  1. Set the NAMESPACE environment variable:

    $ export NAMESPACE="$(oc get kubevirt -A \
      -o custom-columns="":.metadata.namespace)"
  2. Check the status of the virt-operator pods:

    $ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
  3. Check the virt-operator logs for error messages when connecting to the API server:

    $ oc -n $NAMESPACE logs <virt-operator>
  4. Obtain the details of the virt-operator pod:

    $ oc -n $NAMESPACE describe pod <virt-operator>
Mitigation
  • If the virt-operator pod cannot connect to the API server, delete the pod to force a restart:

    $ oc delete -n $NAMESPACE <virt-operator>

If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.

14.14.51. VMCannotBeEvicted

Meaning

This alert fires when the eviction strategy of a virtual machine (VM) is set to LiveMigrate but the VM is not migratable.

Impact

Non-migratable VMs prevent node eviction. This condition affects operations such as node drain and updates.

Diagnosis
  1. Check the VMI configuration to determine whether the value of evictionStrategy is LiveMigrate:

    $ oc get vmis -o yaml
  2. Check for a False status in the LIVE-MIGRATABLE column to identify VMIs that are not migratable:

    $ oc get vmis -o wide
  3. Obtain the details of the VMI and check spec.conditions to identify the issue:

    $ oc get vmi <vmi> -o yaml

    Example output

    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: null
        message: cannot migrate VMI which does not use masquerade to connect
        to the pod network
        reason: InterfaceNotLiveMigratable
        status: "False"
        type: LiveMigratable

Mitigation

Either change the evictionStrategy of the VM so that it shuts down during node eviction instead of live migrating, or resolve the issue that prevents the VMI from migrating.
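
To find every affected VM at once, you can list the eviction strategy of all VMIs alongside their names. A minimal sketch; the field path is taken from the upstream KubeVirt API:

    $ oc get vmis -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,EVICTION-STRATEGY:.spec.evictionStrategy

Cross-reference the result with the LIVE-MIGRATABLE column from the diagnosis steps to identify VMs that block node eviction.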

14.15. Collecting data for Red Hat Support

When you submit a support case to Red Hat Support, it is helpful to provide debugging information for OpenShift Container Platform and OpenShift Virtualization by using the following tools:

must-gather tool
The must-gather tool collects diagnostic information, including resource definitions and service logs.
Prometheus
Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing.
Alertmanager
The Alertmanager service handles alerts received from Prometheus. The Alertmanager is also responsible for sending the alerts to external notification systems.

14.15.1. Collecting data about your environment

Collecting data about your environment minimizes the time required to analyze and determine the root cause.

Prerequisites

  • Set the retention time for Prometheus metrics data to a minimum of seven days (a configuration sketch follows this list).
  • Configure the Alertmanager to capture relevant alerts and to send them to a dedicated mailbox so that they can be viewed and persisted outside the cluster.
  • Record the exact number of affected nodes and virtual machines.
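
One common way to set the Prometheus retention time on OpenShift Container Platform is through the cluster-monitoring-config config map. A minimal sketch; confirm the procedure against the monitoring documentation for your cluster version:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        prometheusK8s:
          retention: 7d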

Procedure

  1. Collect must-gather data for the cluster by using the default must-gather image.
  2. Collect must-gather data for Red Hat OpenShift Data Foundation, if necessary.
  3. Collect must-gather data for OpenShift Virtualization by using the OpenShift Virtualization must-gather image.
  4. Collect Prometheus metrics for the cluster.

14.15.1.1. Additional resources

14.15.2. Collecting data about virtual machines

Collecting data about malfunctioning virtual machines (VMs) minimizes the time required to analyze and determine the root cause.

Prerequisites

  • Windows VMs:

    • Record the Windows patch update details for Red Hat Support.
    • Install the latest version of the VirtIO drivers. The VirtIO drivers include the QEMU guest agent.
    • If Remote Desktop Protocol (RDP) is enabled, try to connect to the VMs with RDP to determine whether there is a problem with the connection software.

Procedure

  1. Collect detailed must-gather data about the malfunctioning VMs.
  2. Collect screenshots of VMs that have crashed before you restart them.
  3. Record factors that the malfunctioning VMs have in common. For example, the VMs have the same host or network.

14.15.2.1. Additional resources

14.15.3. Using the must-gather tool for OpenShift Virtualization

You can collect data about OpenShift Virtualization resources by running the must-gather command with the OpenShift Virtualization image.

The default data collection includes information about the following resources:

  • OpenShift Virtualization Operator namespaces, including child objects
  • OpenShift Virtualization custom resource definitions
  • Namespaces that contain virtual machines
  • Basic virtual machine definitions

Procedure

  • Run the following command to collect data about OpenShift Virtualization:

    $ oc adm must-gather --image-stream=openshift/must-gather \
      --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.12.11

14.15.3.1. must-gather tool options

You can specify a combination of scripts and environment variables for the following options:

  • Collecting detailed virtual machine (VM) information from a namespace
  • Collecting detailed information about specified VMs
  • Collecting image, image-stream, and image-stream-tags information
  • Limiting the maximum number of parallel processes used by the must-gather tool
14.15.3.1.1. Parameters

Environment variables

You can specify environment variables for a compatible script.

NS=<namespace_name>
Collect virtual machine information, including virt-launcher pod details, from the namespace that you specify. The VirtualMachine and VirtualMachineInstance CR data is collected for all namespaces.
VM=<vm_name>
Collect details about a particular virtual machine. To use this option, you must also specify a namespace by using the NS environment variable.
PROS=<number_of_processes>
Modify the maximum number of parallel processes that the must-gather tool uses. The default value is 5.

Important

Using too many parallel processes can cause performance issues. Increasing the maximum number of parallel processes is not recommended.

Scripts

Each script is compatible only with certain environment variable combinations.

/usr/bin/gather
Use the default must-gather script, which collects cluster data from all namespaces and includes only basic VM information. This script is compatible only with the PROS variable.
/usr/bin/gather --vms_details
Collect VM log files, VM definitions, control-plane logs, and namespaces that belong to OpenShift Virtualization resources. Specifying namespaces includes their child objects. If you use this parameter without specifying a namespace or VM, the must-gather tool collects this data for all VMs in the cluster. This script is compatible with all environment variables, but you must specify a namespace if you use the VM variable.
/usr/bin/gather --images
Collect image, image-stream, and image-stream-tags custom resource information. This script is compatible only with the PROS variable.
14.15.3.1.2. Usage and examples

Environment variables are optional. You can run a script by itself or with one or more compatible environment variables.

Table 14.1. Compatible parameters

Script and compatible environment variables

/usr/bin/gather

  • PROS=<number_of_processes>

/usr/bin/gather --vms_details

  • For a namespace: NS=<namespace_name>
  • For a VM: VM=<vm_name> NS=<namespace_name>
  • PROS=<number_of_processes>

/usr/bin/gather --images

  • PROS=<number_of_processes>

Syntax

$ oc adm must-gather \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.12.11 \
  -- <environment_variable_1> <environment_variable_2> <script_name>

Default data collection parallel processes

By default, five processes run in parallel.

$ oc adm must-gather \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.12.11 \
  -- PROS=5 /usr/bin/gather 1
1
You can modify the number of parallel processes by setting PROS to a different value.

Detailed VM information

The following command collects detailed VM information for the my-vm VM in the mynamespace namespace:

$ oc adm must-gather \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.12.11 \
  -- NS=mynamespace VM=my-vm /usr/bin/gather --vms_details 1
1
The NS environment variable is mandatory if you use the VM environment variable.

Image, image-stream, and image-stream-tags information

The following command collects image, image-stream, and image-stream-tags information from the cluster:

$ oc adm must-gather \
  --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.12.11 \
  -- /usr/bin/gather --images

14.15.3.2. Additional resources