Chapter 14. Support
14.1. Support overview
You can collect data about your environment, monitor the health of your cluster and virtual machines (VMs), and troubleshoot OpenShift Virtualization resources with the following tools.
14.1.1. Web console
The OpenShift Container Platform web console displays resource usage, alerts, events, and trends for your cluster and for OpenShift Virtualization components and resources.
Table 14.1. Web console pages for monitoring and troubleshooting
| Page | Description |
|---|---|
| Overview page | Cluster details, status, alerts, inventory, and resource usage |
| Virtualization → Overview tab | OpenShift Virtualization resources, usage, alerts, and status |
| Virtualization → Top consumers tab | Top consumers of CPU, memory, and storage |
| Virtualization → Migrations tab | Progress of live migrations |
| VirtualMachines → VirtualMachine → VirtualMachine details → Metrics tab | VM resource usage, storage, network, and migration |
| VirtualMachines → VirtualMachine → VirtualMachine details → Events tab | List of VM events |
| VirtualMachines → VirtualMachine → VirtualMachine details → Diagnostics tab | VM status conditions and volume snapshot status |
14.1.2. Collecting data for Red Hat Support
When you submit a support case to Red Hat Support, it is helpful to provide debugging information. You can gather debugging information by performing the following steps:
- Collecting data about your environment
- Configure Prometheus and Alertmanager and collect `must-gather` data for OpenShift Container Platform and OpenShift Virtualization.
- Collecting data about VMs
- Collect `must-gather` data and memory dumps from VMs.
- `must-gather` tool for OpenShift Virtualization
- Configure and use the `must-gather` tool.
14.1.3. Monitoring
You can monitor the health of your cluster and VMs. For details about monitoring tools, see the Monitoring overview.
14.1.4. Troubleshooting
Troubleshoot OpenShift Virtualization components and VMs and resolve issues that trigger alerts in the web console.
- Events
- View important life-cycle information for VMs, namespaces, and resources.
- Logs
- View and configure logs for OpenShift Virtualization components and VMs.
- Runbooks
- Diagnose and resolve issues that trigger OpenShift Virtualization alerts in the web console.
- Troubleshooting data volumes
- Troubleshoot data volumes by analyzing conditions and events.
14.2. Collecting data for Red Hat Support
When you submit a support case to Red Hat Support, it is helpful to provide debugging information for OpenShift Container Platform and OpenShift Virtualization by using the following tools:
- `must-gather` tool
- The `must-gather` tool collects diagnostic information, including resource definitions and service logs.
- Prometheus
- Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing.
- Alertmanager
- The Alertmanager service handles alerts received from Prometheus. The Alertmanager is also responsible for sending the alerts to external notification systems.
For information about the OpenShift Container Platform monitoring stack, see About OpenShift Container Platform monitoring.
14.2.1. Collecting data about your environment
Collecting data about your environment minimizes the time required to analyze and determine the root cause.
Prerequisites
- Set the retention time for Prometheus metrics data to a minimum of seven days.
- Configure the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox so that they can be viewed and persisted outside the cluster.
- Record the exact number of affected nodes and virtual machines.
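The Prometheus retention period is set through the cluster monitoring config map. The following sketch assumes the standard OpenShift Container Platform monitoring stack conventions (`cluster-monitoring-config` in the `openshift-monitoring` namespace); verify the field names against your cluster version:

```yaml
# Illustrative example: extend Prometheus metrics retention to 7 days.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: 7d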
14.2.2. Collecting data about virtual machines
Collecting data about malfunctioning virtual machines (VMs) minimizes the time required to analyze and determine the root cause.
Prerequisites
- Linux VMs: Install the latest QEMU guest agent.
Windows VMs:
- Record the Windows patch update details.
- Install the latest VirtIO drivers.
- Install the latest QEMU guest agent.
- If Remote Desktop Protocol (RDP) is enabled, try to connect to the VMs with RDP by using the web console or the command line to determine whether there is a problem with the connection software.
Procedure
- Collect `must-gather` data for the VMs by using the `/usr/bin/gather` script.
- Collect screenshots of VMs that have crashed before you restart them.
- Collect memory dumps from VMs before remediation attempts.
- Record factors that the malfunctioning VMs have in common. For example, the VMs have the same host or network.
14.2.3. Using the must-gather tool for OpenShift Virtualization
You can collect data about OpenShift Virtualization resources by running the must-gather command with the OpenShift Virtualization image.
The default data collection includes information about the following resources:
- OpenShift Virtualization Operator namespaces, including child objects
- OpenShift Virtualization custom resource definitions
- Namespaces that contain virtual machines
- Basic virtual machine definitions
Procedure
Run the following command to collect data about OpenShift Virtualization:
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.13.3 \
-- /usr/bin/gather
14.2.3.1. must-gather tool options
You can specify a combination of scripts and environment variables for the following options:
- Collecting detailed virtual machine (VM) information from a namespace
- Collecting detailed information about specified VMs
- Collecting image, image-stream, and image-stream-tags information
- Limiting the maximum number of parallel processes used by the `must-gather` tool
14.2.3.1.1. Parameters
Environment variables

You can specify environment variables for a compatible script.

- `NS=<namespace_name>`
- Collect virtual machine information, including `virt-launcher` pod details, from the namespace that you specify. The `VirtualMachine` and `VirtualMachineInstance` CR data is collected for all namespaces.
- `VM=<vm_name>`
- Collect details about a particular virtual machine. To use this option, you must also specify a namespace by using the `NS` environment variable.
- `PROS=<number_of_processes>`
- Modify the maximum number of parallel processes that the `must-gather` tool uses. The default value is `5`. Important: Using too many parallel processes can cause performance issues. Increasing the maximum number of parallel processes is not recommended.

Scripts

Each script is compatible only with certain environment variable combinations.

- `/usr/bin/gather`
- Use the default `must-gather` script, which collects cluster data from all namespaces and includes only basic VM information. This script is compatible only with the `PROS` variable.
- `/usr/bin/gather --vms_details`
- Collect VM log files, VM definitions, control-plane logs, and namespaces that belong to OpenShift Virtualization resources. Specifying namespaces includes their child objects. If you use this parameter without specifying a namespace or VM, the `must-gather` tool collects this data for all VMs in the cluster. This script is compatible with all environment variables, but you must specify a namespace if you use the `VM` variable.
- `/usr/bin/gather --images`
- Collect image, image-stream, and image-stream-tags custom resource information. This script is compatible only with the `PROS` variable.
14.2.3.1.2. Usage and examples
Environment variables are optional. You can run a script by itself or with one or more compatible environment variables.
Table 14.2. Compatible parameters
| Script | Compatible environment variable |
|---|---|
| `/usr/bin/gather` | `PROS=<number_of_processes>` |
| `/usr/bin/gather --vms_details` | `PROS=<number_of_processes>`, `NS=<namespace_name>`, `VM=<vm_name>` |
| `/usr/bin/gather --images` | `PROS=<number_of_processes>` |
Syntax
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.13.3 \
-- <environment_variable_1> <environment_variable_2> <script_name>
Default data collection parallel processes
By default, five processes run in parallel.
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.13.3 \
-- PROS=5 /usr/bin/gather

You can modify the number of parallel processes by changing the `PROS` value.
Detailed VM information
The following command collects detailed VM information for the my-vm VM in the mynamespace namespace:
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.13.3 \
-- NS=mynamespace VM=my-vm /usr/bin/gather --vms_details

The `NS` environment variable is mandatory if you use the `VM` environment variable.
Image, image-stream, and image-stream-tags information
The following command collects image, image-stream, and image-stream-tags information from the cluster:
$ oc adm must-gather \
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.13.3 \
-- /usr/bin/gather --images
14.3. Monitoring
14.3.1. Monitoring overview
You can monitor the health of your cluster and virtual machines (VMs) with the following tools:
- OpenShift Container Platform cluster checkup framework
Run automated tests on your cluster with the OpenShift Container Platform cluster checkup framework to check the following conditions:
- Network connectivity and latency between two VMs attached to a secondary network interface
- VM running a Data Plane Development Kit (DPDK) workload with zero packet loss
The OpenShift Container Platform cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
- Prometheus queries for virtual resources
- Query vCPU, network, storage, and guest memory swapping usage and live migration progress.
- VM custom metrics
- Configure the `node-exporter` service to expose internal VM metrics and processes.
- VM health checks
- Configure readiness, liveness, and guest agent ping probes and a watchdog for VMs.
The guest agent ping probe is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
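A readiness probe is configured in the VM specification, much like a pod probe. The following sketch assumes the KubeVirt `VirtualMachine` API field layout; the VM name, port, and timing values are illustrative only:

```yaml
# Illustrative example: TCP readiness probe for a service in the guest.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fedora-vm          # hypothetical VM name
spec:
  template:
    spec:
      readinessProbe:
        tcpSocket:
          port: 1500       # hypothetical guest port
        initialDelaySeconds: 120
        periodSeconds: 20
        timeoutSeconds: 10
        failureThreshold: 3
        successThreshold: 3
```

Liveness probes follow the same structure under `livenessProbe`.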
14.3.2. OpenShift Container Platform cluster checkup framework
OpenShift Virtualization includes predefined checkups that can be used for cluster maintenance and troubleshooting.
The OpenShift Container Platform cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
14.3.2.1. About the OpenShift Container Platform cluster checkup framework
A checkup is an automated test workload that allows you to verify if a specific cluster functionality works as expected. The cluster checkup framework uses native Kubernetes resources to configure and execute the checkup.
By using predefined checkups, cluster administrators and developers can improve cluster maintainability, troubleshoot unexpected behavior, minimize errors, and save time. They can also review the results of the checkup and share them with experts for further analysis. Vendors can write and publish checkups for features or services that they provide and verify that their customer environments are configured correctly.
Running a predefined checkup in an existing namespace involves setting up a service account for the checkup, creating the Role and RoleBinding objects for the service account, enabling permissions for the checkup, and creating the input config map and the checkup job. You can run a checkup multiple times.
You must always:
- Verify that the checkup image is from a trustworthy source before applying it.
- Review the checkup permissions before creating the `Role` and `RoleBinding` objects.
14.3.2.2. Virtual machine latency checkup
You use a predefined checkup to verify network connectivity and measure latency between two virtual machines (VMs) that are attached to a secondary network interface. The latency checkup uses the ping utility.
You run a latency checkup by performing the following steps:
- Create a service account, roles, and rolebindings to provide cluster access permissions to the latency checkup.
- Create a config map to provide the input to run the checkup and to store the results.
- Create a job to run the checkup.
- Review the results in the config map.
- Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.
- When you are finished, delete the latency checkup resources.
Prerequisites
- You installed the OpenShift CLI (`oc`).
- The cluster has at least two worker nodes.
- The Multus Container Network Interface (CNI) plugin is installed on the cluster.
- You configured a network attachment definition for a namespace.
Procedure
Create a `ServiceAccount`, `Role`, and `RoleBinding` manifest for the latency checkup:

Example 14.1. Example role manifest file
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vm-latency-checkup-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubevirt-vm-latency-checker
rules:
- apiGroups: ["kubevirt.io"]
  resources: ["virtualmachineinstances"]
  verbs: ["get", "create", "delete"]
- apiGroups: ["subresources.kubevirt.io"]
  resources: ["virtualmachineinstances/console"]
  verbs: ["get"]
- apiGroups: ["k8s.cni.cncf.io"]
  resources: ["network-attachment-definitions"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubevirt-vm-latency-checker
subjects:
- kind: ServiceAccount
  name: vm-latency-checkup-sa
roleRef:
  kind: Role
  name: kubevirt-vm-latency-checker
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kiagnose-configmap-access
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kiagnose-configmap-access
subjects:
- kind: ServiceAccount
  name: vm-latency-checkup-sa
roleRef:
  kind: Role
  name: kiagnose-configmap-access
  apiGroup: rbac.authorization.k8s.io
```
Apply the `ServiceAccount`, `Role`, and `RoleBinding` manifest:

$ oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml

`<target_namespace>` is the namespace where the checkup is to be run. This must be an existing namespace where the `NetworkAttachmentDefinition` object resides.
Create a `ConfigMap` manifest that contains the input parameters for the checkup:

Example input config map
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-vm-latency-checkup-config
data:
  spec.timeout: 5m
  spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
  spec.param.networkAttachmentDefinitionName: "blue-network" 1
  spec.param.maxDesiredLatencyMilliseconds: "10" 2
  spec.param.sampleDurationSeconds: "5" 3
  spec.param.sourceNode: "worker1" 4
  spec.param.targetNode: "worker2" 5
```
1. The name of the `NetworkAttachmentDefinition` object.
2. Optional: The maximum desired latency, in milliseconds, between the virtual machines. If the measured latency exceeds this value, the checkup fails.
3. Optional: The duration of the latency check, in seconds.
4. Optional: When specified, latency is measured from this node to the target node. If the source node is specified, the `spec.param.targetNode` field cannot be empty.
5. Optional: When specified, latency is measured from the source node to this node.
Apply the config map manifest in the target namespace:
$ oc apply -n <target_namespace> -f <latency_config_map>.yaml
Create a `Job` manifest to run the checkup:

Example job manifest
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kubevirt-vm-latency-checkup
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: vm-latency-checkup-sa
      restartPolicy: Never
      containers:
      - name: vm-latency-checkup
        image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup-rhel9:v4.13.0
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
          runAsNonRoot: true
          seccompProfile:
            type: "RuntimeDefault"
        env:
        - name: CONFIGMAP_NAMESPACE
          value: <target_namespace>
        - name: CONFIGMAP_NAME
          value: kubevirt-vm-latency-checkup-config
        - name: POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid
```

Apply the `Job` manifest:

$ oc apply -n <target_namespace> -f <latency_job>.yaml
Wait for the job to complete:
$ oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m
Review the results of the latency checkup by running the following command. If the maximum measured latency is greater than the value of the `spec.param.maxDesiredLatencyMilliseconds` attribute, the checkup fails and returns an error.

$ oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml
Example output config map (success)
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-vm-latency-checkup-config
  namespace: <target_namespace>
data:
  spec.timeout: 5m
  spec.param.networkAttachmentDefinitionNamespace: <target_namespace>
  spec.param.networkAttachmentDefinitionName: "blue-network"
  spec.param.maxDesiredLatencyMilliseconds: "10"
  spec.param.sampleDurationSeconds: "5"
  spec.param.sourceNode: "worker1"
  spec.param.targetNode: "worker2"
  status.succeeded: "true"
  status.failureReason: ""
  status.completionTimestamp: "2022-01-01T09:00:00Z"
  status.startTimestamp: "2022-01-01T09:00:07Z"
  status.result.avgLatencyNanoSec: "177000"
  status.result.maxLatencyNanoSec: "244000" 1
  status.result.measurementDurationSec: "5"
  status.result.minLatencyNanoSec: "135000"
  status.result.sourceNode: "worker1"
  status.result.targetNode: "worker2"
```

1. The maximum measured latency in nanoseconds.
Optional: To view the detailed job log in case of checkup failure, use the following command:
$ oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>
Delete the job and config map that you previously created by running the following commands:
$ oc delete job -n <target_namespace> kubevirt-vm-latency-checkup
$ oc delete configmap -n <target_namespace> kubevirt-vm-latency-checkup-config
Optional: If you do not plan to run another checkup, delete the roles manifest:
$ oc delete -f <latency_sa_roles_rolebinding>.yaml
14.3.2.3. DPDK checkup
Use a predefined checkup to verify that your OpenShift Container Platform cluster node can run a virtual machine (VM) with a Data Plane Development Kit (DPDK) workload with zero packet loss. The DPDK checkup runs traffic between a traffic generator pod and a VM running a test DPDK application.
You run a DPDK checkup by performing the following steps:
- Create a service account, role, and role bindings for the DPDK checkup and a service account for the traffic generator pod.
- Create a security context constraints resource for the traffic generator pod.
- Create a config map to provide the input to run the checkup and to store the results.
- Create a job to run the checkup.
- Review the results in the config map.
- Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job.
- When you are finished, delete the DPDK checkup resources.
Prerequisites
- You have access to the cluster as a user with `cluster-admin` permissions.
- You have installed the OpenShift CLI (`oc`).
- You have configured the compute nodes to run DPDK applications on VMs with zero packet loss.
The traffic generator pod created by the checkup has elevated privileges:
- It runs as root.
- It has a bind mount to the node’s file system.
The container image of the traffic generator is pulled from the upstream Project Quay container registry.
Procedure
Create a `ServiceAccount`, `Role`, and `RoleBinding` manifest for the DPDK checkup and the traffic generator pod:

Example 14.2. Example service account, role, and rolebinding manifest file
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dpdk-checkup-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kiagnose-configmap-access
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kiagnose-configmap-access
subjects:
- kind: ServiceAccount
  name: dpdk-checkup-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kiagnose-configmap-access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubevirt-dpdk-checker
rules:
- apiGroups: ["kubevirt.io"]
  resources: ["virtualmachineinstances"]
  verbs: ["create", "get", "delete"]
- apiGroups: ["subresources.kubevirt.io"]
  resources: ["virtualmachineinstances/console"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "get", "delete"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
- apiGroups: ["k8s.cni.cncf.io"]
  resources: ["network-attachment-definitions"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubevirt-dpdk-checker
subjects:
- kind: ServiceAccount
  name: dpdk-checkup-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubevirt-dpdk-checker
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dpdk-checkup-traffic-gen-sa
```

Apply the `ServiceAccount`, `Role`, and `RoleBinding` manifest:

$ oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml
Create a `SecurityContextConstraints` manifest for the traffic generator pod:

Example security context constraints manifest
```yaml
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: dpdk-checkup-traffic-gen
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities:
- IPC_LOCK
- NET_ADMIN
- NET_RAW
- SYS_RESOURCE
defaultAddCapabilities: null
fsGroup:
  type: RunAsAny
groups: []
readOnlyRootFilesystem: false
requiredDropCapabilities: null
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
seccompProfiles:
- runtime/default
- unconfined
supplementalGroups:
  type: RunAsAny
users:
- system:serviceaccount:dpdk-checkup-ns:dpdk-checkup-traffic-gen-sa
```
Apply the `SecurityContextConstraints` manifest:

$ oc apply -f <dpdk_scc>.yaml
Create a `ConfigMap` manifest that contains the input parameters for the checkup:

Example input config map
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dpdk-checkup-config
data:
  spec.timeout: 10m
  spec.param.networkAttachmentDefinitionName: <network_name> 1
  spec.param.trafficGeneratorRuntimeClassName: <runtimeclass_name> 2
  spec.param.trafficGeneratorImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.1.1" 3
  spec.param.vmContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.1.1" 4
```
1. The name of the `NetworkAttachmentDefinition` object.
2. The `RuntimeClass` resource that the traffic generator pod uses.
3. The container image for the traffic generator. In this example, the image is pulled from the upstream Project Quay container registry.
4. The container disk image for the VM. In this example, the image is pulled from the upstream Project Quay container registry.
Apply the `ConfigMap` manifest in the target namespace:

$ oc apply -n <target_namespace> -f <dpdk_config_map>.yaml
Create a `Job` manifest to run the checkup:

Example job manifest
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: dpdk-checkup
spec:
  backoffLimit: 0
  template:
    spec:
      serviceAccountName: dpdk-checkup-sa
      restartPolicy: Never
      containers:
      - name: dpdk-checkup
        image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.13.0
        imagePullPolicy: Always
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
          runAsNonRoot: true
          seccompProfile:
            type: "RuntimeDefault"
        env:
        - name: CONFIGMAP_NAMESPACE
          value: <target-namespace>
        - name: CONFIGMAP_NAME
          value: dpdk-checkup-config
        - name: POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid
```

Apply the `Job` manifest:

$ oc apply -n <target_namespace> -f <dpdk_job>.yaml
Wait for the job to complete:
$ oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m
Review the results of the checkup by running the following command:
$ oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml
Example output config map (success)
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dpdk-checkup-config
data:
  spec.timeout: 1h2m
  spec.param.NetworkAttachmentDefinitionName: "mlx-dpdk-network-1"
  spec.param.trafficGeneratorRuntimeClassName: performance-performance-zeus10
  spec.param.trafficGeneratorImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.1.1"
  spec.param.vmContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.1.1"
  status.succeeded: true
  status.failureReason: " "
  status.startTimestamp: 2022-12-21T09:33:06+00:00
  status.completionTimestamp: 2022-12-21T11:33:06+00:00
  status.result.actualTrafficGeneratorTargetNode: worker-dpdk1
  status.result.actualDPDKVMTargetNode: worker-dpdk2
  status.result.dropRate: 0
```
Delete the job and config map that you previously created by running the following commands:
$ oc delete job -n <target_namespace> dpdk-checkup
$ oc delete configmap -n <target_namespace> dpdk-checkup-config
Optional: If you do not plan to run another checkup, delete the `ServiceAccount`, `Role`, and `RoleBinding` manifest:

$ oc delete -f <dpdk_sa_roles_rolebinding>.yaml
14.3.2.3.1. DPDK checkup config map parameters
The following table shows the mandatory and optional parameters that you can set in the data stanza of the input ConfigMap manifest when you run a cluster DPDK readiness checkup:
Table 14.3. DPDK checkup config map parameters
| Parameter | Description | Is Mandatory |
|---|---|---|
| `spec.timeout` | The time, in minutes, before the checkup fails. | True |
| `spec.param.networkAttachmentDefinitionName` | The name of the `NetworkAttachmentDefinition` object. | True |
| `spec.param.trafficGeneratorRuntimeClassName` | The `RuntimeClass` resource that the traffic generator pod uses. | True |
| `spec.param.trafficGeneratorImage` | The container image for the traffic generator. The default value is `quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.1.1`. | False |
| `spec.param.trafficGeneratorNodeSelector` | The node on which the traffic generator pod is to be scheduled. The node should be configured to allow DPDK traffic. | False |
| `spec.param.trafficGeneratorPacketsPerSecond` | The number of packets per second, in kilo (k) or million (m). The default value is 14m. | False |
| `spec.param.trafficGeneratorEastMacAddress` | The MAC address of the NIC connected to the traffic generator pod or VM. The default value is a random MAC address. | False |
| `spec.param.trafficGeneratorWestMacAddress` | The MAC address of the NIC connected to the traffic generator pod or VM. The default value is a random MAC address. | False |
| `spec.param.vmContainerDiskImage` | The container disk image for the VM. The default value is `quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.1.1`. | False |
| `spec.param.DPDKLabelSelector` | The label of the node on which the VM runs. The node should be configured to allow DPDK traffic. | False |
| `spec.param.DPDKEastMacAddress` | The MAC address of the NIC that is connected to the VM. The default value is a random MAC address. | False |
| `spec.param.DPDKWestMacAddress` | The MAC address of the NIC that is connected to the VM. The default value is a random MAC address. | False |
| `spec.param.testDuration` | The duration, in minutes, for which the traffic generator runs. The default value is 5 minutes. | False |
| `spec.param.portBandwidthGB` | The maximum bandwidth of the SR-IOV NIC. The default value is 10GB. | False |
| `spec.param.verbose` | When set to `true`, increases the verbosity of the checkup log. The default value is `false`. | False |
14.3.2.3.2. Building a container disk image for RHEL virtual machines
You can build a custom Red Hat Enterprise Linux (RHEL) 8 OS image in qcow2 format and use it to create a container disk image. You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the spec.param.vmContainerDiskImage attribute of the DPDK checkup config map.
To build a container disk image, you must create an image builder virtual machine (VM). The image builder VM is a RHEL 8 VM that can be used to build custom RHEL images.
Prerequisites
- The image builder VM must run RHEL 8.7 and must have a minimum of 2 CPU cores, 4 GiB RAM, and 20 GB of free space in the `/var` directory.
- You have installed the image builder tool and its CLI (`composer-cli`) on the VM.
- You have installed the `virt-customize` tool:

  # dnf install libguestfs-tools

- You have installed the Podman CLI tool (`podman`).
Procedure
Verify that you can build a RHEL 8.7 image:
# composer-cli distros list
Note: To run the `composer-cli` commands as non-root, add your user to the `weldr` or `root` groups:

# usermod -a -G weldr user
$ newgrp weldr
Enter the following command to create an image blueprint file in TOML format that contains the packages to be installed, kernel customizations, and the services to be disabled during boot time:
```shell
$ cat << EOF > dpdk-vm.toml
name = "dpdk_image"
description = "Image to use with the DPDK checkup"
version = "0.0.1"
distro = "rhel-87"

[[packages]]
name = "dpdk"

[[packages]]
name = "dpdk-tools"

[[packages]]
name = "driverctl"

[[packages]]
name = "tuned-profiles-cpu-partitioning"

[customizations.kernel]
append = "default_hugepagesz=1GB hugepagesz=1G hugepages=8 isolcpus=2-7"

[customizations.services]
disabled = ["NetworkManager-wait-online", "sshd"]
EOF
```
Push the blueprint file to the image builder tool by running the following command:
# composer-cli blueprints push dpdk-vm.toml
Generate the system image by specifying the blueprint name and output file format. The Universally Unique Identifier (UUID) of the image is displayed when you start the compose process.
# composer-cli compose start dpdk_image qcow2
Wait for the compose process to complete. The compose status must show `FINISHED` before you can continue to the next step:

# composer-cli compose status
Enter the following command to download the `qcow2` image file by specifying its UUID:

# composer-cli compose image <UUID>
Create the customization scripts by running the following commands:
```shell
$ cat <<EOF >customize-vm
echo isolated_cores=2-7 > /etc/tuned/cpu-partitioning-variables.conf
tuned-adm profile cpu-partitioning
echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf
EOF
```
```shell
$ cat <<EOF >first-boot
driverctl set-override 0000:06:00.0 vfio-pci
driverctl set-override 0000:07:00.0 vfio-pci
mkdir /mnt/huge
mount /mnt/huge --source nodev -t hugetlbfs -o pagesize=1GB
EOF
```
Use the `virt-customize` tool to customize the image generated by the image builder tool:

$ virt-customize -a <UUID>.qcow2 --run=customize-vm --firstboot=first-boot --selinux-relabel
To create a Dockerfile that contains all the commands to build the container disk image, enter the following command:
```shell
$ cat << EOF > Dockerfile
FROM scratch
COPY <uuid>-disk.qcow2 /disk/
EOF
```
where:
- `<uuid>-disk.qcow2`
- Specifies the name of the custom image in `qcow2` format.
Build and tag the container by running the following command:
$ podman build . -t dpdk-rhel:latest
Push the container disk image to a registry that is accessible from your cluster by running the following command:
$ podman push dpdk-rhel:latest
- Provide a link to the container disk image in the `spec.param.vmContainerDiskImage` attribute in the DPDK checkup config map.
14.3.2.4. Additional resources
- Attaching a virtual machine to multiple networks
- Using a virtual function in DPDK mode with an Intel NIC
- Using SR-IOV and the Node Tuning Operator to achieve a DPDK line rate
- Installing image builder
- How to register and subscribe a RHEL system to the Red Hat Customer Portal using Red Hat Subscription Manager
14.3.3. Prometheus queries for virtual resources
OpenShift Virtualization provides metrics that you can use to monitor the consumption of cluster infrastructure resources, including vCPU, network, storage, and guest memory swapping. You can also use metrics to query live migration status.
Use the OpenShift Container Platform monitoring dashboard to query virtualization metrics.
14.3.3.1. Prerequisites
- To use the vCPU metric, the schedstats=enable kernel argument must be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. For more information, see Adding kernel arguments to nodes.
- For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests.
14.3.3.2. Querying metrics
The OpenShift Container Platform monitoring dashboard enables you to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring.
As a cluster administrator, you can query metrics for all core OpenShift Container Platform and user-defined projects.
As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project.
14.3.3.2.1. Querying metrics for all projects as a cluster administrator
As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default OpenShift Container Platform and user-defined projects in the Metrics UI.
Prerequisites
- You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects.
- You have installed the OpenShift CLI (oc).
Procedure
- From the Administrator perspective in the OpenShift Container Platform web console, select Observe → Metrics.
To add one or more queries, do any of the following:
| Option | Description |
|---|---|
| Create a custom query. | Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. You can use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. You can also move your mouse pointer over a suggested item to view a brief description of that item. |
| Add multiple queries. | Select Add query. |
| Duplicate an existing query. | Select the Options menu next to the query, then choose Duplicate query. |
| Disable a query from being run. | Select the Options menu next to the query and choose Disable query. |
To run queries that you created, select Run queries. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message.
Note: Queries that operate on large amounts of data might time out or overload the browser when drawing time series graphs. To avoid this, select Hide graph and calibrate your query using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs.
Note: By default, the query table shows an expanded view that lists every metric and its current value. You can select ˅ to minimize the expanded view for a query.
- Optional: The page URL now contains the queries you ran. To use this set of queries again in the future, save this URL.
Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. You can select which metrics are shown by doing any of the following:
| Option | Description |
|---|---|
| Hide all metrics from a query. | Click the Options menu for the query and click Hide all series. |
| Hide a specific metric. | Go to the query table and click the colored square near the metric name. |
| Zoom into the plot and change the time range. | Either visually select the time range by clicking and dragging on the plot horizontally, or use the menu in the upper left corner to select the time range. |
| Reset the time range. | Select Reset zoom. |
| Display outputs for all queries at a specific point in time. | Hold the mouse cursor on the plot at that point. The query outputs appear in a pop-up box. |
| Hide the plot. | Select Hide graph. |
14.3.3.2.2. Querying metrics for user-defined projects as a developer
You can access metrics for a user-defined project as a developer or as a user with view permissions for the project.
In the Developer perspective, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can also run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet and application metrics for the project.
Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time.
Prerequisites
- You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for.
- You have enabled monitoring for user-defined projects.
- You have deployed a service in a user-defined project.
- You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored.
Procedure
- From the Developer perspective in the OpenShift Container Platform web console, select Observe → Metrics.
- Select the project that you want to view metrics for in the Project: list.
Select a query from the Select query list, or create a custom PromQL query based on the selected query by selecting Show PromQL. The metrics from the queries are visualized on the plot.
Note: In the Developer perspective, you can only run one query at a time.
Explore the visualized metrics by doing any of the following:
| Option | Description |
|---|---|
| Zoom into the plot and change the time range. | Either visually select the time range by clicking and dragging on the plot horizontally, or use the menu in the upper left corner to select the time range. |
| Reset the time range. | Select Reset zoom. |
| Display outputs for all queries at a specific point in time. | Hold the mouse cursor on the plot at that point. The query outputs appear in a pop-up box. |
14.3.3.3. Virtualization metrics
The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions.
The following examples use topk queries that specify a time period. If virtual machines are deleted during that time period, they can still appear in the query output.
14.3.3.3.1. vCPU metrics
The following query can identify virtual machines that are waiting for Input/Output (I/O):
kubevirt_vmi_vcpu_wait_seconds: Returns the wait time (in seconds) for a virtual machine’s vCPU. Type: Counter.
A value above 0 means that the vCPU wants to run, but the host scheduler cannot run it yet. This inability to run indicates that there is an issue with I/O.
To query the vCPU metric, the schedstats=enable kernel argument must first be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler.
Example vCPU wait time query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds[6m]))) > 0 1
- 1
- This query returns the top 3 VMs waiting for I/O at every given moment over a six-minute time period.
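Because kubevirt_vmi_vcpu_wait_seconds is a counter, rate() in the query above computes its per-second increase over the window. A minimal shell sketch of that arithmetic, using hypothetical sample values:

```shell
# Two hypothetical samples of the cumulative vCPU wait counter (seconds),
# taken at the edges of a 360-second (6m) window.
earlier=120.0
later=138.0
window=360
# rate() is approximately (later sample - earlier sample) / window
rate=$(awk -v a="$earlier" -v b="$later" -v w="$window" 'BEGIN { printf "%.3f", (b - a) / w }')
echo "$rate"    # seconds of vCPU wait accumulated per second of wall time
```

A result near 0 means the vCPU rarely waits; sustained positive values point at I/O pressure.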
14.3.3.3.2. Network metrics
The following queries can identify virtual machines that are saturating the network:
kubevirt_vmi_network_receive_bytes_total: Returns the total amount of traffic received (in bytes) on the virtual machine’s network. Type: Counter.
kubevirt_vmi_network_transmit_bytes_total: Returns the total amount of traffic transmitted (in bytes) on the virtual machine’s network. Type: Counter.
Example network traffic query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1
- 1
- This query returns the top 3 VMs transmitting the most network traffic at every given moment over a six-minute time period.
14.3.3.3.3. Storage metrics
14.3.3.3.3.1. Storage-related traffic
The following queries can identify VMs that are writing large amounts of data:
kubevirt_vmi_storage_read_traffic_bytes_total: Returns the total amount of storage reads (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.
kubevirt_vmi_storage_write_traffic_bytes_total: Returns the total amount of storage writes (in bytes) of the virtual machine’s storage-related traffic. Type: Counter.
Example storage-related traffic query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1
- 1
- This query returns the top 3 VMs performing the most storage traffic at every given moment over a six-minute time period.
14.3.3.3.3.2. Storage snapshot data
kubevirt_vmsnapshot_disks_restored_from_source_total: Returns the total number of virtual machine disks restored from the source virtual machine. Type: Gauge.
kubevirt_vmsnapshot_disks_restored_from_source_bytes: Returns the amount of space in bytes restored from the source virtual machine. Type: Gauge.
Examples of storage snapshot data queries
kubevirt_vmsnapshot_disks_restored_from_source_total{vm_name="simple-vm", vm_namespace="default"} 1
- 1
- This query returns the total number of virtual machine disks restored from the source virtual machine.
kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name="simple-vm", vm_namespace="default"} 1
- 1
- This query returns the amount of space in bytes restored from the source virtual machine.
14.3.3.3.3.3. I/O performance
The following queries can determine the I/O performance of storage devices:
kubevirt_vmi_storage_iops_read_total: Returns the amount of read I/O operations the virtual machine is performing per second. Type: Counter.
kubevirt_vmi_storage_iops_write_total: Returns the amount of write I/O operations the virtual machine is performing per second. Type: Counter.
Example I/O performance query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1
- 1
- This query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period.
14.3.3.3.4. Guest memory swapping metrics
The following queries can identify which swap-enabled guests are performing the most memory swapping:
kubevirt_vmi_memory_swap_in_traffic_bytes_total: Returns the total amount (in bytes) of memory the virtual guest is swapping in. Type: Gauge.
kubevirt_vmi_memory_swap_out_traffic_bytes_total: Returns the total amount (in bytes) of memory the virtual guest is swapping out. Type: Gauge.
Example memory swapping query
topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes_total[6m]))) > 0 1
- 1
- This query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period.
Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue.
14.3.3.3.5. Live migration metrics
The following metrics can be queried to show live migration status:
kubevirt_migrate_vmi_data_processed_bytes: The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge.
kubevirt_migrate_vmi_data_remaining_bytes: The amount of guest operating system data that remains to be migrated. Type: Gauge.
kubevirt_migrate_vmi_dirty_memory_rate_bytes: The rate at which memory is becoming dirty in the guest operating system. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
kubevirt_migrate_vmi_pending_count: The number of pending migrations. Type: Gauge.
kubevirt_migrate_vmi_scheduling_count: The number of scheduling migrations. Type: Gauge.
kubevirt_migrate_vmi_running_count: The number of running migrations. Type: Gauge.
kubevirt_migrate_vmi_succeeded: The number of successfully completed migrations. Type: Gauge.
kubevirt_migrate_vmi_failed: The number of failed migrations. Type: Gauge.
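The metrics above can be combined into simple status queries. The following PromQL sketches are illustrative only; the namespace label name and the <vm_namespace> placeholder are assumptions:

```
# Total live migrations currently running across the cluster
sum(kubevirt_migrate_vmi_running_count)

# Guest data still to be copied, per namespace (label name is an assumption)
sum by (namespace) (kubevirt_migrate_vmi_data_remaining_bytes{namespace="<vm_namespace>"})
```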
14.3.3.4. Additional resources
14.3.4. Exposing custom metrics for virtual machines
OpenShift Container Platform includes a pre-configured, pre-installed, and self-updating monitoring stack that provides monitoring for core platform components. This monitoring stack is based on the Prometheus monitoring system. Prometheus is a time-series database and a rule evaluation engine for metrics.
In addition to using the OpenShift Container Platform monitoring stack, you can enable monitoring for user-defined projects by using the CLI and query custom metrics that are exposed for virtual machines through the node-exporter service.
14.3.4.1. Configuring the node exporter service
The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines.
Prerequisites
- Install the OpenShift Container Platform CLI oc.
- Log in to the cluster as a user with cluster-admin privileges.
- Create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project.
- Configure the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project by setting enableUserWorkload to true.
Procedure
Create the Service YAML file. In the following example, the file is called node-exporter-service.yaml.
kind: Service
apiVersion: v1
metadata:
  name: node-exporter-service 1
  namespace: dynamation 2
  labels:
    servicetype: metrics 3
spec:
  ports:
    - name: exmet 4
      protocol: TCP
      port: 9100 5
      targetPort: 9100 6
  type: ClusterIP
  selector:
    monitor: metrics 7
- 1
- The node-exporter service that exposes the metrics from the virtual machines.
- 2
- The namespace where the service is created.
- 3
- The label for the service. The ServiceMonitor uses this label to match this service.
- 4
- The name given to the port that exposes metrics on port 9100 for the ClusterIP service.
- 5
- The target port used by node-exporter-service to listen for requests.
- 6
- The TCP port number of the virtual machine that is configured with the monitor label.
- 7
- The label used to match the virtual machine’s pods. In this example, any virtual machine’s pod with the label monitor and a value of metrics will be matched.
Create the node-exporter service:
$ oc create -f node-exporter-service.yaml
14.3.4.2. Configuring a virtual machine with the node exporter service
Download the node-exporter file on to the virtual machine. Then, create a systemd service that runs the node-exporter service when the virtual machine boots.
Prerequisites
- The pods for the component are running in the openshift-user-workload-monitoring project.
- Grant the monitoring-edit role to users who need to monitor this user-defined project.
Procedure
- Log on to the virtual machine.
Download the node-exporter file onto the virtual machine by using the directory path that applies to the version of the node-exporter file.
$ wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz
Extract the executable and place it in the /usr/bin directory.
$ sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz \
    --directory /usr/bin --strip 1 "*/node_exporter"
Create a node_exporter.service file in the /etc/systemd/system directory. This systemd service file runs the node-exporter service when the virtual machine reboots.
[Unit]
Description=Prometheus Metrics Exporter
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=root
ExecStart=/usr/bin/node_exporter

[Install]
WantedBy=multi-user.target
Enable and start the systemd service.
$ sudo systemctl enable node_exporter.service
$ sudo systemctl start node_exporter.service
Verification
Verify that the node-exporter agent is reporting metrics from the virtual machine.
$ curl http://localhost:9100/metrics
Example output
go_gc_duration_seconds{quantile="0"} 1.5244e-05 go_gc_duration_seconds{quantile="0.25"} 3.0449e-05 go_gc_duration_seconds{quantile="0.5"} 3.7913e-05
14.3.4.3. Creating a custom monitoring label for virtual machines
To enable queries to multiple virtual machines from a single service, add a custom label in the virtual machine’s YAML file.
Prerequisites
- Install the OpenShift Container Platform CLI oc.
- Log in as a user with cluster-admin privileges.
- You have access to the web console to stop and restart a virtual machine.
Procedure
Edit the template spec of your virtual machine configuration file. In this example, the label monitor has the value metrics.
spec:
  template:
    metadata:
      labels:
        monitor: metrics
Stop and restart the virtual machine to create a new pod with the label name given to the monitor label.
14.3.4.3.1. Querying the node-exporter service for metrics
Metrics are exposed for virtual machines through an HTTP service endpoint under the /metrics canonical name. When you query for metrics, Prometheus directly scrapes the metrics from the metrics endpoint exposed by the virtual machines and presents these metrics for viewing.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
Procedure
Obtain the HTTP service endpoint by specifying the namespace for the service:
$ oc get service -n <namespace> <node-exporter-service>
To list all available metrics for the node-exporter service, query the metrics resource.
$ curl http://<172.30.226.162:9100>/metrics | grep -vE "^#|^$"
Example output
node_arp_entries{device="eth0"} 1
node_boot_time_seconds 1.643153218e+09
node_context_switches_total 4.4938158e+07
node_cooling_device_cur_state{name="0",type="Processor"} 0
node_cooling_device_max_state{name="0",type="Processor"} 0
node_cpu_guest_seconds_total{cpu="0",mode="nice"} 0
node_cpu_guest_seconds_total{cpu="0",mode="user"} 0
node_cpu_seconds_total{cpu="0",mode="idle"} 1.10586485e+06
node_cpu_seconds_total{cpu="0",mode="iowait"} 37.61
node_cpu_seconds_total{cpu="0",mode="irq"} 233.91
node_cpu_seconds_total{cpu="0",mode="nice"} 551.47
node_cpu_seconds_total{cpu="0",mode="softirq"} 87.3
node_cpu_seconds_total{cpu="0",mode="steal"} 86.12
node_cpu_seconds_total{cpu="0",mode="system"} 464.15
node_cpu_seconds_total{cpu="0",mode="user"} 1075.2
node_disk_discard_time_seconds_total{device="vda"} 0
node_disk_discard_time_seconds_total{device="vdb"} 0
node_disk_discarded_sectors_total{device="vda"} 0
node_disk_discarded_sectors_total{device="vdb"} 0
node_disk_discards_completed_total{device="vda"} 0
node_disk_discards_completed_total{device="vdb"} 0
node_disk_discards_merged_total{device="vda"} 0
node_disk_discards_merged_total{device="vdb"} 0
node_disk_info{device="vda",major="252",minor="0"} 1
node_disk_info{device="vdb",major="252",minor="16"} 1
node_disk_io_now{device="vda"} 0
node_disk_io_now{device="vdb"} 0
node_disk_io_time_seconds_total{device="vda"} 174
node_disk_io_time_seconds_total{device="vdb"} 0.054
node_disk_io_time_weighted_seconds_total{device="vda"} 259.79200000000003
node_disk_io_time_weighted_seconds_total{device="vdb"} 0.039
node_disk_read_bytes_total{device="vda"} 3.71867136e+08
node_disk_read_bytes_total{device="vdb"} 366592
node_disk_read_time_seconds_total{device="vda"} 19.128
node_disk_read_time_seconds_total{device="vdb"} 0.039
node_disk_reads_completed_total{device="vda"} 5619
node_disk_reads_completed_total{device="vdb"} 96
node_disk_reads_merged_total{device="vda"} 5
node_disk_reads_merged_total{device="vdb"} 0
node_disk_write_time_seconds_total{device="vda"} 240.66400000000002
node_disk_write_time_seconds_total{device="vdb"} 0
node_disk_writes_completed_total{device="vda"} 71584
node_disk_writes_completed_total{device="vdb"} 0
node_disk_writes_merged_total{device="vda"} 19761
node_disk_writes_merged_total{device="vdb"} 0
node_disk_written_bytes_total{device="vda"} 2.007924224e+09
node_disk_written_bytes_total{device="vdb"} 0
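When inspecting the raw /metrics output offline, standard text tools are enough to isolate one metric family. A minimal sketch, using a hypothetical local sample of the output above:

```shell
# Save a small, hypothetical sample of node-exporter output locally.
cat > /tmp/metrics-sample.txt <<'EOF'
node_cpu_seconds_total{cpu="0",mode="idle"} 1.10586485e+06
node_cpu_seconds_total{cpu="0",mode="iowait"} 37.61
node_disk_io_now{device="vda"} 0
node_disk_read_bytes_total{device="vda"} 3.71867136e+08
EOF
# Keep only one metric family, the same way you might filter the curl output.
grep '^node_cpu_seconds_total' /tmp/metrics-sample.txt
```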
14.3.4.4. Creating a ServiceMonitor resource for the node exporter service
You can use a Prometheus client library and scrape metrics from the /metrics endpoint to access and view the metrics exposed by the node-exporter service. Use a ServiceMonitor custom resource definition (CRD) to monitor the node exporter service.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
Procedure
Create a YAML file for the ServiceMonitor resource configuration. In this example, the service monitor matches any service with the label metrics and queries the exmet port every 30 seconds.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: node-exporter-metrics-monitor
  name: node-exporter-metrics-monitor 1
  namespace: dynamation 2
spec:
  endpoints:
  - interval: 30s 3
    port: exmet 4
    scheme: http
  selector:
    matchLabels:
      servicetype: metrics
Create the ServiceMonitor configuration for the node-exporter service.
$ oc create -f node-exporter-metrics-monitor.yaml
14.3.4.4.1. Accessing the node exporter service outside the cluster
You can access the node-exporter service outside the cluster and view the exposed metrics.
Prerequisites
- You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role.
- You have enabled monitoring for the user-defined project by configuring the node-exporter service.
Procedure
Expose the node-exporter service.
$ oc expose service -n <namespace> <node_exporter_service_name>
Obtain the FQDN (Fully Qualified Domain Name) for the route.
$ oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host
Example output
NAME DNS node-exporter-service node-exporter-service-dynamation.apps.cluster.example.org
Use the curl command to display metrics for the node-exporter service.
$ curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics
Example output
go_gc_duration_seconds{quantile="0"} 1.5382e-05 go_gc_duration_seconds{quantile="0.25"} 3.1163e-05 go_gc_duration_seconds{quantile="0.5"} 3.8546e-05 go_gc_duration_seconds{quantile="0.75"} 4.9139e-05 go_gc_duration_seconds{quantile="1"} 0.000189423
14.3.4.5. Additional resources
14.3.5. Virtual machine health checks
You can configure virtual machine (VM) health checks by defining readiness and liveness probes in the VirtualMachine resource.
14.3.5.1. About readiness and liveness probes
Use readiness and liveness probes to detect and handle unhealthy virtual machines (VMs). You can include one or more probes in the specification of the VM to ensure that traffic does not reach a VM that is not ready for it and that a new VM is created when a VM becomes unresponsive.
A readiness probe determines whether a VM is ready to accept service requests. If the probe fails, the VM is removed from the list of available endpoints until the VM is ready.
A liveness probe determines whether a VM is responsive. If the probe fails, the VM is deleted and a new VM is created to restore responsiveness.
You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachine object. These fields support the following tests:
- HTTP GET
- The probe determines the health of the VM by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized.
- TCP socket
- The probe attempts to open a socket to the VM. The VM is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete.
- Guest agent ping
- The probe uses the guest-ping command to determine if the QEMU guest agent is running on the virtual machine.
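The HTTP GET success rule described above (any response code from 200 through 399 counts as healthy) can be sketched as a small shell check; the function name is illustrative:

```shell
# Returns success (exit 0) when the response code falls in the
# documented healthy range of 200-399, failure otherwise.
is_probe_success() {
  code=$1
  [ "$code" -ge 200 ] && [ "$code" -le 399 ]
}

is_probe_success 302 && echo "302: healthy"
is_probe_success 500 || echo "500: unhealthy"
```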
14.3.5.1.1. Defining an HTTP readiness probe
Define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the virtual machine (VM) configuration.
Procedure
Include details of the readiness probe in the VM configuration file.
Sample readiness probe with an HTTP GET test
# ...
spec:
  readinessProbe:
    httpGet: 1
      port: 1500 2
      path: /healthz 3
      httpHeaders:
      - name: Custom-Header
        value: Awesome
    initialDelaySeconds: 120 4
    periodSeconds: 20 5
    timeoutSeconds: 10 6
    failureThreshold: 3 7
    successThreshold: 3 8
# ...
- 1
- The HTTP GET request to perform to connect to the VM.
- 2
- The port of the VM that the probe queries. In the above example, the probe queries port 1500.
- 3
- The path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is removed from the list of available endpoints.
- 4
- The time, in seconds, after the VM starts before the readiness probe is initiated.
- 5
- The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- 6
- The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
- 7
- The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready.
- 8
- The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
14.3.5.1.2. Defining a TCP readiness probe
Define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the virtual machine (VM) configuration.
Procedure
Include details of the TCP readiness probe in the VM configuration file.
Sample readiness probe with a TCP socket test
# ...
spec:
  readinessProbe:
    initialDelaySeconds: 120 1
    periodSeconds: 20 2
    tcpSocket: 3
      port: 1500 4
    timeoutSeconds: 10 5
# ...
- 1
- The time, in seconds, after the VM starts before the readiness probe is initiated.
- 2
- The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- 3
- The TCP action to perform.
- 4
- The port of the VM that the probe queries.
- 5
- The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
14.3.5.1.3. Defining an HTTP liveness probe
Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet field of the virtual machine (VM) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test.
Procedure
Include details of the HTTP liveness probe in the VM configuration file.
Sample liveness probe with an HTTP GET test
# ...
spec:
  livenessProbe:
    initialDelaySeconds: 120 1
    periodSeconds: 20 2
    httpGet: 3
      port: 1500 4
      path: /healthz 5
      httpHeaders:
      - name: Custom-Header
        value: Awesome
    timeoutSeconds: 10 6
# ...
- 1
- The time, in seconds, after the VM starts before the liveness probe is initiated.
- 2
- The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- 3
- The HTTP GET request to perform to connect to the VM.
- 4
- The port of the VM that the probe queries. In the above example, the probe queries port 1500. The VM installs and runs a minimal HTTP server on port 1500 via cloud-init.
- 5
- The path to access on the HTTP server. In the above example, if the handler for the server’s /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is deleted and a new VM is created.
- 6
- The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
14.3.5.2. Defining a watchdog
You can define a watchdog to monitor the health of the guest operating system by performing the following steps:
- Configure a watchdog device for the virtual machine (VM).
- Install the watchdog agent on the guest.
The watchdog device monitors the agent and performs one of the following actions if the guest operating system is unresponsive:
- poweroff: The VM powers down immediately. If spec.running is set to true or spec.runStrategy is not set to manual, then the VM reboots.
- reset: The VM reboots in place and the guest operating system cannot react.
Note: The reboot time might cause liveness probes to time out. If cluster-level protections detect a failed liveness probe, the VM might be forcibly rescheduled, increasing the reboot time.
- shutdown: The VM gracefully powers down by stopping all services.
Watchdog is not available for Windows VMs.
14.3.5.2.1. Configuring a watchdog device for the virtual machine
You configure a watchdog device for the virtual machine (VM).
Prerequisites
- The VM must have kernel support for an i6300esb watchdog device. Red Hat Enterprise Linux (RHEL) images support i6300esb.
Procedure
Create a YAML file with the following contents:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm2-rhel84-watchdog
  name: <vm-name>
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm2-rhel84-watchdog
    spec:
      domain:
        devices:
          watchdog:
            name: <watchdog>
            i6300esb:
              action: "poweroff" 1
# ...
- 1
- Specify poweroff, reset, or shutdown.
The example above configures the i6300esb watchdog device on a RHEL 8 VM with the poweroff action and exposes the device as /dev/watchdog. This device can now be used by the watchdog binary.
Apply the YAML file to your cluster by running the following command:
$ oc apply -f <file_name>.yaml
This procedure is provided for testing watchdog functionality only and must not be run on production machines.
Run the following command to verify that the VM is connected to the watchdog device:
$ lspci | grep watchdog -i
Run one of the following commands to confirm the watchdog is active:
Trigger a kernel panic:
# echo c > /proc/sysrq-trigger
Stop the watchdog service:
# pkill -9 watchdog
14.3.5.2.2. Installing the watchdog agent on the guest
You install the watchdog agent on the guest and start the watchdog service.
Procedure
- Log in to the virtual machine as root user.
Install the watchdog package and its dependencies:
# yum install watchdog
Uncomment the following line in the /etc/watchdog.conf file and save the changes:
#watchdog-device = /dev/watchdog
Enable the watchdog service to start on boot:
# systemctl enable --now watchdog.service
14.3.5.3. Defining a guest agent ping probe
Define a guest agent ping probe by setting the spec.readinessProbe.guestAgentPing field of the virtual machine (VM) configuration.
The guest agent ping probe is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Prerequisites
- The QEMU guest agent must be installed and enabled on the virtual machine.
Procedure
Include details of the guest agent ping probe in the VM configuration file. For example:
Sample guest agent ping probe
# ...
spec:
  readinessProbe:
    guestAgentPing: {} 1
    initialDelaySeconds: 120 2
    periodSeconds: 20 3
    timeoutSeconds: 10 4
    failureThreshold: 3 5
    successThreshold: 3 6
# ...

- 1: The guest agent ping probe to connect to the VM.
- 2: Optional: The time, in seconds, after the VM starts before the guest agent probe is initiated.
- 3: Optional: The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds.
- 4: Optional: The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds.
- 5: Optional: The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready.
- 6: Optional: The number of times that the probe must report success, after a failure, to be considered successful. The default is 1.
Create the VM by running the following command:
$ oc create -f <file_name>.yaml
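The timing rules in the callouts (periodSeconds must exceed timeoutSeconds) can be checked before applying the manifest. The following sketch is illustrative only; the helper name is an assumption and is not part of any OpenShift API:

```python
def validate_guest_agent_probe(probe: dict) -> list:
    """Check the readinessProbe timing rule described in the callouts above."""
    errors = []
    period = probe.get("periodSeconds", 10)   # documented default: 10 seconds
    timeout = probe.get("timeoutSeconds", 1)  # documented default: 1 second
    if period <= timeout:
        errors.append("periodSeconds must be greater than timeoutSeconds")
    return errors

probe = {"guestAgentPing": {}, "periodSeconds": 20, "timeoutSeconds": 10}
print(validate_guest_agent_probe(probe))  # []
```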
14.3.5.4. Additional resources
14.4. Troubleshooting
OpenShift Virtualization provides tools and logs for troubleshooting virtual machines and virtualization components.
You can troubleshoot OpenShift Virtualization components by using the tools provided in the web console or by using the oc CLI tool.
14.4.1. Events
OpenShift Container Platform events are records of important life-cycle information and are useful for monitoring and troubleshooting virtual machine, namespace, and resource issues.
- VM events: Navigate to the Events tab of the VirtualMachine details page in the web console.
- Namespace events
You can view namespace events by running the following command:
$ oc get events -n <namespace>
See the list of events for details about specific events.
- Resource events
You can view resource events by running the following command:
$ oc describe <resource> <resource_name>
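To narrow a long event list programmatically, you can post-process the JSON output of `oc get events -o json`. A minimal sketch, assuming only the standard Event fields (`type`, `reason`, `message`); the function name is illustrative:

```python
import json

def warning_events(events_json: str) -> list:
    """Return (reason, message) pairs for events of type Warning."""
    items = json.loads(events_json).get("items", [])
    return [(e["reason"], e["message"]) for e in items if e.get("type") == "Warning"]

# Sample shaped like `oc get events -n <namespace> -o json` output.
sample = json.dumps({"items": [
    {"type": "Normal", "reason": "Bound", "message": "PVC example-dv Bound"},
    {"type": "Warning", "reason": "Error", "message": "Unable to connect"},
]})
print(warning_events(sample))
```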
14.4.2. Logs
You can review the following logs for troubleshooting:
14.4.2.1. Viewing virtual machine logs with the web console
You can view virtual machine logs with the OpenShift Container Platform web console.
Procedure
- Navigate to Virtualization → VirtualMachines.
- Select a virtual machine to open the VirtualMachine details page.
- On the Details tab, click the pod name to open the Pod details page.
- Click the Logs tab to view the logs.
14.4.2.2. Viewing OpenShift Virtualization pod logs
You can view logs for OpenShift Virtualization pods by using the oc CLI tool.
You can configure the verbosity level of the logs by editing the HyperConverged custom resource (CR).
14.4.2.2.1. Viewing OpenShift Virtualization pod logs with the CLI
You can view logs for the OpenShift Virtualization pods by using the oc CLI tool.
Procedure
View a list of pods in the OpenShift Virtualization namespace by running the following command:
$ oc get pods -n openshift-cnv
Example 14.3. Example output
NAME                               READY   STATUS    RESTARTS   AGE
disks-images-provider-7gqbc        1/1     Running   0          32m
disks-images-provider-vg4kx        1/1     Running   0          32m
virt-api-57fcc4497b-7qfmc          1/1     Running   0          31m
virt-api-57fcc4497b-tx9nc          1/1     Running   0          31m
virt-controller-76c784655f-7fp6m   1/1     Running   0          30m
virt-controller-76c784655f-f4pbd   1/1     Running   0          30m
virt-handler-2m86x                 1/1     Running   0          30m
virt-handler-9qs6z                 1/1     Running   0          30m
virt-operator-7ccfdbf65f-q5snk     1/1     Running   0          32m
virt-operator-7ccfdbf65f-vllz8     1/1     Running   0          32m
View the pod log by running the following command:
$ oc logs -n openshift-cnv <pod_name>
Note: If a pod fails to start, you can use the --previous option to view logs from the last attempt.

To monitor log output in real time, use the -f option.

Example 14.4. Example output
{"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-04-17T08:58:37.373695Z"} {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-04-17T08:58:37.373726Z"} {"component":"virt-handler","level":"info","msg":"setting rate limiter to 5 QPS and 10 Burst","pos":"virt-handler.go:462","timestamp":"2022-04-17T08:58:37.373782Z"} {"component":"virt-handler","level":"info","msg":"CPU features of a minimum baseline CPU model: map[apic:true clflush:true cmov:true cx16:true cx8:true de:true fpu:true fxsr:true lahf_lm:true lm:true mca:true mce:true mmx:true msr:true mtrr:true nx:true pae:true pat:true pge:true pni:true pse:true pse36:true sep:true sse:true sse2:true sse4.1:true ssse3:true syscall:true tsc:true]","pos":"cpu_plugin.go:96","timestamp":"2022-04-17T08:58:37.390221Z"} {"component":"virt-handler","level":"warning","msg":"host model mode is expected to contain only one model","pos":"cpu_plugin.go:103","timestamp":"2022-04-17T08:58:37.390263Z"} {"component":"virt-handler","level":"info","msg":"node-labeller is running","pos":"node_labeller.go:94","timestamp":"2022-04-17T08:58:37.391011Z"}
14.4.2.2.2. Configuring OpenShift Virtualization pod log verbosity
You can configure the verbosity level of OpenShift Virtualization pod logs by editing the HyperConverged custom resource (CR).
Procedure
To set log verbosity for specific components, open the HyperConverged CR in your default text editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Set the log level for one or more components by editing the spec.logVerbosityConfig stanza. For example:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  logVerbosityConfig:
    kubevirt:
      virtAPI: 5 1
      virtController: 4
      virtHandler: 3
      virtLauncher: 2
      virtOperator: 6

- 1: The log verbosity value must be an integer in the range 1–9, where a higher number indicates a more detailed log. In this example, the virtAPI component logs are exposed if their priority level is 5 or higher.

Apply your changes by saving and exiting the editor.
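Because out-of-range verbosity values are easy to mistype, a quick pre-apply check can help. This is an illustrative sketch only; the function name is an assumption, and it validates just the documented 1 to 9 range:

```python
def check_log_verbosity(kubevirt_cfg: dict) -> list:
    """Flag verbosity values outside the documented 1-9 integer range."""
    bad = []
    for component, level in kubevirt_cfg.items():
        if not (isinstance(level, int) and 1 <= level <= 9):
            bad.append(f"{component}: {level}")
    return bad

cfg = {"virtAPI": 5, "virtController": 4, "virtHandler": 3,
       "virtLauncher": 2, "virtOperator": 6}
print(check_log_verbosity(cfg))  # []
```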
14.4.2.2.3. Common error messages
The following error messages might appear in OpenShift Virtualization logs:
ErrImagePull or ImagePullBackOff: Indicates an incorrect deployment configuration or problems with the images that are referenced.
14.4.2.3. Viewing aggregated OpenShift Virtualization logs with the LokiStack
You can view aggregated logs for OpenShift Virtualization pods and containers by using the LokiStack in the web console.
Prerequisites
- You deployed the LokiStack.
Procedure
- Navigate to Observe → Logs in the web console.
- Select application, for virt-launcher pod logs, or infrastructure, for OpenShift Virtualization control plane pods and containers, from the log type list.
- Click Show Query to display the query field.
- Enter the LogQL query in the query field and click Run Query to display the filtered logs.
14.4.2.3.1. OpenShift Virtualization LogQL queries
You can view and filter aggregated logs for OpenShift Virtualization components by running Loki Query Language (LogQL) queries on the Observe → Logs page in the web console.
The default log type is infrastructure. The virt-launcher log type is application.
Optional: You can include or exclude strings or regular expressions by using line filter expressions.
If the query matches a large number of logs, the query might time out.
Table 14.4. OpenShift Virtualization LogQL example queries

All components:

{log_type=~".+"}|json
|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"

Storage components (component="storage"):

{log_type=~".+"}|json
|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"
|kubernetes_labels_app_kubernetes_io_component="storage"

Deployment components (component="deployment"):

{log_type=~".+"}|json
|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"
|kubernetes_labels_app_kubernetes_io_component="deployment"

Network components (component="network"):

{log_type=~".+"}|json
|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"
|kubernetes_labels_app_kubernetes_io_component="network"

Compute components (component="compute"):

{log_type=~".+"}|json
|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"
|kubernetes_labels_app_kubernetes_io_component="compute"

Scheduling components (component="schedule"):

{log_type=~".+"}|json
|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"
|kubernetes_labels_app_kubernetes_io_component="schedule"

A specific container:

{log_type=~".+",kubernetes_container_name=~"<container>|<container>"} 1
|json|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"

- 1: Specify one or more container names, separated by a pipe (|).

virt-launcher pod logs (you must select application from the log type list before running this query):

{log_type=~".+", kubernetes_container_name="compute"}|json
|!= "custom-ga-command" 1

- 1: The != "custom-ga-command" line filter excludes log lines that contain the string custom-ga-command.
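The component queries in Table 14.4 differ only in the trailing label matcher, so they can be generated instead of retyped. A small sketch; the function name is an assumption and simply assembles the query strings shown above:

```python
def component_query(component=None):
    """Build a LogQL query like the examples in Table 14.4."""
    query = ('{log_type=~".+"}|json\n'
             '|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"')
    if component:
        # Append the per-component matcher, e.g. "storage" or "compute".
        query += f'\n|kubernetes_labels_app_kubernetes_io_component="{component}"'
    return query

print(component_query("storage"))
```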
You can filter log lines to include or exclude strings or regular expressions by using line filter expressions.
Table 14.5. Line filter expressions

| Line filter expression | Description |
|---|---|
| `\|=` | Log line contains string |
| `!=` | Log line does not contain string |
| `\|~` | Log line contains regular expression |
| `!~` | Log line does not contain regular expression |
Example line filter expression
{log_type=~".+"}|json
|kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster"
|= "error" != "timeout"
14.4.2.3.2. Additional resources for LokiStack and LogQL
- About the LokiStack
- Deploying the LokiStack on OpenShift Container Platform
- LogQL log queries in the Grafana documentation
14.4.3. Troubleshooting data volumes
You can check the Conditions and Events sections of the DataVolume object to analyze and resolve issues.
14.4.3.1. About data volume conditions and events
You can diagnose data volume issues by examining the output of the Conditions and Events sections generated by the command:
$ oc describe dv <DataVolume>
The Conditions section displays the following Types:

- Bound
- Running
- Ready

The Events section provides the following additional information:

- Type of event
- Reason for logging
- Source of the event
- Message containing additional diagnostic information
The output from oc describe does not always contain Events.
An event is generated when the Status, Reason, or Message changes. Both conditions and events react to changes in the state of the data volume.
For example, if you misspell the URL during an import operation, the import generates a 404 message. That message change generates an event with a reason. The output in the Conditions section is updated as well.
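The same conditions can be read programmatically from `oc get dv <datavolume> -o json`. A sketch assuming only the standard condition fields (type, status, reason, message); the helper name is illustrative:

```python
def get_condition(dv: dict, cond_type: str) -> dict:
    """Return the condition of the given type (Bound, Running, or Ready)."""
    for cond in dv.get("status", {}).get("conditions", []):
        if cond.get("type") == cond_type:
            return cond
    return {}  # condition not present

# Sample shaped like a DataVolume status after a failed import.
dv = {"status": {"conditions": [
    {"type": "Bound", "status": "True", "reason": "Bound"},
    {"type": "Running", "status": "False", "reason": "Completed",
     "message": "Import Complete"},
    {"type": "Ready", "status": "True"},
]}}
print(get_condition(dv, "Running"))
```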
14.4.3.2. Analyzing data volume conditions and events
By inspecting the Conditions and Events sections generated by the describe command, you can determine the state of the data volume in relation to persistent volume claims (PVCs), and whether or not an operation is actively running or completed. You might also receive messages that offer specific details about the status of the data volume, and how it came to be in its current state.
There are many different combinations of conditions. Each must be evaluated in its unique context.
Examples of various combinations follow.
Bound: A successfully bound PVC displays in this example.

Note that the Type is Bound, so the Status is True. If the PVC is not bound, the Status is False.

When the PVC is bound, an event is generated stating that the PVC is bound. In this case, the Reason is Bound and Status is True. The Message indicates which PVC owns the data volume.

Message, in the Events section, provides further details, including how long the PVC has been bound (Age) and by what resource (From), in this case datavolume-controller:

Example output

Status:
  Conditions:
    Last Heart Beat Time:  2020-07-15T03:58:24Z
    Last Transition Time:  2020-07-15T03:58:24Z
    Message:               PVC win10-rootdisk Bound
    Reason:                Bound
    Status:                True
    Type:                  Bound
...
Events:
  Type    Reason  Age  From                   Message
  ----    ------  ---  ----                   -------
  Normal  Bound   24s  datavolume-controller  PVC example-dv Bound

Running: In this case, note that Type is Running and Status is False, indicating that an event has occurred that caused an attempted operation to fail, changing the Status from True to False.

However, note that Reason is Completed and the Message field indicates Import Complete.

In the Events section, the Reason and Message contain additional troubleshooting information about the failed operation. In this example, the Message displays an inability to connect due to a 404, listed in the Events section's first Warning.

From this information, you conclude that an import operation was running, creating contention for other operations that are attempting to access the data volume:

Example output

Status:
  Conditions:
    Last Heart Beat Time:  2020-07-15T04:31:39Z
    Last Transition Time:  2020-07-15T04:31:39Z
    Message:               Import Complete
    Reason:                Completed
    Status:                False
    Type:                  Running
...
Events:
  Type     Reason  Age                From                   Message
  ----     ------  ----               ----                   -------
  Warning  Error   12s (x2 over 14s)  datavolume-controller  Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found

Ready: If Type is Ready and Status is True, then the data volume is ready to be used, as in the following example. If the data volume is not ready to be used, the Status is False:

Example output

Status:
  Conditions:
    Last Heart Beat Time:  2020-07-15T04:31:39Z
    Last Transition Time:  2020-07-15T04:31:39Z
    Status:                True
    Type:                  Ready
14.5. OpenShift Virtualization runbooks
You can use the procedures in these runbooks to diagnose and resolve issues that trigger OpenShift Virtualization alerts.
OpenShift Virtualization alerts are displayed on the Virtualization → Overview → Overview tab in the web console.
14.5.1. CDIDataImportCronOutdated
Meaning
This alert fires when DataImportCron cannot poll or import the latest disk image versions.
DataImportCron polls disk images, checking for the latest versions, and imports the images as persistent volume claims (PVCs). This process ensures that PVCs are updated to the latest version so that they can be used as reliable clone sources or golden images for virtual machines (VMs).
For golden images, latest refers to the latest operating system of the distribution. For other disk images, latest refers to the latest hash of the image that is available.
Impact
VMs might be created from outdated disk images.
VMs might fail to start because no source PVC is available for cloning.
Diagnosis
Check the cluster for a default storage class:
$ oc get sc
The output displays the storage classes with (default) beside the name of the default storage class. You must set a default storage class, either on the cluster or in the DataImportCron specification, in order for the DataImportCron to poll and import golden images. If no storage class is defined, the DataVolume controller fails to create PVCs and the following event is displayed:

DataVolume.storage spec is missing accessMode and no storageClass to choose profile

Obtain the DataImportCron namespace and name:

$ oc get dataimportcron -A -o json | jq -r \
  '.items[] | select(.status.conditions[] | select(.type == "UpToDate" and .status == "False")) | .metadata.namespace + "/" + .metadata.name'
If a default storage class is not defined on the cluster, check the DataImportCron specification for a default storage class:

$ oc get dataimportcron <dataimportcron> -o yaml | \
  grep -B 5 storageClassName

Example output

        url: docker://.../cdi-func-test-tinycore
      storage:
        resources:
          requests:
            storage: 5Gi
        storageClassName: rook-ceph-block

Obtain the name of the DataVolume associated with the DataImportCron object:

$ oc -n <namespace> get dataimportcron <dataimportcron> -o json | \
  jq .status.lastImportedPVC.name
Check the DataVolume log for error messages:

$ oc -n <namespace> get dv <datavolume> -o yaml

Set the CDI_NAMESPACE environment variable:

$ export CDI_NAMESPACE="$(oc get deployment -A | \
  grep cdi-operator | awk '{print $1}')"

Check the cdi-deployment log for error messages:

$ oc logs -n $CDI_NAMESPACE deployment/cdi-deployment
Mitigation
- Set a default storage class, either on the cluster or in the DataImportCron specification, to poll and import golden images. The updated Containerized Data Importer (CDI) will resolve the issue within a few seconds.
- If the issue does not resolve itself, delete the data volumes associated with the affected DataImportCron objects. The CDI will recreate the data volumes with the default storage class.
- If your cluster is installed in a restricted network environment, disable the enableCommonBootImageImport feature gate in order to opt out of automatic updates:

  $ oc patch hco kubevirt-hyperconverged -n $CDI_NAMESPACE --type json \
    -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": false}]'
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.2. CDIDataVolumeUnusualRestartCount
Meaning
This alert fires when a DataVolume object restarts more than three times.
Impact
Data volumes are responsible for importing and creating a virtual machine disk on a persistent volume claim. If a data volume restarts more than three times, these operations are unlikely to succeed. You must diagnose and resolve the issue.
Diagnosis
Obtain the name and namespace of the data volume:
$ oc get dv -A -o json | jq -r '.items[] | select(.status.restartCount>3)' | \
  jq '.metadata.name, .metadata.namespace'
Check the status of the pods associated with the data volume:
$ oc get pods -n <namespace> -o json | jq -r \
  '.items[] | select(.metadata.ownerReferences[] | select(.name=="<dv_name>")).metadata.name'
Obtain the details of the pods:
$ oc -n <namespace> describe pods <pod>
Check the pod logs for error messages:
$ oc -n <namespace> logs <pod>
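The jq filter in the first step can also be expressed in Python when post-processing `oc get dv -A -o json` output. A sketch; the function name and the threshold default are illustrative (the alert itself fires above three restarts):

```python
def high_restart_dvs(dv_list: dict, threshold: int = 3) -> list:
    """Return (namespace, name) for data volumes restarted more than threshold times."""
    return [
        (dv["metadata"]["namespace"], dv["metadata"]["name"])
        for dv in dv_list.get("items", [])
        if dv.get("status", {}).get("restartCount", 0) > threshold
    ]

# Sample shaped like `oc get dv -A -o json` output.
sample = {"items": [
    {"metadata": {"namespace": "default", "name": "dv-ok"},
     "status": {"restartCount": 1}},
    {"metadata": {"namespace": "vms", "name": "dv-bad"},
     "status": {"restartCount": 5}},
]}
print(high_restart_dvs(sample))  # [('vms', 'dv-bad')]
```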
Mitigation
Delete the data volume, resolve the issue, and create a new data volume.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.3. CDINotReady
Meaning
This alert fires when the Containerized Data Importer (CDI) is in a degraded state:
- Not progressing
- Not available to use
Impact
CDI is not usable, so users cannot build virtual machine disks on persistent volume claims (PVCs) using CDI’s data volumes. CDI components are not ready and they stopped progressing towards a ready state.
Diagnosis
Set the CDI_NAMESPACE environment variable:

$ export CDI_NAMESPACE="$(oc get deployment -A | \
  grep cdi-operator | awk '{print $1}')"

Check the CDI deployment for components that are not ready:
$ oc -n $CDI_NAMESPACE get deploy -l cdi.kubevirt.io
Check the details of the failing pod:
$ oc -n $CDI_NAMESPACE describe pods <pod>
Check the logs of the failing pod:
$ oc -n $CDI_NAMESPACE logs <pod>
Mitigation
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.4. CDIOperatorDown
Meaning
This alert fires when the Containerized Data Importer (CDI) Operator is down. The CDI Operator deploys and manages the CDI infrastructure components, such as data volume and persistent volume claim (PVC) controllers. These controllers help users build virtual machine disks on PVCs.
Impact
The CDI components might fail to deploy or to stay in a required state. The CDI installation might not function correctly.
Diagnosis
Set the CDI_NAMESPACE environment variable:

$ export CDI_NAMESPACE="$(oc get deployment -A | grep cdi-operator | \
  awk '{print $1}')"

Check whether the cdi-operator pod is currently running:

$ oc -n $CDI_NAMESPACE get pods -l name=cdi-operator
Obtain the details of the cdi-operator pod:

$ oc -n $CDI_NAMESPACE describe pods -l name=cdi-operator

Check the log of the cdi-operator pod for errors:

$ oc -n $CDI_NAMESPACE logs -l name=cdi-operator
Mitigation
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.5. CDIStorageProfilesIncomplete
Meaning
This alert fires when a Containerized Data Importer (CDI) storage profile is incomplete.
If a storage profile is incomplete, the CDI cannot infer persistent volume claim (PVC) fields, such as volumeMode and accessModes, which are required to create a virtual machine (VM) disk.
Impact
The CDI cannot create a VM disk on the PVC.
Diagnosis
Identify the incomplete storage profile:
$ oc get storageprofile <storage_class>
Mitigation
Add the missing storage profile information as in the following example:
$ oc patch storageprofile local --type=merge \
  -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}'
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.6. CnaoDown
Meaning
This alert fires when the Cluster Network Addons Operator (CNAO) is down. The CNAO deploys additional networking components on top of the cluster.
Impact
If the CNAO is not running, the cluster cannot reconcile changes to virtual machine components. As a result, the changes might fail to take effect.
Diagnosis
Set the NAMESPACE environment variable:

$ export NAMESPACE="$(oc get deployment -A | \
  grep cluster-network-addons-operator | awk '{print $1}')"

Check the status of the cluster-network-addons-operator pod:

$ oc -n $NAMESPACE get pods -l name=cluster-network-addons-operator

Check the cluster-network-addons-operator logs for error messages:

$ oc -n $NAMESPACE logs -l name=cluster-network-addons-operator

Obtain the details of the cluster-network-addons-operator pods:

$ oc -n $NAMESPACE describe pods -l name=cluster-network-addons-operator
Mitigation
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.7. HPPNotReady
Meaning
This alert fires when a hostpath provisioner (HPP) installation is in a degraded state.
The HPP dynamically provisions hostpath volumes to provide storage for persistent volume claims (PVCs).
Impact
HPP is not usable. Its components are not ready and they are not progressing towards a ready state.
Diagnosis
Set the HPP_NAMESPACE environment variable:

$ export HPP_NAMESPACE="$(oc get deployment -A | \
  grep hostpath-provisioner-operator | awk '{print $1}')"

Check for HPP components that are currently not ready:
$ oc -n $HPP_NAMESPACE get all -l k8s-app=hostpath-provisioner
Obtain the details of the failing pod:
$ oc -n $HPP_NAMESPACE describe pods <pod>
Check the logs of the failing pod:
$ oc -n $HPP_NAMESPACE logs <pod>
Mitigation
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.8. HPPOperatorDown
Meaning
This alert fires when the hostpath provisioner (HPP) Operator is down.
The HPP Operator deploys and manages the HPP infrastructure components, such as the daemon set that provisions hostpath volumes.
Impact
The HPP components might fail to deploy or to remain in the required state. As a result, the HPP installation might not work correctly in the cluster.
Diagnosis
Configure the HPP_NAMESPACE environment variable:

$ export HPP_NAMESPACE="$(oc get deployment -A | grep \
  hostpath-provisioner-operator | awk '{print $1}')"

Check whether the hostpath-provisioner-operator pod is currently running:

$ oc -n $HPP_NAMESPACE get pods -l name=hostpath-provisioner-operator

Obtain the details of the hostpath-provisioner-operator pod:

$ oc -n $HPP_NAMESPACE describe pods -l name=hostpath-provisioner-operator

Check the log of the hostpath-provisioner-operator pod for errors:

$ oc -n $HPP_NAMESPACE logs -l name=hostpath-provisioner-operator
Mitigation
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.9. HPPSharingPoolPathWithOS
Meaning
This alert fires when the hostpath provisioner (HPP) shares a file system with other critical components, such as kubelet or the operating system (OS).
HPP dynamically provisions hostpath volumes to provide storage for persistent volume claims (PVCs).
Impact
A shared hostpath pool puts pressure on the node’s disks. The node might have degraded performance and stability.
Diagnosis
Configure the HPP_NAMESPACE environment variable:

$ export HPP_NAMESPACE="$(oc get deployment -A | \
  grep hostpath-provisioner-operator | awk '{print $1}')"

Obtain the status of the hostpath-provisioner-csi daemon set pods:

$ oc -n $HPP_NAMESPACE get pods | grep hostpath-provisioner-csi

Check the hostpath-provisioner-csi logs to identify the shared pool and path:

$ oc -n $HPP_NAMESPACE logs <csi_daemonset> -c hostpath-provisioner
Example output
I0208 15:21:03.769731 1 utils.go:221] pool (<legacy, csi-data-dir>/csi), shares path with OS which can lead to node disk pressure
Mitigation
Using the data obtained in the Diagnosis section, try to prevent the pool path from being shared with the OS. The specific steps vary based on the node and other circumstances.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.10. KubeMacPoolDown
Meaning
KubeMacPool is down. KubeMacPool is responsible for allocating MAC addresses and preventing MAC address conflicts.
Impact
If KubeMacPool is down, VirtualMachine objects cannot be created.
Diagnosis
Set the KMP_NAMESPACE environment variable:

$ export KMP_NAMESPACE="$(oc get pod -A --no-headers -l \
  control-plane=mac-controller-manager | awk '{print $1}')"

Set the KMP_NAME environment variable:

$ export KMP_NAME="$(oc get pod -A --no-headers -l \
  control-plane=mac-controller-manager | awk '{print $2}')"

Obtain the KubeMacPool-manager pod details:

$ oc describe pod -n $KMP_NAMESPACE $KMP_NAME

Check the KubeMacPool-manager logs for error messages:

$ oc logs -n $KMP_NAMESPACE $KMP_NAME
Mitigation
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.11. KubeMacPoolDuplicateMacsFound
Meaning
This alert fires when KubeMacPool detects duplicate MAC addresses.
KubeMacPool is responsible for allocating MAC addresses and preventing MAC address conflicts. When KubeMacPool starts, it scans the cluster for the MAC addresses of virtual machines (VMs) in managed namespaces.
Impact
Duplicate MAC addresses on the same LAN might cause network issues.
Diagnosis
Obtain the namespace and the name of the kubemacpool-mac-controller pod:

$ oc get pod -A -l control-plane=mac-controller-manager --no-headers \
  -o custom-columns=":metadata.namespace,:metadata.name"

Obtain the duplicate MAC addresses from the kubemacpool-mac-controller logs:

$ oc logs -n <namespace> <kubemacpool_mac_controller> | \
  grep "already allocated"
Example output
mac address 02:00:ff:ff:ff:ff already allocated to vm/kubemacpool-test/testvm, br1, conflict with: vm/kubemacpool-test/testvm2, br1
Mitigation
- Update the VMs to remove the duplicate MAC addresses.
Restart the kubemacpool-mac-controller pod:

$ oc delete pod -n <namespace> <kubemacpool_mac_controller>
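The conflict that KubeMacPool reports can be illustrated with a simple check over (VM, MAC) pairs. This is a sketch of the idea only, not KubeMacPool's actual implementation:

```python
from collections import defaultdict

def find_duplicate_macs(interfaces):
    """Map each MAC address to its owners and report MACs claimed more than once."""
    owners = defaultdict(list)
    for vm, mac in interfaces:
        owners[mac.lower()].append(vm)  # MACs compare case-insensitively
    return {mac: vms for mac, vms in owners.items() if len(vms) > 1}

# Sample mirroring the conflict shown in the example log output above.
interfaces = [
    ("kubemacpool-test/testvm", "02:00:ff:ff:ff:ff"),
    ("kubemacpool-test/testvm2", "02:00:ff:ff:ff:ff"),
    ("kubemacpool-test/testvm3", "02:00:00:00:00:01"),
]
print(find_duplicate_macs(interfaces))
```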
14.5.12. KubeVirtComponentExceedsRequestedCPU
Meaning
This alert fires when a component’s CPU usage exceeds the requested limit.
Impact
Usage of CPU resources is not optimal and the node might be overloaded.
Diagnosis
Set the NAMESPACE environment variable:

$ export NAMESPACE="$(oc get kubevirt -A \
  -o custom-columns=":.metadata.namespace")"
Check the component’s CPU request limit:
$ oc -n $NAMESPACE get deployment <component> -o yaml | grep requests: -A 2
Check the actual CPU usage by using a PromQL query:
node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate {namespace="$NAMESPACE",container="<component>"}
See the Prometheus documentation for more information.
Mitigation
Update the CPU request limit in the HCO custom resource.
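When comparing the PromQL result with the request, note that Kubernetes CPU requests use millicore notation (for example, 100m). A conversion sketch; the helper name is an assumption:

```python
def cpu_to_cores(quantity: str) -> float:
    """Convert a Kubernetes CPU quantity such as '3500m' or '2' to cores."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000  # millicores to cores
    return float(quantity)

request = cpu_to_cores("100m")  # hypothetical request from the deployment
usage = 0.25                    # hypothetical rate from the PromQL query
print(f"exceeds request: {usage > request}")  # exceeds request: True
```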
14.5.13. KubeVirtComponentExceedsRequestedMemory
Meaning
This alert fires when a component’s memory usage exceeds the requested limit.
Impact
Usage of memory resources is not optimal and the node might be overloaded.
Diagnosis
Set the NAMESPACE environment variable:

$ export NAMESPACE="$(oc get kubevirt -A \
  -o custom-columns=":.metadata.namespace")"
Check the component’s memory request limit:
$ oc -n $NAMESPACE get deployment <component> -o yaml | \ grep requests: -A 2
Check the actual memory usage by using a PromQL query:
container_memory_usage_bytes{namespace="$NAMESPACE",container="<component>"}
See the Prometheus documentation for more information.
Mitigation
Update the memory request limit in the HCO custom resource.
14.5.14. KubevirtHyperconvergedClusterOperatorCRModification
Meaning
This alert fires when an operand of the HyperConverged Cluster Operator (HCO) is changed by someone or something other than HCO.
HCO configures OpenShift Virtualization and its supporting operators in an opinionated way and overwrites its operands when there is an unexpected change to them. Users must not modify the operands directly. The HyperConverged custom resource is the source of truth for the configuration.
Impact
Changing the operands manually causes the cluster configuration to fluctuate and might lead to instability.
Diagnosis
Check the component_name value in the alert details to determine the operand kind (kubevirt) and the operand name (kubevirt-kubevirt-hyperconverged) that are being changed:

Labels
  alertname=KubevirtHyperconvergedClusterOperatorCRModification
  component_name=kubevirt/kubevirt-kubevirt-hyperconverged
  severity=warning
Mitigation
Do not change the HCO operands directly. Use HyperConverged objects to configure the cluster.
The alert resolves itself after 10 minutes if the operands are not changed manually.
14.5.15. KubevirtHyperconvergedClusterOperatorInstallationNotCompletedAlert
Meaning
This alert fires when the HyperConverged Cluster Operator (HCO) runs for more than an hour without a HyperConverged custom resource (CR).
This alert has the following causes:
- During the installation process, you installed the HCO but you did not create the HyperConverged CR.
- During the uninstall process, you removed the HyperConverged CR before uninstalling the HCO and the HCO is still running.
Mitigation
The mitigation depends on whether you are installing or uninstalling the HCO:
Complete the installation by creating a HyperConverged CR with its default values:

$ cat <<EOF | oc apply -f -
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec: {}
EOF

- Uninstall the HCO. If the uninstall process continues to run, you must resolve that issue in order to cancel the alert.
14.5.16. KubevirtHyperconvergedClusterOperatorUSModification
Meaning
This alert fires when a JSON Patch annotation is used to change an operand of the HyperConverged Cluster Operator (HCO).
HCO configures OpenShift Virtualization and its supporting operators in an opinionated way and overwrites its operands when there is an unexpected change to them. Users must not modify the operands directly.
However, if a change is required and it is not supported by the HCO API, you can force HCO to set a change in an operator by using JSON Patch annotations. These changes are not reverted by HCO during its reconciliation process.
Impact
Incorrect use of JSON Patch annotations might lead to unexpected results or an unstable environment.
Upgrading a system with JSON Patch annotations is dangerous because the structure of the component custom resources might change.
Diagnosis
Check the annotation_name in the alert details to identify the JSON Patch annotation:

Labels
  alertname=KubevirtHyperconvergedClusterOperatorUSModification
  annotation_name=kubevirt.kubevirt.io/jsonpatch
  severity=info
Mitigation
It is best to use the HCO API to change an operand. However, if the change can only be done with a JSON Patch annotation, proceed with caution.
Remove JSON Patch annotations before upgrade to avoid potential issues.
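Listing and then deleting the offending annotations can be scripted. The sketch below runs the name extraction on an inline sample; the resource name, namespace, and `oc` commands in the comments are assumptions based on the alert example above, not values verified against your cluster:

```shell
# Inline sample standing in for the HyperConverged CR metadata, e.g. from:
#   oc get hco -n kubevirt-hyperconverged kubevirt-hyperconverged -o json
sample='{"metadata":{"annotations":{"kubevirt.kubevirt.io/jsonpatch":"[]","app":"hco"}}}'

# Extract annotation keys that contain "jsonpatch".
printf '%s\n' "$sample" | grep -o '"[^"]*jsonpatch[^"]*"' | tr -d '"'

# A trailing "-" removes an annotation; run this per reported name, for example:
#   oc annotate hco -n kubevirt-hyperconverged kubevirt-hyperconverged \
#       kubevirt.kubevirt.io/jsonpatch-
```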
14.5.17. KubevirtVmHighMemoryUsage
Meaning
This alert fires when a container hosting a virtual machine (VM) has less than 20 MB free memory.
Impact
The virtual machine running inside the container is terminated by the runtime if the container’s memory limit is exceeded.
Diagnosis
Obtain the virt-launcher pod details:
$ oc get pod <virt-launcher> -o yaml
Identify compute container processes with high memory usage in the virt-launcher pod:
$ oc exec -it <virt-launcher> -c compute -- top
Mitigation
Increase the memory limit in the VirtualMachine specification, as in the following example:

spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-name
    spec:
      domain:
        resources:
          limits:
            memory: 200Mi
          requests:
            memory: 128Mi
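As a sanity check on the threshold itself, the alert's 20 MB free-memory condition can be sketched with shell arithmetic; the limit and usage values below are hypothetical samples, not readings from a live pod:

```shell
# Hypothetical values, in bytes: container memory limit and current usage.
limit_bytes=$((200 * 1024 * 1024))   # matches the 200Mi limit in the example above
usage_bytes=195000000                # sample working-set size
free_bytes=$((limit_bytes - usage_bytes))

# KubevirtVmHighMemoryUsage fires when free memory drops below roughly 20 MB.
if [ "$free_bytes" -lt $((20 * 1024 * 1024)) ]; then
  echo "high memory usage: only $free_bytes bytes free"
fi
```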
14.5.18. KubeVirtVMIExcessiveMigrations
Meaning
This alert fires when a virtual machine instance (VMI) live migrates more than 12 times over a period of 24 hours.
This migration rate is abnormally high, even during an upgrade. This alert might indicate a problem in the cluster infrastructure, such as network disruptions or insufficient resources.
Impact
A virtual machine (VM) that migrates too frequently might experience degraded performance because memory page faults occur during the transition.
Diagnosis
Verify that the worker node has sufficient resources:
$ oc get nodes -l node-role.kubernetes.io/worker= -o json | \
    jq .items[].status.allocatable
Example output
{
  "cpu": "3500m",
  "devices.kubevirt.io/kvm": "1k",
  "devices.kubevirt.io/sev": "0",
  "devices.kubevirt.io/tun": "1k",
  "devices.kubevirt.io/vhost-net": "1k",
  "ephemeral-storage": "38161122446",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "7000128Ki",
  "pods": "250"
}

Check the status of the worker node:
$ oc get nodes -l node-role.kubernetes.io/worker= -o json | \
    jq .items[].status.conditions
Example output
{
  "lastHeartbeatTime": "2022-05-26T07:36:01Z",
  "lastTransitionTime": "2022-05-23T08:12:02Z",
  "message": "kubelet has sufficient memory available",
  "reason": "KubeletHasSufficientMemory",
  "status": "False",
  "type": "MemoryPressure"
},
{
  "lastHeartbeatTime": "2022-05-26T07:36:01Z",
  "lastTransitionTime": "2022-05-23T08:12:02Z",
  "message": "kubelet has no disk pressure",
  "reason": "KubeletHasNoDiskPressure",
  "status": "False",
  "type": "DiskPressure"
},
{
  "lastHeartbeatTime": "2022-05-26T07:36:01Z",
  "lastTransitionTime": "2022-05-23T08:12:02Z",
  "message": "kubelet has sufficient PID available",
  "reason": "KubeletHasSufficientPID",
  "status": "False",
  "type": "PIDPressure"
},
{
  "lastHeartbeatTime": "2022-05-26T07:36:01Z",
  "lastTransitionTime": "2022-05-23T08:24:15Z",
  "message": "kubelet is posting ready status",
  "reason": "KubeletReady",
  "status": "True",
  "type": "Ready"
}

Log in to the worker node and verify that the kubelet service is running:
$ systemctl status kubelet
Check the kubelet journal log for error messages:
$ journalctl -r -u kubelet
Mitigation
Ensure that the worker nodes have sufficient resources (CPU, memory, disk) to run VM workloads without interruption.
If the problem persists, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
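The alert threshold (more than 12 migrations in 24 hours) can be checked offline by counting migration records per VMI. The VMI names and record format below are hypothetical samples, not the output of any `oc` command:

```shell
# Generate a hypothetical migration history: 13 records for vmi-a, 2 for vmi-b.
{
  for i in $(seq 1 13); do echo "vmi-a migration-$i"; done
  for i in $(seq 1 2);  do echo "vmi-b migration-$i"; done
} > /tmp/migrations.txt

# Flag any VMI that migrated more than 12 times, mirroring the alert threshold.
awk '{count[$1]++} END {for (v in count) if (count[v] > 12) print v, count[v]}' \
    /tmp/migrations.txt
# prints: vmi-a 13
```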
14.5.19. LowKVMNodesCount
Meaning
This alert fires when fewer than two nodes in the cluster have KVM resources.
Impact
The cluster must have at least two nodes with KVM resources for live migration.
Virtual machines cannot be scheduled or run if no nodes have KVM resources.
Diagnosis
Identify the nodes with KVM resources:
$ oc get nodes -o jsonpath='{.items[*].status.allocatable}' | \
    grep devices.kubevirt.io/kvm
Mitigation
Install KVM on the nodes without KVM resources.
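The alert condition, fewer than two nodes advertising devices.kubevirt.io/kvm, can be sketched as follows; the node names and allocatable values are hypothetical samples standing in for the diagnosis output above:

```shell
# "node allocatable-kvm" sample pairs; "0" means no KVM devices on that node.
kvm_capable=$(printf 'node-a 1k\nnode-b 0\nnode-c 0\n' | awk '$2 != "0"' | wc -l)

# LowKVMNodesCount fires when fewer than two nodes report KVM devices.
if [ "$kvm_capable" -lt 2 ]; then
  echo "LowKVMNodesCount condition met: $kvm_capable KVM-capable node(s)"
fi
```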
14.5.20. LowReadyVirtControllersCount
Meaning
This alert fires when one or more virt-controller pods are running, but none of these pods has been in the Ready state for the past 5 minutes.
A virt-controller device monitors the custom resource definitions (CRDs) of a virtual machine instance (VMI) and manages the associated pods. The device creates pods for VMIs and manages their lifecycle. The device is critical for cluster-wide virtualization functionality.
Impact
This alert indicates that a cluster-level failure might occur. Actions related to VM lifecycle management, such as launching a new VMI or shutting down an existing VMI, will fail.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
    -o custom-columns="":.metadata.namespace)"
Verify a virt-controller device is available:
$ oc get deployment -n $NAMESPACE virt-controller \
    -o jsonpath='{.status.readyReplicas}'
Check the status of the virt-controller deployment:
$ oc -n $NAMESPACE get deploy virt-controller -o yaml
Obtain the details of the virt-controller deployment to check for status conditions, such as crashing pods or failures to pull images:
$ oc -n $NAMESPACE describe deploy virt-controller
Check if any problems occurred with the nodes. For example, they might be in a NotReady state:
$ oc get nodes
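A minimal guard around the readyReplicas query above; the empty sample value stands in for live jsonpath output (the field is unset when no replicas are Ready) and is treated as zero:

```shell
# Sample value; in practice this would come from:
#   oc get deployment -n $NAMESPACE virt-controller \
#       -o jsonpath='{.status.readyReplicas}'
ready_replicas=""
ready_replicas="${ready_replicas:-0}"

# The alert means no virt-controller pod has been Ready for 5 minutes.
if [ "$ready_replicas" -lt 1 ]; then
  echo "no Ready virt-controller replicas; inspect the deployment and nodes"
fi
```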
Mitigation
This alert can have multiple causes, including the following:
- The cluster has insufficient memory.
- The nodes are down.
- The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.
- There are network issues.
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.21. LowReadyVirtOperatorsCount
Meaning
This alert fires when one or more virt-operator pods are running, but none of these pods has been in a Ready state for the last 10 minutes.
The virt-operator is the first Operator to start in a cluster. The virt-operator deployment has a default replica of two virt-operator pods.
Its primary responsibilities include the following:
- Installing, live-updating, and live-upgrading a cluster
- Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, and virt-launcher, and managing their reconciliation
- Certain cluster-wide tasks, such as certificate rotation and infrastructure management
Impact
A cluster-level failure might occur. Critical cluster-wide management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might become unavailable. Such a state also triggers the NoReadyVirtOperator alert.
The virt-operator is not directly responsible for virtual machines (VMs) in the cluster. Therefore, its temporary unavailability does not significantly affect VM workloads.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
    -o custom-columns="":.metadata.namespace)"
Obtain the name of the virt-operator deployment:
$ oc -n $NAMESPACE get deploy virt-operator -o yaml
Obtain the details of the virt-operator deployment:
$ oc -n $NAMESPACE describe deploy virt-operator
Check for node issues, such as a NotReady state:
$ oc get nodes
Mitigation
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.22. LowVirtAPICount
Meaning
This alert fires when only one available virt-api pod is detected during a 60-minute period, although at least two nodes are available for scheduling.
Impact
An API call outage might occur during node eviction because the virt-api pod becomes a single point of failure.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
    -o custom-columns="":.metadata.namespace)"
Check the number of available virt-api pods:
$ oc get deployment -n $NAMESPACE virt-api \
    -o jsonpath='{.status.readyReplicas}'
Check the status of the virt-api deployment for error conditions:
$ oc -n $NAMESPACE get deploy virt-api -o yaml
Check the nodes for issues such as nodes in a NotReady state:
$ oc get nodes
Mitigation
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.23. LowVirtControllersCount
Meaning
This alert fires when a low number of virt-controller pods is detected. At least one virt-controller pod must be available to ensure high availability. The default number of replicas is 2.
A virt-controller device monitors the custom resource definitions (CRDs) of a virtual machine instance (VMI) and manages the associated pods. The device creates pods for VMIs and manages the lifecycle of the pods. The device is critical for cluster-wide virtualization functionality.
Impact
The responsiveness of OpenShift Virtualization might become negatively affected. For example, certain requests might be missed.
In addition, if another virt-controller instance terminates unexpectedly, OpenShift Virtualization might become completely unresponsive.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
    -o custom-columns="":.metadata.namespace)"
Verify that running virt-controller pods are available:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-controller
Check the virt-launcher logs for error messages:
$ oc -n $NAMESPACE logs <virt-launcher>
Obtain the details of the virt-launcher pod to check for status conditions, such as unexpected termination or a NotReady state:
$ oc -n $NAMESPACE describe pod/<virt-launcher>
Mitigation
This alert can have a variety of causes, including:
- Not enough memory on the cluster
- Nodes are down
- The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.
- Networking issues
Identify the root cause and fix it, if possible.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.24. LowVirtOperatorCount
Meaning
This alert fires when only one virt-operator pod in a Ready state has been running for the last 60 minutes.
The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:
- Installing, live-updating, and live-upgrading a cluster
- Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, and virt-launcher, and managing their reconciliation
- Certain cluster-wide tasks, such as certificate rotation and infrastructure management
Impact
The virt-operator cannot provide high availability (HA) for the deployment. HA requires two or more virt-operator pods in a Ready state. The default deployment is two pods.
The virt-operator is not directly responsible for virtual machines (VMs) in the cluster. Therefore, its decreased availability does not significantly affect VM workloads.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
    -o custom-columns="":.metadata.namespace)"
Check the states of the virt-operator pods:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
Review the logs of the affected virt-operator pods:
$ oc -n $NAMESPACE logs <virt-operator>
Obtain the details of the affected virt-operator pods:
$ oc -n $NAMESPACE describe pod <virt-operator>
Mitigation
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.25. NetworkAddonsConfigNotReady
Meaning
This alert fires when the NetworkAddonsConfig custom resource (CR) of the Cluster Network Addons Operator (CNAO) is not ready.
CNAO deploys additional networking components on the cluster. This alert indicates that one of the deployed components is not ready.
Impact
Network functionality is affected.
Diagnosis
Check the status conditions of the NetworkAddonsConfig CR to identify the deployment or daemon set that is not ready:
$ oc get networkaddonsconfig \
    -o custom-columns="":.status.conditions[*].message
Example output
DaemonSet "cluster-network-addons/macvtap-cni" update is being processed...
Check the daemon set of the component for errors:
$ oc -n cluster-network-addons get daemonset <daemonset> -o yaml
Check the component’s logs:
$ oc -n cluster-network-addons logs <pod>
Check the component’s details for error conditions:
$ oc -n cluster-network-addons describe pod <pod>
Mitigation
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.26. NoLeadingVirtOperator
Meaning
This alert fires when no virt-operator pod with a leader lease has been detected for 10 minutes, although the virt-operator pods are in a Ready state. The alert indicates that no leader pod is available.
The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:
- Installing, live-updating, and live-upgrading a cluster
- Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, and virt-launcher, and managing their reconciliation
- Certain cluster-wide tasks, such as certificate rotation and infrastructure management
The virt-operator deployment has a default replica of 2 pods, with one pod holding a leader lease.
Impact
This alert indicates a failure at the level of the cluster. As a result, critical cluster-wide management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might not be available.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A -o \
    custom-columns="":.metadata.namespace)"
Obtain the status of the virt-operator pods:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
Check the virt-operator pod logs to determine the leader status:
$ oc -n $NAMESPACE logs <virt-operator> | grep lead
Leader pod example:

{"component":"virt-operator","level":"info","msg":"Attempting to acquire leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:18.635387Z"}
I1130 12:15:18.635452 1 leaderelection.go:243] attempting to acquire leader lease <namespace>/virt-operator...
I1130 12:15:19.216582 1 leaderelection.go:253] successfully acquired lease <namespace>/virt-operator
{"component":"virt-operator","level":"info","msg":"Started leading","pos":"application.go:385","timestamp":"2021-11-30T12:15:19.216836Z"}

Non-leader pod example:

{"component":"virt-operator","level":"info","msg":"Attempting to acquire leader status","pos":"application.go:400","timestamp":"2021-11-30T12:15:20.533696Z"}
I1130 12:15:20.533792 1 leaderelection.go:243] attempting to acquire leader lease <namespace>/virt-operator...

Obtain the details of the affected virt-operator pods:
$ oc -n $NAMESPACE describe pod <virt-operator>
Mitigation
Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.27. NoReadyVirtController
Meaning
This alert fires when no available virt-controller devices have been detected for 5 minutes.
The virt-controller devices monitor the custom resource definitions of virtual machine instances (VMIs) and manage the associated pods. The devices create pods for VMIs and manage the lifecycle of the pods.
Therefore, virt-controller devices are critical for all cluster-wide virtualization functionality.
Impact
Any actions related to VM lifecycle management fail. This notably includes launching a new VMI or shutting down an existing VMI.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
    -o custom-columns="":.metadata.namespace)"
Verify the number of virt-controller devices:
$ oc get deployment -n $NAMESPACE virt-controller \
    -o jsonpath='{.status.readyReplicas}'
Check the status of the virt-controller deployment:
$ oc -n $NAMESPACE get deploy virt-controller -o yaml
Obtain the details of the virt-controller deployment to check for status conditions such as crashing pods or failure to pull images:
$ oc -n $NAMESPACE describe deploy virt-controller
Obtain the details of the virt-controller pods:
$ oc get pods -n $NAMESPACE | grep virt-controller
Check the logs of the virt-controller pods for error messages:
$ oc logs -n $NAMESPACE <virt-controller>
Check the nodes for problems, such as a NotReady state:
$ oc get nodes
Mitigation
Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.28. NoReadyVirtOperator
Meaning
This alert fires when no virt-operator pod in a Ready state has been detected for 10 minutes.
The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:
- Installing, live-updating, and live-upgrading a cluster
- Monitoring the lifecycle of top-level controllers, such as virt-controller, virt-handler, and virt-launcher, and managing their reconciliation
- Certain cluster-wide tasks, such as certificate rotation and infrastructure management
The default deployment is two virt-operator pods.
Impact
This alert indicates a cluster-level failure. Critical cluster management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might not be available.
The virt-operator is not directly responsible for virtual machines in the cluster. Therefore, its temporary unavailability does not significantly affect workloads.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
    -o custom-columns="":.metadata.namespace)"
Obtain the name of the virt-operator deployment:
$ oc -n $NAMESPACE get deploy virt-operator -o yaml
Generate the description of the virt-operator deployment:
$ oc -n $NAMESPACE describe deploy virt-operator
Check for node issues, such as a NotReady state:
$ oc get nodes
Mitigation
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.29. OrphanedVirtualMachineInstances
Meaning
This alert fires when a virtual machine instance (VMI), or virt-launcher pod, runs on a node that does not have a running virt-handler pod. Such a VMI is called orphaned.
Impact
Orphaned VMIs cannot be managed.
Diagnosis
Check the status of the virt-handler pods to view the nodes on which they are running:
$ oc get pods --all-namespaces -o wide -l kubevirt.io=virt-handler
Check the status of the VMIs to identify VMIs running on nodes that do not have a running virt-handler pod:
$ oc get vmis --all-namespaces
Check the status of the virt-handler daemon set:
$ oc get daemonset virt-handler --all-namespaces
Example output
NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   ...
virt-handler   2         2         2       2            2           ...
The daemon set is considered healthy if the Desired, Ready, and Available columns contain the same value.

If the virt-handler daemon set is not healthy, check the virt-handler daemon set for pod deployment issues:
$ oc get daemonset virt-handler --all-namespaces -o yaml | jq .status
Check the nodes for issues such as a NotReady status:
$ oc get nodes
Check the spec.workloads stanza of the KubeVirt custom resource (CR) for a workloads placement policy:
$ oc get kubevirt kubevirt --all-namespaces -o yaml
Mitigation
If a workloads placement policy is configured, add the node with the VMI to the policy.
Possible causes for the removal of a virt-handler pod from a node include changes to the node’s taints and tolerations or to a pod’s scheduling rules.
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
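The comparison at the heart of the diagnosis, VMI nodes versus virt-handler nodes, can be scripted. The node lists below are hypothetical samples standing in for the two oc queries above:

```shell
# Nodes running virt-handler pods (from the virt-handler pod listing).
handler_nodes='node-1
node-2'
# Nodes running VMIs (from the VMI listing).
vmi_nodes='node-1
node-3'

# A VMI node that is missing from the handler list hosts orphaned VMIs.
for n in $vmi_nodes; do
  echo "$handler_nodes" | grep -qx "$n" || echo "orphaned VMIs on $n"
done
# prints: orphaned VMIs on node-3
```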
14.5.30. OutdatedVirtualMachineInstanceWorkloads
Meaning
This alert fires when running virtual machine instances (VMIs) in outdated virt-launcher pods are detected 24 hours after the OpenShift Virtualization control plane has been updated.
Impact
Outdated VMIs might not have access to new OpenShift Virtualization features.
Outdated VMIs will not receive the security fixes associated with the virt-launcher pod update.
Diagnosis
Identify the outdated VMIs:
$ oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces
Check the KubeVirt custom resource (CR) to determine whether workloadUpdateMethods is configured in the workloadUpdateStrategy stanza:
$ oc get kubevirt kubevirt --all-namespaces -o yaml
Check each outdated VMI to determine whether it is live-migratable:
$ oc get vmi <vmi> -o yaml
Example output
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
# ...
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    message: cannot migrate VMI which does not use masquerade to connect to the pod network
    reason: InterfaceNotLiveMigratable
    status: "False"
    type: LiveMigratable
Mitigation
Configuring automated workload updates
Update the HyperConverged CR to enable automatic workload updates.
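For reference, a sketch of a HyperConverged CR with automated workload updates enabled. The field names follow the HCO workloadUpdateStrategy API, but treat the values as illustrative and verify them against your installed version before applying:

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: kubevirt-hyperconverged
spec:
  workloadUpdateStrategy:
    workloadUpdateMethods:
    - LiveMigrate          # migrate live-migratable VMIs into updated pods
    - Evict                # restart VMIs that cannot be live migrated
    batchEvictionSize: 10
    batchEvictionInterval: "1m"
```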
Stopping a VM associated with a non-live-migratable VMI
If a VMI is not live-migratable and if runStrategy: Always is set in the corresponding VirtualMachine object, you can update the VMI by manually stopping the virtual machine (VM):
$ virtctl stop --namespace <namespace> <vm>
A new VMI spins up immediately in an updated virt-launcher pod to replace the stopped VMI. This is the equivalent of a restart action.
Manually stopping a live-migratable VM is destructive and not recommended because it interrupts the workload.
Migrating a live-migratable VMI
If a VMI is live-migratable, you can update it by creating a VirtualMachineInstanceMigration object that targets a specific running VMI. The VMI is migrated into an updated virt-launcher pod.
Create a VirtualMachineInstanceMigration manifest and save it as migration.yaml:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: <migration_name>
  namespace: <namespace>
spec:
  vmiName: <vmi_name>
Create a VirtualMachineInstanceMigration object to trigger the migration:
$ oc create -f migration.yaml
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.31. SSPCommonTemplatesModificationReverted
Meaning
This alert fires when the Scheduling, Scale, and Performance (SSP) Operator reverts changes to common templates as part of its reconciliation procedure.
The SSP Operator deploys and reconciles the common templates and the Template Validator. If a user or script changes a common template, the changes are reverted by the SSP Operator.
Impact
Changes to common templates are overwritten.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
    awk '{print $1}')"
Check the ssp-operator logs for templates with reverted changes:
$ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator | \
    grep 'common template' -C 3
Mitigation
Try to identify and resolve the cause of the changes.
Ensure that changes are made only to copies of templates, and not to the templates themselves.
14.5.32. SSPDown
Meaning
This alert fires when all the Scheduling, Scale and Performance (SSP) Operator pods are down.
The SSP Operator is responsible for deploying and reconciling the common templates and the Template Validator.
Impact
Dependent components might not be deployed. Changes in the components might not be reconciled. As a result, the common templates or the Template Validator might not be updated or reset if they fail.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
    awk '{print $1}')"
Check the status of the ssp-operator pods:
$ oc -n $NAMESPACE get pods -l control-plane=ssp-operator
Obtain the details of the ssp-operator pods:
$ oc -n $NAMESPACE describe pods -l control-plane=ssp-operator
Check the ssp-operator logs for error messages:
$ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator
Mitigation
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.33. SSPFailingToReconcile
Meaning
This alert fires when the reconcile cycle of the Scheduling, Scale and Performance (SSP) Operator fails repeatedly, although the SSP Operator is running.
The SSP Operator is responsible for deploying and reconciling the common templates and the Template Validator.
Impact
Dependent components might not be deployed. Changes in the components might not be reconciled. As a result, the common templates or the Template Validator might not be updated or reset if they fail.
Diagnosis
Export the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
    awk '{print $1}')"
Obtain the details of the ssp-operator pods:
$ oc -n $NAMESPACE describe pods -l control-plane=ssp-operator
Check the ssp-operator logs for errors:
$ oc -n $NAMESPACE logs --tail=-1 -l control-plane=ssp-operator
Obtain the status of the virt-template-validator pods:
$ oc -n $NAMESPACE get pods -l name=virt-template-validator
Obtain the details of the virt-template-validator pods:
$ oc -n $NAMESPACE describe pods -l name=virt-template-validator
Check the virt-template-validator logs for errors:
$ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator
Mitigation
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.34. SSPHighRateRejectedVms
Meaning
This alert fires when a user or script attempts to create or modify a large number of virtual machines (VMs) using an invalid configuration.
Impact
The VMs are not created or modified. As a result, the environment might not behave as expected.
Diagnosis
Export the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
    awk '{print $1}')"
Check the virt-template-validator logs for errors that might indicate the cause:
$ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator
Example output
{"component":"kubevirt-template-validator","level":"info","msg":"evalution summary for ubuntu-3166wmdbbfkroku0:\nminimal-required-memory applied: FAIL, value 1073741824 is lower than minimum [2147483648]\n\nsucceeded=false", "pos":"admission.go:25","timestamp":"2021-09-28T17:59:10.934470Z"}
Mitigation
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
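In the example output above, the VM was rejected because its requested memory was below the template minimum. A sketch of the corresponding fix in the VirtualMachine spec; the 2Gi value mirrors the minimum reported in that log and is otherwise an assumption:

```yaml
spec:
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 2Gi   # at least the template's minimal-required-memory
```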
14.5.35. SSPTemplateValidatorDown
Meaning
This alert fires when all the Template Validator pods are down.
The Template Validator checks virtual machines (VMs) to ensure that they do not violate their templates.
Impact
VMs are not validated against their templates. As a result, VMs might be created with specifications that do not match their respective workloads.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get deployment -A | grep ssp-operator | \
    awk '{print $1}')"
Obtain the status of the virt-template-validator pods:
$ oc -n $NAMESPACE get pods -l name=virt-template-validator
Obtain the details of the virt-template-validator pods:
$ oc -n $NAMESPACE describe pods -l name=virt-template-validator
Check the virt-template-validator logs for error messages:
$ oc -n $NAMESPACE logs --tail=-1 -l name=virt-template-validator
Mitigation
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.36. VirtAPIDown
Meaning
This alert fires when all the API Server pods are down.
Impact
OpenShift Virtualization objects cannot send API calls.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
    -o custom-columns="":.metadata.namespace)"
Check the status of the virt-api pods:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
Check the status of the virt-api deployment:
$ oc -n $NAMESPACE get deploy virt-api -o yaml
Check the virt-api deployment details for issues such as crashing pods or image pull failures:
$ oc -n $NAMESPACE describe deploy virt-api
Check for issues such as nodes in a NotReady state:
$ oc get nodes
Mitigation
Try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.37. VirtApiRESTErrorsBurst
Meaning
More than 80% of REST calls have failed in the virt-api pods in the last 5 minutes.
Impact
A very high rate of failed REST calls to virt-api might lead to slow response and execution of API calls, and potentially to API calls being completely dismissed.
However, currently running virtual machine workloads are not likely to be affected.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
    -o custom-columns="":.metadata.namespace)"
Obtain the list of virt-api pods on your deployment:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
Check the virt-api logs for error messages:
$ oc logs -n $NAMESPACE <virt-api>
Obtain the details of the virt-api pods:
$ oc describe pod -n $NAMESPACE <virt-api>
Check if any problems occurred with the nodes. For example, they might be in a NotReady state:
$ oc get nodes
Check the status of the virt-api deployment:
$ oc -n $NAMESPACE get deploy virt-api -o yaml
Obtain the details of the virt-api deployment:
$ oc -n $NAMESPACE describe deploy virt-api
Mitigation
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.38. VirtApiRESTErrorsHigh
Meaning
More than 5% of REST calls have failed in the virt-api pods in the last 60 minutes.
Impact
A high rate of failed REST calls to virt-api might lead to slow response and execution of API calls.
However, currently running virtual machine workloads are not likely to be affected.
Diagnosis
Set the NAMESPACE environment variable:
$ export NAMESPACE="$(oc get kubevirt -A \
    -o custom-columns="":.metadata.namespace)"
Check the status of the virt-api pods:
$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-api
Check the virt-api logs:
$ oc logs -n $NAMESPACE <virt-api>
Obtain the details of the virt-api pods:
$ oc describe pod -n $NAMESPACE <virt-api>
Check if any problems occurred with the nodes. For example, they might be in a NotReady state:
$ oc get nodes
Check the status of the virt-api deployment:
$ oc -n $NAMESPACE get deploy virt-api -o yaml
Obtain the details of the virt-api deployment:
$ oc -n $NAMESPACE describe deploy virt-api
Mitigation
Based on the information obtained during the diagnosis procedure, try to identify the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.39. VirtControllerDown
Meaning
No running virt-controller pod has been detected for 5 minutes.
Impact
Any actions related to virtual machine (VM) lifecycle management fail. This notably includes launching a new virtual machine instance (VMI) or shutting down an existing VMI.
Diagnosis
Set the NAMESPACE environment variable:

$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
Check the status of the virt-controller deployment:

$ oc get deployment -n $NAMESPACE virt-controller -o yaml
Review the logs of the virt-controller pod:

$ oc logs -n $NAMESPACE <virt-controller>
Mitigation
This alert can have a variety of causes, including the following:
- Node resource exhaustion
- Not enough memory on the cluster
- Nodes are down
- The API server is overloaded. For example, the scheduler might be under a heavy load and therefore not completely available.
- Networking issues
Identify the root cause and fix it, if possible.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.40. VirtControllerRESTErrorsBurst
Meaning
More than 80% of REST calls in virt-controller pods failed in the last 5 minutes.
The virt-controller has likely fully lost the connection to the API server.
This error is frequently caused by one of the following problems:
- The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
- The virt-controller pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
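Both causes can be checked quickly from a client with cluster access. A hedged sketch (the `/readyz` and `/version` paths are standard Kubernetes API server endpoints, not specific to this product):

```shell
# Sketch: rule out an unhealthy or overloaded API server before digging
# into the virt-controller pod itself. Assumes a logged-in `oc` session.

# 1. Is the API server reporting itself healthy?
oc get --raw='/readyz?verbose'

# 2. Rough responsiveness check: time a trivial API call from this client.
#    A slow response here points at API server load or network latency.
time oc get --raw='/version' >/dev/null
```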
Impact
Status updates are not propagated and actions like migrations cannot take place. However, running workloads are not impacted.
Diagnosis
Set the NAMESPACE environment variable:

$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
List the available virt-controller pods:

$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-controller
Check the virt-controller logs for error messages when connecting to the API server:

$ oc logs -n $NAMESPACE <virt-controller>
Mitigation
If the virt-controller pod cannot connect to the API server, delete the pod to force a restart:

$ oc delete pod -n $NAMESPACE <virt-controller>
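After deleting the pod, you can confirm that the replacement comes up Ready. A hedged sketch (the label is taken from the diagnosis steps above; the timeout value is an arbitrary choice):

```shell
# Sketch: block until the replacement virt-controller pod is Ready,
# or fail after the timeout. Assumes NAMESPACE is exported as above.
oc wait pod -n "$NAMESPACE" -l kubevirt.io=virt-controller \
  --for=condition=Ready --timeout=120s
```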
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.41. VirtControllerRESTErrorsHigh
Meaning
More than 5% of REST calls failed in virt-controller in the last 60 minutes.
This is most likely because virt-controller has partially lost connection to the API server.
This error is frequently caused by one of the following problems:
- The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
- The virt-controller pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact
Node-related actions, such as starting, migrating, and scheduling virtual machines, are delayed. Running workloads are not affected, but reporting their current status might be delayed.
Diagnosis
Set the NAMESPACE environment variable:

$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
List the available virt-controller pods:

$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-controller
Check the virt-controller logs for error messages when connecting to the API server:

$ oc logs -n $NAMESPACE <virt-controller>
Mitigation
If the virt-controller pod cannot connect to the API server, delete the pod to force a restart:

$ oc delete pod -n $NAMESPACE <virt-controller>
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.42. VirtHandlerDaemonSetRolloutFailing
Meaning
The virt-handler daemon set has failed to deploy on one or more worker nodes after 15 minutes.
Impact
This alert is a warning. It does not indicate that virt-handler pods have failed to deploy on all nodes. Therefore, the normal lifecycle of virtual machines is not affected unless the cluster is overloaded.
Diagnosis
Identify worker nodes that do not have a running virt-handler pod:
Export the NAMESPACE environment variable:

$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
Check the status of the virt-handler pods to identify pods that have not deployed:

$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
Obtain the name of the worker node of the virt-handler pod:

$ oc -n $NAMESPACE get pod <virt-handler> -o jsonpath='{.spec.nodeName}'
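The goal of the steps above, identifying nodes without a running virt-handler pod, amounts to comparing two name lists. A hedged sketch (the helper is pure text processing; the `oc` queries that would feed it are shown only in comments and assume NAMESPACE is exported as above):

```shell
# Sketch: print the names in list $1 (all nodes) that do not appear in
# list $2 (nodes already running a virt-handler pod). Both arguments are
# newline-separated name lists.
nodes_missing_pod() {
  printf '%s\n' "$1" | while IFS= read -r node; do
    # -F: fixed string, -x: whole-line match, -q: exit status only
    printf '%s\n' "$2" | grep -F -x -q -- "$node" || printf '%s\n' "$node"
  done
}

# Typical usage against a cluster:
#   all=$(oc get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}')
#   scheduled=$(oc get pods -n "$NAMESPACE" -l kubevirt.io=virt-handler \
#     -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}')
#   nodes_missing_pod "$all" "$scheduled"
```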
Mitigation
If the virt-handler pods failed to deploy because of insufficient resources, you can delete other pods on the affected worker node.
14.5.43. VirtHandlerRESTErrorsBurst
Meaning
More than 80% of REST calls failed in virt-handler in the last 5 minutes. This alert usually indicates that the virt-handler pods cannot connect to the API server.
This error is frequently caused by one of the following problems:
- The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
- The virt-handler pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact
Status updates are not propagated and node-related actions, such as migrations, fail. However, running workloads on the affected node are not impacted.
Diagnosis
Set the NAMESPACE environment variable:

$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
Check the status of the virt-handler pod:

$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
Check the virt-handler logs for error messages when connecting to the API server:

$ oc logs -n $NAMESPACE <virt-handler>
Mitigation
If the virt-handler cannot connect to the API server, delete the pod to force a restart:

$ oc delete pod -n $NAMESPACE <virt-handler>
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.44. VirtHandlerRESTErrorsHigh
Meaning
More than 5% of REST calls failed in virt-handler in the last 60 minutes. This alert usually indicates that the virt-handler pods have partially lost connection to the API server.
This error is frequently caused by one of the following problems:
- The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
- The virt-handler pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact
Node-related actions, such as starting and migrating workloads, are delayed on the node that virt-handler is running on. Running workloads are not affected, but reporting their current status might be delayed.
Diagnosis
Set the NAMESPACE environment variable:

$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
Check the status of the virt-handler pod:

$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-handler
Check the virt-handler logs for error messages when connecting to the API server:

$ oc logs -n $NAMESPACE <virt-handler>
Mitigation
If the virt-handler cannot connect to the API server, delete the pod to force a restart:

$ oc delete pod -n $NAMESPACE <virt-handler>
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.45. VirtOperatorDown
Meaning
This alert fires when no virt-operator pod in the Running state has been detected for 10 minutes.
The virt-operator is the first Operator to start in a cluster. Its primary responsibilities include the following:
- Installing, live-updating, and live-upgrading a cluster
- Monitoring the life cycle of top-level controllers, such as virt-controller, virt-handler, and virt-launcher, and managing their reconciliation
- Certain cluster-wide tasks, such as certificate rotation and infrastructure management
By default, the virt-operator deployment has 2 replicas.
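You can compare the ready replica count against the desired count with a single query. A hedged sketch (the field paths are standard Deployment status fields; assumes NAMESPACE is exported as in the diagnosis steps):

```shell
# Sketch: print "ready/desired" replicas for the virt-operator deployment,
# e.g. "2/2" when both pods are healthy.
oc get deploy -n "$NAMESPACE" virt-operator \
  -o jsonpath='{.status.readyReplicas}/{.spec.replicas}{"\n"}'
```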
Impact
This alert indicates a failure at the level of the cluster. Critical cluster-wide management functionalities, such as certificate rotation, upgrade, and reconciliation of controllers, might not be available.
The virt-operator is not directly responsible for virtual machines (VMs) in the cluster. Therefore, its temporary unavailability does not significantly affect VM workloads.
Diagnosis
Set the NAMESPACE environment variable:

$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
Check the status of the virt-operator deployment:

$ oc -n $NAMESPACE get deploy virt-operator -o yaml
Obtain the details of the virt-operator deployment:

$ oc -n $NAMESPACE describe deploy virt-operator
Check the status of the virt-operator pods:

$ oc get pods -n $NAMESPACE -l=kubevirt.io=virt-operator
Check for node issues, such as a NotReady state:

$ oc get nodes
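If a virt-operator pod is stuck outside the Running state, its recent events often explain why (for example, scheduling failures or image pull errors). A hedged sketch (the field selector syntax is standard Kubernetes; substitute the actual pod name for the placeholder):

```shell
# Sketch: list events for one virt-operator pod. Assumes NAMESPACE is
# exported as above; <virt-operator> is a placeholder pod name.
oc get events -n "$NAMESPACE" \
  --field-selector involvedObject.kind=Pod,involvedObject.name=<virt-operator>
```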
Mitigation
Based on the information obtained during the diagnosis procedure, try to find the root cause and resolve the issue.
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.46. VirtOperatorRESTErrorsBurst
Meaning
This alert fires when more than 80% of the REST calls in the virt-operator pods failed in the last 5 minutes. This usually indicates that the virt-operator pods cannot connect to the API server.
This error is frequently caused by one of the following problems:
- The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
- The virt-operator pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact
Cluster-level actions, such as upgrading and controller reconciliation, might not be available.
However, workloads such as virtual machines (VMs) and VM instances (VMIs) are not likely to be affected.
Diagnosis
Set the NAMESPACE environment variable:

$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
Check the status of the virt-operator pods:

$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
Check the virt-operator logs for error messages when connecting to the API server:

$ oc -n $NAMESPACE logs <virt-operator>
Obtain the details of the virt-operator pod:

$ oc -n $NAMESPACE describe pod <virt-operator>
Mitigation
If the virt-operator pod cannot connect to the API server, delete the pod to force a restart:

$ oc delete pod -n $NAMESPACE <virt-operator>
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.47. VirtOperatorRESTErrorsHigh
Meaning
This alert fires when more than 5% of the REST calls in virt-operator pods failed in the last 60 minutes. This usually indicates the virt-operator pods cannot connect to the API server.
This error is frequently caused by one of the following problems:
- The API server is overloaded, which causes timeouts. To verify if this is the case, check the metrics of the API server, and view its response times and overall calls.
- The virt-operator pod cannot reach the API server. This is commonly caused by DNS issues on the node and networking connectivity issues.
Impact
Cluster-level actions, such as upgrading and controller reconciliation, might be delayed.
However, workloads such as virtual machines (VMs) and VM instances (VMIs) are not likely to be affected.
Diagnosis
Set the NAMESPACE environment variable:

$ export NAMESPACE="$(oc get kubevirt -A -o custom-columns="":.metadata.namespace)"
Check the status of the virt-operator pods:

$ oc -n $NAMESPACE get pods -l kubevirt.io=virt-operator
Check the virt-operator logs for error messages when connecting to the API server:

$ oc -n $NAMESPACE logs <virt-operator>
Obtain the details of the virt-operator pod:

$ oc -n $NAMESPACE describe pod <virt-operator>
Mitigation
If the virt-operator pod cannot connect to the API server, delete the pod to force a restart:

$ oc delete pod -n $NAMESPACE <virt-operator>
If you cannot resolve the issue, log in to the Customer Portal and open a support case, attaching the artifacts gathered during the diagnosis procedure.
14.5.48. VMCannotBeEvicted
Meaning
This alert fires when the eviction strategy of a virtual machine (VM) is set to LiveMigrate but the VM is not migratable.
Impact
Non-migratable VMs prevent node eviction. This condition affects operations such as node drain and updates.
Diagnosis
Check the VMI configuration to determine whether the value of evictionStrategy is LiveMigrate:

$ oc get vmis -o yaml
Check for a False status in the LIVE-MIGRATABLE column to identify VMIs that are not migratable:

$ oc get vmis -o wide
Obtain the details of the VMI and check status.conditions to identify the issue:

$ oc get vmi <vmi> -o yaml
Example output

status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: null
    message: cannot migrate VMI which does not use masquerade to connect to the pod network
    reason: InterfaceNotLiveMigratable
    status: "False"
    type: LiveMigratable
Mitigation
Set the evictionStrategy of the VMI to shutdown, or resolve the issue that prevents the VMI from migrating.
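The eviction strategy can also be changed declaratively on the VM object so that newly started VMIs inherit it. A hedged sketch, not a definitive procedure: the field path and the value "None" assume the KubeVirt VirtualMachine API; check the API reference for the strategy values supported by your version.

```shell
# Sketch: stop the VM from blocking eviction by removing the LiveMigrate
# strategy on its template. <vm> is a placeholder VM name.
oc patch vm <vm> --type merge \
  -p '{"spec":{"template":{"spec":{"evictionStrategy":"None"}}}}'
```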