Chapter 12. Logging, events, and monitoring

12.1. Viewing virtual machine logs

12.1.1. Understanding virtual machine logs

Logs are collected for OpenShift Container Platform Builds, Deployments, and pods. In OpenShift Virtualization, virtual machine logs can be retrieved from the virtual machine launcher Pod in either the web console or the CLI.

The -f option follows the log output in real time, which is useful for monitoring progress and error checking.

If the launcher Pod is failing to start, use the --previous option to see the logs of the last attempt.
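For example, using the virt-launcher-<name> pod naming shown in the procedures below, the two options are applied to the basic command as follows (an illustration only, not an additional required step):

$ oc logs -f <virt-launcher-name>
$ oc logs <virt-launcher-name> --previous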

Warning

ErrImagePull and ImagePullBackOff errors can be caused by an incorrect Deployment configuration or problems with the images that are referenced.

12.1.2. Viewing virtual machine logs in the CLI

Get virtual machine logs from the virtual machine launcher pod.

Procedure

  • Use the following command:

    $ oc logs <virt-launcher-name>
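
    If you do not know the exact launcher pod name, you can list the pods in the namespace and look for the virt-launcher-<name> prefix described above. The grep filter is only an illustrative convenience:

    $ oc get pods -n <namespace> | grep virt-launcher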

12.1.3. Viewing virtual machine logs in the web console

Get virtual machine logs from the associated virtual machine launcher pod.

Procedure

  1. In the OpenShift Virtualization console, click Workloads > Virtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Select a virtual machine to open the Virtual Machine Overview screen.
  4. In the Details tab, click the virt-launcher-<name> Pod in the Pod section.
  5. Click Logs.

12.2. Viewing events

12.2.1. Understanding virtual machine events

OpenShift Container Platform events are records of important life-cycle information in a namespace and are useful for monitoring and troubleshooting resource scheduling, creation, and deletion issues.

OpenShift Virtualization adds events for virtual machines and virtual machine instances. These can be viewed from either the web console or the CLI.

See also: Viewing system event information in an OpenShift Container Platform cluster.

12.2.2. Viewing the events for a virtual machine in the web console

You can view the stream of events for a running virtual machine from the Virtual Machine Overview panel of the web console.

The ⏸ (pause) button pauses the events stream.
The ▶ (play) button resumes a paused events stream.

Procedure

  1. Click Workloads > Virtualization from the side menu.
  2. Click the Virtual Machines tab.
  3. Select a virtual machine to open the Virtual Machine Overview screen.
  4. Click Events to view all events for the virtual machine.

12.2.3. Viewing namespace events in the CLI

Use the OpenShift Container Platform client to get the events for a namespace.

Procedure

  • In the namespace, use the oc get command:

    $ oc get events
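
    If your current project is not the namespace you want to inspect, or if you want to narrow the output to a single resource, you can add a namespace and a field selector. These are optional refinements, not part of the procedure:

    $ oc get events -n <namespace>
    $ oc get events -n <namespace> --field-selector involvedObject.name=<vmi-name>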

12.2.4. Viewing resource events in the CLI

Events are included in the resource description, which you can get using the OpenShift Container Platform client.

Procedure

  • In the namespace, use the oc describe command. The following example shows how to get the events for a virtual machine, a virtual machine instance, and the virt-launcher Pod for a virtual machine:

    $ oc describe vm <vm>
    $ oc describe vmi <vmi>
    $ oc describe pod virt-launcher-<name>

12.3. Diagnosing DataVolumes using events and conditions

Use the oc describe command to analyze and help resolve issues with DataVolumes.

12.3.1. About conditions and events

Diagnose DataVolume issues by examining the output of the Conditions and Events sections generated by the command:

$ oc describe dv <DataVolume>

The Conditions section displays three Types:

  • Bound
  • Running
  • Ready

The Events section provides the following additional information:

  • Type of event
  • Reason for logging
  • Source of the event
  • Message containing additional diagnostic information.

The output from oc describe does not always contain Events.

An event is generated when either Status, Reason, or Message changes. Both conditions and events react to changes in the state of the DataVolume.

For example, if you misspell the URL during an import operation, the import generates a 404 message. That message change generates an event with a reason. The output in the Conditions section is updated as well.
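
If you prefer machine-readable output, you can also read the conditions directly from the resource instead of parsing the oc describe output. This is an optional alternative and assumes the conditions are exposed under .status.conditions, as the describe examples in the next section indicate:

$ oc get dv <DataVolume> -o jsonpath='{.status.conditions}'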

12.3.2. Analyzing DataVolumes using conditions and events

By inspecting the Conditions and Events sections generated by the describe command, you can determine the state of the DataVolume in relation to PersistentVolumeClaims (PVCs) and whether an operation is actively running or has completed. You might also receive messages that offer specific details about the status of the DataVolume, and how it came to be in its current state.

There are many different combinations of conditions. Each must be evaluated in its unique context.

Examples of various combinations follow.

  • Bound – A successfully bound PVC displays in this example.

    Note that the Type is Bound, so the Status is True. If the PVC is not bound, the Status is False.

    When the PVC is bound, an event is generated stating that the PVC is bound. In this case, the Reason is Bound and Status is True. The Message indicates which PVC owns the DataVolume.

    The Events section provides further details, including how long the PVC has been bound (Age) and by what resource (From), in this case datavolume-controller:

    Example output

    Status:
    	Conditions:
    		Last Heart Beat Time:  2020-07-15T03:58:24Z
    		Last Transition Time:  2020-07-15T03:58:24Z
    		Message:               PVC win10-rootdisk Bound
    		Reason:                Bound
    		Status:                True
    		Type:                  Bound
    
    	Events:
    		Type     Reason     Age    From                   Message
    		----     ------     ----   ----                   -------
    		Normal   Bound      24s    datavolume-controller  PVC example-dv Bound

  • Running – In this case, note that Type is Running and Status is False, indicating that an event has occurred that caused an attempted operation to fail, changing the Status from True to False.

    However, note that Reason is Completed and the Message field indicates Import Complete.

    In the Events section, the Reason and Message contain additional troubleshooting information about the failed operation. In this example, the Message displays an inability to connect due to a 404, listed in the Events section’s first Warning.

    From this information, you conclude that an import operation was running, creating contention for other operations that are attempting to access the DataVolume:

    Example output

    Status:
    	 Conditions:
    		 Last Heart Beat Time:  2020-07-15T04:31:39Z
    		 Last Transition Time:  2020-07-15T04:31:39Z
    		 Message:               Import Complete
    		 Reason:                Completed
    		 Status:                False
    		 Type:                  Running
    
    	Events:
    		Type     Reason           Age                From                   Message
    		----     ------           ----               ----                   -------
    		Warning  Error            12s (x2 over 14s)  datavolume-controller  Unable to connect
    		to http data source: expected status code 200, got 404. Status: 404 Not Found

  • Ready – If Type is Ready and Status is True, then the DataVolume is ready to be used, as in the following example. If the DataVolume is not ready to be used, the Status is False:

    Example output

    Status:
    	 Conditions:
    		 Last Heart Beat Time: 2020-07-15T04:31:39Z
    		 Last Transition Time:  2020-07-15T04:31:39Z
    		 Status:                True
    		 Type:                  Ready

12.4. Viewing information about virtual machine workloads

You can view high-level information about your virtual machines by using the Virtual Machines dashboard in the OpenShift Container Platform web console.

12.4.1. About the Virtual Machines dashboard

Access virtual machines from the OpenShift Container Platform web console by navigating to the Workloads > Virtualization page. The Workloads > Virtualization page contains two tabs:

  • Virtual Machines
  • Virtual Machine Templates

The following cards describe each virtual machine:

  • Details provides identifying information about the virtual machine, including:

    • Name
    • Namespace
    • Date of creation
    • Node name
    • IP address
  • Inventory lists the virtual machine’s resources, including:

    • Network interface controllers (NICs)
    • Disks
  • Status includes:

    • The current status of the virtual machine
    • A note indicating whether or not the QEMU guest agent is installed on the virtual machine
  • Utilization includes charts that display usage data for:

    • CPU
    • Memory
    • Filesystem
    • Network transfer
Note

Use the drop-down list to choose a duration for the utilization data. The available options are 1 Hour, 6 Hours, and 24 Hours.

  • Events lists messages about virtual machine activity over the past hour. To view additional events, click View all.

12.5. Monitoring virtual machine health

Use this procedure to create liveness and readiness probes to monitor virtual machine health.

12.5.1. About liveness and readiness probes

When a VirtualMachineInstance (VMI) fails, liveness probes stop the VMI. Controllers such as VirtualMachine then spawn other VMIs, restoring virtual machine responsiveness.

Readiness probes tell services and endpoints that the VirtualMachineInstance is ready to receive traffic from services. If readiness probes fail, the VirtualMachineInstance is removed from applicable endpoints until the probe recovers.

12.5.2. Define an HTTP liveness probe

This procedure provides an example configuration file for defining HTTP liveness probes.

Procedure

  1. Customize a YAML configuration file to create an HTTP liveness probe, using the following code block as an example. In this example:

    • You configure a probe using spec.livenessProbe.httpGet, which queries port 1500 of the virtual machine instance, after an initial delay of 120 seconds.
    • The virtual machine instance installs and runs a minimal HTTP server on port 1500 using cloud-init.
    Note

    The timeoutSeconds value must be lower than the periodSeconds value. The timeoutSeconds default value is 1. The periodSeconds default value is 10.

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachineInstance
    metadata:
      labels:
        special: vmi-fedora
      name: vmi-fedora
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
        resources:
          requests:
            memory: 1024M
      livenessProbe:
        initialDelaySeconds: 120
        periodSeconds: 20
        httpGet:
          port: 1500
        timeoutSeconds: 10
      terminationGracePeriodSeconds: 0
      volumes:
      - name: containerdisk
        registryDisk:
          image: kubevirt/fedora-cloud-registry-disk-demo
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            password: fedora
            chpasswd: { expire: False }
            bootcmd:
              - setenforce 0
              - dnf install -y nmap-ncat
              - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\n\\nHello World!'
        name: cloudinitdisk
  2. Create the VirtualMachineInstance by running the following command:

    $ oc create -f <file name>.yaml
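
    After the VirtualMachineInstance starts, you can optionally verify that it is running. The command below uses the vmi-fedora name from the example above:

    $ oc get vmi vmi-fedora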

12.5.3. Define a TCP liveness probe

This procedure provides an example configuration file for defining TCP liveness probes.

Procedure

  1. Customize a YAML configuration file to create a TCP liveness probe, using the following code block as an example. In this example:

    • You configure a probe using spec.livenessProbe.tcpSocket, which queries port 1500 of the virtual machine instance, after an initial delay of 120 seconds.
    • The virtual machine instance installs and runs a minimal HTTP server on port 1500 using cloud-init.
    Note

    The timeoutSeconds value must be lower than the periodSeconds value. The timeoutSeconds default value is 1. The periodSeconds default value is 10.

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachineInstance
    metadata:
      labels:
        special: vmi-fedora
      name: vmi-fedora
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
        resources:
          requests:
            memory: 1024M
      livenessProbe:
        initialDelaySeconds: 120
        periodSeconds: 20
        tcpSocket:
          port: 1500
        timeoutSeconds: 10
      terminationGracePeriodSeconds: 0
      volumes:
      - name: containerdisk
        registryDisk:
          image: kubevirt/fedora-cloud-registry-disk-demo
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            password: fedora
            chpasswd: { expire: False }
            bootcmd:
              - setenforce 0
              - dnf install -y nmap-ncat
              - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\n\\nHello World!'
        name: cloudinitdisk
  2. Create the VirtualMachineInstance by running the following command:

    $ oc create -f <file name>.yaml

12.5.4. Define a readiness probe

This procedure provides an example configuration file for defining readiness probes.

Procedure

  1. Customize a YAML configuration file to create a readiness probe. Readiness probes are configured in a similar manner to liveness probes. However, note the following differences in this example:

    • Readiness probes are configured under a different spec name. For example, you create a readiness probe as spec.readinessProbe.<type-of-probe> instead of as spec.livenessProbe.<type-of-probe>.
    • When creating a readiness probe, you optionally set a failureThreshold and a successThreshold to switch between ready and non-ready states, should the probe succeed or fail multiple times.
    Note

    The timeoutSeconds value must be lower than the periodSeconds value. The timeoutSeconds default value is 1. The periodSeconds default value is 10.

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachineInstance
    metadata:
      labels:
        special: vmi-fedora
      name: vmi-fedora
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
        resources:
          requests:
            memory: 1024M
      readinessProbe:
        httpGet:
          port: 1500
        initialDelaySeconds: 120
        periodSeconds: 20
        timeoutSeconds: 10
        failureThreshold: 3
        successThreshold: 3
      terminationGracePeriodSeconds: 0
      volumes:
      - name: containerdisk
        registryDisk:
          image: kubevirt/fedora-cloud-registry-disk-demo
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            password: fedora
            chpasswd: { expire: False }
            bootcmd:
              - setenforce 0
              - dnf install -y nmap-ncat
              - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\n\\nHello World!'
        name: cloudinitdisk
  2. Create the VirtualMachineInstance by running the following command:

    $ oc create -f <file name>.yaml
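
    To see the outcome of the readiness probe, you can optionally inspect the conditions that the VirtualMachineInstance reports. This assumes that the Ready condition is surfaced under .status.conditions, which is standard KubeVirt behavior:

    $ oc get vmi vmi-fedora -o jsonpath='{.status.conditions}'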

12.6. Using the OpenShift Container Platform dashboard to get cluster information

Access the OpenShift Container Platform dashboard, which captures high-level information about the cluster, by clicking Home > Dashboards > Overview from the OpenShift Container Platform web console.

The OpenShift Container Platform dashboard provides various cluster information, captured in individual dashboard cards.

12.6.1. About the OpenShift Container Platform dashboards page

The OpenShift Container Platform dashboard consists of the following cards:

  • Details provides a brief overview of informational cluster details.

    Statuses include ok, error, warning, in progress, and unknown. Resources can add custom status names.

    • Cluster ID
    • Provider
    • Version
  • Cluster Inventory details the number of resources and their associated statuses. It is helpful when intervention is required to resolve problems, and includes information about:

    • Number of nodes
    • Number of pods
    • Persistent storage volume claims
    • Virtual machines (available if OpenShift Virtualization is installed)
    • Bare metal hosts in the cluster, listed according to their state (only available in a metal3 environment)
  • Cluster Health summarizes the current health of the cluster as a whole, including relevant alerts and descriptions. If OpenShift Virtualization is installed, the overall health of OpenShift Virtualization is diagnosed as well. If more than one subsystem is present, click See All to view the status of each subsystem.
  • Cluster Capacity charts help administrators understand when additional resources are required in the cluster. The charts contain an inner ring that displays current consumption, while an outer ring displays thresholds configured for the resource, including information about:

    • CPU time
    • Memory allocation
    • Storage consumed
    • Network resources consumed
  • Cluster Utilization shows the capacity of various resources over a specified period of time, to help administrators understand the scale and frequency of high resource consumption.
  • Events lists messages related to recent activity in the cluster, such as pod creation or virtual machine migration to another host.
  • Top Consumers helps administrators understand how cluster resources are consumed. Click on a resource to jump to a detailed page listing pods and nodes that consume the largest amount of the specified cluster resource (CPU, memory, or storage).

12.7. OpenShift Container Platform cluster monitoring, logging, and Telemetry

OpenShift Container Platform provides various resources for monitoring at the cluster level.

12.7.1. About OpenShift Container Platform cluster monitoring

OpenShift Container Platform includes a pre-configured, pre-installed, and self-updating monitoring stack that is based on the Prometheus open source project and its wider ecosystem. It provides monitoring of cluster components, a set of alerts that immediately notify the cluster administrator about any occurring problems, and a set of Grafana dashboards. The cluster monitoring stack is only supported for monitoring OpenShift Container Platform clusters.

Important

To ensure compatibility with future OpenShift Container Platform updates, configuring only the specified monitoring stack options is supported.

12.7.2. About cluster logging components

The cluster logging components include a collector deployed to each node in the OpenShift Container Platform cluster that collects all node and container logs and writes them to a log store. You can use a centralized web UI to create rich visualizations and dashboards with the aggregated data.

The major components of cluster logging are:

  • collection - This is the component that collects logs from the cluster, formats them, and forwards them to the log store. The current implementation is Fluentd.
  • log store - This is where the logs are stored. The default implementation is Elasticsearch. You can use the default Elasticsearch log store or forward logs to external log stores. The default log store is optimized and tested for short-term storage.
  • visualization - This is the UI component you can use to view logs, graphs, charts, and so forth. The current implementation is Kibana.
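
If cluster logging is installed, you can see these components running as pods in the openshift-logging namespace, the same namespace that appears in the example must-gather output later in this chapter. This is only an illustrative check:

$ oc get pods -n openshift-logging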

For more information on cluster logging, see the OpenShift Container Platform cluster logging documentation.

12.7.3. About Telemetry

Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. The Telemeter Client fetches the metrics values every four minutes and thirty seconds and uploads the data to Red Hat. These metrics are described in this document.

This stream of data is used by Red Hat to monitor the clusters in real-time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Container Platform upgrades to customers to minimize service impact and continuously improve the upgrade experience.

This debugging information is available to Red Hat Support and Engineering teams with the same restrictions as accessing data reported through support cases. All connected cluster information is used by Red Hat to help make OpenShift Container Platform better and more intuitive to use.

12.7.3.1. Information collected by Telemetry

The following information is collected by Telemetry:

  • The unique random identifier that is generated during an installation
  • Version information, including the OpenShift Container Platform cluster version and installed update details that are used to determine update version availability
  • Update information, including the number of updates available per cluster, the channel and image repository used for an update, update progress information, and the number of errors that occur in an update
  • The name of the provider platform that OpenShift Container Platform is deployed on and the data center location
  • Sizing information about clusters, machine types, and machines, including the number of CPU cores and the amount of RAM used for each
  • The number of running virtual machine instances in a cluster
  • The number of etcd members and the number of objects stored in the etcd cluster
  • The OpenShift Container Platform framework components installed in a cluster and their condition and status
  • Usage information about components, features, and extensions
  • Usage details about Technology Previews and unsupported configurations
  • Information about degraded software
  • Information about nodes that are marked as NotReady
  • Events for all namespaces listed as "related objects" for a degraded Operator
  • Configuration details that help Red Hat Support to provide beneficial support for customers. This includes node configuration at the cloud infrastructure level, host names, IP addresses, Kubernetes pod names, namespaces, and services.
  • Information about the validity of certificates

Telemetry does not collect identifying information such as user names or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, see the Red Hat Privacy Statement for more information about Red Hat’s privacy practices.

12.7.4. CLI troubleshooting and debugging commands

For a list of the oc client troubleshooting and debugging commands, see the OpenShift Container Platform CLI tools documentation.

12.8. Collecting OpenShift Virtualization data for Red Hat Support

When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.

The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including virtual machines and other data related to OpenShift Virtualization.

For prompt support, supply diagnostic information for both OpenShift Container Platform and OpenShift Virtualization.

12.8.1. About the must-gather tool

The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, such as:

  • Resource definitions
  • Audit logs
  • Service logs

You can specify one or more images when you run the command by including the --image argument. When you specify an image, the tool collects data related to that feature or product.

When you run oc adm must-gather, a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local. This directory is created in the current working directory.
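
In its simplest form, you run the command without any --image argument to collect only the default cluster data:

$ oc adm must-gather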

12.8.2. About collecting OpenShift Virtualization data

You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with OpenShift Virtualization:

  • The Hyperconverged Cluster Operator namespaces (and child objects)
  • All namespaces (and their child objects) that belong to any OpenShift Virtualization resources
  • All OpenShift Virtualization Custom Resource Definitions (CRDs)
  • All namespaces that contain virtual machines
  • All virtual machine definitions

To collect OpenShift Virtualization data with must-gather, you must specify the OpenShift Virtualization image: --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v2.4.9.
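
For example, a run that collects only the OpenShift Virtualization data, without the default cluster data, might look like the following. The next section shows how to combine this image with the default must-gather image:

$ oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v2.4.9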

12.8.3. Gathering data about specific features

You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command.

Note

To collect the default must-gather data in addition to specific feature data, add the --image-stream=openshift/must-gather argument.

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.
  • The OpenShift Container Platform CLI (oc) installed.

Procedure

  1. Navigate to the directory where you want to store the must-gather data.
  2. Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to OpenShift Virtualization:

    $ oc adm must-gather \
     --image-stream=openshift/must-gather \ 1
     --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v2.4.9 2
    1
    The default OpenShift Container Platform must-gather image
    2
    The must-gather image for OpenShift Virtualization

    You can use the must-gather tool with additional arguments to gather data that is specifically related to cluster logging and the Cluster Logging Operator in your cluster. For cluster logging, run the following command:

    $ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator \
     -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')

    Example 12.1. Example must-gather output for cluster logging

    ├── cluster-logging
    │  ├── clo
    │  │  ├── cluster-logging-operator-74dd5994f-6ttgt
    │  │  ├── clusterlogforwarder_cr
    │  │  ├── cr
    │  │  ├── csv
    │  │  ├── deployment
    │  │  └── logforwarding_cr
    │  ├── collector
    │  │  ├── fluentd-2tr64
    │  ├── curator
    │  │  └── curator-1596028500-zkz4s
    │  ├── eo
    │  │  ├── csv
    │  │  ├── deployment
    │  │  └── elasticsearch-operator-7dc7d97b9d-jb4r4
    │  ├── es
    │  │  ├── cluster-elasticsearch
    │  │  │  ├── aliases
    │  │  │  ├── health
    │  │  │  ├── indices
    │  │  │  ├── latest_documents.json
    │  │  │  ├── nodes
    │  │  │  ├── nodes_stats.json
    │  │  │  └── thread_pool
    │  │  ├── cr
    │  │  ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
    │  │  └── logs
    │  │     ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
    │  ├── install
    │  │  ├── co_logs
    │  │  ├── install_plan
    │  │  ├── olmo_logs
    │  │  └── subscription
    │  └── kibana
    │     ├── cr
    │     ├── kibana-9d69668d4-2rkvz
    ├── cluster-scoped-resources
    │  └── core
    │     ├── nodes
    │     │  ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml
    │     └── persistentvolumes
    │        ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml
    ├── event-filter.html
    ├── gather-debug.log
    └── namespaces
       ├── openshift-logging
       │  ├── apps
       │  │  ├── daemonsets.yaml
       │  │  ├── deployments.yaml
       │  │  ├── replicasets.yaml
       │  │  └── statefulsets.yaml
       │  ├── batch
       │  │  ├── cronjobs.yaml
       │  │  └── jobs.yaml
       │  ├── core
       │  │  ├── configmaps.yaml
       │  │  ├── endpoints.yaml
       │  │  ├── events
       │  │  │  ├── curator-1596021300-wn2ks.162634ebf0055a94.yaml
       │  │  │  ├── curator.162638330681bee2.yaml
       │  │  │  ├── elasticsearch-delete-app-1596020400-gm6nl.1626341a296c16a1.yaml
       │  │  │  ├── elasticsearch-delete-audit-1596020400-9l9n4.1626341a2af81bbd.yaml
       │  │  │  ├── elasticsearch-delete-infra-1596020400-v98tk.1626341a2d821069.yaml
       │  │  │  ├── elasticsearch-rollover-app-1596020400-cc5vc.1626341a3019b238.yaml
       │  │  │  ├── elasticsearch-rollover-audit-1596020400-s8d5s.1626341a31f7b315.yaml
       │  │  │  ├── elasticsearch-rollover-infra-1596020400-7mgv8.1626341a35ea59ed.yaml
       │  │  ├── events.yaml
       │  │  ├── persistentvolumeclaims.yaml
       │  │  ├── pods.yaml
       │  │  ├── replicationcontrollers.yaml
       │  │  ├── secrets.yaml
       │  │  └── services.yaml
       │  ├── openshift-logging.yaml
       │  ├── pods
       │  │  ├── cluster-logging-operator-74dd5994f-6ttgt
       │  │  │  ├── cluster-logging-operator
       │  │  │  │  └── cluster-logging-operator
       │  │  │  │     └── logs
       │  │  │  │        ├── current.log
       │  │  │  │        ├── previous.insecure.log
       │  │  │  │        └── previous.log
       │  │  │  └── cluster-logging-operator-74dd5994f-6ttgt.yaml
       │  │  ├── cluster-logging-operator-registry-6df49d7d4-mxxff
       │  │  │  ├── cluster-logging-operator-registry
       │  │  │  │  └── cluster-logging-operator-registry
       │  │  │  │     └── logs
       │  │  │  │        ├── current.log
       │  │  │  │        ├── previous.insecure.log
       │  │  │  │        └── previous.log
       │  │  │  ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml
       │  │  │  └── mutate-csv-and-generate-sqlite-db
       │  │  │     └── mutate-csv-and-generate-sqlite-db
       │  │  │        └── logs
       │  │  │           ├── current.log
       │  │  │           ├── previous.insecure.log
       │  │  │           └── previous.log
       │  │  ├── curator-1596028500-zkz4s
       │  │  ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms
       │  │  ├── elasticsearch-delete-app-1596030300-bpgcx
       │  │  │  ├── elasticsearch-delete-app-1596030300-bpgcx.yaml
       │  │  │  └── indexmanagement
       │  │  │     └── indexmanagement
       │  │  │        └── logs
       │  │  │           ├── current.log
       │  │  │           ├── previous.insecure.log
       │  │  │           └── previous.log
       │  │  ├── fluentd-2tr64
       │  │  │  ├── fluentd
       │  │  │  │  └── fluentd
       │  │  │  │     └── logs
       │  │  │  │        ├── current.log
       │  │  │  │        ├── previous.insecure.log
       │  │  │  │        └── previous.log
       │  │  │  ├── fluentd-2tr64.yaml
       │  │  │  └── fluentd-init
       │  │  │     └── fluentd-init
       │  │  │        └── logs
       │  │  │           ├── current.log
       │  │  │           ├── previous.insecure.log
       │  │  │           └── previous.log
       │  │  ├── kibana-9d69668d4-2rkvz
       │  │  │  ├── kibana
       │  │  │  │  └── kibana
       │  │  │  │     └── logs
       │  │  │  │        ├── current.log
       │  │  │  │        ├── previous.insecure.log
       │  │  │  │        └── previous.log
       │  │  │  ├── kibana-9d69668d4-2rkvz.yaml
       │  │  │  └── kibana-proxy
       │  │  │     └── kibana-proxy
       │  │  │        └── logs
       │  │  │           ├── current.log
       │  │  │           ├── previous.insecure.log
       │  │  │           └── previous.log
       │  └── route.openshift.io
       │     └── routes.yaml
       └── openshift-operators-redhat
          ├── ...
  3. Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:

    $ tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1
    1
    Make sure to replace must-gather.local.5421342344627712289/ with the actual directory name.
  4. Attach the compressed file to your support case on the Red Hat Customer Portal.