Chapter 2. Logging

OpenShift Dedicated includes an optional logging stack based on Elasticsearch, Fluentd, and Kibana (EFK). When deployed, Elasticsearch runs as a three-shard cluster with one replica per shard. The logging stack in OpenShift is designed for short-term retention to aid application troubleshooting, not for long-term log archiving.

2.1. Cluster operations logging

Red Hat provides services to maintain the health and performance of each OpenShift Dedicated cluster and its components. This includes cluster operations and audit logs. Cluster operations logs are enabled through the optional Cluster Logging Operator and Elasticsearch Operator, as described in the OpenShift Dedicated product documentation. When deployed, the logging stack aggregates logs from the cluster, nodes, and pods and retains them for 1 hour to assist the SRE team in cluster troubleshooting. Operations logs are not intended for customer access; they remain under the full control of Red Hat.

2.2. Cluster audit logging

Cluster audit logs are always enabled. Audit logs are streamed to a log aggregation system outside the cluster VPC for automated security analysis and secure retention for 90 days. Red Hat controls the log aggregation system; customers do not have access to it. Customers may receive a copy of their cluster's audit logs upon request through a support ticket. Audit log requests must specify a date and time range not to exceed 21 days. When requesting audit logs, customers should be aware that audit logs can be many gigabytes per day in size.

2.3. Application logging

Application logs sent to STDOUT are collected by Fluentd and made available through the cluster logging stack, if it is installed. Retention is set to 7 days, but will not exceed 200 GiB of logs per shard. For longer-term retention, customers should follow the sidecar container design in their deployments and forward logs to the log aggregation or analytics service of their choice.
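As one way to apply the sidecar guidance above, a pod can share a volume between the application container and a log-forwarding sidecar that ships logs to an external service. The sketch below is a minimal, hypothetical example: the image names, the FORWARD_HOST variable, and the aggregation endpoint are illustrative assumptions, not part of OpenShift Dedicated itself.

```yaml
# Hypothetical sidecar pattern: the application writes log files to a
# shared emptyDir volume, and a forwarding sidecar reads them and ships
# them to an external log aggregation service of the customer's choice.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  containers:
  - name: app
    image: registry.example.com/my-app:latest          # hypothetical app image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app                          # app writes logs here
  - name: log-forwarder
    image: registry.example.com/log-forwarder:latest   # hypothetical forwarder image
    env:
    - name: FORWARD_HOST                               # hypothetical setting: endpoint of the
      value: logs.example.com                          # customer's external aggregation service
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true                                   # sidecar only reads the logs
  volumes:
  - name: app-logs
    emptyDir: {}                                       # shared, pod-scoped scratch volume
```

Because the sidecar runs in the same pod, it sees the application's log files directly and can apply whatever retention or filtering the customer's external service requires, independent of the 7-day in-cluster limit.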

It is Red Hat's expectation and guidance that application logging workloads are scheduled on a customer's worker nodes. This includes workloads such as Elasticsearch and the Kibana dashboard. Application logging is considered a customer workload, given that logging rates differ per cluster and per customer.