Chapter 3. Logging 5.6
3.1. Logging 5.6 Release Notes
Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-X where X is the version of logging you have installed.
3.1.1. Logging 5.6.5
This release includes OpenShift Logging Bug Fix Release 5.6.5.
3.1.1.1. Bug fixes
- Before this update, the template definitions prevented Elasticsearch from indexing some labels and namespace_labels, causing issues with data ingestion. With this update, the fix replaces dots and slashes in labels to ensure proper ingestion, effectively resolving the issue. (LOG-3419)
- Before this update, if the Logs page of the OpenShift Web Console failed to connect to the LokiStack, a generic error message was displayed, providing no additional context or troubleshooting suggestions. With this update, the error message has been enhanced to include more specific details and recommendations for troubleshooting. (LOG-3750)
- Before this update, time range formats were not validated, leading to errors selecting a custom date range. With this update, time formats are now validated, enabling users to select a valid range. If an invalid time range format is selected, an error message is displayed to the user. (LOG-3583)
- Before this update, when searching logs in Loki, even if the length of an expression did not exceed 5120 characters, the query would fail in many cases. With this update, query authorization label matchers have been optimized, resolving the issue. (LOG-3480)
- Before this update, the Loki Operator failed to produce a memberlist configuration that was sufficient for locating all the components when using a memberlist for private IPs. With this update, the fix ensures that the generated configuration includes the advertised port, allowing for successful lookup of all components. (LOG-4008)
3.1.1.2. CVEs
3.1.2. Logging 5.6.4
This release includes OpenShift Logging Bug Fix Release 5.6.4.
3.1.2.1. Bug fixes
- Before this update, when LokiStack was deployed as the log store, the logs generated by Loki pods were collected and sent to LokiStack. With this update, the logs generated by Loki are excluded from collection and will not be stored. (LOG-3280)
- Before this update, when the query editor on the Logs page of the OpenShift Web Console was empty, the drop-down menus did not populate. With this update, if an empty query is attempted, an error message is displayed and the drop-down menus now populate as expected. (LOG-3454)
- Before this update, when the tls.insecureSkipVerify option was set to true, the Cluster Logging Operator would generate incorrect configuration. As a result, the operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Cluster Logging Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. (LOG-3475)
- Before this update, when structured parsing was enabled and messages were forwarded to multiple destinations, they were not deep copied. This resulted in some of the received logs including the structured message, while others did not. With this update, the configuration generation has been modified to deep copy messages before JSON parsing. As a result, all received messages now have structured messages included, even when they are forwarded to multiple destinations. (LOG-3640)
- Before this update, if the collection field contained {}, it could result in the Operator crashing. With this update, the Operator will ignore this value, allowing the operator to continue running smoothly without interruption. (LOG-3733)
- Before this update, the nodeSelector attribute for the Gateway component of LokiStack did not have any effect. With this update, the nodeSelector attribute functions as expected. (LOG-3783)
- Before this update, the static LokiStack memberlist configuration relied solely on private IP networks. As a result, when the OpenShift Container Platform cluster pod network was configured with a public IP range, the LokiStack pods would crashloop. With this update, the LokiStack administrator now has the option to use the pod network for the memberlist configuration. This resolves the issue and prevents the LokiStack pods from entering a crashloop state when the OpenShift Container Platform cluster pod network is configured with a public IP range. (LOG-3814)
- Before this update, if the tls.insecureSkipVerify field was set to true, the Cluster Logging Operator would generate an incorrect configuration. As a result, the Operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. (LOG-3838)
- Before this update, if the Cluster Logging Operator (CLO) was installed without the Elasticsearch Operator, the CLO pod would continuously display an error message related to the deletion of Elasticsearch. With this update, the CLO now performs additional checks before displaying any error messages. As a result, error messages related to Elasticsearch deletion are no longer displayed in the absence of the Elasticsearch Operator. (LOG-3763)
3.1.2.2. CVEs
3.1.3. Logging 5.6.3
This release includes OpenShift Logging Bug Fix Release 5.6.3.
3.1.3.1. Bug fixes
- Before this update, the operator stored gateway tenant secret information in a config map. With this update, the operator stores this information in a secret. (LOG-3717)
- Before this update, the Fluentd collector did not capture OAuth login events stored in /var/log/auth-server/audit.log. With this update, Fluentd captures these OAuth login events, resolving the issue. (LOG-3729)
3.1.3.2. CVEs
3.1.4. Logging 5.6.2
This release includes OpenShift Logging Bug Fix Release 5.6.2.
3.1.4.1. Bug fixes
- Before this update, the collector did not set level fields correctly based on priority for systemd logs. With this update, level fields are set correctly. (LOG-3429)
- Before this update, the Operator incorrectly generated incompatibility warnings on OpenShift Container Platform 4.12 or later. With this update, the Operator max OpenShift Container Platform version value has been corrected, resolving the issue. (LOG-3584)
- Before this update, creating a ClusterLogForwarder custom resource (CR) with an output value of default did not generate any errors. With this update, an error warning that this value is invalid generates appropriately. (LOG-3437)
- Before this update, when the ClusterLogForwarder custom resource (CR) had multiple pipelines configured with one output set as default, the collector pods restarted. With this update, the logic for output validation has been corrected, resolving the issue. (LOG-3559)
- Before this update, collector pods restarted after being created. With this update, the deployed collector does not restart on its own. (LOG-3608)
- Before this update, patch releases removed previous versions of the Operators from the catalog. This made installing the old versions impossible. This update changes bundle configurations so that previous releases of the same minor version stay in the catalog. (LOG-3635)
3.1.4.2. CVEs
3.1.5. Logging 5.6.1
This release includes OpenShift Logging Bug Fix Release 5.6.1.
3.1.5.1. Bug fixes
- Before this update, the compactor would report TLS certificate errors from communications with the querier when retention was active. With this update, the compactor and querier no longer communicate erroneously over HTTP. (LOG-3494)
- Before this update, the Loki Operator would not retry setting the status of the LokiStack CR, which caused stale status information. With this update, the Operator retries status information updates on conflict. (LOG-3496)
- Before this update, the Loki Operator Webhook server caused TLS errors when the kube-apiserver-operator Operator checked the webhook validity. With this update, the Loki Operator Webhook PKI is managed by the Operator Lifecycle Manager (OLM), resolving the issue. (LOG-3510)
- Before this update, the LokiStack Gateway Labels Enforcer generated parsing errors for valid LogQL queries when using combined label filters with boolean expressions. With this update, the LokiStack LogQL implementation supports label filters with boolean expressions, resolving the issue. (LOG-3441), (LOG-3397)
- Before this update, records written to Elasticsearch would fail if multiple label keys had the same prefix and some keys included dots. With this update, underscores replace dots in label keys, resolving the issue. (LOG-3463)
- Before this update, the Red Hat OpenShift Logging Operator was not available for OpenShift Container Platform 4.10 clusters because of an incompatibility between the OpenShift Container Platform console and the logging-view-plugin. With this update, the plugin is properly integrated with the OpenShift Container Platform 4.10 admin console. (LOG-3447)
- Before this update, the reconciliation of the ClusterLogForwarder custom resource would incorrectly report a degraded status of pipelines that reference the default log store. With this update, the pipeline validates properly. (LOG-3477)
3.1.5.2. CVEs
3.1.6. Logging 5.6.0
This release includes OpenShift Logging Release 5.6.
3.1.6.1. Deprecation notice
In logging version 5.6, Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead.
3.1.6.2. Enhancements
- With this update, Logging is compliant with OpenShift Container Platform cluster-wide cryptographic policies. (LOG-895)
- With this update, you can declare per-tenant, per-stream, and global retention policies through the LokiStack custom resource, ordered by priority. (LOG-2695)
- With this update, Splunk is an available output option for log forwarding. (LOG-2913)
- With this update, Vector replaces Fluentd as the default Collector. (LOG-2222)
- With this update, the Developer role can access the per-project workload logs they are assigned to within the Log Console Plugin on clusters running OpenShift Container Platform 4.11 and higher. (LOG-3388)
- With this update, logs from any source contain a field openshift.cluster_id, the unique identifier of the cluster in which the Operator is deployed. You can view the clusterID value with the following command. (LOG-2715)
$ oc get clusterversion/version -o jsonpath='{.spec.clusterID}{"\n"}'
3.1.6.3. Known Issues
- Before this update, Elasticsearch would reject logs if multiple label keys had the same prefix and some keys included the . character. This fixes the limitation of Elasticsearch by replacing . in the label keys with _. As a workaround for this issue, remove the labels that cause errors, or add a namespace to the label. (LOG-3463)
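As an illustration of the underscore substitution described above (a hedged sketch, not the Operator's actual code), a label key containing dots or slashes would be rewritten before indexing roughly as follows:

```shell
# Hypothetical illustration: rewrite dots and slashes in a label key
# to underscores, as the Elasticsearch-compatibility fix does.
key='app.kubernetes.io/name'
echo "$key" | sed 's#[./]#_#g'   # prints: app_kubernetes_io_name
```

With this substitution, keys that previously shared a prefix (for example, app.kubernetes.io/name and app.kubernetes.io/part-of) no longer collide with Elasticsearch's mapping rules for dotted field names.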
3.1.6.4. Bug fixes
- Before this update, if you deleted the Kibana Custom Resource, the OpenShift Container Platform web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. (LOG-2993)
- Before this update, a user was not able to view the application logs of namespaces they have access to. With this update, the Loki Operator automatically creates a cluster role and cluster role binding allowing users to read application logs. (LOG-3072)
- Before this update, the Operator removed any custom outputs defined in the ClusterLogForwarder custom resource when using LokiStack as the default log storage. With this update, the Operator merges custom outputs with the default outputs when processing the ClusterLogForwarder custom resource. (LOG-3090)
- Before this update, the CA key was used as the volume name for mounting the CA into Loki, causing error states when the CA key included non-conforming characters, such as dots. With this update, the volume name is standardized to an internal string, which resolves the issue. (LOG-3331)
- Before this update, a default value set within the LokiStack Custom Resource Definition caused an inability to create a LokiStack instance without a ReplicationFactor of 1. With this update, the operator sets the actual value for the size used. (LOG-3296)
- Before this update, Vector parsed the message field when JSON parsing was enabled without also defining structuredTypeKey or structuredTypeName values. With this update, a value is required for either structuredTypeKey or structuredTypeName when writing structured logs to Elasticsearch. (LOG-3195)
- Before this update, the secret creation component of the Elasticsearch Operator modified internal secrets constantly. With this update, the existing secret is properly handled. (LOG-3161)
- Before this update, the Operator could enter a loop of removing and recreating the collector daemonset while the Elasticsearch or Kibana deployments changed their status. With this update, a fix in the status handling of the Operator resolves the issue. (LOG-3157)
- Before this update, Kibana had a fixed 24h OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the accessTokenInactivityTimeout field was set to a value lower than 24h. With this update, Kibana’s OAuth cookie expiration time synchronizes to the accessTokenInactivityTimeout, with a default value of 24h. (LOG-3129)
- Before this update, the Operators' general pattern for reconciling resources was to try to create before attempting to get or update, which would lead to constant HTTP 409 responses after creation. With this update, Operators first attempt to retrieve an object and only create or update it if it is either missing or not as specified. (LOG-2919)
- Before this update, the .level and .structure.level fields in Fluentd could contain different values. With this update, the values are the same for each field. (LOG-2819)
- Before this update, the Operator did not wait for the population of the trusted CA bundle and deployed the collector a second time once the bundle updated. With this update, the Operator waits briefly to see if the bundle has been populated before it continues the collector deployment. (LOG-2789)
- Before this update, logging telemetry info appeared twice when reviewing metrics. With this update, logging telemetry info displays as expected. (LOG-2315)
- Before this update, Fluentd pod logs contained a warning message after enabling the JSON parsing addition. With this update, that warning message does not appear. (LOG-1806)
- Before this update, the must-gather script did not complete because oc needs a folder with write permission to build its cache. With this update, oc has write permissions to a folder, and the must-gather script completes successfully. (LOG-3446)
- Before this update, the log collector SCC could be superseded by other SCCs on the cluster, rendering the collector unusable. This update sets the priority of the log collector SCC so that it takes precedence over the others. (LOG-3235)
- Before this update, Vector was missing the field sequence, which was added to Fluentd as a way to deal with a lack of actual nanosecond precision. With this update, the field openshift.sequence has been added to the event logs. (LOG-3106)
3.1.6.5. CVEs
3.2. Getting started with logging 5.6
This overview of the logging deployment process is provided for ease of reference. It is not a substitute for full documentation. For new installations, Vector and LokiStack are recommended.
As of logging version 5.5, you can choose either the Fluentd or Vector collector implementation, and either Elasticsearch or LokiStack as the log store. Documentation for logging is in the process of being updated to reflect these underlying component changes.
Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
Prerequisites
- Log store preference: Elasticsearch or LokiStack
- Collector implementation preference: Fluentd or Vector
- Credentials for your log forwarding outputs
As of logging version 5.4.3 the OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.
Install the Operator for the log store you’d like to use.
- For Elasticsearch, install the OpenShift Elasticsearch Operator.
- For LokiStack, install the Loki Operator.
Note: If you have installed the Loki Operator, create a LokiStack custom resource (CR) instance as well.
- Install the Red Hat OpenShift Logging Operator.
- Create a ClusterLogging CR instance.
- Select your collector implementation.
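As a quick-reference sketch of the final step, a minimal ClusterLogging CR selecting the recommended Vector collector and LokiStack log store might look like the following (names shown match the examples used elsewhere in this chapter):

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack       # or "elasticsearch" if using the deprecated Elasticsearch store
    lokistack:
      name: logging-loki  # name of your LokiStack CR instance
  collection:
    type: vector          # or "fluentd" (deprecated)
```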
3.3. Understanding logging architecture
The logging subsystem consists of these logical components:
- Collector - Reads container log data from each node and forwards log data to configured outputs.
- Store - Stores log data for analysis; the default output for the forwarder.
- Visualization - Graphical interface for searching, querying, and viewing stored logs.
These components are managed by Operators and Custom Resource (CR) YAML files.
The logging subsystem for Red Hat OpenShift collects container logs and node logs. These are categorized into types:
- application - Container logs generated by non-infrastructure containers.
- infrastructure - Container logs from kube-* and openshift-* namespaces, and node logs from journald.
- audit - Logs from auditd, kube-apiserver, openshift-apiserver, and ovn if enabled.
The logging collector is a daemonset that deploys pods to each OpenShift Container Platform node. System and infrastructure logs are generated by journald log messages from the operating system, the container runtime, and OpenShift Container Platform.
Container logs are generated by containers running in pods running on the cluster. Each container generates a separate log stream. The collector collects the logs from these sources and forwards them internally or externally as configured in the ClusterLogForwarder custom resource.
3.3.1. Support considerations for logging
Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
The supported way of configuring the logging subsystem for Red Hat OpenShift is by configuring it using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the Operators reconcile any differences. The Operators reverse everything to the defined state by default and by design.
If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged. An unmanaged OpenShift Logging environment is not supported and does not receive updates until you return OpenShift Logging to Managed.
The following modifications are explicitly not supported:
- Deploying logging to namespaces not specified in the documentation.
- Installing custom Elasticsearch, Kibana, Fluentd, or Loki instances on OpenShift Container Platform.
- Changes to the Kibana Custom Resource (CR) or Elasticsearch CR.
- Changes to secrets or config maps not specified in the documentation.
The logging subsystem for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems.
The logging subsystem for Red Hat OpenShift is not:
- A high scale log collection system
- Security Information and Event Monitoring (SIEM) compliant
- Historical or long term log retention or storage
- A guaranteed log sink
- Secure storage - audit logs are not stored by default
3.4. Configuring your logging deployment
You can configure your logging subsystem deployment with Custom Resource (CR) YAML files implemented by each Operator.
Red Hat OpenShift Logging Operator:
- ClusterLogging (CL) - Deploys the collector and forwarder, which currently are both implemented by a daemonset running on each node.
- ClusterLogForwarder (CLF) - Generates collector configuration to forward logs per user configuration.
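As an illustrative sketch of a ClusterLogForwarder CR (the output name, URL, and secret name below are hypothetical placeholders, not values from this documentation), forwarding application logs to an external Elasticsearch instance could look like:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: remote-elasticsearch            # hypothetical external output
      type: elasticsearch
      url: https://elasticsearch.example.com:9200
      secret:
        name: es-credentials                # hypothetical secret holding credentials
  pipelines:
    - name: application-logs
      inputRefs:
        - application                       # one of: application, infrastructure, audit
      outputRefs:
        - remote-elasticsearch
```

Pipelines connect one or more of the log types (inputRefs) to one or more outputs; the reserved output name default sends logs to the internal log store managed by the ClusterLogging CR.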
Loki Operator:
- LokiStack - Controls the Loki cluster as log store and the web proxy with OpenShift Container Platform authentication integration to enforce multi-tenancy.
OpenShift Elasticsearch Operator:
Note: These CRs are generated and managed by the ClusterLogging Operator. Manual changes cannot be made without being overwritten by the Operator.
- ElasticSearch - Configures and deploys an Elasticsearch instance as the default log store.
- Kibana - Configures and deploys a Kibana instance to search, query, and view logs.
The supported way of configuring the logging subsystem for Red Hat OpenShift is by configuring it using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the Operators reconcile any differences. The Operators reverse everything to the defined state by default and by design.
If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged. An unmanaged OpenShift Logging environment is not supported and does not receive updates until you return OpenShift Logging to Managed.
3.4.1. Enabling stream-based retention with Loki
With Logging version 5.6 and higher, you can configure retention policies based on log streams. Rules for these may be set globally, per tenant, or both. If you configure both, tenant rules apply before global rules.
- To enable stream-based retention, create or edit the LokiStack custom resource (CR):
oc create -f <file-name>.yaml
- You can refer to the examples below to configure your LokiStack CR.
Example global stream-based retention
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
    global: 1
      retention: 2
        days: 20
        streams:
          - days: 4
            priority: 1
            selector: '{kubernetes_namespace_name=~"test.+"}' 3
          - days: 1
            priority: 1
            selector: '{log_type="infrastructure"}'
  managementState: Managed
  replicationFactor: 1
  size: 1x.small
  storage:
    schemas:
      - effectiveDate: "2020-10-11"
        version: v11
    secret:
      name: logging-loki-s3
      type: aws
  storageClassName: standard
  tenants:
    mode: openshift-logging
1. Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage.
2. Retention is enabled in the cluster when this block is added to the CR.
3. Contains the LogQL query used to define the log stream.
Example per-tenant stream-based retention
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  limits:
    global:
      retention:
        days: 20
    tenants: 1
      application:
        retention:
          days: 1
          streams:
            - days: 4
              selector: '{kubernetes_namespace_name=~"test.+"}' 2
      infrastructure:
        retention:
          days: 5
          streams:
            - days: 1
              selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}'
  managementState: Managed
  replicationFactor: 1
  size: 1x.small
  storage:
    schemas:
      - effectiveDate: "2020-10-11"
        version: v11
    secret:
      name: logging-loki-s3
      type: aws
  storageClassName: standard
  tenants:
    mode: openshift-logging
1. Sets retention policy by tenant. Valid tenant types are application, audit, and infrastructure.
2. Contains the LogQL query used to define the log stream.
- Then apply your configuration:
oc apply -f <file-name>.yaml
Note: These settings do not manage retention for stored logs. Global retention periods for stored logs, up to a supported maximum of 30 days, are configured with your object storage.
3.4.2. Enabling multi-line exception detection
Enables multi-line error detection of container logs.
Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions.
Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information.
Example java exception
java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null
    at testjava.Main.handle(Main.java:47)
    at testjava.Main.printMe(Main.java:19)
    at testjava.Main.main(Main.java:10)
-
To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the
ClusterLogForwarderCustom Resource (CR) contains adetectMultilineErrorsfield, with a value oftrue.
Example ClusterLogForwarder CR
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
    - name: my-app-logs
      inputRefs:
        - application
      outputRefs:
        - default
      detectMultilineErrors: true
3.4.2.1. Details
When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message’s content is replaced with the concatenated content of all the message fields in the sequence.
Table 3.1. Supported languages per collector:
| Language | Fluentd | Vector |
|---|---|---|
| Java | ✓ | ✓ |
| JS | ✓ | ✓ |
| Ruby | ✓ | ✓ |
| Python | ✓ | ✓ |
| Golang | ✓ | ✓ |
| PHP | ✓ | |
| Dart | ✓ | ✓ |
3.4.2.2. Troubleshooting
When enabled, the collector configuration includes a new section with type: detect_exceptions.
Example vector configuration section
[transforms.detect_exceptions_app-logs]
type = "detect_exceptions"
inputs = ["application"]
languages = ["All"]
group_by = ["kubernetes.namespace_name","kubernetes.pod_name","kubernetes.container_name"]
expire_after_ms = 2000
multiline_flush_interval_ms = 1000
Example fluentd config section
<label @MULTILINE_APP_LOGS>
  <match kubernetes.**>
    @type detect_exceptions
    remove_tag_prefix 'kubernetes'
    message message
    force_line_breaks true
    multiline_flush_interval .2
  </match>
</label>
3.5. Administering your logging deployment
3.5.1. Deploying Red Hat OpenShift Logging Operator using the web console
You can use the OpenShift Container Platform web console to deploy the Red Hat OpenShift Logging Operator.
Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility.
Procedure
To deploy the Red Hat OpenShift Logging Operator using the OpenShift Container Platform web console:
Install the Red Hat OpenShift Logging Operator:
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Type Logging in the Filter by keyword field.
- Choose Red Hat OpenShift Logging from the list of available Operators, and click Install.
- Select stable or stable-5.y as the Update Channel.
Note: The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-X where X is the version of logging you have installed.
- Ensure that A specific namespace on the cluster is selected under Installation Mode.
- Ensure that Operator recommended namespace is openshift-logging under Installed Namespace.
- Select Enable Operator recommended cluster monitoring on this Namespace.
Select an option for Update approval.
- The Automatic option allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
- The Manual option requires a user with appropriate credentials to approve the Operator update.
- Select Enable or Disable for the Console plugin.
- Click Install.
Verify that the Red Hat OpenShift Logging Operator is installed by switching to the Operators → Installed Operators page.
- Ensure that Red Hat OpenShift Logging is listed in the openshift-logging project with a Status of Succeeded.
Create a ClusterLogging instance.
Note: The form view of the web console does not include all available options. The YAML view is recommended for completing your setup.
In the collection section, select a Collector Implementation.
Note: As of logging version 5.6, Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead.
In the logStore section, select a type.
Note: As of logging version 5.4.3, the OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.
- Click Create.
3.5.2. Deploying the Loki Operator using the web console
You can use the OpenShift Container Platform web console to install the Loki Operator.
Prerequisites
- Supported Log Store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation)
Procedure
To install the Loki Operator using the OpenShift Container Platform web console:
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
Type Loki in the Filter by keyword field.
- Choose Loki Operator from the list of available Operators, and click Install.
- Select stable or stable-5.y as the Update Channel.
Note: The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-X where X is the version of logging you have installed.
- Ensure that All namespaces on the cluster is selected under Installation Mode.
- Ensure that openshift-operators-redhat is selected under Installed Namespace.
Select Enable Operator recommended cluster monitoring on this Namespace.
This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.
Select an option for Update approval.
- The Automatic option allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.
- The Manual option requires a user with appropriate credentials to approve the Operator update.
- Click Install.
Verify that the Loki Operator is installed by switching to the Operators → Installed Operators page.
- Ensure that Loki Operator is listed with Status as Succeeded in all the projects.
- Create a Secret YAML file that uses the access_key_id and access_key_secret fields to specify your credentials, and the bucketnames, endpoint, and region fields to define the object storage location. AWS is used in the following example:

apiVersion: v1
kind: Secret
metadata:
  name: logging-loki-s3
  namespace: openshift-logging
stringData:
  access_key_id: AKIAIOSFODNN7EXAMPLE
  access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
  bucketnames: s3-bucket-name
  endpoint: https://s3.eu-central-1.amazonaws.com
  region: eu-central-1
- Select Create instance under LokiStack on the Details tab. Then select YAML view. Paste in the following template, substituting values where appropriate:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki 1
  namespace: openshift-logging
spec:
  size: 1x.small 2
  storage:
    schemas:
    - version: v12
      effectiveDate: '2022-06-01'
    secret:
      name: logging-loki-s3 3
      type: s3 4
  storageClassName: <storage_class_name> 5
  tenants:
    mode: openshift-logging

- 1: Name should be logging-loki.
- 2: Select your Loki deployment size.
- 3: Define the secret used for your log storage.
- 4: Define the corresponding storage type.
- 5: Enter the name of an existing storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using oc get storageclasses.
- Apply the configuration:

$ oc apply -f logging-loki.yaml
- Create or edit a ClusterLogging CR:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack
    lokistack:
      name: logging-loki
  collection:
    type: vector

- Apply the configuration:

$ oc apply -f cr-lokistack.yaml
3.5.3. Installing from OperatorHub using the CLI
Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- The oc command installed on your local system.
Procedure
View the list of Operators available to the cluster from OperatorHub:
$ oc get packagemanifests -n openshift-marketplace
Example output
NAME                                     CATALOG               AGE
3scale-operator                          Red Hat Operators     91m
advanced-cluster-management              Red Hat Operators     91m
amq7-cert-manager                        Red Hat Operators     91m
...
couchbase-enterprise-certified           Certified Operators   91m
crunchy-postgres-operator                Certified Operators   91m
mongodb-enterprise                       Certified Operators   91m
...
etcd                                     Community Operators   91m
jaeger                                   Community Operators   91m
kubefed                                  Community Operators   91m
...
Note the catalog for your desired Operator.
Inspect your desired Operator to verify its supported install modes and available channels:
$ oc describe packagemanifests <operator_name> -n openshift-marketplace
An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.

The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, then the openshift-operators namespace already has an appropriate Operator group in place.

However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one.

Note: The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode.

Create an OperatorGroup object YAML file, for example operatorgroup.yaml:

Example OperatorGroup object

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: <operatorgroup_name>
  namespace: <namespace>
spec:
  targetNamespaces:
  - <namespace>
Create the OperatorGroup object:

$ oc apply -f operatorgroup.yaml
Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml:

Example Subscription object

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: openshift-operators 1
spec:
  channel: <channel_name> 2
  name: <operator_name> 3
  source: redhat-operators 4
  sourceNamespace: openshift-marketplace 5
  config:
    env: 6
    - name: ARGS
      value: "-v=10"
    envFrom: 7
    - secretRef:
        name: license-secret
    volumes: 8
    - name: <volume_name>
      configMap:
        name: <configmap_name>
    volumeMounts: 9
    - mountPath: <directory_name>
      name: <volume_name>
    tolerations: 10
    - operator: "Exists"
    resources: 11
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    nodeSelector: 12
      foo: bar
- 1: For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage.
- 2: Name of the channel to subscribe to.
- 3: Name of the Operator to subscribe to.
- 4: Name of the catalog source that provides the Operator.
- 5: Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
- 6: The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM.
- 7: The envFrom parameter defines a list of sources to populate environment variables in the container.
- 8: The volumes parameter defines a list of volumes that must exist on the pod created by OLM.
- 9: The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator.
- 10: The tolerations parameter defines a list of tolerations for the pod created by OLM.
- 11: The resources parameter defines resource constraints for all the containers in the pod created by OLM.
- 12: The nodeSelector parameter defines a NodeSelector for the pod created by OLM.
Create the Subscription object:

$ oc apply -f sub.yaml
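As a concrete illustration, a Subscription for the Loki Operator from the earlier web console procedure might look like the following sketch. The stable channel, the openshift-operators-redhat namespace, and the redhat-operators catalog source are taken from that procedure; the subscription name is hypothetical, so adjust all values for your cluster:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: loki-operator              # hypothetical subscription name
  namespace: openshift-operators-redhat
spec:
  channel: stable                  # or stable-5.y to stay on a specific release
  name: loki-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic   # Manual requires an approver for each update
```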
At this point, OLM is aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
3.5.4. Deleting Operators from a cluster using the web console
Cluster administrators can delete installed Operators from a selected namespace by using the web console.
Prerequisites
- You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions.
Procedure
- Navigate to the Operators → Installed Operators page.
- Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it.
- On the right side of the Operator Details page, select Uninstall Operator from the Actions list. An Uninstall Operator? dialog box is displayed.
- Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates.
Note: This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console, and off-cluster resources that continue to run, might need manual cleanup. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs.
3.5.5. Deleting Operators from a cluster using the CLI
Cluster administrators can delete installed Operators from a selected namespace by using the CLI.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- The oc command installed on your workstation.
Procedure
Check the current version of the subscribed Operator (for example, jaeger) in the currentCSV field:

$ oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV
Example output
currentCSV: jaeger-operator.v1.8.2
Delete the subscription (for example, jaeger):

$ oc delete subscription jaeger -n openshift-operators
Example output
subscription.operators.coreos.com "jaeger" deleted
Delete the CSV for the Operator in the target namespace using the currentCSV value from the previous step:

$ oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators
Example output
clusterserviceversion.operators.coreos.com "jaeger-operator.v1.8.2" deleted
3.6. Logging References
3.6.1. Collector features
Table 3.1. Log outputs
| Output | Protocol | Tested with | Fluentd | Vector |
|---|---|---|---|---|
| Cloudwatch | REST over HTTP(S) | | ✓ | ✓ |
| Elasticsearch v6 | | v6.8.1 | ✓ | ✓ |
| Elasticsearch v7 | | v7.12.2, 7.17.7 | ✓ | ✓ |
| Elasticsearch v8 | | v8.4.3 | | ✓ |
| Fluent Forward | Fluentd forward v1 | Fluentd 1.14.6, Logstash 7.10.1 | ✓ | |
| Google Cloud Logging | | | | ✓ |
| HTTP | HTTP 1.1 | Fluentd 1.14.6, Vector 0.21 | | |
| Kafka | Kafka 0.11 | Kafka 2.4.1, 2.7.0, 3.3.1 | ✓ | ✓ |
| Loki | REST over HTTP(S) | Loki 2.3.0, 2.7 | ✓ | ✓ |
| Splunk | HEC | v8.2.9, 9.0.0 | | ✓ |
| Syslog | RFC3164, RFC5424 | Rsyslog 8.37.0-9.el7 | ✓ | |
Table 3.2. Log Sources
| Feature | Fluentd | Vector |
|---|---|---|
| App container logs | ✓ | ✓ |
| App-specific routing | ✓ | ✓ |
| App-specific routing by namespace | ✓ | ✓ |
| Infra container logs | ✓ | ✓ |
| Infra journal logs | ✓ | ✓ |
| Kube API audit logs | ✓ | ✓ |
| OpenShift API audit logs | ✓ | ✓ |
| Open Virtual Network (OVN) audit logs | ✓ | ✓ |
Table 3.3. Authorization and Authentication
| Feature | Fluentd | Vector |
|---|---|---|
| Elasticsearch certificates | ✓ | ✓ |
| Elasticsearch username / password | ✓ | ✓ |
| Cloudwatch keys | ✓ | ✓ |
| Cloudwatch STS | ✓ | ✓ |
| Kafka certificates | ✓ | ✓ |
| Kafka username / password | ✓ | ✓ |
| Kafka SASL | ✓ | ✓ |
| Loki bearer token | ✓ | ✓ |
Table 3.4. Normalizations and Transformations
| Feature | Fluentd | Vector |
|---|---|---|
| Viaq data model - app | ✓ | ✓ |
| Viaq data model - infra | ✓ | ✓ |
| Viaq data model - infra(journal) | ✓ | ✓ |
| Viaq data model - Linux audit | ✓ | ✓ |
| Viaq data model - kube-apiserver audit | ✓ | ✓ |
| Viaq data model - OpenShift API audit | ✓ | ✓ |
| Viaq data model - OVN | ✓ | ✓ |
| Loglevel Normalization | ✓ | ✓ |
| JSON parsing | ✓ | ✓ |
| Structured Index | ✓ | ✓ |
| Multiline error detection | ✓ | |
| Multicontainer / split indices | ✓ | ✓ |
| Flatten labels | ✓ | ✓ |
| CLF static labels | ✓ | ✓ |
Table 3.5. Tuning
| Feature | Fluentd | Vector |
|---|---|---|
| Fluentd readlinelimit | ✓ | |
| Fluentd buffer | ✓ | |
| - chunklimitsize | ✓ | |
| - totallimitsize | ✓ | |
| - overflowaction | ✓ | |
| - flushthreadcount | ✓ | |
| - flushmode | ✓ | |
| - flushinterval | ✓ | |
| - retrywait | ✓ | |
| - retrytype | ✓ | |
| - retrymaxinterval | ✓ | |
| - retrytimeout | ✓ |
Table 3.6. Visibility
| Feature | Fluentd | Vector |
|---|---|---|
| Metrics | ✓ | ✓ |
| Dashboard | ✓ | ✓ |
| Alerts | ✓ |
Table 3.7. Miscellaneous
| Feature | Fluentd | Vector |
|---|---|---|
| Global proxy support | ✓ | ✓ |
| x86 support | ✓ | ✓ |
| ARM support | ✓ | ✓ |
| IBM Power support | ✓ | ✓ |
| IBM Z support | ✓ | ✓ |
| IPv6 support | ✓ | ✓ |
| Log event buffering | ✓ | |
| Disconnected Cluster | ✓ | ✓ |
Additional resources
3.6.2. Logging 5.6 API reference
3.6.2.1. ClusterLogForwarder
ClusterLogForwarder is an API for configuring the forwarding of logs.
You configure forwarding by specifying a list of pipelines, which forward from a set of named inputs to a set of named outputs.
There are built-in input names for common log categories, and you can define custom inputs to do additional filtering.
There is a built-in output name for the default openshift log store, but you can define your own outputs with a URL and other connection information to forward logs to other stores or processors, inside or outside the cluster.
For more details see the documentation on the API fields.
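For orientation, a minimal ClusterLogForwarder that sends the built-in application and infrastructure inputs to the built-in default output might look like the following sketch (the pipeline name is hypothetical):

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: all-to-default      # hypothetical pipeline name
    inputRefs:                # built-in input names for common log categories
    - application
    - infrastructure
    outputRefs:
    - default                 # built-in output name for the default log store
```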
| Property | Type | Description |
|---|---|---|
| spec | object | Specification of the desired behavior of ClusterLogForwarder |
| status | object | Status of the ClusterLogForwarder |
3.6.2.1.1. .spec
3.6.2.1.1.1. Description
ClusterLogForwarderSpec defines how logs should be forwarded to remote targets.
3.6.2.1.1.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| inputs | array | (optional) Inputs are named filters for log messages to be forwarded. |
| outputDefaults | object | (optional) DEPRECATED OutputDefaults specify forwarder config explicitly for the default store. |
| outputs | array | (optional) Outputs are named destinations for log messages. |
| pipelines | array | Pipelines forward the messages selected by a set of inputs to a set of outputs. |
3.6.2.1.2. .spec.inputs[]
3.6.2.1.2.1. Description
InputSpec defines a selector of log messages.
3.6.2.1.2.1.1. Type
- array
| Property | Type | Description |
|---|---|---|
| application | object |
(optional) Application, if present, enables named set of |
| name | string |
Name used to refer to the input of a |
3.6.2.1.3. .spec.inputs[].application
3.6.2.1.3.1. Description
Application log selector. All conditions in the selector must be satisfied (logical AND) to select logs.
3.6.2.1.3.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| namespaces | array | (optional) Namespaces from which to collect application logs. |
| selector | object | (optional) Selector for logs from pods with matching labels. |
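For example, a custom input that collects application logs only from one namespace and from pods with a matching label might look like the following sketch (the input name, namespace, and label are hypothetical):

```yaml
spec:
  inputs:
  - name: my-app-logs         # hypothetical input name, referenced by pipelines
    application:
      namespaces:
      - my-project            # hypothetical namespace to collect from
      selector:
        matchLabels:
          app: my-app         # hypothetical pod label; all conditions are ANDed
```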
3.6.2.1.4. .spec.inputs[].application.namespaces[]
3.6.2.1.4.1. Description
3.6.2.1.4.1.1. Type
- array
3.6.2.1.5. .spec.inputs[].application.selector
3.6.2.1.5.1. Description
A label selector is a label query over a set of resources.
3.6.2.1.5.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| matchLabels | object | (optional) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels |
3.6.2.1.6. .spec.inputs[].application.selector.matchLabels
3.6.2.1.6.1. Description
3.6.2.1.6.1.1. Type
- object
3.6.2.1.7. .spec.outputDefaults
3.6.2.1.7.1. Description
3.6.2.1.7.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| elasticsearch | object | (optional) Elasticsearch OutputSpec default values |
3.6.2.1.8. .spec.outputDefaults.elasticsearch
3.6.2.1.8.1. Description
ElasticsearchStructuredSpec is spec related to structured log changes to determine the elasticsearch index
3.6.2.1.8.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| enableStructuredContainerLogs | bool | (optional) EnableStructuredContainerLogs enables multi-container structured logs to allow |
| structuredTypeKey | string | (optional) StructuredTypeKey specifies the metadata key to be used as name of elasticsearch index |
| structuredTypeName | string | (optional) StructuredTypeName specifies the name of elasticsearch schema |
3.6.2.1.9. .spec.outputs[]
3.6.2.1.9.1. Description
Output defines a destination for log messages.
3.6.2.1.9.1.1. Type
- array
| Property | Type | Description |
|---|---|---|
| syslog | object | (optional) |
| fluentdForward | object | (optional) |
| elasticsearch | object | (optional) |
| kafka | object | (optional) |
| cloudwatch | object | (optional) |
| loki | object | (optional) |
| googleCloudLogging | object | (optional) |
| splunk | object | (optional) |
| name | string |
Name used to refer to the output from a |
| secret | object | (optional) Secret for authentication. |
| tls | object | TLS contains settings for controlling options on TLS client connections. |
| type | string | Type of output plugin. |
| url | string | (optional) URL to send log records to. |
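Putting these fields together, an output entry for an external Loki instance might look like the following sketch (the output name, URL, and secret name are hypothetical):

```yaml
spec:
  outputs:
  - name: remote-loki         # hypothetical output name, referenced by pipelines
    type: loki
    url: https://loki.example.com:3100   # hypothetical endpoint
    secret:
      name: loki-secret       # hypothetical secret holding authentication data
    tls:
      insecureSkipVerify: false   # keep certificate verification enabled
```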
3.6.2.1.10. .spec.outputs[].secret
3.6.2.1.10.1. Description
OutputSecretSpec is a secret reference containing name only, no namespace.
3.6.2.1.10.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| name | string | Name of a secret in the namespace configured for log forwarder secrets. |
3.6.2.1.11. .spec.outputs[].tls
3.6.2.1.11.1. Description
OutputTLSSpec contains options for TLS connections that are agnostic to the output type.
3.6.2.1.11.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| insecureSkipVerify | bool | If InsecureSkipVerify is true, then the TLS client will be configured to ignore errors with certificates. |
3.6.2.1.12. .spec.pipelines[]
3.6.2.1.12.1. Description
PipelinesSpec link a set of inputs to a set of outputs.
3.6.2.1.12.1.1. Type
- array
| Property | Type | Description |
|---|---|---|
| detectMultilineErrors | bool | (optional) DetectMultilineErrors enables multiline error detection of container logs |
| inputRefs | array |
InputRefs lists the names ( |
| labels | object | (optional) Labels applied to log records passing through this pipeline. |
| name | string |
(optional) Name is optional, but must be unique in the |
| outputRefs | array |
OutputRefs lists the names ( |
| parse | string | (optional) Parse enables parsing of log entries into structured logs |
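A pipeline entry combining these fields might look like the following sketch (the pipeline name and label value are hypothetical):

```yaml
spec:
  pipelines:
  - name: app-pipeline        # hypothetical pipeline name
    inputRefs:
    - application             # built-in input name
    outputRefs:
    - default                 # built-in output name
    labels:
      team: backend           # hypothetical label applied to forwarded records
    parse: json               # parse log entries into structured logs
    detectMultilineErrors: true
```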
3.6.2.1.13. .spec.pipelines[].inputRefs[]
3.6.2.1.13.1. Description
3.6.2.1.13.1.1. Type
- array
3.6.2.1.14. .spec.pipelines[].labels
3.6.2.1.14.1. Description
3.6.2.1.14.1.1. Type
- object
3.6.2.1.15. .spec.pipelines[].outputRefs[]
3.6.2.1.15.1. Description
3.6.2.1.15.1.1. Type
- array
3.6.2.1.16. .status
3.6.2.1.16.1. Description
ClusterLogForwarderStatus defines the observed state of ClusterLogForwarder
3.6.2.1.16.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| conditions | object | Conditions of the log forwarder. |
| inputs | Conditions | Inputs maps input name to condition of the input. |
| outputs | Conditions | Outputs maps output name to condition of the output. |
| pipelines | Conditions | Pipelines maps pipeline name to condition of the pipeline. |
3.6.2.1.17. .status.conditions
3.6.2.1.17.1. Description
3.6.2.1.17.1.1. Type
- object
3.6.2.1.18. .status.inputs
3.6.2.1.18.1. Description
3.6.2.1.18.1.1. Type
- Conditions
3.6.2.1.19. .status.outputs
3.6.2.1.19.1. Description
3.6.2.1.19.1.1. Type
- Conditions
3.6.2.1.20. .status.pipelines
3.6.2.1.20.1. Description
3.6.2.1.20.1.1. Type
- Conditions

ClusterLogging

A Red Hat OpenShift Logging instance. ClusterLogging is the Schema for the clusterloggings API.
| Property | Type | Description |
|---|---|---|
| spec | object | Specification of the desired behavior of ClusterLogging |
| status | object | Status defines the observed state of ClusterLogging |
3.6.2.1.21. .spec
3.6.2.1.21.1. Description
ClusterLoggingSpec defines the desired state of ClusterLogging
3.6.2.1.21.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| collection | object | Specification of the Collection component for the cluster |
| curation | object | (DEPRECATED) (optional) Deprecated. Specification of the Curation component for the cluster |
| forwarder | object | (DEPRECATED) (optional) Deprecated. Specification for Forwarder component for the cluster |
| logStore | object | (optional) Specification of the Log Storage component for the cluster |
| managementState | string | (optional) Indicator if the resource is 'Managed' or 'Unmanaged' by the operator |
| visualization | object | (optional) Specification of the Visualization component for the cluster |
3.6.2.1.22. .spec.collection
3.6.2.1.22.1. Description
This is the struct that will contain information pertinent to Log and event collection
3.6.2.1.22.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| resources | object | (optional) The resource requirements for the collector |
| nodeSelector | object | (optional) Define which Nodes the Pods are scheduled on. |
| tolerations | array | (optional) Define the tolerations the Pods will accept |
| fluentd | object | (optional) Fluentd represents the configuration for forwarders of type fluentd. |
| logs | object | (DEPRECATED) (optional) Deprecated. Specification of Log Collection for the cluster |
| type | string | (optional) The type of Log Collection to configure |
3.6.2.1.23. .spec.collection.fluentd
3.6.2.1.23.1. Description
FluentdForwarderSpec represents the configuration for forwarders of type fluentd.
3.6.2.1.23.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| buffer | object | |
| inFile | object |
3.6.2.1.24. .spec.collection.fluentd.buffer
3.6.2.1.24.1. Description
FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing.
For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters
For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters
For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters
3.6.2.1.24.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| chunkLimitSize | string | (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be |
| flushInterval | string | (optional) FlushInterval represents the time duration to wait between two consecutive flush |
| flushMode | string | (optional) FlushMode represents the mode of the flushing thread to write chunks. The mode |
| flushThreadCount | int | (optional) FlushThreadCount represents the number of threads used by the fluentd buffer |
| overflowAction | string | (optional) OverflowAction represents the action for the fluentd buffer plugin to |
| retryMaxInterval | string | (optional) RetryMaxInterval represents the maximum time interval for exponential backoff |
| retryTimeout | string | (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up |
| retryType | string | (optional) RetryType represents the type of retrying flush operations. Flush operations can |
| retryWait | string | (optional) RetryWait represents the time duration between two consecutive retries to flush |
| totalLimitSize | string | (optional) TotalLimitSize represents the threshold of node space allowed per fluentd |
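As a sketch of how these buffer parameters fit into a ClusterLogging CR under the .spec.collection.fluentd.buffer path described above (the values are illustrative, not recommendations):

```yaml
spec:
  collection:
    type: fluentd
    fluentd:
      buffer:
        chunkLimitSize: 8m               # illustrative maximum chunk size
        totalLimitSize: 4G               # illustrative node space threshold
        flushMode: interval
        flushInterval: 5s                # wait between consecutive flushes
        flushThreadCount: 2
        overflowAction: throw_exception
        retryType: exponential_backoff
        retryWait: 1s
        retryMaxInterval: 300s
```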
3.6.2.1.25. .spec.collection.fluentd.inFile
3.6.2.1.25.1. Description
FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs.
For general parameters refer to: https://docs.fluentd.org/input/tail#parameters
3.6.2.1.25.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| readLinesLimit | int | (optional) ReadLinesLimit represents the number of lines to read with each I/O operation |
3.6.2.1.26. .spec.collection.logs
3.6.2.1.26.1. Description
3.6.2.1.26.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| fluentd | object | Specification of the Fluentd Log Collection component |
| type | string | The type of Log Collection to configure |
3.6.2.1.27. .spec.collection.logs.fluentd
3.6.2.1.27.1. Description
CollectorSpec is spec to define scheduling and resources for a collector
3.6.2.1.27.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| nodeSelector | object | (optional) Define which Nodes the Pods are scheduled on. |
| resources | object | (optional) The resource requirements for the collector |
| tolerations | array | (optional) Define the tolerations the Pods will accept |
3.6.2.1.28. .spec.collection.logs.fluentd.nodeSelector
3.6.2.1.28.1. Description
3.6.2.1.28.1.1. Type
- object
3.6.2.1.29. .spec.collection.logs.fluentd.resources
3.6.2.1.29.1. Description
3.6.2.1.29.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| limits | object | (optional) Limits describes the maximum amount of compute resources allowed. |
| requests | object | (optional) Requests describes the minimum amount of compute resources required. |
3.6.2.1.30. .spec.collection.logs.fluentd.resources.limits
3.6.2.1.30.1. Description
3.6.2.1.30.1.1. Type
- object
3.6.2.1.31. .spec.collection.logs.fluentd.resources.requests
3.6.2.1.31.1. Description
3.6.2.1.31.1.1. Type
- object
3.6.2.1.32. .spec.collection.logs.fluentd.tolerations[]
3.6.2.1.32.1. Description
3.6.2.1.32.1.1. Type
- array
| Property | Type | Description |
|---|---|---|
| effect | string | (optional) Effect indicates the taint effect to match. Empty means match all taint effects. |
| key | string | (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. |
| operator | string | (optional) Operator represents a key's relationship to the value. |
| tolerationSeconds | int | (optional) TolerationSeconds represents the period of time the toleration (which must be |
| value | string | (optional) Value is the taint value the toleration matches to. |
3.6.2.1.33. .spec.collection.logs.fluentd.tolerations[].tolerationSeconds
3.6.2.1.33.1. Description
3.6.2.1.33.1.1. Type
- int
3.6.2.1.34. .spec.curation
3.6.2.1.34.1. Description
This is the struct that will contain information pertinent to Log curation (Curator)
3.6.2.1.34.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| curator | object | The specification of curation to configure |
| type | string | The kind of curation to configure |
3.6.2.1.35. .spec.curation.curator
3.6.2.1.35.1. Description
3.6.2.1.35.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| nodeSelector | object | Define which Nodes the Pods are scheduled on. |
| resources | object | (optional) The resource requirements for Curator |
| schedule | string | The cron schedule that the Curator job is run. Defaults to "30 3 * * *" |
| tolerations | array |
3.6.2.1.36. .spec.curation.curator.nodeSelector
3.6.2.1.36.1. Description
3.6.2.1.36.1.1. Type
- object
3.6.2.1.37. .spec.curation.curator.resources
3.6.2.1.37.1. Description
3.6.2.1.37.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| limits | object | (optional) Limits describes the maximum amount of compute resources allowed. |
| requests | object | (optional) Requests describes the minimum amount of compute resources required. |
3.6.2.1.38. .spec.curation.curator.resources.limits
3.6.2.1.38.1. Description
3.6.2.1.38.1.1. Type
- object
3.6.2.1.39. .spec.curation.curator.resources.requests
3.6.2.1.39.1. Description
3.6.2.1.39.1.1. Type
- object
3.6.2.1.40. .spec.curation.curator.tolerations[]
3.6.2.1.40.1. Description
3.6.2.1.40.1.1. Type
- array
| Property | Type | Description |
|---|---|---|
| effect | string | (optional) Effect indicates the taint effect to match. Empty means match all taint effects. |
| key | string | (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. |
| operator | string | (optional) Operator represents a key's relationship to the value. |
| tolerationSeconds | int | (optional) TolerationSeconds represents the period of time the toleration (which must be |
| value | string | (optional) Value is the taint value the toleration matches to. |
3.6.2.1.41. .spec.curation.curator.tolerations[].tolerationSeconds
3.6.2.1.41.1. Description
3.6.2.1.41.1.1. Type
- int
3.6.2.1.42. .spec.forwarder
3.6.2.1.42.1. Description
ForwarderSpec contains global tuning parameters for specific forwarder implementations. This field is not required for general use; it allows performance tuning by users familiar with the underlying forwarder technology. Currently supported: fluentd.
3.6.2.1.42.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| fluentd | object |
3.6.2.1.43. .spec.forwarder.fluentd
3.6.2.1.43.1. Description
FluentdForwarderSpec represents the configuration for forwarders of type fluentd.
3.6.2.1.43.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| buffer | object | |
| inFile | object |
3.6.2.1.44. .spec.forwarder.fluentd.buffer
3.6.2.1.44.1. Description
FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing.
For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters
For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters
For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters
3.6.2.1.44.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| chunkLimitSize | string | (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be |
| flushInterval | string | (optional) FlushInterval represents the time duration to wait between two consecutive flush |
| flushMode | string | (optional) FlushMode represents the mode of the flushing thread to write chunks. The mode |
| flushThreadCount | int | (optional) FlushThreadCount represents the number of threads used by the fluentd buffer |
| overflowAction | string | (optional) OverflowAction represents the action for the fluentd buffer plugin to |
| retryMaxInterval | string | (optional) RetryMaxInterval represents the maximum time interval for exponential backoff |
| retryTimeout | string | (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up |
| retryType | string | (optional) RetryType represents the type of retrying flush operations. Flush operations can |
| retryWait | string | (optional) RetryWait represents the time duration between two consecutive retries to flush |
| totalLimitSize | string | (optional) TotalLimitSize represents the threshold of node space allowed per fluentd |
3.6.2.1.45. .spec.forwarder.fluentd.inFile
3.6.2.1.45.1. Description
FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs.
For general parameters refer to: https://docs.fluentd.org/input/tail#parameters
3.6.2.1.45.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| readLinesLimit | int | (optional) ReadLinesLimit represents the number of lines to read with each I/O operation |
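A sketch of this parameter under the deprecated .spec.forwarder path (the value is illustrative):

```yaml
spec:
  forwarder:                  # deprecated; prefer .spec.collection.fluentd
    fluentd:
      inFile:
        readLinesLimit: 1000  # illustrative lines read per I/O operation
```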
3.6.2.1.46. .spec.logStore
3.6.2.1.46.1. Description
The LogStoreSpec contains information about how logs are stored.
3.6.2.1.46.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| elasticsearch | object | Specification of the Elasticsearch Log Store component |
| lokistack | object | LokiStack contains information about which LokiStack to use for log storage if Type is set to LogStoreTypeLokiStack. |
| retentionPolicy | object | (optional) Retention policy defines the maximum age for an index after which it should be deleted |
| type | string | The Type of Log Storage to configure. The operator currently supports either using ElasticSearch |
3.6.2.1.47. .spec.logStore.elasticsearch
3.6.2.1.47.1. Description
3.6.2.1.47.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| nodeCount | int | Number of nodes to deploy for Elasticsearch |
| nodeSelector | object | Define which Nodes the Pods are scheduled on. |
| proxy | object | Specification of the Elasticsearch Proxy component |
| redundancyPolicy | string | (optional) |
| resources | object | (optional) The resource requirements for Elasticsearch |
| storage | object | (optional) The storage specification for Elasticsearch data nodes |
| tolerations | array |
3.6.2.1.48. .spec.logStore.elasticsearch.nodeSelector
3.6.2.1.48.1. Description
3.6.2.1.48.1.1. Type
- object
3.6.2.1.49. .spec.logStore.elasticsearch.proxy
3.6.2.1.49.1. Description
3.6.2.1.49.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| resources | object |
3.6.2.1.50. .spec.logStore.elasticsearch.proxy.resources
3.6.2.1.50.1. Description
3.6.2.1.50.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| limits | object | (optional) Limits describes the maximum amount of compute resources allowed. |
| requests | object | (optional) Requests describes the minimum amount of compute resources required. |
3.6.2.1.51. .spec.logStore.elasticsearch.proxy.resources.limits
3.6.2.1.51.1. Description
3.6.2.1.51.1.1. Type
- object
3.6.2.1.52. .spec.logStore.elasticsearch.proxy.resources.requests
3.6.2.1.52.1. Description
3.6.2.1.52.1.1. Type
- object
3.6.2.1.53. .spec.logStore.elasticsearch.resources
3.6.2.1.53.1. Description
3.6.2.1.53.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| limits | object | (optional) Limits describes the maximum amount of compute resources allowed. |
| requests | object | (optional) Requests describes the minimum amount of compute resources required. |
3.6.2.1.54. .spec.logStore.elasticsearch.resources.limits
3.6.2.1.54.1. Description
3.6.2.1.54.1.1. Type
- object
3.6.2.1.55. .spec.logStore.elasticsearch.resources.requests
3.6.2.1.55.1. Description
3.6.2.1.55.1.1. Type
- object
3.6.2.1.56. .spec.logStore.elasticsearch.storage
3.6.2.1.56.1. Description
3.6.2.1.56.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| size | object | The max storage capacity for the node to provision. |
| storageClassName | string | (optional) The name of the storage class to use with creating the node's PVC. |
3.6.2.1.57. .spec.logStore.elasticsearch.storage.size
3.6.2.1.57.1. Description
3.6.2.1.57.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| Format | string | Change Format at will. See the comment for Canonicalize for more details. |
| d | object | d is the quantity in inf.Dec form if d.Dec != nil |
| i | int | i is the quantity in int64 scaled form, if d.Dec == nil |
| s | string | s is the generated value of this quantity to avoid recalculation |
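The fields above are the internal representation of a Kubernetes `resource.Quantity`; they are not set individually in a manifest. In practice, `size` is written as a quantity string, which is then parsed into this `Format`/`i`/`d`/`s` form:

```yaml
spec:
  logStore:
    elasticsearch:
      storage:
        size: 200G   # a quantity string; parsed into the internal representation above
```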
3.6.2.1.58. .spec.logStore.elasticsearch.storage.size.d
3.6.2.1.58.1. Description
3.6.2.1.58.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| Dec | object |
3.6.2.1.59. .spec.logStore.elasticsearch.storage.size.d.Dec
3.6.2.1.59.1. Description
3.6.2.1.59.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| scale | int | |
| unscaled | object |
3.6.2.1.60. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled
3.6.2.1.60.1. Description
3.6.2.1.60.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| abs | Word | The absolute value of the number, stored as words |
| neg | bool | The sign of the number; true if negative |
3.6.2.1.61. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled.abs
3.6.2.1.61.1. Description
3.6.2.1.61.1.1. Type
- Word
3.6.2.1.62. .spec.logStore.elasticsearch.storage.size.i
3.6.2.1.62.1. Description
3.6.2.1.62.1.1. Type
- int
| Property | Type | Description |
|---|---|---|
| scale | int | |
| value | int |
3.6.2.1.63. .spec.logStore.elasticsearch.tolerations[]
3.6.2.1.63.1. Description
3.6.2.1.63.1.1. Type
- array
| Property | Type | Description |
|---|---|---|
| effect | string | (optional) Effect indicates the taint effect to match. Empty means match all taint effects. |
| key | string | (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. |
| operator | string | (optional) Operator represents a key's relationship to the value. |
| tolerationSeconds | int | (optional) TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. |
| value | string | (optional) Value is the taint value the toleration matches to. |
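A toleration entry uses the standard Kubernetes toleration shape. The following sketch lets the Elasticsearch pods run on nodes tainted with an assumed `logging` key, remaining for up to 6000 seconds after a NoExecute taint is applied:

```yaml
spec:
  logStore:
    elasticsearch:
      tolerations:
      - key: logging           # assumed taint key on the target nodes
        operator: Exists       # matches any value for the key
        effect: NoExecute
        tolerationSeconds: 6000
```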
3.6.2.1.64. .spec.logStore.elasticsearch.tolerations[].tolerationSeconds
3.6.2.1.64.1. Description
3.6.2.1.64.1.1. Type
- int
3.6.2.1.65. .spec.logStore.lokistack
3.6.2.1.65.1. Description
LokiStackStoreSpec is used to set up cluster-logging to use a LokiStack as logging storage. It points to an existing LokiStack in the same namespace.
3.6.2.1.65.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| name | string | Name of the LokiStack resource. |
3.6.2.1.66. .spec.logStore.retentionPolicy
3.6.2.1.66.1. Description
3.6.2.1.66.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| application | object | |
| audit | object | |
| infra | object |
3.6.2.1.67. .spec.logStore.retentionPolicy.application
3.6.2.1.67.1. Description
3.6.2.1.67.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| diskThresholdPercent | int | (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) |
| maxAge | string | (optional) |
| namespaceSpec | array | (optional) The per namespace specification to delete documents older than a given minimum age |
| pruneNamespacesInterval | string | (optional) How often to run a new prune-namespaces job |
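The retention fields above (and the matching audit and infra policies documented below) can be combined as in this sketch. The namespace, ages, and prune interval are illustrative assumptions:

```yaml
spec:
  logStore:
    retentionPolicy:
      application:
        maxAge: 1d
        pruneNamespacesInterval: 15m       # assumed interval for prune-namespaces jobs
        namespaceSpec:
        - namespace: openshift-monitoring  # assumed target namespace
          minAge: 1h                       # delete this namespace's records older than 1h
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
```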
3.6.2.1.68. .spec.logStore.retentionPolicy.application.namespaceSpec[]
3.6.2.1.68.1. Description
3.6.2.1.68.1.1. Type
- array
| Property | Type | Description |
|---|---|---|
| minAge | string | (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) |
| namespace | string | Target Namespace to delete logs older than MinAge (defaults to 7d) |
3.6.2.1.69. .spec.logStore.retentionPolicy.audit
3.6.2.1.69.1. Description
3.6.2.1.69.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| diskThresholdPercent | int | (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) |
| maxAge | string | (optional) |
| namespaceSpec | array | (optional) The per namespace specification to delete documents older than a given minimum age |
| pruneNamespacesInterval | string | (optional) How often to run a new prune-namespaces job |
3.6.2.1.70. .spec.logStore.retentionPolicy.audit.namespaceSpec[]
3.6.2.1.70.1. Description
3.6.2.1.70.1.1. Type
- array
| Property | Type | Description |
|---|---|---|
| minAge | string | (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) |
| namespace | string | Target Namespace to delete logs older than MinAge (defaults to 7d) |
3.6.2.1.71. .spec.logStore.retentionPolicy.infra
3.6.2.1.71.1. Description
3.6.2.1.71.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| diskThresholdPercent | int | (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) |
| maxAge | string | (optional) |
| namespaceSpec | array | (optional) The per namespace specification to delete documents older than a given minimum age |
| pruneNamespacesInterval | string | (optional) How often to run a new prune-namespaces job |
3.6.2.1.72. .spec.logStore.retentionPolicy.infra.namespaceSpec[]
3.6.2.1.72.1. Description
3.6.2.1.72.1.1. Type
- array
| Property | Type | Description |
|---|---|---|
| minAge | string | (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) |
| namespace | string | Target Namespace to delete logs older than MinAge (defaults to 7d) |
3.6.2.1.73. .spec.visualization
3.6.2.1.73.1. Description
Contains information pertinent to log visualization (Kibana).
3.6.2.1.73.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| kibana | object | Specification of the Kibana Visualization component |
| type | string | The type of Visualization to configure |
3.6.2.1.74. .spec.visualization.kibana
3.6.2.1.74.1. Description
3.6.2.1.74.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| nodeSelector | object | Define which Nodes the Pods are scheduled on. |
| proxy | object | Specification of the Kibana Proxy component |
| replicas | int | Number of instances to deploy for a Kibana deployment |
| resources | object | (optional) The resource requirements for Kibana |
| tolerations | array |
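The Kibana fields above can be set as in the following `spec.visualization` sketch. The replica count and resource values are illustrative assumptions:

```yaml
spec:
  visualization:
    type: kibana
    kibana:
      replicas: 1            # number of Kibana instances to deploy
      resources:
        limits:
          memory: 736Mi      # illustrative memory limit
        requests:
          cpu: 100m
          memory: 736Mi
```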
3.6.2.1.75. .spec.visualization.kibana.nodeSelector
3.6.2.1.75.1. Description
3.6.2.1.75.1.1. Type
- object
3.6.2.1.76. .spec.visualization.kibana.proxy
3.6.2.1.76.1. Description
3.6.2.1.76.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| resources | object |
3.6.2.1.77. .spec.visualization.kibana.proxy.resources
3.6.2.1.77.1. Description
3.6.2.1.77.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| limits | object | (optional) Limits describes the maximum amount of compute resources allowed. |
| requests | object | (optional) Requests describes the minimum amount of compute resources required. |
3.6.2.1.78. .spec.visualization.kibana.proxy.resources.limits
3.6.2.1.78.1. Description
3.6.2.1.78.1.1. Type
- object
3.6.2.1.79. .spec.visualization.kibana.proxy.resources.requests
3.6.2.1.79.1. Description
3.6.2.1.79.1.1. Type
- object
3.6.2.1.80. .spec.visualization.kibana.replicas
3.6.2.1.80.1. Description
3.6.2.1.80.1.1. Type
- int
3.6.2.1.81. .spec.visualization.kibana.resources
3.6.2.1.81.1. Description
3.6.2.1.81.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| limits | object | (optional) Limits describes the maximum amount of compute resources allowed. |
| requests | object | (optional) Requests describes the minimum amount of compute resources required. |
3.6.2.1.82. .spec.visualization.kibana.resources.limits
3.6.2.1.82.1. Description
3.6.2.1.82.1.1. Type
- object
3.6.2.1.83. .spec.visualization.kibana.resources.requests
3.6.2.1.83.1. Description
3.6.2.1.83.1.1. Type
- object
3.6.2.1.84. .spec.visualization.kibana.tolerations[]
3.6.2.1.84.1. Description
3.6.2.1.84.1.1. Type
- array
| Property | Type | Description |
|---|---|---|
| effect | string | (optional) Effect indicates the taint effect to match. Empty means match all taint effects. |
| key | string | (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. |
| operator | string | (optional) Operator represents a key's relationship to the value. |
| tolerationSeconds | int | (optional) TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. |
| value | string | (optional) Value is the taint value the toleration matches to. |
3.6.2.1.85. .spec.visualization.kibana.tolerations[].tolerationSeconds
3.6.2.1.85.1. Description
3.6.2.1.85.1.1. Type
- int
3.6.2.1.86. .status
3.6.2.1.86.1. Description
ClusterLoggingStatus defines the observed state of ClusterLogging
3.6.2.1.86.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| collection | object | (optional) |
| conditions | object | (optional) |
| curation | object | (optional) |
| logStore | object | (optional) |
| visualization | object | (optional) |
3.6.2.1.87. .status.collection
3.6.2.1.87.1. Description
3.6.2.1.87.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| logs | object | (optional) |
3.6.2.1.88. .status.collection.logs
3.6.2.1.88.1. Description
3.6.2.1.88.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| fluentdStatus | object | (optional) |
3.6.2.1.89. .status.collection.logs.fluentdStatus
3.6.2.1.89.1. Description
3.6.2.1.89.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| clusterCondition | object | (optional) |
| daemonSet | string | (optional) |
| nodes | object | (optional) |
| pods | string | (optional) |
3.6.2.1.90. .status.collection.logs.fluentdStatus.clusterCondition
3.6.2.1.90.1. Description
A map of condition types to conditions. A named type is used because operator-sdk generate crds does not allow a map of slices.
3.6.2.1.90.1.1. Type
- object
3.6.2.1.91. .status.collection.logs.fluentdStatus.nodes
3.6.2.1.91.1. Description
3.6.2.1.91.1.1. Type
- object
3.6.2.1.92. .status.conditions
3.6.2.1.92.1. Description
3.6.2.1.92.1.1. Type
- object
3.6.2.1.93. .status.curation
3.6.2.1.93.1. Description
3.6.2.1.93.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| curatorStatus | array | (optional) |
3.6.2.1.94. .status.curation.curatorStatus[]
3.6.2.1.94.1. Description
3.6.2.1.94.1.1. Type
- array
| Property | Type | Description |
|---|---|---|
| clusterCondition | object | (optional) |
| cronJobs | string | (optional) |
| schedules | string | (optional) |
| suspended | bool | (optional) |
3.6.2.1.95. .status.curation.curatorStatus[].clusterCondition
3.6.2.1.95.1. Description
A map of condition types to conditions. A named type is used because operator-sdk generate crds does not allow a map of slices.
3.6.2.1.95.1.1. Type
- object
3.6.2.1.96. .status.logStore
3.6.2.1.96.1. Description
3.6.2.1.96.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| elasticsearchStatus | array | (optional) |
3.6.2.1.97. .status.logStore.elasticsearchStatus[]
3.6.2.1.97.1. Description
3.6.2.1.97.1.1. Type
- array
| Property | Type | Description |
|---|---|---|
| cluster | object | (optional) |
| clusterConditions | object | (optional) |
| clusterHealth | string | (optional) |
| clusterName | string | (optional) |
| deployments | array | (optional) |
| nodeConditions | object | (optional) |
| nodeCount | int | (optional) |
| pods | object | (optional) |
| replicaSets | array | (optional) |
| shardAllocationEnabled | string | (optional) |
| statefulSets | array | (optional) |
3.6.2.1.98. .status.logStore.elasticsearchStatus[].cluster
3.6.2.1.98.1. Description
3.6.2.1.98.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| activePrimaryShards | int | The number of Active Primary Shards for the Elasticsearch Cluster |
| activeShards | int | The number of Active Shards for the Elasticsearch Cluster |
| initializingShards | int | The number of Initializing Shards for the Elasticsearch Cluster |
| numDataNodes | int | The number of Data Nodes for the Elasticsearch Cluster |
| numNodes | int | The number of Nodes for the Elasticsearch Cluster |
| pendingTasks | int | |
| relocatingShards | int | The number of Relocating Shards for the Elasticsearch Cluster |
| status | string | The current Status of the Elasticsearch Cluster |
| unassignedShards | int | The number of Unassigned Shards for the Elasticsearch Cluster |
3.6.2.1.99. .status.logStore.elasticsearchStatus[].clusterConditions
3.6.2.1.99.1. Description
3.6.2.1.99.1.1. Type
- object
3.6.2.1.100. .status.logStore.elasticsearchStatus[].deployments[]
3.6.2.1.100.1. Description
3.6.2.1.100.1.1. Type
- array
3.6.2.1.101. .status.logStore.elasticsearchStatus[].nodeConditions
3.6.2.1.101.1. Description
3.6.2.1.101.1.1. Type
- object
3.6.2.1.102. .status.logStore.elasticsearchStatus[].pods
3.6.2.1.102.1. Description
3.6.2.1.102.1.1. Type
- object
3.6.2.1.103. .status.logStore.elasticsearchStatus[].replicaSets[]
3.6.2.1.103.1. Description
3.6.2.1.103.1.1. Type
- array
3.6.2.1.104. .status.logStore.elasticsearchStatus[].statefulSets[]
3.6.2.1.104.1. Description
3.6.2.1.104.1.1. Type
- array
3.6.2.1.105. .status.visualization
3.6.2.1.105.1. Description
3.6.2.1.105.1.1. Type
- object
| Property | Type | Description |
|---|---|---|
| kibanaStatus | array | (optional) |
3.6.2.1.106. .status.visualization.kibanaStatus[]
3.6.2.1.106.1. Description
3.6.2.1.106.1.1. Type
- array
| Property | Type | Description |
|---|---|---|
| clusterCondition | object | (optional) |
| deployment | string | (optional) |
| pods | string | (optional) The status for each of the Kibana pods for the Visualization component |
| replicaSets | array | (optional) |
| replicas | int | (optional) |
3.6.2.1.107. .status.visualization.kibanaStatus[].clusterCondition
3.6.2.1.107.1. Description
3.6.2.1.107.1.1. Type
- object
3.6.2.1.108. .status.visualization.kibanaStatus[].replicaSets[]
3.6.2.1.108.1. Description
3.6.2.1.108.1.1. Type
- array