ClusterLogForwarder pipeline is not working
Environment
- Red Hat OpenShift Container Platform (RHOCP) 4.6+
- Red Hat OpenShift Logging (RHOL) 5.x
Issue
- A few pipelines of ClusterLogForwarder are not sending logs to the desired destination.
- The Status section of ClusterLogForwarder shows messages such as Invalid, degraded pipelines, and duplicate name:
status:
  conditions:
  - lastTransitionTime: "2023-04-11T08:45:41Z"
    message: duplicate name "my-app-logs-default"
    reason: Invalid
    status: "False"
    type: Ready
  - lastTransitionTime: "2023-04-11T08:45:41Z"
    message: 'degraded pipelines: invalid [pipeline_1_], degraded []'
    reason: Invalid
    status: "True"
    type: Degraded
Resolution
- Give each pipeline a unique name that clearly describes it.
- Refer to the snippet below as an example:
$ oc get -o yaml clusterlogforwarder/instance -n openshift-logging
  pipelines:
  - inputRefs:
    - my-app-logs
    name: my-app-logs-default    <---------------- [1]
    outputRefs:
    - kafka-logs
  - inputRefs:
    - audit
    name: my-app-logs-default    <---------------- [2]
    outputRefs:
    - default
Note: the names at [1] and [2] must not be the same.
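A corrected configuration gives each pipeline its own name. The sketch below uses the illustrative names my-app-logs-to-kafka and audit-logs-to-default; any names will do as long as each one is unique within the ClusterLogForwarder:
  pipelines:
  - inputRefs:
    - my-app-logs
    name: my-app-logs-to-kafka       # unique name [1]
    outputRefs:
    - kafka-logs
  - inputRefs:
    - audit
    name: audit-logs-to-default      # unique name [2]
    outputRefs:
    - default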
Root Cause
- This issue is seen when two or more pipelines are created with the same name in ClusterLogForwarder.
- According to the documentation, giving a pipeline a name is optional, but if a name is provided it must be unique (see the sketch after this list).
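Because the name field is optional, the duplicate-name condition can also be avoided by omitting names altogether. A minimal sketch using the same inputs and outputs as the example above:
  pipelines:
  - inputRefs:
    - my-app-logs
    outputRefs:
    - kafka-logs
  - inputRefs:
    - audit
    outputRefs:
    - default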
Diagnostic Steps
- Check whether two or more pipelines have the same name in ClusterLogForwarder.
- Check whether the status of the ClusterLogForwarder instance's YAML shows messages such as degraded pipelines and duplicate name:
$ oc get clusterlogforwarder/instance -o json -n openshift-logging | jq ".status.conditions"
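- To list any duplicate pipeline names directly, a jq filter along the following lines can be used (the exact expression is only an illustrative sketch):
$ oc get clusterlogforwarder/instance -n openshift-logging -o json \
    | jq '[.spec.pipelines[].name] | group_by(.) | map(select(length > 1) | .[0])'
An empty array [] in the output means all pipeline names are unique; any name printed here is used by more than one pipeline.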