Many fluentd pods in 'MatchNodeSelector' status during deployment
Issue
- After deploying OpenShift aggregated logging with the following configuration:
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_size=100G
openshift_logging_es_cluster_size=3
openshift_logging_es_memory_limit=4G
openshift_logging_es_number_of_replicas=1
- Expected to see a fluentd instance on each node; however, we see the following:
NAME READY STATUS RESTARTS AGE
logging-curator-1-1mz2x 1/1 Running 4 23m
logging-es-20sd50o9-1-h94dm 1/1 Running 0 23m
logging-es-602kzxam-1-0zb9t 1/1 Running 0 23m
logging-es-b7i434tm-1-jld4n 1/1 Running 0 23m
logging-fluentd-0wxbw 0/1 MatchNodeSelector 0 23m
logging-fluentd-2jpz5 0/1 MatchNodeSelector 0 23m
logging-fluentd-3b9vx 0/1 MatchNodeSelector 0 23m
logging-fluentd-3jsm5 0/1 MatchNodeSelector 0 23m
logging-fluentd-4cmgl 0/1 MatchNodeSelector 0 23m
logging-fluentd-b8xlj 1/1 Running 0 23m
logging-fluentd-jf0r1 1/1 Running 0 23m
logging-fluentd-jv950 1/1 Running 0 23m
logging-fluentd-lmjrn 0/1 MatchNodeSelector 0 23m
logging-fluentd-qq8hs 0/1 MatchNodeSelector 0 23m
logging-fluentd-x1s19 1/1 Running 0 23m
logging-kibana-1-qbn7q 2/2 Running 0 23m
- Fluentd only started on our application nodes.
- The other fluentd pods, scheduled to the infra and master nodes, are stuck in MatchNodeSelector status.
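A MatchNodeSelector status means the pod's nodeSelector does not match the labels on the node it was assigned to. A minimal diagnostic sketch, assuming the default fluentd node-selector label used by the openshift_logging role (`logging-infra-fluentd=true`; verify the actual selector on your daemonset before labeling):

```shell
# Show which nodeSelector the fluentd daemonset actually requires
oc get daemonset logging-fluentd -n logging \
  -o jsonpath='{.spec.template.spec.nodeSelector}'

# Compare against the labels currently set on each node
oc get nodes --show-labels

# If the infra/master nodes are missing the label, add it so
# fluentd can schedule there (label name is an assumption here)
oc label node <infra-or-master-node> logging-infra-fluentd=true
```

Once the label matches, the pending fluentd pods should be rescheduled and move to Running.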
Environment
- OpenShift Container Platform (OCP) 3.5