fluentd pods fill up with logs saying primary shard is not active Timeout: [1m]
Issue
Over time, Fluentd pod logs fill with the repeated error shown below. The Fluentd pods either crash and restart, or must be restarted manually, after which they work normally for a while before the problem recurs. Fluentd memory usage also grows very high.
\"error\"=>{\"type\"=>\"unavailable_shards_exception\", \"reason\"=>\"[.operations.2018.04.25][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest to [.operations.2018.04.25] containing [3562] requests]\"}}}, {\"create\"=>{\"_index\"=>\"project.test-prod.18a7d719-48a4-11e7-9d36-1293b0956c5c.2018.04.19\", \"_type\"=>\"com.redhat.viaq.common\", \"_id\"=>\"AWNgFjm-zg6tB7-fHPSL\", \"_version\"=>1, \"_shards\"=>{\"total\"=>1, \"successful\"=>1, \"failed\"=>0}, \"status\"=>201}}, {\"create\"=>{\"_index\"=>\".operations.2018.04.25\", \"_type\"=>\"com.redhat.viaq.common\", \"_id\"=>\"AWNgFjm-zg6tB7-fHPSM\", \"status\"=>503, \"error\"=>{\"type\"=>\"unavailable_shards_exception\", \"reason\"=>\"[.operations.2018.04.25][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest to [.operations.2018.04.25] containing [3562] requests]\"}}}, {\"create\"=>{\"_index\"=>\"project.test-prod.18a7d719-48a4-11e7-9d36-1293b0956c5c.2018.04.19\", \"_type\"=>\"com.redhat.viaq.common\", \"_id\"=>\"AWNgFjm-zg6tB7-fHPSN\", \"_version\"=>1, \"_shards\"=>{\"total\"=>1, \"successful\"=>1, \"failed\"=>0}, \"status\"=>201}}, {\"create\"=>{\"_index\"=>\".operations.2018.04.25\", \"_type\"=>\"com.redhat.viaq.common\", \"_id\"=>\"AWNgFjm-zg6tB7-fHPSO\", \"status\"=>503, \"error\"=>{\"type\"=>\"unavailable_shards_exception\", \"reason\"=>\"[.operations.2018.04.25][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest to [.operations.2018.04.25] containing [3562] requests]\"}}}, . . .