Creating a new index to repair a broken alias in OpenShift Logging
Issue
- The last index in an alias was deleted
- The Elasticsearch disk was full and was throwing ClusterBlockException. After deleting the indices, a new index (infra, audit, or app) is not created
- The write Elasticsearch alias has become corrupt
- How to create a new write alias in Elasticsearch?
- Fluentd pods repeatedly show logs like the following:
2021-08-19 18:22:26 +0000 [warn]: [default] failed to flush the buffer. retry_time=213 next_retry_seconds=2021-08-19 18:23:33 +0000 chunk="5c9ebdc1580ff14c3bba17d48ebd5327" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc\", :port=>9200, :scheme=>\"https\"}): [400] {\"error\":{\"root_cause\":[{\"type\":\"illegal_argument_exception\",\"reason\":\"no write index is defined for alias [infra-write]. The write index may be explicitly disabled using is_write_index=false or the alias points to multiple indices without one being designated as a write index\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"no write index is defined for alias [infra-write]. The write index may be explicitly disabled using is_write_index=false or the alias points to multiple indices without one being designated as a write index\"},\"status\":400}"
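Once the disk-full condition is resolved, one way to repair the alias is to create a fresh index and designate it as the alias's write index. The sketch below uses the es_util helper that ships inside the OpenShift Logging Elasticsearch pods; the alias infra-write and the index name infra-000001 are examples only, so substitute the alias reported in the Fluentd error and an index name that matches your rollover pattern:

```shell
# Sketch only: alias "infra-write" and index "infra-000001" are example
# names; adjust them to the affected alias before running.
# Assumes cluster-admin "oc" access and the openshift-logging namespace.

# Pick one Elasticsearch pod to run queries from.
ES_POD=$(oc -n openshift-logging get pods -l component=elasticsearch \
  -o jsonpath='{.items[0].metadata.name}')

# 1. Confirm that the alias has no index marked is_write_index=true.
oc -n openshift-logging exec -c elasticsearch "$ES_POD" -- \
  es_util --query="_alias/infra-write?pretty"

# 2. Create a new index that follows the rollover naming pattern.
oc -n openshift-logging exec -c elasticsearch "$ES_POD" -- \
  es_util --query="infra-000001" -X PUT

# 3. Attach the new index to the alias as the designated write index.
oc -n openshift-logging exec -c elasticsearch "$ES_POD" -- \
  es_util --query="_aliases" -X POST \
  -d '{"actions":[{"add":{"index":"infra-000001","alias":"infra-write","is_write_index":true}}]}'
```

After step 3, the Fluentd "no write index is defined for alias" warnings should stop once the buffered chunks are retried, since the alias again resolves to exactly one write index.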
Environment
- Red Hat OpenShift Container Platform (RHOCP) 4
- Red Hat OpenShift Logging 5
- Elasticsearch