ElasticSearch 1.5.2 cluster health turns red in OpenShift Enterprise 3.2

Solution Verified

Issue

OpenShift Enterprise (OSE) 3.2 ships with ElasticSearch (ES) 1.5.2 as part of the ElasticSearch, Fluentd and Kibana (EFK) logging stack.

The issue started when one of the ES data nodes went down because of an OutOfMemory error.
The crash caused the ES cluster health status to become red.

Querying the /_cluster/health API returns the following result:

sh-4.2$ ${curl_get} $ES_URL/_cluster/health?pretty=true
{
   "cluster_name" : "logging-es",
   "status" : "red",
   "timed_out" : false,
   "number_of_nodes" : 2,
   "number_of_data_nodes" : 2,
   "active_primary_shards" : 5652,
   "active_shards" : 6865,
   "relocating_shards" : 0,
   "initializing_shards" : 4,
   "unassigned_shards" : 4561,
   "number_of_pending_tasks" : 149
}
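To see which indices the unassigned shards belong to, the _cat/shards API (available in ES 1.x) can be used. This is a diagnostic sketch, not taken from the article itself; it assumes the same `${curl_get}` and `$ES_URL` variables used above:

```shell
# List only the unassigned shards; in ES 1.5 the columns are:
# index shard prirep state docs store ip node
${curl_get} "$ES_URL/_cat/shards" | grep UNASSIGNED

# Count unassigned shards per index to see which indices are affected
${curl_get} "$ES_URL/_cat/shards" \
  | awk '$4 == "UNASSIGNED" {count[$1]++} END {for (i in count) print i, count[i]}'
```

With thousands of shards in the output above, grouping per index makes it much easier to spot whether the unassigned shards are concentrated in a few (possibly corrupt) indices.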

After fixing the OutOfMemory issue and redeploying the ES pods, the ES cluster health status is still red:

sh-4.2$ ${curl_get} $ES_URL/_cluster/health?pretty=true
{
  "cluster_name" : "logging-es",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 5367,
  "active_shards" : 10734,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1006,
  "number_of_pending_tasks" : 0
} 
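When unassigned primary shards remain after all nodes have rejoined, ES 1.x allows forcing allocation through the cluster reroute API. The following is a hedged sketch rather than the verified resolution from this article; the index, shard and node names are placeholders, and `allow_primary: true` accepts data loss for that shard:

```shell
# Force-allocate a single unassigned primary shard (placeholder names).
# WARNING: "allow_primary": true creates an empty primary if no intact
# copy exists, which discards any data that shard previously held.
${curl_get} -XPOST "$ES_URL/_cluster/reroute" -d '{
  "commands": [
    {
      "allocate": {
        "index": "logging-example-index",
        "shard": 0,
        "node": "logging-es-data-node-1",
        "allow_primary": true
      }
    }
  ]
}'
```

Re-running `${curl_get} $ES_URL/_cluster/health?pretty=true` afterwards should show `unassigned_shards` decreasing as each forced allocation completes.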

Environment

  • OpenShift Enterprise 3.2
  • ElasticSearch 1.5.2
  • A 3-node ES cluster consisting of 1 ES master node and 2 ES data nodes
