Logging-Kibana Pod Runs Out Of Memory

Solution Verified

Issue

  • The kibana container within the logging-kibana pod keeps going OOM and restarting every 1.5-2 hours.
  • A new environment variable, KIBANA_MEMORY_LIMIT, appears to have been added to the dc/logging-kibana deployment configuration, setting the memory limit to 736 MB (see the sketch after this list).
  • The Kibana pod is terminating with exit code 137 (OOMKilled).
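
A minimal sketch of how the memory settings could be inspected and adjusted with the oc client, assuming the logging components run in the logging project and that a larger limit such as 1Gi is appropriate for this environment; the project name and the 1Gi value are assumptions, not the verified solution:

    # Inspect the current memory-related settings on the kibana container
    oc get dc/logging-kibana -n logging -o yaml | grep -i -A 3 memory

    # Raise the memory limit on the kibana container
    # (1Gi is only an example value; size it for the actual workload)
    oc set resources dc/logging-kibana -c kibana --limits=memory=1Gi -n logging

    # If KIBANA_MEMORY_LIMIT is set to a literal value rather than derived from
    # the container limit, keep it in step with the new limit
    # (value format assumed; check how the image interprets KIBANA_MEMORY_LIMIT)
    oc set env dc/logging-kibana -c kibana KIBANA_MEMORY_LIMIT=1Gi -n logging

Changing the DeploymentConfig triggers a new rollout, so the kibana pod is redeployed with the updated limit. If the logging stack is managed by openshift-ansible, the corresponding inventory variable should also be updated so a later reinstall does not revert the change.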

Environment

  • OpenShift Enterprise 3.5
