oom-killer killed processes in 3scale containers

Solution Verified

Issue

  • oom-killer killed processes in 3scale containers.
  • The memory limits of the 3scale containers were left at their default values.

    • Is there a specific reason for the OOM? Is this a common issue, or specific to this process?

      2019/05/28 17:46:51 114,"May 28 17:46:51 EXAMPLE kernel: ruby invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=976"
      2019/05/28 17:46:51 131,May 28 17:46:51 EXAMPLE kernel: ruby cpuset=docker-f683ae6b1ce6751dcb08f97b00631f06f8be69fb0e97cc15787801d6d6181b68.scope mems_allowed=0
      
  • The memory usage of the system-developer container in the system-app-6-xxxxx pod suddenly jumps from around 82.7% to 93% within 5 minutes, while the other two system-developer containers (in system-app-6-yyyyy and system-app-6-zzzzz) stay at around 80%. Why is the memory usage of this one container so high?
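For context on the `oom_score_adj=976` value in the kernel log above: on OpenShift/Kubernetes nodes, containers in Burstable pods (those with a memory request below the limit, or defaulted resources) are assigned a high oom_score_adj, so the kernel prefers to kill them under memory pressure. The sketch below illustrates the Burstable formula from the Kubernetes QoS policy; the function name and the example request/capacity figures are illustrative assumptions, not values from this environment.

```python
def oom_score_adj(memory_request_bytes: int, node_capacity_bytes: int) -> int:
    """Illustrative sketch of the oom_score_adj Kubernetes assigns to a
    container in a Burstable pod: the smaller the container's memory
    request relative to node capacity, the higher (more killable) the
    score. This mirrors the upstream QoS policy formula, not the exact
    OpenShift 3.x code path."""
    adj = 1000 - (1000 * memory_request_bytes) // node_capacity_bytes
    # Clamped so Burstable containers never collide with Guaranteed
    # pods (-998) or BestEffort pods (1000).
    return min(max(adj, 2), 999)

# Hypothetical example: a container requesting 1 GiB on a 41 GiB node
# (~2.4% of capacity) lands at the oom_score_adj=976 seen in the log.
print(oom_score_adj(1 * 1024**3, 41 * 1024**3))
```

In other words, a high score like 976 does not by itself indicate a bug: it simply means the container requested only a small fraction of node memory, making its processes preferred victims when the node runs short.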

Environment

  • Red Hat 3scale API Management
    • 2.x On-Premises
