APICast pods have started crashing and are unable to start at all

Solution Verified

Issue

  • After changing the Custom Resource (CR) to increase the CPU/memory limits, the APICast pods started crashing and are unable to start at all.
  • Reverting to the original configuration did not help; the pods are still crashing.
  • The following are the event logs:

    LAST SEEN   TYPE      REASON      OBJECT                           MESSAGE
    41m         Normal    Pulled      pod/apicast-production-1-XXXXX   Container image "registry.redhat.io/3scale-amp2/apicast-gateway-rhel8@sha256:479b4fe71b59333733eb4e12e68724cf20d6fd16e1eb34214ace5cc43effb81c" already present on machine
    21m         Warning   Unhealthy   pod/apicast-production-1-XXXXX   Liveness probe failed: Get "http://xxx.xxx.xxx.xxx:8090/status/live": dial tcp 10.130.6.31:8090: connect: connection refused
    11m         Warning   BackOff     pod/apicast-production-1-XXXXX   Back-off restarting failed container
    6m47s       Warning   Unhealthy   pod/apicast-production-1-XXXXX   Liveness probe failed: Get "http://xxx.xxx.xxx.xxx:8090/status/live": dial tcp 10.129.8.135:8090: connect: connection refused
    103s        Normal    Pulled      pod/apicast-production-1-XXXXX   Container image "registry.redhat.io/3scale-amp2/apicast-gateway-rhel8@sha256:479b4fe71b59333733eb4e12e68724cf20d6fd16e1eb34214ace5cc43effb81c" already present on machine
    99s         Warning   BackOff     pod/apicast-production-1-XXXXX   Back-off restarting failed container
    
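For reference, APICast resource limits are set through the APIManager Custom Resource managed by the 3scale operator. The excerpt below is a hypothetical sketch of the relevant section; the CR name and the resource values are illustrative only and should be adapted to the actual deployment.

```yaml
# Hypothetical APIManager CR excerpt showing where APICast
# production resource limits are defined. The CR name and
# the CPU/memory values are examples, not recommendations.
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-sample
spec:
  apicast:
    productionSpec:
      resources:
        requests:
          cpu: 500m
          memory: 128Mi
        limits:
          cpu: "1"
          memory: 256Mi
```

When editing this section, note that the operator reconciles the APICast DeploymentConfig from the CR, so changes made directly to the DeploymentConfig can be overwritten on the next reconciliation.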

Environment

  • Red Hat 3scale API Management
    • 2.10.0
