Loki ingester pod complains about InvalidBucketState in RHOCP 4

Solution Verified

Issue

  • The Loki ingester pod reports an error related to the state of the ODF bucket.

    level=error ts=2024-09-02T06:56:22.272728198Z caller=flush.go:178 component=ingester loop=26 org_id=application msg="failed to flush" retries=2 err="failed to flush chunks: store put chunk: InvalidBucketState: The request is not valid with the current state of the bucket.\n\tstatus code: 409, request id: node-abc.xyz, host id: node-abc.xyz, num_chunks: 1, labels: {kubernetes_container_name=\"abc-xyz\", kubernetes_host=\"xxxx-xxx-xx-xxx-infra\", kubernetes_namespace_name=\"xxxx-xxxxxxx-xxxxxxxxxx-xxxxx-xxxx\", kubernetes_pod_name=\"xxxx-xxxxxxx-xxxxxxxxxx-xxxxx-xxxx\", log_type=\"application\", service_name=\"unknown_service\"}"
    
  • The noobaa-default-backing-store status fluctuates between Ready and Rejected.

    $ oc get backingstore -n openshift-storage
    
    NAME                                                TYPE        PHASE   AGE
    noobaa-default-backing-store                        pv-pool     Ready   17d
    
    After a while, the phase changes:
    
    $ oc get backingstore -n openshift-storage
    
    NAME                                                TYPE        PHASE      AGE
    noobaa-default-backing-store                        pv-pool     Rejected   17d
    
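The symptoms above can be confirmed from the CLI by checking the ingester logs alongside the backing store status. This is a minimal sketch: the `openshift-logging` namespace and the `app.kubernetes.io/component=ingester` label selector are assumptions that may differ in a given deployment.

```shell
# Search recent ingester logs for the 409 InvalidBucketState error
# (namespace and label selector are assumptions -- adjust for your cluster):
oc logs -n openshift-logging -l app.kubernetes.io/component=ingester --tail=200 | grep InvalidBucketState

# Watch the backing store phase flip between Ready and Rejected:
oc get backingstore noobaa-default-backing-store -n openshift-storage -w

# Inspect the status conditions for the rejection reason:
oc describe backingstore noobaa-default-backing-store -n openshift-storage
```

The `oc describe` output is usually the most informative, since the BackingStore status conditions record why the NooBaa operator rejected the store (for example, an unhealthy pv-pool pod).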

Environment

  • Red Hat OpenShift Container Platform (RHOCP)
    • 4
  • Red Hat OpenShift Logging (RHOL)
    • 5
  • Red Hat OpenShift Data Foundation (RHODF)
  • Loki
