PrometheusDuplicateTimestamps alerts triggered due to multiple LocalVolume CRs with the same tolerations

Solution Verified

Issue

  • PrometheusDuplicateTimestamps alerts triggered.
  • Prometheus pods show the following error:

    caller=scrape.go:1738 level=warn component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://<pod-IP>/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=3
    
  • Prometheus pod debug logs show the following error:

    2025-06-17T16:22:05.467505385+07:00 ts=2025-06-17T09:22:05.467Z caller=scrape.go:1777 level=debug component="scrape manager" scrape_pool=serviceMonitor/openshift-monitoring/kube-state-metrics/0 target=https://<pod-IP>:8443/metrics msg="Duplicate sample for timestamp" series="kube_pod_tolerations{namespace=\"openshift-local-storage\",pod=\"diskmaker-manager-gc2nm\",uid=\"541ce1ec-67b3-49dc-989b-e08d3a301087\",key=\"node-role.kubernetes.io/infra\",operator=\"Exists\",effect=\"NoSchedule\"}"
    
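The debug log above points at the mechanism: kube-state-metrics emits one `kube_pod_tolerations` sample per toleration on a pod, labeled only by namespace, pod, uid, key, operator, and effect. When multiple LocalVolume CRs add the same toleration to a diskmaker pod, the resulting samples carry identical label sets, and Prometheus drops the extras as duplicates for the same timestamp. The following is a minimal sketch of that label-collision logic (not kube-state-metrics code; the pod and toleration values are taken from the log above for illustration):

```python
# Sketch: each toleration becomes one metric sample whose identity is its
# label set. Identical tolerations on one pod -> identical label sets ->
# "Duplicate sample for timestamp" on the Prometheus side.
from collections import Counter

def toleration_series(namespace, pod, tolerations):
    """Build the label set kube_pod_tolerations would emit per toleration."""
    return [
        (namespace, pod, t["key"], t["operator"], t["effect"])
        for t in tolerations
    ]

# Hypothetical merged pod spec: two LocalVolume CRs contributed the same toleration.
tolerations = [
    {"key": "node-role.kubernetes.io/infra", "operator": "Exists", "effect": "NoSchedule"},
    {"key": "node-role.kubernetes.io/infra", "operator": "Exists", "effect": "NoSchedule"},
]

series = toleration_series("openshift-local-storage", "diskmaker-manager-gc2nm", tolerations)
duplicates = [s for s, n in Counter(series).items() if n > 1]
print(len(duplicates))  # → 1 duplicated label set
```

Consolidating the LocalVolume CRs (or deduplicating their tolerations) removes the colliding series, which clears the alert.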

Environment

  • Red Hat OpenShift Container Platform 4
