The Kubernetes scheduler consistently selects a specific node for scheduling pods, even though there are no taints or affinities influencing the decision

I recently deployed PostgreSQL with resource requests of 1000m CPU and 1Gi memory. The pod consistently gets scheduled on worker2, but it is eventually terminated due to disk pressure. Why doesn't the scheduler place it on a healthier node? However, if I increase the CPU and memory requests, it gets scheduled on a different node.
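For context, a minimal sketch of how the resource requests described above might look in the pod template is shown here. The namespace and StatefulSet name are inferred from the scheduler logs below; the image, labels, and everything else are illustrative assumptions, not the actual manifest:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres            # inferred from pod name postgres-0 in the logs
  namespace: rcem1403       # namespace seen in the scheduler logs
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres         # illustrative label, not from the actual manifest
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:16  # placeholder image
        resources:
          requests:
            cpu: "1000m"    # CPU request mentioned in the post
            memory: 1Gi     # memory request mentioned in the post

Note that, as stated above, the spec carries no nodeSelector, affinity, or tolerations, so nothing explicit is steering the placement.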

Attaching the scheduler logs below:
    I0108 16:36:31.359116 1 schedule_one.go:252] "Successfully bound pod to node" pod="rcem1403/postgres-0" node="ocp-w-2.lab.ocp3.lan" evaluatedNodes=23 feasibleNodes=19
    I0108 16:37:42.625494 1 schedule_one.go:252] "Successfully bound pod to node" pod="rcem1403/postgres-0" node="ocp-w-2.lab.ocp3.lan" evaluatedNodes=23 feasibleNodes=19
    I0108 16:39:11.480078 1 schedule_one.go:252] "Successfully bound pod to node" pod="rcem1403/postgres-0" node="ocp-w-2.lab.ocp3.lan" evaluatedNodes=23 feasibleNodes=19
    I0108 16:40:44.348799 1 schedule_one.go:252] "Successfully bound pod to node" pod="rcem1403/postgres-0" node="ocp-w-2.lab.ocp3.lan" evaluatedNodes=23 feasibleNodes=19
