Thousands of warnings, "WARNING: CPU: X PID: XXXXX at kernel/sched/rt.c:1062 dequeue_rt_stack+0xc2/0x320", are logged in the kernel ring buffer after upgrading to RHEL 7.8 or newer.


Issue

  • Thousands of warnings, "WARNING: CPU: X PID: XXXXX at kernel/sched/rt.c:1062 dequeue_rt_stack+0xc2/0x320", are logged in the kernel ring buffer after upgrading to RHEL 7.8 or newer. A quick way to count these warnings on a running system is sketched after this list.
[  951.237010] ------------[ cut here ]------------
[  951.237014] WARNING: CPU: 1 PID: 17277 at kernel/sched/rt.c:1062 dequeue_rt_stack+0xc2/0x320
    ...
[  951.237095] CPU: 1 PID: 17277 Comm: QThread Kdump: loaded Tainted: P        W  OE  ------------   3.10.0-1160.6.1.el7.x86_64 #1
[  951.237097] Hardware name: HP ProLiant ML30 Gen9/ProLiant ML30 Gen9, BIOS U23 05/21/2018
[  951.237097] Call Trace:
[  951.237100]  [<ffffffffb5181400>] dump_stack+0x19/0x1b
[  951.237102]  [<ffffffffb4a9b228>] __warn+0xd8/0x100
[  951.237104]  [<ffffffffb4a9b36d>] warn_slowpath_null+0x1d/0x20
[  951.237106]  [<ffffffffb4aebde2>] dequeue_rt_stack+0xc2/0x320
[  951.237108]  [<ffffffffb4aec822>] dequeue_rt_entity+0x12/0x60
[  951.237110]  [<ffffffffb4aece9d>] dequeue_task_rt+0x2d/0x80
[  951.237112]  [<ffffffffb4ad7006>] deactivate_task+0x46/0xd0
[  951.237114]  [<ffffffffb5187075>] __schedule+0x6d5/0x860
[  951.237115]  [<ffffffffb4c25e4d>] ? __slab_free+0x9d/0x290
[  951.237117]  [<ffffffffb5187229>] schedule+0x29/0x70
[  951.237119]  [<ffffffffb518648d>] schedule_hrtimeout_range_clock+0x12d/0x150
[  951.237121]  [<ffffffffb4bc6f6b>] ? free_compound_page+0x1b/0x20
[  951.237123]  [<ffffffffb4ac674c>] ? add_wait_queue+0x3c/0x50
[  951.237125]  [<ffffffffb51864c3>] schedule_hrtimeout_range+0x13/0x20
[  951.237127]  [<ffffffffb4c63f75>] poll_schedule_timeout+0x55/0xc0
[  951.237129]  [<ffffffffb4c6570d>] do_sys_poll+0x48d/0x590
[  951.237131]  [<ffffffffb5113984>] ? unix_stream_recvmsg+0x54/0x70
[  951.237133]  [<ffffffffb51119f0>] ? unix_ioctl+0x70/0x70
[  951.237135]  [<ffffffffb50344f5>] ? sock_recvmsg+0xc5/0x100
[  951.237137]  [<ffffffffb4c63e90>] ? __pollwait+0xf0/0xf0
[  951.237139]  [<ffffffffb4b84254>] ? filter_match_preds_cb+0x124/0x180
[  951.237141]  [<ffffffffb4a35c19>] ? sched_clock+0x9/0x10
[  951.237142]  [<ffffffffb4b84130>] ? count_leafs_cb+0x30/0x30
[  951.237144]  [<ffffffffb4b83b88>] ? walk_pred_tree+0x58/0x110
[  951.237146]  [<ffffffffb4b693dc>] ? ring_buffer_unlock_commit+0x2c/0x260
[  951.237148]  [<ffffffffb4c65914>] SyS_poll+0x74/0x110
[  951.237151]  [<ffffffffb5195226>] tracesys+0xa6/0xcc
[  951.237152] ---[ end trace 4b84b8bf1b564b86 ]---
[  951.237220] ------------[ cut here ]------------
crash> log -t | grep WARNING | sort | uniq -c
      2 WARNING: CPU: 1 PID: 16909 at kernel/sched/rt.c:1062 dequeue_rt_stack+0xc2/0x320
      7 WARNING: CPU: 1 PID: 16991 at kernel/sched/rt.c:1062 dequeue_rt_stack+0xc2/0x320
      2 WARNING: CPU: 1 PID: 17105 at kernel/sched/rt.c:1062 dequeue_rt_stack+0xc2/0x320
      8 WARNING: CPU: 1 PID: 17144 at kernel/sched/rt.c:1062 dequeue_rt_stack+0xc2/0x320
      4 WARNING: CPU: 1 PID: 17145 at kernel/sched/rt.c:1062 dequeue_rt_stack+0xc2/0x320
      1 WARNING: CPU: 1 PID: 17260 at kernel/sched/rt.c:1062 dequeue_rt_stack+0xc2/0x320
     31 WARNING: CPU: 1 PID: 17276 at kernel/sched/rt.c:1062 dequeue_rt_stack+0xc2/0x320
    238 WARNING: CPU: 1 PID: 17277 at kernel/sched/rt.c:1062 dequeue_rt_stack+0xc2/0x320
      2 WARNING: CPU: 1 PID: 22158 at kernel/sched/rt.c:1062 dequeue_rt_stack+0xc2/0x320
      3 WARNING: CPU: 1 PID: 22159 at kernel/sched/rt.c:1062 dequeue_rt_stack+0xc2/0x320
      1 WARNING: CPU: 1 PID: 22258 at kernel/sched/rt.c:1062 dequeue_rt_stack+0xc2/0x320
  • Furthermore, the flood of printk() output generated by this huge number of warnings often causes a hard lockup due to rq->lock contention.
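
On a running system where a vmcore is not available, a similar per-PID count of these warnings can be taken directly from the kernel ring buffer. The following is a minimal sketch using dmesg and standard text utilities; the exact warning text to match may vary slightly between kernel builds.

# Count the dequeue_rt_stack warnings currently in the kernel ring buffer.
dmesg | grep -c 'WARNING: CPU.*dequeue_rt_stack'

# Per-PID breakdown, comparable to the crash "log -t" output above
# (the "[ timestamp ]" prefix is stripped so identical lines collapse under uniq).
dmesg | grep 'WARNING: CPU.*dequeue_rt_stack' | sed 's/^\[[^]]*\] //' | sort | uniq -c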

Environment

  • Red Hat Enterprise Linux 7.7.z (kernel-3.10.0-1062.7.1.el7 and newer)
  • Red Hat Enterprise Linux 7.8 (kernel-3.10.0-1127.el7 and newer)
  • Red Hat Enterprise Linux 7.9 (kernel-3.10.0-1160.el7 and newer)
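
To confirm whether a system falls into one of the affected ranges above, the running and installed kernel versions can be checked with standard tools; a minimal check:

# Kernel currently running on the system.
uname -r

# All installed kernel packages; any build at or above the version listed
# above for the respective minor release is affected.
rpm -q kernel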
