Why does the docker daemon become unresponsive, with a hung task related to "kdmremove do_deferred_remove" seen in the logs?
Issue
- Why does the docker daemon become unresponsive, with a hung task related to "kdmremove do_deferred_remove" seen in the logs?
Jan 28 04:06:46 testlab kernel: INFO: task docker-current:27922 blocked for more than 120 seconds.
Jan 28 04:06:46 testlab kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 28 04:06:46 testlab kernel: docker-current D 0000000000000000 0 27922 17056 0x00000080
Jan 28 04:06:46 testlab kernel: ffff880f9119fab0 0000000000000086 ffff880fbc9ddc00 ffff880f9119ffd8
Jan 28 04:06:46 testlab kernel: ffff880f9119ffd8 ffff880f9119ffd8 ffff880fbc9ddc00 ffff880f9119fbf0
Jan 28 04:06:46 testlab kernel: ffff880f9119fbf8 7fffffffffffffff ffff880fbc9ddc00 0000000000000000
Jan 28 04:06:46 testlab kernel: Call Trace:
Jan 28 04:06:46 testlab kernel: [<ffffffff8163a879>] schedule+0x29/0x70
Jan 28 04:06:46 testlab kernel: [<ffffffff81638569>] schedule_timeout+0x209/0x2d0
Jan 28 04:06:46 testlab kernel: [<ffffffff8163a228>] ? __schedule+0x2d8/0x900
Jan 28 04:06:46 testlab kernel: [<ffffffff8163ac46>] wait_for_completion+0x116/0x170
Jan 28 04:06:46 testlab kernel: [<ffffffff810b8c00>] ? wake_up_state+0x20/0x20
Jan 28 04:06:46 testlab kernel: [<ffffffff810ab666>] __synchronize_srcu+0x106/0x1a0
Jan 28 04:06:46 testlab kernel: [<ffffffff810ab180>] ? call_srcu+0x70/0x70
Jan 28 04:06:46 testlab kernel: [<ffffffff81219dff>] ? __sync_blockdev+0x1f/0x40
Jan 28 04:06:46 testlab kernel: [<ffffffff810ab71d>] synchronize_srcu+0x1d/0x20
Jan 28 04:06:46 testlab kernel: [<ffffffffa000318d>] __dm_suspend+0x5d/0x220 [dm_mod]
Jan 28 04:06:46 testlab kernel: [<ffffffffa0004c9a>] dm_suspend+0xca/0xf0 [dm_mod]
Jan 28 04:06:46 testlab kernel: [<ffffffffa0009ff0>] ? table_load+0x380/0x380 [dm_mod]
Jan 28 04:06:46 testlab kernel: [<ffffffffa000a184>] dev_suspend+0x194/0x250 [dm_mod]
Jan 28 04:06:46 testlab kernel: [<ffffffffa0009ff0>] ? table_load+0x380/0x380 [dm_mod]
Jan 28 04:06:46 testlab kernel: [<ffffffffa000aa35>] ctl_ioctl+0x255/0x500 [dm_mod]
Jan 28 04:06:46 testlab kernel: [<ffffffff8112483d>] ? call_rcu_sched+0x1d/0x20
Jan 28 04:06:46 testlab kernel: [<ffffffffa000acf3>] dm_ctl_ioctl+0x13/0x20 [dm_mod]
Jan 28 04:06:46 testlab kernel: [<ffffffff811f1e05>] do_vfs_ioctl+0x2e5/0x4c0
Jan 28 04:06:46 testlab kernel: [<ffffffff8128bb9e>] ? file_has_perm+0xae/0xc0
Jan 28 04:06:46 testlab kernel: [<ffffffff81640d01>] ? __do_page_fault+0xb1/0x450
Jan 28 04:06:46 testlab kernel: [<ffffffff811f2081>] SyS_ioctl+0xa1/0xc0
Jan 28 04:06:46 testlab kernel: [<ffffffff816458c9>] system_call_fastpath+0x16/0x1b
Jan 28 04:06:46 testlab kernel: INFO: task kworker/u64:8:70871 blocked for more than 120 seconds.
Jan 28 04:06:46 testlab kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 28 04:06:46 testlab kernel: kworker/u64:8 D ffff880ca792d8e0 0 70871 2 0x00000080
Jan 28 04:06:46 testlab kernel: Workqueue: kdmremove do_deferred_remove [dm_mod]
Jan 28 04:06:46 testlab kernel: ffff8808c53d7cf0 0000000000000046 ffff8808c451e780 ffff8808c53d7fd8
Jan 28 04:06:46 testlab kernel: ffff8808c53d7fd8 ffff8808c53d7fd8 ffff8808c451e780 ffff880ca792d8d8
Jan 28 04:06:46 testlab kernel: ffff880ca792d8dc ffff8808c451e780 00000000ffffffff ffff880ca792d8e0
Jan 28 04:06:46 testlab kernel: Call Trace:
Jan 28 04:06:46 testlab kernel: [<ffffffff8163b959>] schedule_preempt_disabled+0x29/0x70
Jan 28 04:06:46 testlab kernel: [<ffffffff81639655>] __mutex_lock_slowpath+0xc5/0x1c0
Jan 28 04:06:46 testlab kernel: [<ffffffff81638abf>] mutex_lock+0x1f/0x2f
Jan 28 04:06:46 testlab kernel: [<ffffffffa0002e9d>] __dm_destroy+0xad/0x340 [dm_mod]
Jan 28 04:06:46 testlab kernel: [<ffffffffa00047e3>] dm_destroy+0x13/0x20 [dm_mod]
Jan 28 04:06:46 testlab kernel: [<ffffffffa0008d85>] dm_hash_remove_all+0x75/0x130 [dm_mod]
Jan 28 04:06:46 testlab kernel: [<ffffffffa000b51a>] dm_deferred_remove+0x1a/0x20 [dm_mod]
Jan 28 04:06:46 testlab kernel: [<ffffffffa0000dae>] do_deferred_remove+0xe/0x10 [dm_mod]
Jan 28 04:06:46 testlab kernel: [<ffffffff8109d5fb>] process_one_work+0x17b/0x470
Jan 28 04:06:46 testlab kernel: [<ffffffff8109e3cb>] worker_thread+0x11b/0x400
Jan 28 04:06:46 testlab kernel: [<ffffffff8109e2b0>] ? rescuer_thread+0x400/0x400
Jan 28 04:06:46 testlab kernel: [<ffffffff810a5aef>] kthread+0xcf/0xe0
Jan 28 04:06:46 testlab kernel: [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
Jan 28 04:06:46 testlab kernel: [<ffffffff81645818>] ret_from_fork+0x58/0x90
Jan 28 04:06:46 testlab kernel: [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
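To confirm this signature on an affected host, the kernel log can be filtered for hung-task reports tied to the kdmremove workqueue. The sketch below is illustrative only: it inlines a few sample log lines into a temporary file so the filter is self-contained; on a live system the same grep would be run against `dmesg` or `journalctl -k` output instead.

```shell
# Minimal sketch: pick out hung-task reports and the kdmremove workqueue
# line from a saved kernel log. The sample file below is hypothetical,
# standing in for real dmesg/journalctl -k output.
cat <<'EOF' > /tmp/kernel.log
Jan 28 04:06:46 testlab kernel: INFO: task docker-current:27922 blocked for more than 120 seconds.
Jan 28 04:06:46 testlab kernel: Call Trace:
Jan 28 04:06:46 testlab kernel: INFO: task kworker/u64:8:70871 blocked for more than 120 seconds.
Jan 28 04:06:46 testlab kernel: Workqueue: kdmremove do_deferred_remove [dm_mod]
EOF

# Match either a hung-task header or the kdmremove deferred-remove workqueue.
grep -E 'blocked for more than|kdmremove do_deferred_remove' /tmp/kernel.log
```

Seeing both a blocked docker task stuck in dm_suspend and a kworker blocked in do_deferred_remove, as in the trace above, indicates the two are contending on the same device-mapper locking path.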
Environment
- Red Hat Enterprise Linux 7.2
- kernel: 3.10.0-327.4.4.el7.x86_64
- docker: docker-1.10.3-44.el7.x86_64
