IO errors logged when filesystem is mounted on NVMe multipath device with discard options

Solution Verified

Issue

  • A kernel WARNING trace is logged by nvme_core (RIP: nvme_setup_discard) when discard I/O is sent to an NVMe dm-multipath device, followed by discard I/O errors and a path failure:
[218499.047714] XFS (dm-7): Mounting V5 Filesystem
[218499.052717] XFS (dm-7): Ending clean mount
[218680.257829] WARNING: CPU: 4 PID: 1863 at drivers/nvme/host/core.c:702 nvme_setup_discard+0x16c/0x1e0 [nvme_core]
[218680.268082] Modules linked in: vfat fat isofs ext4 mbcache jbd2 dm_round_robin dm_service_time dm_multipath sunrpc intel_rapl_msr intel_rapl_common isst_if_common skx_edac nfit libnvdimm x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel ipmi_ssif kvm irqbypass crct10dif_pclmul crc32_pclmul iTCO_wdt dell_smbios iTCO_vendor_support wmi_bmof dell_wmi_descriptor ghash_clmulni_intel dcdbas intel_cstate intel_uncore mei_me intel_rapl_perf pcspkr i2c_i801 lpc_ich mei wmi ipmi_si ipmi_devintf ipmi_msghandler acpi_power_meter ip_tables xfs libcrc32c sd_mod sg mgag200 drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops drm_vram_helper drm_ttm_helper ttm ahci libahci nvme drm crc32c_intel libata nvme_core megaraid_sas tg3 i2c_algo_bit dm_mirror dm_region_hash dm_log dm_mod
[218680.336964] CPU: 4 PID: 1863 Comm: kworker/4:1H Tainted: G          I      --------- -  - 4.18.0-240.10.1.el8_3.x86_64 #1
[218680.347994] Hardware name: Dell Inc. PowerEdge R640/06DKY5, BIOS 2.8.1 06/26/2020
[218680.355604] Workqueue: xfs-log/dm-7 xlog_ioend_work [xfs]
[218680.361091] RIP: 0010:nvme_setup_discard+0x16c/0x1e0 [nvme_core]
[218680.367180] Code: 38 4c 8b 88 08 0d 00 00 4c 2b 0d ef da e7 e5 49 c1 f9 06 49 c1 e1 0c 4c 03 0d f0 da e7 e5 4c 89 c8 48 85 f6 0f 85 df fe ff ff <0f> 0b ba 00 00 00 80 49 01 d1 72 55 48 c7 c2 00 00 00 80 48 2b 15
[218680.386022] RSP: 0018:ffffb1e8c2ee36f0 EFLAGS: 00010202
[218680.391335] RAX: ffff928bcf00d000 RBX: ffff928bf6f94a10 RCX: ffff928c4f00d000
[218680.398554] RDX: 0000000000000005 RSI: 0000000000000000 RDI: 0000000000000078
[218680.405772] RBP: ffffb1e8c2ee37a0 R08: ffff928bcf00d000 R09: ffff928bcf00d000
[218680.412992] R10: 0000000000000008 R11: 0000000000000003 R12: ffff928afa084f00
[218680.420211] R13: 0000000000000001 R14: ffff928de7474000 R15: ffff928b3c4e4b50
[218680.427430] FS:  0000000000000000(0000) GS:ffff928bf7a80000(0000) knlGS:0000000000000000
[218680.435610] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[218680.441443] CR2: 00007f8b19c68934 CR3: 000000034c60a003 CR4: 00000000007606e0
[218680.448680] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[218680.455917] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[218680.463134] PKRU: 55555554
[218680.465934] Call Trace:
[218680.468478]  nvme_setup_cmd+0x151/0x310 [nvme_core]
[218680.473441]  nvme_queue_rq+0x77/0x990 [nvme]
[218680.477811]  ? finish_wait+0x80/0x80
[218680.481476]  ? mempool_alloc+0x67/0x190
[218680.485398]  ? finish_wait+0x80/0x80
[218680.489068]  __blk_mq_try_issue_directly+0xfe/0x230
[218680.494032]  blk_mq_request_issue_directly+0x4e/0xb0
[218680.499086]  ? blk_account_io_start+0xcb/0xe0
[218680.503544]  dm_mq_queue_rq+0x213/0x3f0 [dm_mod]
[218680.508263]  blk_mq_dispatch_rq_list+0xcd/0x6f0
[218680.512880]  ? elv_rb_del+0x1f/0x30
[218680.516462]  ? deadline_remove_request+0x55/0xc0
[218680.521164]  ? dd_dispatch_request+0x1ba/0x230
[218680.525697]  blk_mq_do_dispatch_sched+0x11a/0x160
[218680.530490]  __blk_mq_sched_dispatch_requests+0xfe/0x160
[218680.535898]  blk_mq_sched_dispatch_requests+0x30/0x60
[218680.541038]  __blk_mq_run_hw_queue+0x51/0xd0
[218680.545396]  __blk_mq_delay_run_hw_queue+0x141/0x160
[218680.550449]  blk_mq_sched_insert_requests+0x71/0xf0
[218680.555415]  blk_mq_flush_plug_list+0x196/0x2c0
[218680.560035]  ? __sbitmap_queue_get+0x24/0x90
[218680.564392]  blk_flush_plug_list+0xd7/0x100
[218680.568674]  blk_mq_make_request+0x258/0x5d0
[218680.573041]  generic_make_request+0xcf/0x310
[218680.577401]  submit_bio+0x3c/0x160
[218680.580893]  blk_next_bio+0x33/0x40
[218680.584473]  __blkdev_issue_discard+0xe2/0x1a0
[218680.589029]  xlog_cil_committed+0x18f/0x300 [xfs]
[218680.593818]  ? __switch_to_asm+0x41/0x70
[218680.597828]  ? __switch_to_asm+0x41/0x70
[218680.601838]  ? __switch_to_asm+0x41/0x70
[218680.605865]  xlog_cil_process_committed+0x72/0x90 [xfs]
[218680.611196]  xlog_state_do_callback+0x1db/0x2d0 [xfs]
[218680.616369]  xlog_ioend_work+0x3d/0x90 [xfs]
[218680.620725]  process_one_work+0x1a7/0x360
[218680.624824]  worker_thread+0x30/0x390
[218680.628585]  ? create_worker+0x1a0/0x1a0
[218680.632606]  kthread+0x112/0x130
[218680.635924]  ? kthread_flush_work_fn+0x10/0x10
[218680.640457]  ret_from_fork+0x1f/0x40
[218680.644124] ---[ end trace e8c8a1498d69615e ]---
[218680.648846] device-mapper: multipath: 253:7: Failing path 259:1.
[218680.654967] blk_update_request: I/O error, dev dm-7, sector 120 op 0x3:(DISCARD) flags 0x4000 phys_seg 5 prio class 0
[218680.665716] blk_update_request: I/O error, dev dm-7, sector 8192192 op 0x3:(DISCARD) flags 0x4000 phys_seg 4 prio class 0
[218680.676752] blk_update_request: I/O error, dev dm-7, sector 16384192 op 0x3:(DISCARD) flags 0x4000 phys_seg 4 prio class 0
[218680.687877] blk_update_request: I/O error, dev dm-7, sector 24576192 op 0x3:(DISCARD) flags 0x4000 phys_seg 4 prio class 0
[218680.698992] blk_update_request: I/O error, dev dm-7, sector 32768192 op 0x3:(DISCARD) flags 0x4000 phys_seg 4 prio class 0
[218680.710113] blk_update_request: I/O error, dev dm-7, sector 40960192 op 0x3:(DISCARD) flags 0x4000 phys_seg 4 prio class 0
[218680.721232] blk_update_request: I/O error, dev dm-7, sector 49152192 op 0x3:(DISCARD) flags 0x4000 phys_seg 4 prio class 0
[218680.732351] blk_update_request: I/O error, dev dm-7, sector 57344192 op 0x3:(DISCARD) flags 0x4000 phys_seg 4 prio class 0
[218680.743501] blk_update_request: I/O error, dev dm-7, sector 65536192 op 0x3:(DISCARD) flags 0x4000 phys_seg 4 prio class 0
[218680.754621] blk_update_request: I/O error, dev dm-7, sector 73728192 op 0x3:(DISCARD) flags 0x4000 phys_seg 4 prio class 0
[218684.992355] device-mapper: multipath: 253:7: Reinstating path 259:1.
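The WARNING at drivers/nvme/host/core.c:702 corresponds to a WARN_ON_ONCE in nvme_setup_discard(), which fires when the number of discard ranges walked from the request's bio chain does not match the request's physical segment count. One plausible trigger (an assumption, not confirmed by the trace alone) is a mismatch between the discard limits advertised by the dm-multipath device and those of its underlying NVMe paths, so that merged discards exceed what the driver can encode. As a diagnostic sketch, the discard-related queue limits of all dm-* and nvme* block devices can be compared via sysfs:

```shell
#!/bin/sh
# Compare discard-related queue limits between dm devices (e.g. the
# dm-7 multipath device from the log) and NVMe path devices.
# Device name patterns are examples; adjust them for your system.
for dev in /sys/block/dm-* /sys/block/nvme*; do
    q="$dev/queue"
    # Skip devices that do not expose discard attributes
    [ -r "$q/discard_max_bytes" ] || continue
    printf '%s: discard_max_bytes=%s max_discard_segments=%s discard_granularity=%s\n' \
        "${dev##*/}" \
        "$(cat "$q/discard_max_bytes")" \
        "$(cat "$q/max_discard_segments")" \
        "$(cat "$q/discard_granularity")"
done
```

If the dm device reports a larger max_discard_segments or discard_max_bytes than the NVMe paths beneath it, that inconsistency is worth raising with support alongside the trace above.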

Environment

  • Red Hat Enterprise Linux 8.3
    • NVMe DM-Multipath
    • Filesystem with discard option
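To confirm whether any filesystem is mounted with the discard option (online discard, which issues the discard I/O seen in the trace), the mount table can be checked. A minimal sketch, assuming a standard Linux /proc/mounts:

```shell
#!/bin/sh
# List mounted filesystems whose mount options include "discard".
if grep -w discard /proc/mounts; then
    echo "online discard is in use on the mounts above"
else
    echo "no filesystem is mounted with the discard option"
fi
```

A commonly suggested mitigation for online-discard problems is to remount without the discard option and rely on a periodic fstrim (e.g. the fstrim.timer unit) instead; whether that is the appropriate fix here depends on the article's full resolution.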
