RHEL 8.5 is failing to discover NVMe devices for a Hitachi SVOS-RF-System SAN

Solution Unverified

Issue

  • RHEL 8.5 is failing to discover NVMe devices for a Hitachi SVOS-RF-System SAN. At boot, the NVMe/FC discovery controller connects and is then removed, but no storage subsystem controllers or namespaces are created (a diagnostic sketch follows the log):
rtkit-daemon[2417]: Watchdog thread running.
kernel: nvme nvme2: NVME-FC{0}: create association : host wwpn 0x1234567890  rport wwpn 0x1234567890: NQN "nqn.2014-08.org.nvmexpress.discovery"
kernel: qla2xxx [0000:37:00.1]-2104:5: qla_nvme_alloc_queue: handle 000000004aacc764, idx =1, qsize 32
kernel: qla2xxx [0000:37:00.1]-2121:5: Returning existing qpair of 00000000bb48d580 for idx=1
avahi-daemon[2432]: Found user 'avahi' (UID 70) and group 'avahi' (GID 70).
kernel: nvme nvme2: queue_size 128 > ctrl maxcmd 0, reducing to maxcmd
kernel: nvme nvme2: NVME-FC{0}: controller connect complete
kernel: nvme nvme2: NVME-FC{0}: new ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery"
kernel: nvme nvme2: Removing ctrl: NQN "nqn.2014-08.org.nvmexpress.discovery"
sssd[2412]: Starting up
smartd[2428]: smartd 7.1 2020-04-05 r5049 [x86_64-linux-4.18.0-348.23.1.el8_5.x86_64] (local build)
smartd[2428]: Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
smartd[2428]: Opened configuration file /etc/smartmontools/smartd.conf
smartd[2428]: Configuration file /etc/smartmontools/smartd.conf was parsed, found DEVICESCAN, scanning devices
systemd[1]: nvmefc-boot-connections.service: Succeeded.
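  • After boot, the state of the boot-time autoconnect can be inspected before retrying by hand. A minimal diagnostic sketch using nvme-cli and systemd (output will vary; after the failure above, no Hitachi subsystem is expected in the listings):
[root@host ~]# systemctl status nvmefc-boot-connections.service
[root@host ~]# nvme list-subsys
[root@host ~]# nvme list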
  • Once the server is up, we can connect manually (a sketch for automating this at boot follows the log below):
[root@host ~]# date; nvme connect --transport fc --traddr nn-0x1234567890:pn-0x1234567890 --host-traddr nn-0x1234567890:pn-0x1234567890 -n nqn.xx-xx.xx.xx.hitachi:nvme:storage-subsystem-xx-xx-xx.xx
[root@host ~]# date; nvme connect --transport fc --traddr nn-0x1234567890:pn-0x1234567890 --host-traddr nn-0x1234567890:pn-0x1234567890 -n nqn.xx-xx.xx.xx.hitachi:nvme:storage-subsystem-xx.x-xxxx-xxxxxx.xxxx

kernel: nvme nvme2: NVME-FC{0}: create association : host wwpn 0x1234567890  rport wwpn 0x1234567890: NQN "nqn.xxxx-xx.xx.xx.hitachi:nvme:storage-subsystem-sn.x-xxxxx-xxxxxx.xxxxx"
kernel: qla2xxx [0000:37:00.0]-2104:1: qla_nvme_alloc_queue: handle 00000000617d65db, idx =1, qsize 32
kernel: qla2xxx [0000:37:00.0]-2121:1: Returning existing qpair of 000000004743a8fb for idx=1
kernel: qla2xxx [0000:37:00.0]-2104:1: qla_nvme_alloc_queue: handle 00000000cd324d1e, idx =1, qsize 128
kernel: qla2xxx [0000:37:00.0]-2121:1: Returning existing qpair of 000000004743a8fb for idx=1
kernel: qla2xxx [0000:37:00.0]-2104:1: qla_nvme_alloc_queue: handle 000000009347ee7d, idx =2, qsize 128
kernel: qla2xxx [0000:37:00.0]-2121:1: Returning existing qpair of 000000005c140c86 for idx=2
kernel: qla2xxx [0000:37:00.0]-2104:1: qla_nvme_alloc_queue: handle 000000001f3117c3, idx =3, qsize 128
kernel: qla2xxx [0000:37:00.0]-2121:1: Returning existing qpair of 000000000c310056 for idx=3
kernel: qla2xxx [0000:37:00.0]-2104:1: qla_nvme_alloc_queue: handle 0000000015870fd7, idx =4, qsize 128
kernel: qla2xxx [0000:37:00.0]-2121:1: Returning existing qpair of 0000000037338b1b for idx=4
kernel: qla2xxx [0000:37:00.0]-2104:1: qla_nvme_alloc_queue: handle 000000009e25d52f, idx =5, qsize 128
kernel: qla2xxx [0000:37:00.0]-2121:1: Returning existing qpair of 00000000e77aa932 for idx=5
kernel: qla2xxx [0000:37:00.0]-2104:1: qla_nvme_alloc_queue: handle 000000001658733e, idx =6, qsize 128
kernel: qla2xxx [0000:37:00.0]-2121:1: Returning existing qpair of 0000000050258748 for idx=6
kernel: qla2xxx [0000:37:00.0]-2104:1: qla_nvme_alloc_queue: handle 0000000003009451, idx =7, qsize 128
kernel: qla2xxx [0000:37:00.0]-2121:1: Returning existing qpair of 00000000e4307999 for idx=7
kernel: qla2xxx [0000:37:00.0]-2104:1: qla_nvme_alloc_queue: handle 00000000dd772ece, idx =8, qsize 128
kernel: qla2xxx [0000:37:00.0]-2121:1: Returning existing qpair of 00000000d8adb43a for idx=8
kernel: nvme nvme2: NVME-FC{0}: controller connect complete
kernel: nvme nvme2: NVME-FC{0}: new ctrl: NQN "nqn.xxxx-xx.xx.xx.hitachi:nvme:storage-subsystem-sn.x-xxxxx-xxxxxxx.xxxxx"
kernel: nvme2n1: detected capacity change from 0 to 10737418240
kernel: nvme2n2: detected capacity change from 0 to 10737418240
kernel: nvme2n3: detected capacity change from 0 to 10737418240
kernel: nvme2n4: detected capacity change from 0 to 10737418240
kernel: nvme2n5: detected capacity change from 0 to 10737418240
kernel: nvme2n6: detected capacity change from 0 to 10737418240
kernel: nvme2n7: detected capacity change from 0 to 10737418240
kernel: nvme2n8: detected capacity change from 0 to 10737418240
kernel: nvme2n9: detected capacity change from 0 to 10737418240
kernel: nvme2n10: detected capacity change from 0 to 10737418240
kernel: qla2xxx [0000:37:00.0]-3002:1: nvme: Sched: Set ZIO exchange threshold to 1.
kernel: qla2xxx [0000:37:00.0]-ffffff:1: SET ZIO Activity exchange threshold to 6.
kernel: qla2xxx [0000:37:00.0]-4000:1: DPC handler sleeping.
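  • A possible way to automate the manual connect at boot (a sketch, not a verified fix for this issue) is nvme-cli's autoconnect path: place the discover/connect options in /etc/nvme/discovery.conf and enable nvmf-autoconnect.service, which runs nvme connect-all against that file at startup. The WWPNs below are the placeholders from the commands above; substitute the real values:
[root@host ~]# cat /etc/nvme/discovery.conf
--transport=fc --traddr=nn-0x1234567890:pn-0x1234567890 --host-traddr=nn-0x1234567890:pn-0x1234567890
[root@host ~]# systemctl enable nvmf-autoconnect.service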

Environment

  • Red Hat Enterprise Linux 8
  • kernel-4.18.0-348.23.1.el8_5
