Chapter 12. Configuring NVMe over fabrics using NVMe/FC

The Non-volatile Memory Express™ (NVMe™) over Fibre Channel (NVMe™/FC) transport is fully supported in host mode when used with certain Broadcom Emulex and Marvell QLogic Fibre Channel adapters. As a system administrator, complete the tasks in the following sections to deploy the NVMe/FC setup:

12.1. Overview of NVMe over fabric devices

Non-volatile Memory Express™ (NVMe™) is an interface that allows host software to communicate with solid-state drives.

Use the following types of fabric transport to configure NVMe over fabric devices:

NVMe over Remote Direct Memory Access (NVMe/RDMA)
For information about how to configure NVMe™/RDMA, see Configuring NVMe over fabrics using NVMe/RDMA.
NVMe over Fibre Channel (NVMe/FC)
For information about how to configure NVMe™/FC, see Configuring NVMe over fabrics using NVMe/FC.
NVMe over TCP (NVMe/TCP)
For information about how to configure NVMe™/TCP, see Configuring NVMe over fabrics using NVMe/TCP.

When using NVMe over fabrics, the solid-state drive does not have to be local to your system; it can be configured remotely through an NVMe over fabrics device.
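
Before you begin, you can optionally confirm that the NVMe/FC transport module is available on the host. This is a minimal check, assuming the standard kernel module name nvme_fc:

    # modprobe nvme_fc
    # lsmod | grep nvme_fc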

12.2. Configuring the NVMe host for Broadcom adapters

Use this procedure to configure the Non-volatile Memory Express™ (NVMe™) host for Broadcom adapters by using the NVMe management command line interface (nvme-cli) utility.

Procedure

  1. Install the nvme-cli utility:

    # dnf install nvme-cli

    This creates the hostnqn file in the /etc/nvme/ directory. The hostnqn file identifies the NVMe host.
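
    Optionally, display the host NQN. The value shown here is only illustrative; the string on your system differs:

    # cat /etc/nvme/hostnqn
    nqn.2014-08.org.nvmexpress:uuid:8ae2b12c-3d28-4458-83e3-658e571ed4b8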

  2. Find the World Wide Node Name (WWNN) and World Wide Port Name (WWPN) identifiers of the local and remote ports:

    # cat /sys/class/scsi_host/host*/nvme_info
    
    NVME Host Enabled
    XRI Dist lpfc0 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc0 WWPN x10000090fae0b5f5 WWNN x20000090fae0b5f5 DID x010f00 ONLINE
    NVME RPORT       WWPN x204700a098cbcac6 WWNN x204600a098cbcac6 DID x01050e TARGET DISCSRVC ONLINE
    
    NVME Statistics
    LS: Xmt 000000000e Cmpl 000000000e Abort 00000000
    LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 00000000000008ea Issue 00000000000008ec OutIO 0000000000000002
        abort 00000000 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000
    FCP CMPL: xb 00000000 Err 00000000
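
    In this output, the LPORT line shows the WWNN and WWPN of the local (host) port, and the RPORT line shows the values of the remote (target) port. On a host with many ports, you can optionally narrow the output to just those lines:

    # grep -E "LPORT|RPORT" /sys/class/scsi_host/host*/nvme_info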

    Using these host-traddr and traddr values, find the subsystem NVMe Qualified Name (NQN):

    # nvme discover --transport fc \
        --traddr nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 \
        --host-traddr nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5
    
    Discovery Log Number of Records 2, Generation counter 49530
    =====Discovery Log Entry 0======
    trtype:  fc
    adrfam:  fibre-channel
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: none
    subnqn:  nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1
    traddr:  nn-0x204600a098cbcac6:pn-0x204700a098cbcac6

    Replace nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 with the traddr.

    Replace nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 with the host-traddr.

  3. Connect to the NVMe controller using the nvme-cli:

    # nvme connect --transport fc \
        --traddr nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 \
        --host-traddr nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 \
        -n nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1 \
        -k 5

    Note

    If you see the keep-alive timer (5 seconds) expired! error when the connection time exceeds the default keep-alive timeout value, increase the timeout by using the -k option. For example, you can use -k 7.

    Here,

    Replace nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 with the traddr.

    Replace nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 with the host-traddr.

    Replace nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1 with the subnqn.

    Replace 5 with the keep-alive timeout value in seconds.
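
    Optionally, to re-create the connection automatically, for example after a reboot, you can record the discovery parameters in /etc/nvme/discovery.conf and connect to all discovered subsystems in one step. This is a sketch based on standard nvme-cli behavior, using the example addresses from this procedure:

    # cat >> /etc/nvme/discovery.conf << EOF
    --transport=fc --traddr=nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 --host-traddr=nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5
    EOF
    # nvme connect-all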

Verification

  • List the NVMe devices that are currently connected:

    # nvme list
    Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
    ---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
    /dev/nvme0n1     80BgLFM7xMJbAAAAAAAC NetApp ONTAP Controller                  1         107.37  GB / 107.37  GB      4 KiB +  0 B   FFFFFFFF
    
    
    # lsblk |grep nvme
    nvme0n1                     259:0    0   100G  0 disk

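  • Optionally, display the subsystem and the state of its paths. The output below is illustrative and uses the example values from this procedure:

    # nvme list-subsys
    nvme-subsys0 - NQN=nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1
    \
     +- nvme0 fc traddr=nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 host_traddr=nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 live
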
12.3. Configuring the NVMe host for QLogic adapters

Use this procedure to configure the Non-volatile Memory Express™ (NVMe™) host for QLogic adapters by using the NVMe management command line interface (nvme-cli) utility.

Procedure

  1. Install the nvme-cli utility:

    # dnf install nvme-cli

    This creates the hostnqn file in the /etc/nvme/ directory. The hostnqn file identifies the NVMe host.
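
    If the hostnqn file does not exist on your system, you can generate it; the gen-hostnqn subcommand is part of nvme-cli:

    # nvme gen-hostnqn > /etc/nvme/hostnqn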

  2. Reload the qla2xxx module:

    # modprobe -r qla2xxx
    # modprobe qla2xxx
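
    Optionally, confirm that the module loaded again:

    # lsmod | grep qla2xxx
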
  3. Find the World Wide Node Name (WWNN) and World Wide Port Name (WWPN) identifiers of the local and remote ports:

    # dmesg |grep traddr
    
    [    6.139862] qla2xxx [0000:04:00.0]-ffff:0: register_localport: host-traddr=nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 on portID:10700
    [    6.241762] qla2xxx [0000:04:00.0]-2102:0: qla_nvme_register_remote: traddr=nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 PortID:01050d

    Using these host-traddr and traddr values, find the subsystem NVMe Qualified Name (NQN):

    # nvme discover --transport fc \
        --traddr nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 \
        --host-traddr nn-0x20000024ff19bb62:pn-0x21000024ff19bb62
    
    Discovery Log Number of Records 2, Generation counter 49530
    =====Discovery Log Entry 0======
    trtype:  fc
    adrfam:  fibre-channel
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: none
    subnqn:  nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468
    traddr:  nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6

    Replace nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 with the traddr.

    Replace nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 with the host-traddr.

  4. Connect to the NVMe controller using the nvme-cli tool:

    # nvme connect --transport fc \
        --traddr nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 \
        --host-traddr nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 \
        -n nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468 \
        -k 5

    Note

    If you see the keep-alive timer (5 seconds) expired! error when the connection time exceeds the default keep-alive timeout value, increase the timeout by using the -k option. For example, you can use -k 7.

    Here,

    Replace nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 with the traddr.

    Replace nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 with the host-traddr.

    Replace nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468 with the subnqn.

    Replace 5 with the keep-alive timeout value in seconds.
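
    If you later need to remove the association, for example before reconfiguring the fabric, you can disconnect by subsystem NQN; the disconnect subcommand is part of nvme-cli:

    # nvme disconnect -n nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468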

Verification

  • List the NVMe devices that are currently connected:

    # nvme list
    Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
    ---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
    /dev/nvme0n1     80BgLFM7xMJbAAAAAAAC NetApp ONTAP Controller                  1         107.37  GB / 107.37  GB      4 KiB +  0 B   FFFFFFFF
    
    # lsblk |grep nvme
    nvme0n1                     259:0    0   100G  0 disk
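
  • Optionally, check the controller state through sysfs; an established connection reports live. The controller name nvme0 follows the example above:

    # cat /sys/class/nvme/nvme0/state
    live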

12.4. Next steps