Chapter 13. Overview of NVMe over fabric devices

Non-volatile Memory Express (NVMe) is an interface that allows host software utilities to communicate with solid-state drives.

Use the following types of fabric transport to configure NVMe over fabric devices:

  • NVMe over fabrics using Remote Direct Memory Access (RDMA).
  • NVMe over fabrics using Fibre Channel (FC-NVMe).

When using FC or RDMA, the solid-state drive does not have to be local to your system; you can configure it remotely through an FC or RDMA controller.

13.1. NVMe over fabrics using RDMA

In an NVMe/RDMA setup, you configure an NVMe target and an NVMe initiator.

As a system administrator, complete the tasks in the following sections to deploy NVMe over fabrics using RDMA (NVMe/RDMA):

13.1.1. Setting up an NVMe/RDMA target using configfs

Use this procedure to configure an NVMe/RDMA target using configfs.

Prerequisites

  • Verify that you have a block device to assign to the nvmet subsystem.

Procedure

  1. Create the nvmet-rdma subsystem:

    # modprobe nvmet-rdma
    
    # mkdir /sys/kernel/config/nvmet/subsystems/testnqn
    
    # cd /sys/kernel/config/nvmet/subsystems/testnqn

    Replace testnqn with the subsystem name.

  2. Allow any host to connect to this target:

    # echo 1 > attr_allow_any_host
  3. Configure a namespace:

    # mkdir namespaces/10
    
    # cd namespaces/10

    Replace 10 with the namespace number.

  4. Set a path to the NVMe device:

    # echo -n /dev/nvme0n1 > device_path
  5. Enable the namespace:

    # echo 1 > enable
  6. Create a directory with an NVMe port:

    # mkdir /sys/kernel/config/nvmet/ports/1
    
    # cd /sys/kernel/config/nvmet/ports/1
  7. Display the IP address of mlx5_ib0:

    # ip addr show mlx5_ib0
    
    8: mlx5_ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc mq state UP group default qlen 256
        link/infiniband 00:00:06:2f:fe:80:00:00:00:00:00:00:e4:1d:2d:03:00:e7:0f:f6 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
        inet 172.31.0.202/24 brd 172.31.0.255 scope global noprefixroute mlx5_ib0
           valid_lft forever preferred_lft forever
        inet6 fe80::e61d:2d03:e7:ff6/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
  8. Set the transport address for the target:

    # echo -n 172.31.0.202 > addr_traddr
  9. Set RDMA as the transport type and 4420 as the transport service ID:

    # echo rdma > addr_trtype
    
    # echo 4420 > addr_trsvcid
  10. Set the address family for the port:

    # echo ipv4 > addr_adrfam
  11. Create a soft link:

    # ln -s /sys/kernel/config/nvmet/subsystems/testnqn   /sys/kernel/config/nvmet/ports/1/subsystems/testnqn
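
    The eleven steps above can be collected into one shell function. The following is a minimal sketch, not a supported tool: the run, wr, and setup_nvmet_rdma_target names and the DRY_RUN switch are illustrative, and the default values are the examples used in this procedure. Executing it for real requires root and RDMA-capable hardware; set DRY_RUN=1 to print the actions instead of performing them.

```shell
# Sketch of the configfs steps above; helper names (run, wr) and the
# DRY_RUN switch are illustrative, not part of the nvmet interface.

# Execute a command, or print it when DRY_RUN=1.
run() {
    if [ "${DRY_RUN:-0}" = 1 ]; then echo "+ $*"; else "$@"; fi
}

# Write a value to a configfs attribute, or print the write when DRY_RUN=1.
wr() {
    if [ "${DRY_RUN:-0}" = 1 ]; then
        echo "+ echo -n '$2' > $1"
    else
        echo -n "$2" > "$1"
    fi
}

setup_nvmet_rdma_target() {
    subnqn=${1:-testnqn}        # subsystem name
    nsid=${2:-10}               # namespace number
    dev=${3:-/dev/nvme0n1}      # backing block device
    traddr=${4:-172.31.0.202}   # transport address
    cfg=/sys/kernel/config/nvmet

    run modprobe nvmet-rdma
    run mkdir -p "$cfg/subsystems/$subnqn"
    wr  "$cfg/subsystems/$subnqn/attr_allow_any_host" 1
    run mkdir -p "$cfg/subsystems/$subnqn/namespaces/$nsid"
    wr  "$cfg/subsystems/$subnqn/namespaces/$nsid/device_path" "$dev"
    wr  "$cfg/subsystems/$subnqn/namespaces/$nsid/enable" 1
    run mkdir -p "$cfg/ports/1"
    wr  "$cfg/ports/1/addr_traddr" "$traddr"
    wr  "$cfg/ports/1/addr_trtype" rdma
    wr  "$cfg/ports/1/addr_trsvcid" 4420
    wr  "$cfg/ports/1/addr_adrfam" ipv4
    run ln -s "$cfg/subsystems/$subnqn" "$cfg/ports/1/subsystems/$subnqn"
}
```

    As root, a call such as `setup_nvmet_rdma_target testnqn 10 /dev/nvme0n1 172.31.0.202` then performs the whole procedure in order.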

Verification steps

  • Verify that the NVMe target is listening on the given port and ready for connection requests:

    # dmesg | grep "enabling port"
    [ 1091.413648] nvmet_rdma: enabling port 1 (172.31.0.202:4420)

Additional resources

  • The nvme man page.

13.1.2. Setting up the NVMe/RDMA target using nvmetcli

Use the nvmetcli utility to edit, view, and start an NVMe target. The nvmetcli utility provides a command line and an interactive shell option. Use this procedure to configure the NVMe/RDMA target with nvmetcli.

Prerequisites

  • Verify that you have a block device to assign to the nvmet subsystem.
  • Execute the nvmetcli operations as the root user.

Procedure

  1. Install the nvmetcli package:

    # yum install nvmetcli
  2. Download the rdma.json file:

    # wget http://git.infradead.org/users/hch/nvmetcli.git/blob_plain/0a6b088db2dc2e5de11e6f23f1e890e4b54fee64:/rdma.json
  3. Edit the rdma.json file and change the traddr value to 172.31.0.202.
  4. Set up the target by loading the NVMe target configuration file:

    # nvmetcli restore rdma.json
Note

If the NVMe target configuration file name is not specified, nvmetcli uses the /etc/nvmet/config.json file.
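
If you prefer to write the configuration file by hand rather than download it, the following sketch shows the general shape of a JSON file describing the same target as the configfs procedure in the previous section. This is an illustration, not a schema reference; compare it against the output of nvmetcli on your own system before relying on it.

```json
{
  "ports": [
    {
      "addr": {
        "adrfam": "ipv4",
        "traddr": "172.31.0.202",
        "trsvcid": "4420",
        "trtype": "rdma"
      },
      "portid": 1,
      "referrals": [],
      "subsystems": ["testnqn"]
    }
  ],
  "subsystems": [
    {
      "attr": {"allow_any_host": "1"},
      "namespaces": [
        {
          "device": {"path": "/dev/nvme0n1"},
          "enable": 1,
          "nsid": 10
        }
      ],
      "nqn": "testnqn"
    }
  ]
}
```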

Verification steps

  • Verify that the NVMe target is listening on the given port and ready for connection requests:

    # dmesg | tail -1
    [ 4797.132647] nvmet_rdma: enabling port 2 (172.31.0.202:4420)
  • (Optional) Clear the current NVMe target:

    # nvmetcli clear

Additional resources

  • The nvmetcli man page.
  • The nvme man page.

13.1.3. Configuring an NVMe/RDMA client

Use this procedure to configure an NVMe/RDMA client using the NVMe management command line interface (nvme-cli) tool.

Procedure

  1. Install the nvme-cli tool:

    # yum install nvme-cli
  2. Load the nvme-rdma module if it is not loaded:

    # modprobe nvme-rdma
  3. Discover available subsystems on the NVMe target:

    # nvme discover -t rdma -a 172.31.0.202 -s 4420
    
    Discovery Log Number of Records 1, Generation counter 2
    =====Discovery Log Entry 0======
    trtype:  rdma
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified, sq flow control disable supported
    portid:  1
    trsvcid: 4420
    subnqn:  testnqn
    traddr:  172.31.0.202
    rdma_prtype: not specified
    rdma_qptype: connected
    rdma_cms:    rdma-cm
    rdma_pkey: 0x0000
  4. Connect to the discovered subsystems:

    # nvme connect -t rdma -n testnqn -a 172.31.0.202 -s 4420
    
    # lsblk
    NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda                            8:0    0 465.8G  0 disk
    ├─sda1                         8:1    0     1G  0 part /boot
    └─sda2                         8:2    0 464.8G  0 part
      ├─rhel_rdma--virt--03-root 253:0    0    50G  0 lvm  /
      ├─rhel_rdma--virt--03-swap 253:1    0     4G  0 lvm  [SWAP]
      └─rhel_rdma--virt--03-home 253:2    0 410.8G  0 lvm  /home
    nvme0n1
    
    # cat /sys/class/nvme/nvme0/transport
    rdma

    Replace testnqn with the NVMe subsystem name.

    Replace 172.31.0.202 with the target IP address.

    Replace 4420 with the port number.
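
    Steps 2 to 4 above can be wrapped in a small helper. This is a sketch: the connect_nvme_rdma name and the DRY_RUN switch are illustrative, and running it for real requires root, the nvme-rdma module, and a reachable target. Set DRY_RUN=1 to print the commands instead of executing them.

```shell
# Sketch wrapping the client-side steps; the function name and
# DRY_RUN switch are illustrative, not part of nvme-cli.
connect_nvme_rdma() {
    nqn=${1:-testnqn}         # NVMe subsystem name
    addr=${2:-172.31.0.202}   # target IP address
    svcid=${3:-4420}          # port number
    if [ "${DRY_RUN:-0}" = 1 ]; then
        echo "+ modprobe nvme-rdma"
        echo "+ nvme discover -t rdma -a $addr -s $svcid"
        echo "+ nvme connect -t rdma -n $nqn -a $addr -s $svcid"
        return
    fi
    modprobe nvme-rdma
    nvme discover -t rdma -a "$addr" -s "$svcid"
    nvme connect -t rdma -n "$nqn" -a "$addr" -s "$svcid"
}
```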

Verification steps

  • List the NVMe devices that are currently connected:

    # nvme list
  • (Optional) Disconnect from the target:

    # nvme disconnect -n testnqn
    NQN:testnqn disconnected 1 controller(s)
    
    # lsblk
    NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda                            8:0    0 465.8G  0 disk
    ├─sda1                         8:1    0     1G  0 part /boot
    └─sda2                         8:2    0 464.8G  0 part
      ├─rhel_rdma--virt--03-root 253:0    0    50G  0 lvm  /
      ├─rhel_rdma--virt--03-swap 253:1    0     4G  0 lvm  [SWAP]
      └─rhel_rdma--virt--03-home 253:2    0 410.8G  0 lvm  /home

13.2. NVMe over fabrics using FC

NVMe over Fibre Channel (FC-NVMe) is fully supported in initiator mode when used with certain Broadcom Emulex and Marvell QLogic Fibre Channel adapters. As a system administrator, complete the tasks in the following sections to deploy FC-NVMe:

13.2.1. Configuring the NVMe initiator for Broadcom adapters

Use this procedure to configure the NVMe initiator client for Broadcom adapters using the NVMe management command line interface (nvme-cli) tool.

Procedure

  1. Install the nvme-cli tool:

    # yum install nvme-cli

    This creates the hostnqn file in the /etc/nvme/ directory. The hostnqn file identifies the NVMe host.

    To generate a new hostnqn:

    # nvme gen-hostnqn
  2. Find the WWNN and WWPN of the local and remote ports and use the output to find the subsystem NQN:

    # cat /sys/class/scsi_host/host*/nvme_info
    
    NVME Initiator Enabled
    XRI Dist lpfc0 Total 6144 IO 5894 ELS 250
    NVME LPORT lpfc0 WWPN x10000090fae0b5f5 WWNN x20000090fae0b5f5 DID x010f00 ONLINE
    NVME RPORT       WWPN x204700a098cbcac6 WWNN x204600a098cbcac6 DID x01050e TARGET DISCSRVC ONLINE
    
    NVME Statistics
    LS: Xmt 000000000e Cmpl 000000000e Abort 00000000
    LS XMIT: Err 00000000  CMPL: xb 00000000 Err 00000000
    Total FCP Cmpl 00000000000008ea Issue 00000000000008ec OutIO 0000000000000002
        abort 00000000 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000
    FCP CMPL: xb 00000000 Err 00000000
    # nvme discover --transport fc --traddr nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 --host-traddr nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5
    
    Discovery Log Number of Records 2, Generation counter 49530
    =====Discovery Log Entry 0======
    trtype:  fc
    adrfam:  fibre-channel
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: none
    subnqn:  nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1
    traddr:  nn-0x204600a098cbcac6:pn-0x204700a098cbcac6

    Replace nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 with the traddr.

    Replace nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 with the host_traddr.

  3. Connect to the NVMe target using the nvme-cli:

    # nvme connect --transport fc --traddr nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 --host-traddr nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 -n nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1

    Replace nn-0x204600a098cbcac6:pn-0x204700a098cbcac6 with the traddr.

    Replace nn-0x20000090fae0b5f5:pn-0x10000090fae0b5f5 with the host_traddr.

    Replace nqn.1992-08.com.netapp:sn.e18bfca87d5e11e98c0800a098cbcac6:subsystem.st14_nvme_ss_1_1 with the subnqn.
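
    Picking the WWNN/WWPN pairs out of the nvme_info output by hand is error-prone. The following awk sketch (the fc_traddrs name is illustrative) turns the NVME LPORT and NVME RPORT lines from step 2 into the nn-…:pn-… strings that nvme discover and nvme connect expect. It assumes a single local and a single remote port.

```shell
# Sketch: read lpfc nvme_info text on stdin and print
# "<host-traddr> <traddr>" in the nn-0x...:pn-0x... form.
# Assumes one NVME LPORT line and one NVME RPORT line.
fc_traddrs() {
    awk '
        /NVME LPORT/ { for (i = 1; i <= NF; i++) {
            if ($i == "WWPN") lpn = $(i+1)
            if ($i == "WWNN") lnn = $(i+1) } }
        /NVME RPORT/ { for (i = 1; i <= NF; i++) {
            if ($i == "WWPN") rpn = $(i+1)
            if ($i == "WWNN") rnn = $(i+1) } }
        END {
            # the kernel prints "x..." values; prefix them with 0
            sub(/^x/, "0x", lpn); sub(/^x/, "0x", lnn)
            sub(/^x/, "0x", rpn); sub(/^x/, "0x", rnn)
            printf "nn-%s:pn-%s nn-%s:pn-%s\n", lnn, lpn, rnn, rpn
        }'
}
```

    For example, `cat /sys/class/scsi_host/host*/nvme_info | fc_traddrs` prints the host-traddr value first, then the traddr value.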

Verification steps

  • List the NVMe devices that are currently connected:

    # nvme list
    Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
    ---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
    /dev/nvme0n1     80BgLFM7xMJbAAAAAAAC NetApp ONTAP Controller                  1         107.37  GB / 107.37  GB      4 KiB +  0 B   FFFFFFFF
    # lsblk |grep nvme
    nvme0n1                     259:0    0   100G  0 disk

13.2.2. Configuring the NVMe initiator for QLogic adapters

Use this procedure to configure the NVMe initiator client for QLogic adapters using the NVMe management command line interface (nvme-cli) tool.

Procedure

  1. Install the nvme-cli tool:

    # yum install nvme-cli

    This creates the hostnqn file in the /etc/nvme/ directory. The hostnqn file identifies the NVMe host.

    To generate a new hostnqn:

    # nvme gen-hostnqn
  2. Remove and reload the qla2xxx module:

    # rmmod qla2xxx
    # modprobe qla2xxx
  3. Find the WWNN and WWPN of the local and remote ports:

    # dmesg |grep traddr
    
    [    6.139862] qla2xxx [0000:04:00.0]-ffff:0: register_localport: host-traddr=nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 on portID:10700
    [    6.241762] qla2xxx [0000:04:00.0]-2102:0: qla_nvme_register_remote: traddr=nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 PortID:01050d

    Using this host-traddr and traddr, find the subsystem NQN:

    # nvme discover --transport fc --traddr nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 --host-traddr nn-0x20000024ff19bb62:pn-0x21000024ff19bb62
    
    Discovery Log Number of Records 2, Generation counter 49530
    =====Discovery Log Entry 0======
    trtype:  fc
    adrfam:  fibre-channel
    subtype: nvme subsystem
    treq:    not specified
    portid:  0
    trsvcid: none
    subnqn:  nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468
    traddr:  nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6

    Replace nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 with the traddr.

    Replace nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 with the host_traddr.

  4. Connect to the NVMe target using the nvme-cli tool:

    # nvme connect --transport fc --traddr nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 --host-traddr nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 -n nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468

    Replace nn-0x203b00a098cbcac6:pn-0x203d00a098cbcac6 with the traddr.

    Replace nn-0x20000024ff19bb62:pn-0x21000024ff19bb62 with the host_traddr.

    Replace nqn.1992-08.com.netapp:sn.c9ecc9187b1111e98c0800a098cbcac6:subsystem.vs_nvme_multipath_1_subsystem_468 with the subnqn.
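
    As with the Broadcom adapters, the address strings can be extracted from the kernel log mechanically. The following sed sketch (the qla_traddrs name is illustrative) assumes the qla2xxx message format shown in step 3.

```shell
# Sketch: read `dmesg | grep traddr` output on stdin and print the
# host-traddr and traddr values on separate lines.
qla_traddrs() {
    sed -n \
        -e 's/.*host-traddr=\(nn-[^ ]*:pn-[^ ]*\).*/host-traddr \1/p' \
        -e 's/.*[ =]traddr=\(nn-[^ ]*:pn-[^ ]*\).*/traddr \1/p'
}
```

    For example, `dmesg | grep traddr | qla_traddrs` prints one line per value, ready to paste into the nvme discover and nvme connect commands.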

Verification steps

  • List the NVMe devices that are currently connected:

    # nvme list
    Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
    ---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
    /dev/nvme0n1     80BgLFM7xMJbAAAAAAAC NetApp ONTAP Controller                  1         107.37  GB / 107.37  GB      4 KiB +  0 B   FFFFFFFF
    
    # lsblk |grep nvme
    nvme0n1                     259:0    0   100G  0 disk
