Chapter 4. NVMe over fabrics using RDMA

In an NVMe over RDMA (NVMe/RDMA) setup, you configure an NVMe target and an NVMe initiator.

As a system administrator, complete the tasks in the following sections to deploy the NVMe/RDMA setup.

4.1. Overview of NVMe over fabric devices

Non-volatile Memory Express (NVMe) is an interface that allows host software utilities to communicate with solid-state drives.

Use the following types of fabric transport to configure NVMe over fabric devices:

NVMe over Remote Direct Memory Access (NVMe/RDMA)
For information on how to configure NVMe/RDMA, see NVMe over fabrics using RDMA.
NVMe over Fibre Channel (FC-NVMe)
For information on how to configure FC-NVMe, see NVMe over fabrics using FC.

When using Fibre Channel (FC) or Remote Direct Memory Access (RDMA), the solid-state drive does not have to be local to your system; it can be configured remotely through an FC or RDMA controller.

4.2. Setting up an NVMe/RDMA target using configfs

Use this procedure to configure an NVMe/RDMA target using configfs. A consolidated script that combines these steps is shown after the procedure.

Prerequisites

  • Verify that you have a block device to assign to the nvmet subsystem.

Procedure

  1. Load the nvmet-rdma module and create the subsystem:

    # modprobe nvmet-rdma
    
    # mkdir /sys/kernel/config/nvmet/subsystems/testnqn
    
    # cd /sys/kernel/config/nvmet/subsystems/testnqn

    Replace testnqn with the subsystem name.

  2. Allow any host to connect to this target:

    # echo 1 > attr_allow_any_host
  3. Configure a namespace:

    # mkdir namespaces/10
    
    # cd namespaces/10

    Replace 10 with the namespace number.

  4. Set a path to the NVMe device:

    # echo -n /dev/nvme0n1 > device_path
  5. Enable the namespace:

    # echo 1 > enable
  6. Create a directory for an NVMe port:

    # mkdir /sys/kernel/config/nvmet/ports/1
    
    # cd /sys/kernel/config/nvmet/ports/1
  7. Display the IP address of mlx5_ib0:

    # ip addr show mlx5_ib0
    
    8: mlx5_ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc mq state UP group default qlen 256
        link/infiniband 00:00:06:2f:fe:80:00:00:00:00:00:00:e4:1d:2d:03:00:e7:0f:f6 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
        inet 172.31.0.202/24 brd 172.31.0.255 scope global noprefixroute mlx5_ib0
           valid_lft forever preferred_lft forever
        inet6 fe80::e61d:2d03:e7:ff6/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
  8. Set the transport address for the target:

    # echo -n 172.31.0.202 > addr_traddr
  9. Set RDMA as the transport type and 4420 as the transport service ID for the port:

    # echo rdma > addr_trtype
    
    # echo 4420 > addr_trsvcid
  10. Set the address family for the port:

    # echo ipv4 > addr_adrfam
  11. Create a soft link to reference the subsystem from the port:

    # ln -s /sys/kernel/config/nvmet/subsystems/testnqn /sys/kernel/config/nvmet/ports/1/subsystems/testnqn
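
The preceding steps can also be combined into a single script. The following is a minimal sketch that reuses the example values from this procedure (testnqn, namespace 10, /dev/nvme0n1, port ID 1, and 172.31.0.202); adjust them for your environment:

    #!/usr/bin/env bash
    # Sketch: configure an NVMe/RDMA target through configfs.
    # All values below are examples; replace them to match your environment.
    set -e

    SUBSYS=testnqn              # subsystem name (NQN)
    NSID=10                     # namespace number
    DEVICE=/dev/nvme0n1         # block device to export
    PORT=1                      # nvmet port ID
    TRADDR=172.31.0.202         # IP address of the RDMA interface

    modprobe nvmet-rdma

    # Create the subsystem and its namespace
    SUBSYS_DIR=/sys/kernel/config/nvmet/subsystems/$SUBSYS
    mkdir -p "$SUBSYS_DIR"
    echo 1 > "$SUBSYS_DIR"/attr_allow_any_host
    mkdir -p "$SUBSYS_DIR"/namespaces/"$NSID"
    echo -n "$DEVICE" > "$SUBSYS_DIR"/namespaces/"$NSID"/device_path
    echo 1 > "$SUBSYS_DIR"/namespaces/"$NSID"/enable

    # Create the RDMA port
    PORT_DIR=/sys/kernel/config/nvmet/ports/$PORT
    mkdir -p "$PORT_DIR"
    echo -n "$TRADDR" > "$PORT_DIR"/addr_traddr
    echo rdma > "$PORT_DIR"/addr_trtype
    echo 4420 > "$PORT_DIR"/addr_trsvcid
    echo ipv4 > "$PORT_DIR"/addr_adrfam

    # Expose the subsystem on the port
    ln -s "$SUBSYS_DIR" "$PORT_DIR"/subsystems/"$SUBSYS"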

Verification

  • Verify that the NVMe target is listening on the given port and ready for connection requests:

    # dmesg | grep "enabling port"
    [ 1091.413648] nvmet_rdma: enabling port 1 (172.31.0.202:4420)

Additional resources

  • nvme(1) man page

4.3. Setting up the NVMe/RDMA target using nvmetcli

Use the nvmetcli utility to edit, view, and start an NVMe target. The nvmetcli utility provides a command-line interface and an interactive shell option. Use this procedure to configure the NVMe/RDMA target by using nvmetcli.
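
For example, to print the target configuration that is currently loaded as a tree, you can run the ls operation. This is a brief sketch, assuming that your version of nvmetcli provides the ls subcommand:

    # nvmetcli ls

Running nvmetcli without any arguments starts the interactive shell instead.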

Prerequisites

  • Verify that you have a block device to assign to the nvmet subsystem.
  • Execute the following nvmetcli operations as a root user.

Procedure

  1. Install the nvmetcli package:

    # yum install nvmetcli
  2. Download the rdma.json file:

    # wget http://git.infradead.org/users/hch/nvmetcli.git/blob_plain/0a6b088db2dc2e5de11e6f23f1e890e4b54fee64:/rdma.json
  3. Edit the rdma.json file and change the traddr value to 172.31.0.202, either manually or with a scripted edit as shown after this procedure.
  4. Set up the target by loading the NVMe target configuration file:

    # nvmetcli restore rdma.json
Note

If the NVMe target configuration file name is not specified, nvmetcli uses the /etc/nvmet/config.json file.
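
Instead of editing rdma.json by hand in step 3, you can also script the address change. The following is a sketch that assumes the downloaded file follows the standard nvmetcli layout, in which each entry of the ports list contains an addr object with a traddr field, and that the jq utility is installed. It writes the modified configuration to a new file, rdma-edited.json:

    # jq '.ports[0].addr.traddr = "172.31.0.202"' rdma.json > rdma-edited.json

    # nvmetcli restore rdma-edited.json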

Verification

  • Verify that the NVMe target is listening on the given port and ready for connection requests:

    # dmesg | tail -1
    [ 4797.132647] nvmet_rdma: enabling port 2 (172.31.0.202:4420)
  • Optional: Clear the current NVMe target:

    # nvmetcli clear
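
After clearing the configuration, the nvmet configfs tree no longer contains any subsystems or ports. For example, you can confirm this by listing the directories that were populated earlier:

    # ls /sys/kernel/config/nvmet/subsystems /sys/kernel/config/nvmet/ports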

Additional resources

  • nvmetcli and nvme(1) man pages

4.4. Configuring an NVMe/RDMA client

Use this procedure to configure an NVMe/RDMA client by using the NVMe management command-line interface (nvme-cli) tool. A consolidated sketch of the discovery and connection steps follows the procedure.

Procedure

  1. Install the nvme-cli tool:

    # yum install nvme-cli
  2. Load the nvme-rdma module if it is not loaded:

    # modprobe nvme-rdma
  3. Discover available subsystems on the NVMe target:

    # nvme discover -t rdma -a 172.31.0.202 -s 4420
    
    Discovery Log Number of Records 1, Generation counter 2
    =====Discovery Log Entry 0======
    trtype:  rdma
    adrfam:  ipv4
    subtype: nvme subsystem
    treq:    not specified, sq flow control disable supported
    portid:  1
    trsvcid: 4420
    subnqn:  testnqn
    traddr:  172.31.0.202
    rdma_prtype: not specified
    rdma_qptype: connected
    rdma_cms:    rdma-cm
    rdma_pkey: 0x0000
  4. Connect to the discovered subsystems:

    # nvme connect -t rdma -n testnqn -a 172.31.0.202 -s 4420
    
    # lsblk
    NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda                            8:0    0 465.8G  0 disk
    ├─sda1                         8:1    0     1G  0 part /boot
    └─sda2                         8:2    0 464.8G  0 part
      ├─rhel_rdma--virt--03-root 253:0    0    50G  0 lvm  /
      ├─rhel_rdma--virt--03-swap 253:1    0     4G  0 lvm  [SWAP]
      └─rhel_rdma--virt--03-home 253:2    0 410.8G  0 lvm  /home
    nvme0n1
    
    # cat /sys/class/nvme/nvme0/transport
    rdma

    Replace testnqn with the NVMe subsystem name.

    Replace 172.31.0.202 with the target IP address.

    Replace 4420 with the port number.
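
The discovery and connection steps can also be combined into a short script. The following is a minimal sketch that reuses the example values from this procedure (testnqn, 172.31.0.202, and port 4420); adjust them for your environment:

    #!/usr/bin/env bash
    # Sketch: connect an NVMe/RDMA client to a remote subsystem.
    # All values below are examples; replace them to match your environment.
    set -e

    SUBSYS=testnqn          # NVMe subsystem name (NQN)
    TRADDR=172.31.0.202     # target IP address
    TRSVCID=4420            # target port number

    modprobe nvme-rdma

    # List the subsystems that the target exports
    nvme discover -t rdma -a "$TRADDR" -s "$TRSVCID"

    # Connect to the subsystem and list the resulting NVMe devices
    nvme connect -t rdma -n "$SUBSYS" -a "$TRADDR" -s "$TRSVCID"
    nvme list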

Verification

  • List the NVMe devices that are currently connected:

    # nvme list
  • Optional: Disconnect from the target:

    # nvme disconnect -n testnqn
    NQN:testnqn disconnected 1 controller(s)
    
    # lsblk
    NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda                            8:0    0 465.8G  0 disk
    ├─sda1                         8:1    0     1G  0 part /boot
    └─sda2                         8:2    0 464.8G  0 part
      ├─rhel_rdma--virt--03-root 253:0    0    50G  0 lvm  /
      ├─rhel_rdma--virt--03-swap 253:1    0     4G  0 lvm  [SWAP]
      └─rhel_rdma--virt--03-home 253:2    0 410.8G  0 lvm  /home

Additional resources

  • nvme(1) man page