Chapter 7. Testing InfiniBand networks
7.1. Testing early InfiniBand RDMA operations
InfiniBand provides low latency and high performance for Remote Direct Memory Access (RDMA).
Apart from InfiniBand, if you use IP-based devices such as Internet Wide-area RDMA Protocol (iWARP), RDMA over Converged Ethernet (RoCE), or InfiniBand over Ethernet (IBoE) devices, see:
- Section 7.2, "Testing an IPoIB connection using the ping utility"
- Section 7.3, "Testing an RDMA network using qperf after IPoIB is configured"
Prerequisites
- You have configured the rdma service.
- You have installed the libibverbs-utils and infiniband-diags packages.
Procedure
List the available InfiniBand devices:
# ibv_devices
    device                 node GUID
    ------              ----------------
    mlx4_0              0002c903003178f0
    mlx4_1              f4521403007bcba0
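The same information is available to scripts through sysfs. The following is a minimal sketch, assuming the kernel exposes the standard /sys/class/infiniband hierarchy; device and port names vary by system:

for dev in /sys/class/infiniband/*; do
    echo "Device: $(basename "$dev")"        # for example, mlx4_0
    for port in "$dev"/ports/*; do
        # Each state file holds a value such as "4: ACTIVE"
        echo "  Port $(basename "$port"): $(cat "$port/state")"
    done
done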
Display the information of the mlx4_1 device:

# ibv_devinfo -d mlx4_1
hca_id: mlx4_1
     transport:               InfiniBand (0)
     fw_ver:                  2.30.8000
     node_guid:               f452:1403:007b:cba0
     sys_image_guid:          f452:1403:007b:cba3
     vendor_id:               0x02c9
     vendor_part_id:          4099
     hw_ver:                  0x0
     board_id:                MT_1090120019
     phys_port_cnt:           2
          port:    1
               state:         PORT_ACTIVE (4)
               max_mtu:       4096 (5)
               active_mtu:    2048 (4)
               sm_lid:        2
               port_lid:      2
               port_lmc:      0x01
               link_layer:    InfiniBand

          port:    2
               state:         PORT_ACTIVE (4)
               max_mtu:       4096 (5)
               active_mtu:    4096 (5)
               sm_lid:        0
               port_lid:      0
               port_lmc:      0x00
               link_layer:    Ethernet
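When scripting a health check, you can fail early if the device has no active port. A minimal sketch, reusing the mlx4_1 device name from the example above:

if ibv_devinfo -d mlx4_1 | grep -q 'PORT_ACTIVE'; then
    echo "mlx4_1 has at least one active port"
else
    echo "mlx4_1 has no active port" >&2
    exit 1
fi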
Display the status of the mlx4_1 device:

# ibstat mlx4_1
CA 'mlx4_1'
     CA type: MT4099
     Number of ports: 2
     Firmware version: 2.30.8000
     Hardware version: 0
     Node GUID: 0xf4521403007bcba0
     System image GUID: 0xf4521403007bcba3
     Port 1:
          State: Active
          Physical state: LinkUp
          Rate: 56
          Base lid: 2
          LMC: 1
          SM lid: 2
          Capability mask: 0x0251486a
          Port GUID: 0xf4521403007bcba1
          Link layer: InfiniBand
     Port 2:
          State: Active
          Physical state: LinkUp
          Rate: 40
          Base lid: 0
          LMC: 0
          SM lid: 0
          Capability mask: 0x04010000
          Port GUID: 0xf65214fffe7bcba2
          Link layer: Ethernet
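The Base lid value of the server's port is what the ibping client expects for its -L option in the steps below. A sketch that extracts it from ibstat, assuming port 1 of mlx4_1 as in the example output:

# Limit ibstat to port 1 and take the third field of the "Base lid:" line
lid=$(ibstat mlx4_1 1 | awk '/Base lid:/ {print $3}')
echo "Port 1 base LID: $lid"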
The ibping utility pings an InfiniBand address and runs as a client/server pair.

To start server mode on a host, use the -S parameter together with the -C InfiniBand channel adapter (CA) name and the -P port number:

# ibping -S -C mlx4_1 -P 1
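The server runs in the foreground until interrupted. For unattended testing, one option (a sketch, not required by ibping) is to run it in the background and stop it after the client finishes:

ibping -S -C mlx4_1 -P 1 &    # start the responder in the background
server_pid=$!
# ... run the client test from the other host ...
kill "$server_pid"            # stop the responder when done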
To start client mode on another host, use the -c parameter to send a number of packets through the -P port number, using the -C InfiniBand channel adapter (CA) name and the -L Local Identifier (LID) of the server's port:

# ibping -c 50 -C mlx4_0 -P 1 -L 2
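Like ping, ibping prints per-packet replies and summary statistics. In a script, a sketch that treats the run as a pass/fail check, assuming ibping exits with a non-zero status when the target does not respond:

if ibping -c 50 -C mlx4_0 -P 1 -L 2 > /dev/null; then
    echo "RDMA path to LID 2 is reachable"
else
    echo "RDMA path to LID 2 failed" >&2
fi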
Additional resources
- ibping(8) man page
7.2. Testing an IPoIB connection using the ping utility
After you have configured IP over InfiniBand (IPoIB), use the ping utility to send ICMP packets to test the IPoIB connection.
Prerequisites
- The two RDMA hosts are connected in the same InfiniBand fabric with RDMA ports.
- The IPoIB interfaces on both hosts are configured with IP addresses within the same subnet.
Procedure
Use the ping utility to send five ICMP packets to the remote host’s InfiniBand adapter:

# ping -c5 192.0.2.1
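A successful ping with the default packet size does not exercise larger IPoIB datagrams. As an optional sketch, you can sweep a few payload sizes with the -s option of ping, reusing the example address:

# Larger payloads help to surface MTU and fragmentation problems
for size in 56 1024 4096 16384; do
    echo "payload ${size} bytes:"
    ping -c 3 -s "$size" 192.0.2.1
done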
7.3. Testing an RDMA network using qperf after IPoIB is configured
The qperf utility measures RDMA and IP performance between two nodes in terms of bandwidth, latency, and CPU utilization.
Prerequisites
- You have installed the qperf package on both hosts.
- IPoIB is configured on both hosts.
Procedure
Start qperf on one of the hosts without any options to act as a server:

# qperf
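The server process keeps running after each test. For scripted runs, a sketch: start it in the background on the server and let the client terminate it with the quit test described in the qperf(1) man page:

# On the server: run qperf in the background
qperf &

# On the client, after all measurements: ask the server to exit
qperf 192.0.2.1 quit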
Use the following commands on the client. The commands use port 1 of the mlx4_0 host channel adapter in the client to connect to IP address 192.0.2.1, which is assigned to the InfiniBand adapter in the server.

Display the configuration of the host channel adapter:
# qperf -v -i mlx4_0:1 192.0.2.1 conf
conf:
    loc_node   =  rdma-dev-01.lab.bos.redhat.com
    loc_cpu    =  12 Cores: Mixed CPUs
    loc_os     =  Linux 4.18.0-187.el8.x86_64
    loc_qperf  =  0.4.11
    rem_node   =  rdma-dev-00.lab.bos.redhat.com
    rem_cpu    =  12 Cores: Mixed CPUs
    rem_os     =  Linux 4.18.0-187.el8.x86_64
    rem_qperf  =  0.4.11
Display the Reliable Connection (RC) streaming two-way bandwidth:
# qperf -v -i mlx4_0:1 192.0.2.1 rc_bi_bw
rc_bi_bw:
    bw             =  10.7 GB/sec
    msg_rate       =   163 K/sec
    loc_id         =  mlx4_0
    rem_id         =  mlx4_0:1
    loc_cpus_used  =    65 % cpus
    rem_cpus_used  =    62 % cpus
Display the RC streaming one-way bandwidth:
# qperf -v -i mlx4_0:1 192.0.2.1 rc_bw
rc_bw:
    bw              =  6.19 GB/sec
    msg_rate        =  94.4 K/sec
    loc_id          =  mlx4_0
    rem_id          =  mlx4_0:1
    send_cost       =  63.5 ms/GB
    recv_cost       =    63 ms/GB
    send_cpus_used  =  39.5 % cpus
    recv_cpus_used  =    39 % cpus
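To collect a broader baseline in one pass, you can loop over several tests. A sketch using the same example adapter, port, and address; rc_lat measures RC latency, and tests such as tcp_bw and tcp_lat measure the IPoIB path itself (see qperf(1) for the full test list):

# Run a set of RDMA tests against the same server
for test in rc_bw rc_bi_bw rc_lat; do
    echo "=== $test ==="
    qperf -v -i mlx4_0:1 192.0.2.1 "$test"
done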
Additional resources
- qperf(1) man page