Chapter 6. Testing InfiniBand networks
This section provides procedures for testing InfiniBand networks.
6.1. Testing early InfiniBand RDMA operations
This section describes how to test InfiniBand remote direct memory access (RDMA) operations.
This section applies only to InfiniBand devices. If you use iWARP or RoCE/IBoE devices, which are IP-based, see Section 6.2, "Testing an IPoIB using the ping utility" and Section 6.3, "Testing an RDMA network using qperf after IPoIB is configured".
Prerequisites
- RDMA is configured.
- The libibverbs-utils and infiniband-diags packages are installed.
Procedure
List the available InfiniBand devices:
# ibv_devices
    device                 node GUID
    ------              ----------------
    mlx4_0              0002c903003178f0
    mlx4_1              f4521403007bcba0
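The device names also appear as directories under sysfs, which is a quick alternative check using a standard kernel interface. On the system above, the following command lists mlx4_0 and mlx4_1:

# ls /sys/class/infiniband/
mlx4_0  mlx4_1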
Display the information for a specific InfiniBand device. For example, to display the information of the mlx4_1 device, enter:

# ibv_devinfo -d mlx4_1
hca_id: mlx4_1
        transport:                      InfiniBand (0)
        fw_ver:                         2.30.8000
        node_guid:                      f452:1403:007b:cba0
        sys_image_guid:                 f452:1403:007b:cba3
        vendor_id:                      0x02c9
        vendor_part_id:                 4099
        hw_ver:                         0x0
        board_id:                       MT_1090120019
        phys_port_cnt:                  2
                port:   1
                        state:                  PORT_ACTIVE (4)
                        max_mtu:                4096 (5)
                        active_mtu:             2048 (4)
                        sm_lid:                 2
                        port_lid:               2
                        port_lmc:               0x01
                        link_layer:             InfiniBand

                port:   2
                        state:                  PORT_ACTIVE (4)
                        max_mtu:                4096 (5)
                        active_mtu:             4096 (5)
                        sm_lid:                 0
                        port_lid:               0
                        port_lmc:               0x00
                        link_layer:             Ethernet
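To check the port state of every device at once, you can combine ibv_devices and ibv_devinfo. The following Bash loop is a minimal sketch, not part of the original procedure:

for device in $(ibv_devices | awk 'NR>2 {print $1}'); do
    # Print the device name, then only the state and link-layer lines
    echo "== ${device} =="
    ibv_devinfo -d "${device}" | grep -E 'state:|link_layer:'
done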
Display the basic status of an InfiniBand device. For example, to display the status of the mlx4_1 device, enter:

# ibstat mlx4_1
CA 'mlx4_1'
        CA type: MT4099
        Number of ports: 2
        Firmware version: 2.30.8000
        Hardware version: 0
        Node GUID: 0xf4521403007bcba0
        System image GUID: 0xf4521403007bcba3
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 56
                Base lid: 2
                LMC: 1
                SM lid: 2
                Capability mask: 0x0251486a
                Port GUID: 0xf4521403007bcba1
                Link layer: InfiniBand
        Port 2:
                State: Active
                Physical state: LinkUp
                Rate: 40
                Base lid: 0
                LMC: 0
                SM lid: 0
                Capability mask: 0x04010000
                Port GUID: 0xf65214fffe7bcba2
                Link layer: Ethernet
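If you do not know the CA name, running ibstat without arguments displays the status of all channel adapters, and ibstat -l lists only their names. For example, on the system above:

# ibstat -l
mlx4_0
mlx4_1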
Use the ibping utility to ping from a client to a server using InfiniBand:

On the host that acts as a server, start ibping in server mode:

# ibping -S -C mlx4_1 -P 1
This command uses the following parameters:
- -S: Enables the server mode.
- -C InfiniBand_CA_name: Sets the CA name to use.
- -P port_number: Sets the port number to use, if the InfiniBand adapter provides multiple ports.
On the host that acts as a client, use ibping as follows:

# ibping -c 50 -C mlx4_0 -P 1 -L 2
- -c number: Sends this number of packets to the server.
- -C InfiniBand_CA_name: Sets the CA name to use.
- -P port_number: Sets the port number to use, if the InfiniBand adapter provides multiple ports.
- -L port_LID: Sets the Local Identifier (LID) to use; see the example after this list for how to determine it.
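The value passed with -L must be the LID of the server port that you ping. In the example above, port 1 of the mlx4_1 device on the server has Base lid: 2 in the ibstat output, which is why the client passes -L 2. To read only that field on the server, enter:

# ibstat mlx4_1 1 | grep 'Base lid'
Base lid: 2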
Additional resources
- For further details about ibping parameters, see the ibping(8) man page.
6.2. Testing an IPoIB using the ping utility
After you have configured IPoIB, use the ping utility to send ICMP packets to test the IPoIB connection.
Prerequisites
- The two RDMA hosts are connected in the same InfiniBand fabric with RDMA ports.
- The IPoIB interfaces in both hosts are configured with IP addresses within the same subnet.
Procedure
Use the ping utility to send ICMP packets to the remote host’s InfiniBand adapter:

# ping -c5 192.0.2.1
This command sends five ICMP packets to the IP address 192.0.2.1.
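If the ping fails, first verify that the local IPoIB interface is up and has the expected address. A minimal check, assuming the interface is named ib0 (the name can differ on your system):

# ip address show ib0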
6.3. Testing an RDMA network using qperf after IPoIB is configured
This procedure provides examples of how to display the InfiniBand adapter configuration and how to measure bandwidth and latency between two hosts using the qperf utility.
Prerequisites
- The qperf package is installed on both hosts.
- IPoIB is configured on both hosts.
Procedure
Start qperf on one of the hosts without any options to act as a server:

# qperf
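By default, qperf listens on TCP port 19765. If the server host runs firewalld, you might need to open this port first; for example:

# firewall-cmd --permanent --add-port=19765/tcp
# firewall-cmd --reload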
Use the following commands on the client. The commands use port 1 of the mlx4_0 host channel adapter in the client to connect to IP address 192.0.2.1 assigned to the InfiniBand adapter in the server.

To display the configuration, enter:
# qperf -v -i mlx4_0:1 192.0.2.1 conf
-------------------------
conf:
    loc_node   =  rdma-dev-01.lab.bos.redhat.com
    loc_cpu    =  12 Cores: Mixed CPUs
    loc_os     =  Linux 4.18.0-187.el8.x86_64
    loc_qperf  =  0.4.11
    rem_node   =  rdma-dev-00.lab.bos.redhat.com
    rem_cpu    =  12 Cores: Mixed CPUs
    rem_os     =  Linux 4.18.0-187.el8.x86_64
    rem_qperf  =  0.4.11
-------------------------
To display the Reliable Connection (RC) streaming two-way bandwidth, enter:

# qperf -v -i mlx4_0:1 192.0.2.1 rc_bi_bw
-------------------------
rc_bi_bw:
    bw             =  10.7 GB/sec
    msg_rate       =   163 K/sec
    loc_id         =  mlx4_0
    rem_id         =  mlx4_0:1
    loc_cpus_used  =    65 % cpus
    rem_cpus_used  =    62 % cpus
-------------------------
To display the RC streaming one-way bandwidth, enter:

# qperf -v -i mlx4_0:1 192.0.2.1 rc_bw
-------------------------
rc_bw:
    bw              =  6.19 GB/sec
    msg_rate        =  94.4 K/sec
    loc_id          =  mlx4_0
    rem_id          =  mlx4_0:1
    send_cost       =  63.5 ms/GB
    recv_cost       =    63 ms/GB
    send_cpus_used  =  39.5 % cpus
    recv_cpus_used  =    39 % cpus
-------------------------
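qperf can also measure latency, and plain TCP performance over the IPoIB addresses. For example, to display the RC one-way latency and the TCP bandwidth and latency, enter the following commands (output omitted here because the values depend on your hardware):

# qperf -v -i mlx4_0:1 192.0.2.1 rc_lat
# qperf 192.0.2.1 tcp_bw tcp_lat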
Additional resources
- For further details about qperf, see the qperf(1) man page.