How to use qperf to measure network bandwidth and latency performance?


Environment

  • Red Hat Enterprise Linux 6, 7, 8
  • Networking


Issue

  • How to use qperf to measure network bandwidth and latency performance?
  • Is there a supported alternative to iperf to measure network throughput?
  • How do I test performance of RDMA?



Resolution

Install qperf from the RHEL server channel on both the qperf server and the qperf client:

[root@yourQperfServer ~]# yum install qperf
[root@yourQperfClient ~]# yum install qperf

Checking for Bandwidth

Server (Receiving Data)

Have one system listen as a qperf server:

[root@yourQperfServer ~]# qperf

The server listens on TCP Port 19765 by default. This can be changed with the --listen_port option.
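For example, to run the server on a non-default control port and point the client at it (a sketch; port 20000 is an arbitrary example, not a qperf default):

```shell
# Server: listen on a custom control port (20000 is an arbitrary example)
qperf --listen_port 20000

# Client: connect to the matching control port on the server
qperf --listen_port 20000 <server hostname or ip address> tcp_bw
```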

This port will need to be allowed through any firewall present. On iptables:

[root@yourQperfServer ~]# iptables -I INPUT -m tcp -p tcp --dport 19765 -j ACCEPT
[root@yourQperfServer ~]# iptables -I INPUT -m tcp -p tcp --dport 19766 -j ACCEPT

Or on firewalld: once qperf makes a connection it uses both a control port (19765 by default) and a separate data port, so both must be opened. In this example the data port is fixed to 19766, which the client will select with the -ip option:

[root@yourQperfServer ~]# firewall-cmd --permanent --add-port=19765/tcp --add-port=19766/tcp
success
[root@yourQperfServer ~]# firewall-cmd --reload
success
[root@yourQperfServer ~]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp0s25
  sources:
  services: ssh dhcpv6-client http https
  ports: 19765/tcp 19766/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:


Have the other system connect to the qperf server as a client, fixing the data port to 19766 to match the firewall rules and running the TCP bandwidth test for 60 seconds:

[root@yourQperfClient ~]# qperf -ip 19766 -t 60 --use_bits_per_sec  <server hostname or ip address> tcp_bw


Results are printed on the client only. The following result shows the throughput between these two systems is 16.1 gigabits per second:

    bw  =  16.1 Gb/sec

If the --use_bits_per_sec option is not used, the throughput is reported in bytes per second (GB/sec in this case):

    bw  =  1.94 GB/sec
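The two units differ by a factor of 8 (bits versus bytes); note that the two example readings above come from separate runs, so they do not convert exactly into each other. A quick conversion helper, as a small shell sketch (the function name is illustrative, not part of qperf):

```shell
# Convert a bandwidth reading from Gb/sec (gigabits) to GB/sec (decimal gigabytes).
gb_per_sec() {
    # $1: bandwidth in gigabits per second
    awk -v bits="$1" 'BEGIN { printf "%.2f\n", bits / 8 }'
}

gb_per_sec 16.1   # 16.1 Gb/sec is about 2.01 GB/sec
```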

Checking for Latency

Run the TCP latency test from the client, with verbose statistics:

[root@yourQperfClient ~]# qperf -vvs  <server hostname or ip address> tcp_lat


Results are printed on the client only. The following result shows a latency of 311 microseconds, along with some additional statistics: the loc_xx fields show details from the local system's perspective and the rem_xx fields show the same from the remote system's perspective. Refer to man qperf for more options and verbosity levels.

    latency         =    311 us
    msg_rate        =   3.22 K/sec
    loc_send_bytes  =   3.22 KB
    loc_recv_bytes  =   3.22 KB
    loc_send_msgs   =  3,218 
    loc_recv_msgs   =  3,217 
    rem_send_bytes  =   3.22 KB
    rem_recv_bytes  =   3.22 KB
    rem_send_msgs   =  3,217 
    rem_recv_msgs   =  3,217 
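When scripting repeated runs, the latency figure can be pulled out of qperf's output with awk. A minimal sketch; the here-document stands in for a captured qperf run:

```shell
# Extract the numeric latency value (in us) from saved qperf tcp_lat output.
extract_latency() {
    awk '/^ *latency/ { print $3 }'
}

# Example, with captured output standing in for a live run:
extract_latency <<'EOF'
    latency         =    311 us
    msg_rate        =   3.22 K/sec
EOF
```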

Other Tests

Other tests are available, including UDP bandwidth and latency, SCTP bandwidth and latency, and other protocols which run on RDMA.

See the TESTS section of man qperf for more details.
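Several tests can also be requested in a single client invocation, for example TCP and UDP bandwidth and latency together (a sketch, assuming the default ports are reachable):

```shell
# Run four tests back to back against the same server
qperf <server hostname or ip address> tcp_bw tcp_lat udp_bw udp_lat
```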

Root Cause

  • qperf is a network bandwidth and latency measurement tool which works over many transports including TCP/IP, RDMA, UDP, and SCTP.

This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.


Could you tell me about a firewalld configuration that works with qperf as the server?

Great point about the firewall configuration. It doesn't work as outlined in this article/solution with a firewall in place.

I am stuck with the firewalld config as well.

You need to allow port 19765 in the firewall, or whatever port you are specifying with the --listen_port option.

firewall-cmd --add-port=19765/tcp

The syntax is explained further on the manual page (man firewall-cmd) and in the product documentation.

I've added instructions for both iptables and firewalld to the solution.

Nice article!

This tool has proven to be horribly unreliable for testing bandwidth between RHEL6 hosts that have 10Gbit networking. I had been chasing my tail trying to figure out why throughput between two hosts on the same switch was fine for one pair and awful for another pair of hosts. I switched to iperf3 and ttcp for testing; the results with those are accurate and consistent.

Interesting result; we find the opposite. Can you tell us which tests you were running, the results you got, and the relevant system tunables? If that's a bit much to share over a comment, feel free to open a Sev 4 support case and attach a sosreport from one of your test systems. Feel free to mention me by name and mention this discussion; the case will find its way to me.

Try sockperf. It just works.

On the client node, shouldn't we use port 19765 to connect to the qperf server (-ip 19765)? We are specifying port 19766, whereas qperf listens on port 19765 by default.


Have the other system connect to qperf server as a client:

#  qperf -ip 19766 -t 60 --use_bits_per_sec  <server hostname or ip address> tcp_bw

Did anyone test bandwidth between hosts with 100 Gbit interfaces using qperf? Is qperf single-threaded, and therefore unable to report the full bandwidth due to a CPU bottleneck, as is the case with iperf v3.0?

When I tested bandwidth between 100 Gbit hosts, qperf showed bandwidth in the range of 20 to 30 Gbit, the same as iperf. In iperf I overcame this limitation by using multiple client-server processes. Any workaround here would be greatly appreciated.

Yes, likely the bottleneck is the same.

You could run multiple server processes and connect multiple client processes to them, eg:

server # qperf --listen_port 11111
server # qperf --listen_port 22222

client # qperf --listen_port 11111 -t 60 --use_bits_per_sec SERVERNAME tcp_bw
client # qperf --listen_port 22222 -t 60 --use_bits_per_sec SERVERNAME tcp_bw

Extend this out to more processes as needed, 4, 8, 16, etc. As you are aware, the idea is to get the entire interface up to 100 Gbps, not to get a single 100 Gbps stream.
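With each stream's result saved, the per-stream figures can be summed with awk to get the aggregate. A sketch; the here-document below stands in for the captured outputs of the parallel client runs:

```shell
# Sum per-stream bandwidth lines ("bw  =  N Gb/sec") into a total.
sum_bw() {
    awk '/^ *bw/ { total += $3 } END { printf "%.1f Gb/sec\n", total }'
}

# Example: two captured streams standing in for real runs.
sum_bw <<'EOF'
    bw  =  12.4 Gb/sec
    bw  =  13.1 Gb/sec
EOF
```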

Let us know how you go. If it works then I'll add it to the above.

Try sockperf to overcome such limitations.