Network performance using virtio NIC
Hi all,
What kind of network speeds can I expect using the virtio NIC? I have RHEL 6.3 guests running under RHEV 3.0 on HP BL460c Gen8 blades. The blades have 10G NICs. In a RHEL 6.3 bare-metal installation, I can get over 2.5Gbps throughput between blades using UDP (the speed is capped at 3Gbps by the Ethernet module (Flex-10) in the blade chassis, btw). On the same hardware, my RHEL 6.3 guests can only achieve around 1.5Gbps between VMs before suffering packet drops. This is my max rate whether my VMs are on the same blade or on different blades.
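For reference, the kind of test I'm describing can be driven with iperf in UDP mode along these lines (the host name and the offered rate below are placeholders, not my exact invocation):

# on the receiving host: run an iperf UDP server, which reports datagram loss
iperf -s -u
# on the sending host: offer ~3 Gbit/s of UDP for 20 seconds, report in Mbits
iperf -c <receiver> -u -b 3000M -t 20 -f m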
My guests are not constrained by CPU (~15% utilization at these rates), but the network utilization meters in RHEV-M are pegged at 100%. Is there some performance tuning I can do on the virtio driver or the guest OS NIC interface?
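For example, is this the sort of thing worth checking? A first pass might be to look at the virtio offload settings in the guest and the vhost-net backend on the host (eth0 here is an assumed interface name):

# in the guest: show current offload settings on the virtio interface
ethtool -k eth0
# enable segmentation/receive offloads if any are off
ethtool -K eth0 tso on gso on gro on
# on the RHEV host: confirm the in-kernel vhost-net backend is loaded
lsmod | grep vhost_net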
As a point of comparison, I can get rates in excess of 2Gbps on the same hardware under VMware ESXi 5.1 using their paravirtualized NIC.
Cheers,
Steve.
Responses
2.5Gbps bare metal and 1.5Gbps in VMs on 10GbE both sound terrible to me. (Oh... you were capped at 3Gbps... still, 2.5/1.5 doesn't sound very good.)
On bare metal (directly on the RHEV hypervisors), and between VMs on different hypervisors, I get 9.5-9.6 Gbit/s with virtio-net and MTU=9000. This is with IBM HS22 blades, connected to a 10GbE Nexus 4000i switch.
Bare metal:
[root@rhev8 tmp]# ./iperf -c rhev7.lysetele.net -f m -t 20
------------------------------------------------------------
Client connecting to rhev7.lysetele.net, TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
[ 3] local 192.168.133.108 port 51599 connected with 192.168.133.107 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-20.0 sec  23004 MBytes  9647 Mbits/sec

VMs on different hypervisors:
[janfrode@webedge1 ~]$ iperf -c betaproxy1 -f m -t 20
------------------------------------------------------------
Client connecting to betaproxy1, TCP port 5001
TCP window size: 0.03 MByte (default)
------------------------------------------------------------
[ 3] local 109.247.114.200 port 47820 connected with 109.247.114.222 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-20.0 sec  22831 MBytes  9576 Mbits/sec

Quite impressive IMHO. If I drop down to 1500 MTU on the VMs, the performance drops to 2.5 Gbit/s:

[janfrode@webedge1 ~]$ iperf -c betaproxy1 -f m -t 20
------------------------------------------------------------
Client connecting to betaproxy1, TCP port 5001
TCP window size: 0.02 MByte (default)
------------------------------------------------------------
[ 3] local 109.247.114.200 port 44590 connected with 109.247.114.222 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-20.0 sec  6168 MBytes  2587 Mbits/sec

I also tested using the emulated e1000 adapter in VMs today, and that totally kills the performance.

MTU=1500:

[janfrode@webedge1 ~]$ iperf -c betaproxy1 -f m -t 20
------------------------------------------------------------
Client connecting to betaproxy1, TCP port 5001
TCP window size: 0.02 MByte (default)
------------------------------------------------------------
[ 3] local 109.247.114.200 port 44592 connected with 109.247.114.222 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-20.0 sec  1423 MBytes  597 Mbits/sec

MTU=9000:

[janfrode@webedge1 ~]$ iperf -c betaproxy1 -f m -t 20
------------------------------------------------------------
Client connecting to betaproxy1, TCP port 5001
TCP window size: 0.03 MByte (default)
------------------------------------------------------------
[ 3] local 109.247.114.200 port 56686 connected with 109.247.114.222 port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3]  0.0-20.0 sec  7523 MBytes  3155 Mbits/sec
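For anyone wanting to reproduce the jumbo-frame setup: on a RHEL 6 guest the MTU can be raised on the fly and made persistent roughly as below (eth0 is an assumed interface name, and the hypervisor bridge plus the whole switch path must also carry MTU 9000 for this to help):

# raise the MTU on the running interface
ip link set dev eth0 mtu 9000
# persist it across reboots in the RHEL 6 network script
echo 'MTU=9000' >> /etc/sysconfig/network-scripts/ifcfg-eth0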
Hi Sadique.
Do you have any update on the bug you mentioned?
I think that I'm affected by this bug (HP BL685G7, 10Gbps NIC).
Thanks in advance.
