Why are two interfaces with the same MAC address present in an Azure VM, with one interface the master of the other?
Environment
- Red Hat Enterprise Linux
- Azure VM
Issue
- Why are two interfaces with the same MAC address present in an Azure VM, and why is one interface the master of the other?
Resolution
- This is expected behavior for Microsoft Azure VMs with Accelerated Networking enabled. For details, see the following document from Microsoft:
How Accelerated Networking works in Linux and FreeBSD VMs
- To learn more about this behavior, discuss the issue further with Microsoft support.
Disclaimer: Links contained herein to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.
Root Cause
- When a virtual machine (VM) is created in Azure, a synthetic network interface is created for each virtual NIC in its configuration. The synthetic interface is a VMbus device and uses the `netvsc` driver. The VF interface shows up in the Linux guest as a PCI device. It uses the Mellanox `mlx4` or `mlx5` driver in Linux, because Azure hosts use physical NICs from Mellanox.
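The driver binding described above can be read straight from sysfs on any Linux guest; the following is a minimal sketch (the `hv_netvsc` and `mlx5_core` names are what this article expects on an Accelerated Networking VM, and interface names may differ):

```shell
# List each network interface with the kernel driver bound to it.
# On an Azure VM with Accelerated Networking, the synthetic NIC shows
# hv_netvsc and the VF shows mlx4_core or mlx5_core.
drivers=$(
    for iface in /sys/class/net/*; do
        name=$(basename "$iface")
        if [ -e "$iface/device/driver" ]; then
            driver=$(basename "$(readlink "$iface/device/driver")")
        else
            driver="(none)"   # e.g. the loopback device has no driver link
        fi
        printf '%-12s %s\n' "$name" "$driver"
    done
)
printf '%s\n' "$drivers"
```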
Diagnostic Steps
- Check the `ip addr` command output: both eth0 and eth1 have the same MAC address and the same IP address assigned.
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether `60:45:db:d9:06:30` brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65521 numtxqueues 64 numrxqueues 64 gso_max_size 62780 gso_max_segs 65535 parentbus vmbus parentdev 6045bdd6-0730-6045-bdd6-07306045bdd6
inet 192.0.2.99/25 brd 192.0.2.127 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP group default qlen 1000
link/ether `60:45:db:d9:06:30` brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9978 numtxqueues 64 numrxqueues 16 gso_max_size 65536 gso_max_segs 65535 parentbus pci parentdev fede:00:02.0
altname enP65246p0s2
altname enP65246s1
inet 192.0.2.99/25 brd 192.0.2.127 scope global noprefixroute eth1
valid_lft forever preferred_lft forever
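The master/slave relationship shown in the `ip addr` output can also be read from sysfs; a short sketch, assuming the eth0/eth1 names from the output above (they may differ on other VMs):

```shell
# Print the master of eth1, if it has one. On Accelerated Networking
# VMs the VF (eth1 here) is transparently enslaved to the synthetic
# netvsc NIC (eth0); no bonding or teaming configuration is involved.
if [ -L /sys/class/net/eth1/master ]; then
    basename "$(readlink /sys/class/net/eth1/master)"
fi
```

On the VM in this article this would print `eth0`; an interface without a master (such as `lo`) has no `master` symlink, so nothing is printed.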
- The driver of one interface is `hv_netvsc` and the driver of the other is `mlx5_core`:
$ ethtool -i eth0
driver: hv_netvsc <--------
version: 4.18.0-477.13.1.el8_8.x86_64
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no
$ ethtool -i eth1
driver: mlx5_core <--------
version: 4.18.0-477.13.1.el8_8.x86_64
firmware-version: 16.30.1238 (MSF0000000012)
expansion-rom-version:
bus-info: fede:00:02.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes
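Grouping interfaces by MAC address makes the synthetic-NIC/VF pair easy to spot in one pass; a sketch using only standard sysfs paths (no Azure-specific tooling assumed):

```shell
# Group interface names by MAC address. On an Accelerated Networking
# VM the synthetic NIC and its VF appear together under one MAC.
mac_groups=$(
    for iface in /sys/class/net/*; do
        [ -f "$iface/address" ] || continue
        echo "$(cat "$iface/address") $(basename "$iface")"
    done | sort | awk '{pair[$1] = pair[$1] " " $2}
                       END {for (m in pair) print m ":" pair[m]}'
)
printf '%s\n' "$mac_groups"
```

On the VM from the output above, this would show `60:45:db:d9:06:30: eth0 eth1` on a single line, confirming that the two interfaces are one logical NIC.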
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.