irqbalance does not work with 2nd NUMA node

Hello,

I have a problem. I installed an Intel X710 network card (4x10GbE) and bonded all four interfaces in mode 4 (802.3ad), but the traffic loads only one CPU. In `irqbalance --debug` I can see that irqbalance detects two NUMA nodes, and the parameter (-1) means "use all available", yet it loads only one NUMA node; the second NUMA node stays free. In /proc/interrupts I can likewise see that only one CPU (16 cores) is being used; the second CPU is idle.

Please help, I don't know what to do. I tried setting smp_affinity manually, but I would like to understand what's wrong with irqbalance and how to push it to use the second NUMA node.

Notes:
ethtool:
driver: i40e
version: 1.5.10-k
firmware-version: 4.53 0x8000206d 0.0.0

lspci:
09:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)
09:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)
09:00.2 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)
09:00.3 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

Responses

The behaviour you describe is almost certainly what you want.

The NICs are likely in PCIe slots which are local to a particular NUMA node. You don't want CPU cores servicing interrupt channels for a non-local device; you'd incur an inter-node penalty that reduces overall performance.

Even if the PCIe bus isn't local to any particular NUMA Node, it's better to keep all interrupt handling within one Node, as you'll hopefully be running your application within the same Node to take advantage of CPU cache affinity.
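You can confirm this locality yourself from sysfs. A minimal sketch (the PCI address `0000:09:00.0` comes from the lspci output above; adjust for your host). A value of -1 means the slot has no particular node affinity:

```shell
# node_of PCIADDR -> print the device's local NUMA node, or "?" if the
# sysfs entry is absent (e.g. when run on a different machine).
node_of() {
    f=/sys/bus/pci/devices/$1/numa_node
    if [ -r "$f" ]; then cat "$f"; else echo "?"; fi
}

node_of 0000:09:00.0
# Then list the cores local to that node, e.g. for node 0:
#   cat /sys/devices/system/node/node0/cpulist
```

If this prints 0, every queue interrupt of the card is local to node 0, and irqbalance is doing exactly what you see: keeping all of them on the first 16 cores.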

But I have 32 CPU cores, and under heavy network traffic (about 24 Gbit/s) I can see in htop that 16 cores are at 100% load and the other 16 cores sit at about 1%. Is that OK?

I also wanted to ask: if I set smp_affinity for every Ethernet IRQ to a specific CPU mask, will that be better? Will all my cores be used if I shut irqbalance down?

What's the test you're running? Is the CPU usage because of the process doing the traffic, or is it ksoftirqd servicing the NIC?

Yes, if you disable irqbalance and manually set affinity across all cores then all cores will be used.
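As a sketch of what that looks like: smp_affinity takes a hex bitmask of CPUs, so spreading IRQs onto the second node means writing a mask covering cores 16-31. The helper below builds such a mask; the IRQ number 120 in the comment is purely hypothetical, and the /proc paths shown are the standard kernel interface:

```shell
# mask_for_range FIRST LAST -> hex smp_affinity bitmask covering
# CPUs FIRST..LAST inclusive.
mask_for_range() {
    first=$1; last=$2; mask=0
    cpu=$first
    while [ "$cpu" -le "$last" ]; do
        mask=$(( mask | (1 << cpu) ))
        cpu=$(( cpu + 1 ))
    done
    printf '%x\n' "$mask"
}

mask_for_range 16 31    # prints ffff0000 -> the second 16-core node
# To pin a specific IRQ (e.g. a hypothetical IRQ 120) to those cores,
# stop irqbalance first or it will overwrite your setting:
#   systemctl stop irqbalance
#   echo ffff0000 > /proc/irq/120/smp_affinity
```

Keep in mind the trade-off from the earlier reply: cores on the remote node servicing a node-local NIC pay the cross-node penalty, so "all cores busy" is not automatically faster.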
