Why does 'numactl --interleave=all' fail on some systems?


Environment

  • Red Hat Enterprise Linux (RHEL) 5.8
  • Intel Xeon CPUs where two processors are present:
    • X5570
    • X5672
    • X5675

Issue

After upgrading from RHEL 5.5 to RHEL 5.8, numactl fails with the following error:

$ numactl --interleave=all sleep 1
set_mempolicy: Invalid argument
setting interleave mask: Invalid argument

Resolution

Use a numactl --interleave=<<nodes>> setting that does not conflict with any cpusets already in place.
Alternatively, configure the cpuset with the correct memory-node information before running numactl --interleave=all.

To configure the cpuset with specific memory nodes:

# echo "0-1" > /dev/cpuset/<<cpuset_name>>/mems

or configure the cpuset with all memory nodes:

# cat /dev/cpuset/mems > /dev/cpuset/<<cpuset_name>>/mems
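
The second approach can be combined with the numactl invocation in a small script. The following is a sketch only: `myset` is a hypothetical cpuset name, cpusets are assumed to be mounted at /dev/cpuset as on RHEL 5, and the helper function is our own, not part of numactl.

```shell
# ensure_cpuset_mems copies the system-wide memory-node list into a
# named cpuset, so that a later `numactl --interleave=all` run inside
# that cpuset no longer conflicts with it.
ensure_cpuset_mems() {
    root=$1   # cpuset mount point, e.g. /dev/cpuset on RHEL 5
    name=$2   # cpuset to fix up; "myset" below is hypothetical
    cat "$root/mems" > "$root/$name/mems"
}

# Guarded so the sketch is a no-op on systems without the cpuset mount:
if [ -d /dev/cpuset/myset ]; then
    ensure_cpuset_mems /dev/cpuset myset
    numactl --interleave=all sleep 1
fi
```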

Root Cause

Analysis showed that numactl was executed in a context where a constrained cpuset (see cpuset(7)) was already in place. In such a context numactl cannot use all memory nodes as requested by --interleave=all: the underlying set_mempolicy(2) call fails with EINVAL, and RHEL 5.8's numactl reports the error. This is expected behaviour.
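
The conflict check can be mimicked in shell: set_mempolicy(2) rejects an interleave mask that requests nodes the cpuset does not allow. Below is a sketch using a helper of our own (not kernel or numactl code) that flags such a conflict; node lists use cpuset syntax, and the expansion logic assumes simple "lo-hi" and comma forms.

```shell
# nodes_conflict ALLOWED REQUESTED
# Succeeds (exit 0) if REQUESTED contains a node absent from ALLOWED,
# which is the situation where set_mempolicy(2) fails with EINVAL.
# Lists use cpuset syntax, e.g. "0-1" or "0,2-3".
nodes_conflict() {
    expand() {
        echo "$1" | tr ',' '\n' | while IFS=- read lo hi; do
            seq "$lo" "${hi:-$lo}"
        done
    }
    for n in $(expand "$2"); do
        expand "$1" | grep -qx "$n" || return 0   # node not allowed
    done
    return 1   # every requested node is allowed
}

# A cpuset allowing only node 0 conflicts with --interleave=all on a
# two-node machine, which requests nodes 0-1:
if nodes_conflict 0 0-1; then
    echo "conflict: request includes a node the cpuset forbids"
fi
```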

Diagnostic Steps

Check the set_mempolicy(2) man page for the conditions under which it returns EINVAL ("Invalid argument").
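
To see whether the current shell is in a restricted cpuset, compare the nodes its cpuset allows with the nodes present on the system. A sketch, assuming the RHEL 5 layout (the current task's cpuset path in /proc/self/cpuset, cpusets mounted at /dev/cpuset); expand_nodes is a small helper of our own, not a numactl feature:

```shell
# expand_nodes turns a cpuset-style node list such as "0-1,3" into
# one node number per line, so two lists can be compared easily.
expand_nodes() {
    echo "$1" | tr ',' '\n' | while IFS=- read lo hi; do
        seq "$lo" "${hi:-$lo}"
    done
}

# Guarded so the sketch is a no-op where cpusets are not mounted:
if [ -r /proc/self/cpuset ] && [ -d /dev/cpuset ]; then
    # Memory nodes the current cpuset allows...
    mems=$(cat "/dev/cpuset$(cat /proc/self/cpuset)/mems")
    echo "cpuset allows nodes: $(expand_nodes "$mems" | tr '\n' ' ')"
    # ...versus the nodes --interleave=all would request:
    numactl --hardware | grep '^available'
fi
```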

This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.