2.4. Interrupt and Process Binding

Real-time environments need to minimize or eliminate latency when responding to various events. Ideally, interrupts (IRQs) and user processes can be isolated from one another on different dedicated CPUs.
Interrupts are generally shared evenly between CPUs. This can delay interrupt processing, because the CPU handling the interrupt must first fetch the relevant data and instructions into its caches, and it often creates conflicts with other processing occurring on that CPU. To overcome this problem, time-critical interrupts and processes can be dedicated to a CPU (or a range of CPUs). In this way, the code and data structures needed to process the interrupt have the highest possible likelihood of being in the processor data and instruction caches. The dedicated process can then run as quickly as possible, while all other non-time-critical processes run on the remaining CPUs. This can be particularly important where the speeds involved approach the limits of available memory and peripheral bus bandwidth. In such cases, any wait for memory to be fetched into the processor caches has a noticeable impact on overall processing time and determinism.
In practice, optimal performance is entirely application specific. For example, when tuning applications for two different companies that performed similar functions, the optimal tunings were completely different. For one firm, isolating 2 of the 4 CPUs for operating system functions and interrupt handling, and dedicating the remaining 2 CPUs purely to application handling, was optimal. For the other firm, binding the network-related application processes onto the CPU that handled the network device driver interrupt yielded optimal determinism. Ultimately, tuning is often accomplished by trying a variety of settings to discover what works best for your organization.

Important

For many of the processes described here, you will need to know the CPU mask for a given CPU or range of CPUs. The CPU mask is typically represented as a 32-bit bitmask. It can also be expressed as a decimal or hexadecimal number, depending on the command you are using. For example: The CPU mask for CPU 0 only is 00000000000000000000000000000001 as a bitmask, 1 as a decimal, and 0x00000001 as a hexadecimal. The CPU mask for both CPU 0 and 1 is 00000000000000000000000000000011 as a bitmask, 3 as a decimal, and 0x00000003 as a hexadecimal.
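If you need to derive the mask for an arbitrary set of CPUs, shell arithmetic is a quick way to sketch it; the CPU numbers below are placeholders:
    ~]$ # Mask for CPUs 0 and 3: set bit 0 and bit 3
    ~]$ printf '0x%x\n' $(( (1 << 0) | (1 << 3) ))
    0x9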

Procedure 2.3. Disabling the irqbalance Daemon

This daemon is enabled by default and periodically forces interrupts to be handled by CPUs in an even, fair manner. However, in real-time deployments, applications are typically dedicated and bound to specific CPUs, so the irqbalance daemon is not required.
  1. Check the status of the irqbalance daemon.
    ~]# systemctl status irqbalance
    irqbalance.service - irqbalance daemon
       Loaded: loaded (/usr/lib/systemd/system/irqbalance.service; enabled)
       Active: active (running) …
    
  2. If the irqbalance daemon is running, stop it.
    ~]# systemctl stop irqbalance
  3. Ensure that irqbalance does not restart on boot.
    ~]# systemctl disable irqbalance
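To confirm the result, systemctl can report whether the service is still enabled or active; the output shown here is indicative only:
    ~]# systemctl is-enabled irqbalance
    disabled
    ~]# systemctl is-active irqbalance
    inactive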

Procedure 2.4. Excluding CPUs from IRQ Balancing

The /etc/sysconfig/irqbalance configuration file contains a setting that allows CPUs to be excluded from consideration by the IRQ balancing service. This parameter is named IRQBALANCE_BANNED_CPUS and is a 64-bit hexadecimal bit mask, where each bit of the mask represents a CPU core.
For example, if you are running a 16-core system and want to remove CPUs 8 to 15 from IRQ balancing, do the following:
  1. Open /etc/sysconfig/irqbalance in your preferred text editor and find the section of the file titled IRQBALANCE_BANNED_CPUS.
    # IRQBALANCE_BANNED_CPUS
    # 64 bit bitmask which allows you to indicate which cpu's should
    # be skipped when reblancing irqs. Cpu numbers which have their
    # corresponding bits set to one in this mask will not have any
    # irq's assigned to them on rebalance
    #
    #IRQBALANCE_BANNED_CPUS=
    
  2. Exclude CPUs 8 to 15 by uncommenting the variable IRQBALANCE_BANNED_CPUS and setting its value this way:
    IRQBALANCE_BANNED_CPUS=0000ff00
    
  3. This will cause the irqbalance process to ignore the CPUs that have bits set in the bitmask; in this case, bits 8 through 15.
  4. If you are running a system with more than 32 (and up to 64) CPU cores, separate each group of eight hexadecimal digits with a comma:
    IRQBALANCE_BANNED_CPUS=00000001,0000ff00
    
    The above mask excludes CPUs 8 to 15 as well as CPU 32 from IRQ balancing.
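Note that irqbalance reads /etc/sysconfig/irqbalance when the service starts, so if the daemon is left running on your system, restart it for the new mask to take effect. The mask itself can also be sketched with shell arithmetic; the range of CPUs 8 to 15 matches the example above:
    ~]# printf '%08x\n' $(( 0xff << 8 ))
    0000ff00
    ~]# systemctl restart irqbalance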

Note

From Red Hat Enterprise Linux 7.2, the irqbalance tool automatically avoids IRQs on CPU cores isolated via the isolcpus= kernel parameter if IRQBALANCE_BANNED_CPUS is not set in the /etc/sysconfig/irqbalance file.
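For reference, you can check whether such isolation is in effect on the running kernel; this is a sketch that assumes CPUs 8 to 15 were isolated with the isolcpus= parameter:
    ~]# grep -o 'isolcpus=[^ ]*' /proc/cmdline
    isolcpus=8-15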

Procedure 2.5. Manually Assigning CPU Affinity to Individual IRQs

  1. Check which IRQ is in use by each device by viewing the /proc/interrupts file:
    ~]# cat /proc/interrupts
    This file contains a list of IRQs. Each line shows the IRQ number, the number of interrupts received by each CPU, followed by the IRQ type and a description:
             CPU0       CPU1
    0:   26575949         11         IO-APIC-edge  timer
    1:         14          7         IO-APIC-edge  i8042
    ...[output truncated]...
    
  2. To instruct an IRQ to run on only one processor, use the echo command to write the CPU mask, as a hexadecimal number, to the smp_affinity entry of the specific IRQ. In this example, we are instructing the interrupt with IRQ number 142 to run on CPU 0 only:
    ~]# echo 1 > /proc/irq/142/smp_affinity
  3. This change will only take effect once an interrupt has occurred. To test the settings, generate some disk activity, then check the /proc/interrupts file for changes. Assuming that you have caused an interrupt to occur, you will see that the number of interrupts received on the chosen CPU has risen, while the counts on the other CPUs have not changed.
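You can also read the current affinity back, either as a mask or, on kernels that provide it, as a CPU list; this sketch reuses IRQ number 142 from the example above, and the exact formatting of the mask output varies with the number of CPUs in the system:
    ~]# cat /proc/irq/142/smp_affinity
    1
    ~]# cat /proc/irq/142/smp_affinity_list
    0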

Procedure 2.6. Binding Processes to CPUs Using the taskset Utility

The taskset utility uses the process ID (PID) of a task to view or set its affinity, or it can be used to launch a command with a chosen CPU affinity. To set the affinity, taskset requires the CPU mask expressed as a decimal or hexadecimal number. The mask argument is a bitmask that specifies which CPU cores are legal for the command or PID being modified.
  1. To launch a new process with a chosen affinity, run taskset with the CPU mask followed by the command. In this example, my_embedded_process is being instructed to use only CPU 3 (using the decimal version of the CPU mask).
    ~]# taskset 8 /usr/local/bin/my_embedded_process
  2. It is also possible to specify more than one CPU in the bitmask. In this example, my_embedded_process is being instructed to execute on processors 4, 5, 6, and 7 (using the hexadecimal version of the CPU mask).
    ~]# taskset 0xF0 /usr/local/bin/my_embedded_process
  3. Additionally, you can set the CPU affinity for processes that are already running by using the -p (--pid) option with the CPU mask and the PID of the process you wish to change. In this example, the process with a PID of 7013 is being instructed to run only on CPU 0.
    ~]# taskset -p 1 7013
  4. Lastly, using the -c parameter, you can specify a CPU list instead of a CPU mask. For example, to use CPUs 0 and 4 and CPUs 7 to 11, the command line would contain -c 0,4,7-11. This form is more convenient in most cases; see the sketch below.
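A short sketch of both forms, reusing the placeholder command and PID from the steps above:
    ~]# taskset -c 0,4,7-11 /usr/local/bin/my_embedded_process
    ~]# taskset -c -p 0 7013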

Important

The taskset utility works on a NUMA (Non-Uniform Memory Access) system, but it does not allow the user to bind threads to CPUs and the closest NUMA memory node. On such systems, taskset is not the preferred tool, and the numactl utility should be used instead for its advanced capabilities. See Section 3.6, “Non-Uniform Memory Access” for more information.
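A minimal numactl sketch, assuming the process should run on NUMA node 0 and allocate its memory from that node (my_embedded_process is the same placeholder used above):
    ~]# numactl --cpunodebind=0 --membind=0 /usr/local/bin/my_embedded_process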
Related Manual Pages

For more information, or for further reading, the following man pages are related to the information given in this section.

  • chrt(1)
  • taskset(1)
  • nice(1)
  • renice(1)
  • sched_setscheduler(2) for a description of the Linux scheduling scheme.