RFE: Balancing instances among NUMA nodes

Solution In Progress

Issue

  • In a test environment with OSP 16, we tried to distribute instances among the NUMA nodes using flavors with the hw:numa_nodes='1' property set.

  • With this property, each instance is confined to the resources of a single NUMA node. This works fine, but there is no way to choose which NUMA node is used: instances are automatically placed on the first NUMA node (#0).

  • As more instances are created, the selected NUMA node #0 becomes overloaded while NUMA node #1 remains completely idle. We would like to know whether a fix or any other way to avoid this situation could be implemented.

  • Check the compute node's NUMA resources (an alternative lscpu check is sketched after the output):

[root@overcloud-novacompute-1 ~]# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1
node 0 size: 3685 MB
node 0 free: 2739 MB
node 1 cpus: 2 3
node 1 size: 3936 MB
node 1 free: 3325 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10
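If useful, the same host topology can also be confirmed with lscpu; this command was not part of the original capture and is only a sketch:

[root@overcloud-novacompute-1 ~]# lscpu | grep -i numa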
  • Create a flavor with hw:numa_nodes=1 (a note on setting the property on an existing flavor follows the output):
(overcloud) [stack@undercloud ~]$ openstack flavor create  tiny_numa.1   --property hw:numa_nodes=1   --ram 512 --disk 10 --vcpus 1
+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| description                | None                                 |
| disk                       | 10                                   |
| extra_specs                | {}                                   |
| id                         | 39ace5fa-fb24-42e2-a90e-f85ca9ca6c09 |
| name                       | tiny_numa.1                          |
| os-flavor-access:is_public | True                                 |
| properties                 | hw:numa_nodes='1'                    |
| ram                        | 512                                  |
| rxtx_factor                | 1.0                                  |
| swap                       | 0                                    |
| vcpus                      | 1                                    |
+----------------------------+--------------------------------------+
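The property could also have been added to an existing flavor instead of being set at creation time, for example (sketch, not part of the original capture):

(overcloud) [stack@undercloud ~]$ openstack flavor set --property hw:numa_nodes=1 tiny_numa.1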
  • Create 5 instances using the previous flavor (a possible boot loop is sketched after the listing):
(overcloud) [stack@undercloud ~]$ openstack server list
+--------------------------------------+----------------------+--------+----------------------+--------+--------+
| ID                                   | Name                 | Status | Networks             | Image  | Flavor |
+--------------------------------------+----------------------+--------+----------------------+--------+--------+
| 5aa26cae-441a-432b-8908-5aad48521f64 | test5-instance-numa  | ACTIVE | private=172.16.1.139 | cirros |        |
| 5d76ff37-b8c5-4559-a402-6acb21c3a167 | test4-instance-numa  | ACTIVE | private=172.16.1.12  | cirros |        |
| 290925cb-47f7-4ae1-a38f-35de42783934 | test3-instance-numa  | ACTIVE | private=172.16.1.128 | cirros |        |
| e8f0f030-8075-4540-b62d-41b467475ace | test2-instance-numa  | ACTIVE | private=172.16.1.173 | cirros |        |
| 4f657cec-90a6-44b7-b16d-c3a949bc32a6 | test-instance-numa   | ACTIVE | private=172.16.1.240 | cirros |        |
+--------------------------------------+----------------------+--------+----------------------+--------+--------+
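The boot commands themselves are not shown in the capture; a loop like the following could have been used (the flavor, image, and network names are taken from the listings above, the remaining options are assumptions):

(overcloud) [stack@undercloud ~]$ for i in "" 2 3 4 5; do \
    openstack server create --flavor tiny_numa.1 --image cirros --network private "test${i}-instance-numa"; \
  done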
  • On the compute node, check the instances' NUMA topology:
[root@overcloud-novacompute-1 ~]# docker exec -ti 5cafb6777d95 bash
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
()[root@overcloud-novacompute-1 /]# virsh list
 Id   Name                State
-----------------------------------
 1    instance-00000016   running
 2    instance-00000019   running
 3    instance-00000022   running
 4    instance-00000028   running
 5    instance-0000002e   running
  • They all ended up on numa_nodeset 0 (a one-line loop for the same check is sketched after these outputs):
()[root@overcloud-novacompute-1 /]# virsh numatune instance-0000002e
numa_mode      : strict
numa_nodeset   : 0

()[root@overcloud-novacompute-1 /]# virsh numatune instance-00000028
numa_mode      : strict
numa_nodeset   : 0

()[root@overcloud-novacompute-1 /]# virsh numatune instance-00000022
numa_mode      : strict
numa_nodeset   : 0

()[root@overcloud-novacompute-1 /]# virsh numatune instance-00000019
numa_mode      : strict
numa_nodeset   : 0

()[root@overcloud-novacompute-1 /]# virsh numatune instance-00000016
numa_mode      : strict
numa_nodeset   : 0
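The same check can be run for all running domains in one pass, for example (a convenience loop, not part of the original capture):

()[root@overcloud-novacompute-1 /]# for d in $(virsh list --name); do echo "== ${d}"; virsh numatune "${d}"; done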
  • Check compute NUMA resource usage (the per-node free memory can be re-checked as sketched after the output):
[root@overcloud-novacompute-1 ~]# numastat -c qemu-kvm
Per-node process memory usage (in MBs)
PID              Node 0 Node 1 Total
---------------  ------ ------ -----
59310 (qemu-kvm)    278      4   282
59481 (qemu-kvm)    281      4   284
61329 (qemu-kvm)    276      4   280
61627 (qemu-kvm)    276      4   280
61926 (qemu-kvm)    276      4   280
---------------  ------ ------ -----
Total              1388     19  1407
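To confirm the imbalance at the host level, the per-node free memory reported in the first step can be re-checked after the instances are booted (same command as above, shown here as a sketch):

[root@overcloud-novacompute-1 ~]# numactl --hardware | grep free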

Environment

  • Red Hat OpenStack Platform 16.0 (RHOSP)
