Chapter 3. Configuring Compute nodes for performance
As a cloud administrator, you can configure the scheduling and placement of instances for optimal performance by creating customized flavors to target specialized workloads, including NFV and High Performance Computing (HPC).
Use the following features to tune your instances for optimal performance:
- CPU pinning: Pin virtual CPUs to physical CPUs.
- Emulator threads: Pin emulator threads associated with the instance to physical CPUs.
- Huge pages: Tune instance memory allocation policies both for normal memory (4k pages) and huge pages (2 MB or 1 GB pages).
Configuring any of these features creates an implicit NUMA topology on the instance if there is no NUMA topology already present.
3.1. Configuring CPU pinning on Compute nodes
You can configure each instance CPU process to run on a dedicated host CPU by enabling CPU pinning on the Compute nodes. When an instance uses CPU pinning, each instance vCPU process is allocated its own host pCPU that no other instance vCPU process can use. Instances that run on Compute nodes with CPU pinning enabled have a NUMA topology. Each NUMA node of the instance NUMA topology maps to a NUMA node on the host Compute node.
You can configure the Compute scheduler to schedule instances with dedicated (pinned) CPUs and instances with shared (floating) CPUs on the same Compute node. To configure CPU pinning on Compute nodes that have a NUMA topology, you must complete the following:
- Designate Compute nodes for CPU pinning.
- Configure the Compute nodes to reserve host cores for pinned instance vCPU processes, floating instance vCPU processes, and host processes.
- Deploy the overcloud.
- Create a flavor for launching instances that require CPU pinning.
- Create a flavor for launching instances that use shared, or floating, CPUs.
3.1.1. Prerequisites
- You know the NUMA topology of your Compute node. You can inspect it with the commands shown below.
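If you have not yet recorded the NUMA topology, the following commands are one way to inspect it directly on a Compute node. This is a quick sketch, not part of the formal procedure, and it assumes the numactl package is available or can be installed on the node:

  $ lscpu | grep -i numa          # CPU-to-NUMA-node layout
  $ numactl --hardware            # per-node CPUs and memory (requires the numactl package)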
3.1.2. Designating Compute nodes for CPU pinning
To designate Compute nodes for instances with pinned CPUs, you must create a new roles file that configures the CPU pinning role, and configure the bare metal nodes with a CPU pinning resource class that you use to tag the Compute nodes for CPU pinning.
The following procedure applies to new overcloud nodes that have not yet been provisioned. To assign a resource class to an existing overcloud node that has already been provisioned, you must use the scale down procedure to unprovision the node, then use the scale up procedure to reprovision the node with the new resource class assignment. For more information, see Scaling overcloud nodes.
Procedure
- Log in to the undercloud as the stack user, and source the stackrc file:

  [stack@director ~]$ source ~/stackrc

- Generate a new roles data file named roles_data_cpu_pinning.yaml that includes the Controller, Compute, and ComputeCPUPinning roles, along with any other roles that you need for the overcloud:

  (undercloud)$ openstack overcloud roles \
   generate -o /home/stack/templates/roles_data_cpu_pinning.yaml \
   Compute:ComputeCPUPinning Compute Controller
- Open roles_data_cpu_pinning.yaml and edit or add the following parameters and sections:

  | Section/Parameter | Current value | New value |
  |---|---|---|
  | Role comment | Role: Compute | Role: ComputeCPUPinning |
  | Role name | name: Compute | name: ComputeCPUPinning |
  | description | Basic Compute Node role | CPU Pinning Compute Node role |
  | HostnameFormatDefault | %stackname%-novacompute-%index% | %stackname%-novacomputepinning-%index% |
  | deprecated_nic_config_name | compute.yaml | compute-cpu-pinning.yaml |
- Register the CPU pinning Compute nodes for the overcloud by adding them to your node definition template, node.json or node.yaml. For more information, see Registering nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide.

- Inspect the node hardware:

  (undercloud)$ openstack overcloud node introspect \
   --all-manageable --provide

  For more information, see Creating an inventory of the bare-metal node hardware in the Installing and managing Red Hat OpenStack Platform with director guide.

- Tag each bare metal node that you want to designate for CPU pinning with a custom CPU pinning resource class:

  (undercloud)$ openstack baremetal node set \
   --resource-class baremetal.CPU-PINNING <node>
  Replace <node> with the ID of the bare metal node.

- Add the ComputeCPUPinning role to your node definition file, overcloud-baremetal-deploy.yaml, and define any predictive node placements, resource classes, network topologies, or other attributes that you want to assign to your nodes:

  - name: Controller
    count: 3
  - name: Compute
    count: 3
  - name: ComputeCPUPinning
    count: 1
    defaults:
      resource_class: baremetal.CPU-PINNING
      network_config:
        template: /home/stack/templates/nic-config/myRoleTopology.j2 1

  1: You can reuse an existing network topology or create a new custom network interface template for the role. For more information, see Custom network interface templates in the Installing and managing Red Hat OpenStack Platform with director guide. If you do not define the network definitions by using the network_config property, then the default network definitions are used.

  For more information about the properties you can use to configure node attributes in your node definition file, see Bare metal node provisioning attributes. For an example node definition file, see Example node definition file.
- Run the provisioning command to provision the new nodes for your role:

  (undercloud)$ openstack overcloud node provision \
   --stack <stack> \
   [--network-config \]
   --output /home/stack/templates/overcloud-baremetal-deployed.yaml \
   /home/stack/templates/overcloud-baremetal-deploy.yaml

  - Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud.
  - Include the --network-config optional argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook. If you do not define the network definitions by using the network_config property, then the default network definitions are used.
- Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active:

  (undercloud)$ watch openstack baremetal node list

- If you did not run the provisioning command with the --network-config option, then configure the <Role>NetworkConfigTemplate parameters in your network-environment.yaml file to point to your NIC template files:

  parameter_defaults:
    ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2
    ComputeCPUPinningNetworkConfigTemplate: /home/stack/templates/nic-configs/<cpu_pinning_net_top>.j2
    ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2

  Replace <cpu_pinning_net_top> with the name of the file that contains the network topology of the ComputeCPUPinning role, for example, compute.yaml to use the default network topology.
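As an optional spot check before you continue, you can confirm that the custom resource class was applied to the intended bare metal nodes. This is a hedged example; the --fields argument only limits the columns that are displayed:

  (undercloud)$ openstack baremetal node list --fields uuid name resource_class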
3.1.3. Configuring Compute nodes for CPU pinning
Configure CPU pinning on your Compute nodes based on the NUMA topology of the nodes. For efficiency, reserve some CPU cores across all the NUMA nodes for host processes, and assign the remaining CPU cores to your instances.
This procedure uses the following NUMA topology, with eight CPU cores spread across two NUMA nodes, to illustrate how to configure CPU pinning:
Table 3.1. Example of NUMA Topology
| NUMA Node 0 | NUMA Node 0 | NUMA Node 1 | NUMA Node 1 |
|---|---|---|---|
| Core 0 | Core 1 | Core 2 | Core 3 |
| Core 4 | Core 5 | Core 6 | Core 7 |
The procedure reserves cores 0 and 4 for host processes, cores 1, 3, 5 and 7 for instances that require CPU pinning, and cores 2 and 6 for floating instances that do not require CPU pinning.
Procedure
- Create an environment file to configure Compute nodes to reserve cores for pinned instances, floating instances, and host processes, for example, cpu_pinning.yaml.

- To schedule instances with a NUMA topology on NUMA-capable Compute nodes, add NUMATopologyFilter to the NovaSchedulerEnabledFilters parameter in your Compute environment file, if not already present:

  parameter_defaults:
    NovaSchedulerEnabledFilters:
      - AvailabilityZoneFilter
      - ComputeFilter
      - ComputeCapabilitiesFilter
      - ImagePropertiesFilter
      - ServerGroupAntiAffinityFilter
      - ServerGroupAffinityFilter
      - PciPassthroughFilter
      - NUMATopologyFilter
  For more information on NUMATopologyFilter, see Compute scheduler filters.

- To reserve physical CPU cores for the dedicated instances, add the following configuration to cpu_pinning.yaml:

  parameter_defaults:
    ComputeCPUPinningParameters:
      NovaComputeCpuDedicatedSet: 1,3,5,7

- To reserve physical CPU cores for the shared instances, add the following configuration to cpu_pinning.yaml:

  parameter_defaults:
    ComputeCPUPinningParameters:
      ...
      NovaComputeCpuSharedSet: 2,6

- To specify the amount of RAM to reserve for host processes, add the following configuration to cpu_pinning.yaml:

  parameter_defaults:
    ComputeCPUPinningParameters:
      ...
      NovaReservedHostMemory: <ram>

  Replace <ram> with the amount of RAM to reserve in MB.

- To ensure that host processes do not run on the CPU cores reserved for instances, set the parameter IsolCpusList to the CPU cores you have reserved for instances:

  parameter_defaults:
    ComputeCPUPinningParameters:
      ...
      IsolCpusList: 1-3,5-7

  Specify the value of the IsolCpusList parameter using a list, or ranges, of CPU indices separated by a comma.

- Add your new files to the stack with your other environment files and deploy the overcloud:

  (undercloud)$ openstack overcloud deploy --templates \
   -e [your environment files] \
   -r /home/stack/templates/roles_data_cpu_pinning.yaml \
   -e /home/stack/templates/network-environment.yaml \
   -e /home/stack/templates/cpu_pinning.yaml \
   -e /home/stack/templates/overcloud-baremetal-deployed.yaml \
   -e /home/stack/templates/node-info.yaml
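After the deployment completes, you can optionally confirm that the values were rendered into the Compute service configuration on a pinned Compute node. This is a hedged check: the path below assumes the default location that director uses for the containerized Compute service configuration, so adjust it if your deployment differs:

  $ sudo grep -E 'cpu_dedicated_set|cpu_shared_set|reserved_host_memory' \
     /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf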
3.1.4. Creating a dedicated CPU flavor for instances
To enable your cloud users to create instances that have dedicated CPUs, you can create a flavor with a dedicated CPU policy for launching instances.
Prerequisites
- Simultaneous multithreading (SMT) is enabled on the host.
- The Compute node is configured to allow CPU pinning. For more information, see Configuring CPU pinning on the Compute nodes.
Procedure
- Source the overcloudrc file:

  (undercloud)$ source ~/overcloudrc

- Create a flavor for instances that require CPU pinning:

  (overcloud)$ openstack flavor create --ram <size_mb> \
   --disk <size_gb> --vcpus <no_reserved_vcpus> pinned_cpus

- To request pinned CPUs, set the hw:cpu_policy property of the flavor to dedicated:

  (overcloud)$ openstack flavor set \
   --property hw:cpu_policy=dedicated pinned_cpus

- To place each vCPU on thread siblings, set the hw:cpu_thread_policy property of the flavor to require:

  (overcloud)$ openstack flavor set \
   --property hw:cpu_thread_policy=require pinned_cpus
  Note
  - If the host does not have an SMT architecture or enough CPU cores with available thread siblings, scheduling fails. To prevent this, set hw:cpu_thread_policy to prefer instead of require. The prefer policy is the default policy that ensures that thread siblings are used when available.
  - If you use hw:cpu_thread_policy=isolate, you must have SMT disabled or use a platform that does not support SMT.
Verification
To verify the flavor creates an instance with dedicated CPUs, use your new flavor to launch an instance:
  (overcloud)$ openstack server create --flavor pinned_cpus \
   --image <image> pinned_cpu_instance
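To go one step further and inspect the pinning on the host, the following hedged example looks up the libvirt instance name and then checks the CPU tuning in the libvirt definition on the Compute node. The container name (nova_libvirt) and the use of podman are assumptions that vary between releases, so treat this as a sketch rather than part of the supported procedure:

  (overcloud)$ openstack server show pinned_cpu_instance -c OS-EXT-SRV-ATTR:instance_name
  $ sudo podman exec nova_libvirt virsh dumpxml <instance_name> | grep -A4 '<cputune>'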
3.1.5. Creating a shared CPU flavor for instances
To enable your cloud users to create instances that use shared, or floating, CPUs, you can create a flavor with a shared CPU policy for launching instances.
Prerequisites
- The Compute node is configured to reserve physical CPU cores for the shared CPUs. For more information, see Configuring CPU pinning on the Compute nodes.
Procedure
- Source the overcloudrc file:

  (undercloud)$ source ~/overcloudrc

- Create a flavor for instances that do not require CPU pinning:

  (overcloud)$ openstack flavor create --ram <size_mb> \
   --disk <size_gb> --vcpus <no_reserved_vcpus> floating_cpus

- To request floating CPUs, set the hw:cpu_policy property of the flavor to shared:

  (overcloud)$ openstack flavor set \
   --property hw:cpu_policy=shared floating_cpus
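To verify the flavor in the same way as the dedicated CPU flavor, you can launch a test instance with it; the image name is a placeholder:

  (overcloud)$ openstack server create --flavor floating_cpus \
   --image <image> floating_cpu_instance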
3.1.6. Creating a mixed CPU flavor for instances
To enable your cloud users to create instances that have a mix of dedicated and shared CPUs, you can create a flavor with a mixed CPU policy for launching instances.
Procedure
- Source the overcloudrc file:

  (undercloud)$ source ~/overcloudrc

- Create a flavor for instances that require a mix of dedicated and shared CPUs:

  (overcloud)$ openstack flavor create --ram <size_mb> \
   --disk <size_gb> --vcpus <number_of_reserved_vcpus> \
   --property hw:cpu_policy=mixed mixed_CPUs_flavor

- Specify which CPUs must be dedicated or shared:

  (overcloud)$ openstack flavor set \
   --property hw:cpu_dedicated_mask=<CPU_number> \
   mixed_CPUs_flavor

  Replace <CPU_number> with the CPUs that must be either dedicated or shared:

  - To specify dedicated CPUs, specify the CPU number or CPU range. For example, set the property to 2-3 to specify that CPUs 2 and 3 are dedicated and all the remaining CPUs are shared.
  - To specify shared CPUs, prepend the CPU number or CPU range with a caret (^). For example, set the property to ^0-1 to specify that CPUs 0 and 1 are shared and all the remaining CPUs are dedicated.
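As with the other flavors, you can verify the mixed flavor by launching a test instance with it; the instance name below is only an example:

  (overcloud)$ openstack server create --flavor mixed_CPUs_flavor \
   --image <image> mixed_cpu_instance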
3.1.7. Configuring CPU pinning on Compute nodes with simultaneous multithreading (SMT)
If a Compute node supports simultaneous multithreading (SMT), group thread siblings together in either the dedicated or the shared set. Thread siblings share some common hardware, which means that a process running on one thread sibling can impact the performance of the other thread sibling.
For example, the host identifies four logical CPU cores in a dual core CPU with SMT: 0, 1, 2, and 3. Of these four, there are two pairs of thread siblings:
- Thread sibling 1: logical CPU cores 0 and 2
- Thread sibling 2: logical CPU cores 1 and 3
In this scenario, do not assign logical CPU cores 0 and 1 as dedicated and 2 and 3 as shared. Instead, assign 0 and 2 as dedicated and 1 and 3 as shared.
The files /sys/devices/system/cpu/cpuN/topology/thread_siblings_list, where N is the logical CPU number, contain the thread pairs. You can use the following command to identify which logical CPU cores are thread siblings:

  # grep -H . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort -n -t ':' -k 2 -u
The following output indicates that logical CPU cores 0 and 2 are thread siblings on one physical core, and logical CPU cores 1 and 3 are thread siblings on another:

  /sys/devices/system/cpu/cpu0/topology/thread_siblings_list:0,2
  /sys/devices/system/cpu/cpu1/topology/thread_siblings_list:1,3
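As an alternative cross-check, lscpu can print the logical CPU, core, socket, and NUMA node columns in one view, which makes the sibling pairs easy to spot:

  $ lscpu -p=CPU,CORE,SOCKET,NODE

Logical CPUs that share the same CORE value in this output are thread siblings.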
3.2. Configuring emulator threads
Compute nodes have overhead tasks associated with the hypervisor for each instance, known as emulator threads. By default, emulator threads run on the same CPUs as the instance, which impacts the performance of the instance.
You can configure the emulator thread policy to run emulator threads on separate CPUs to those the instance uses.
To avoid packet loss, you must never preempt the vCPUs in an NFV deployment.
Procedure
- Log in to the undercloud as the stack user.
- Open your Compute environment file.
- To reserve physical CPU cores for instances that require CPU pinning, configure the NovaComputeCpuDedicatedSet parameter in the Compute environment file. For example, the following configuration sets the dedicated CPUs on a Compute node with a 32-core CPU:

  parameter_defaults:
    ...
    NovaComputeCpuDedicatedSet: 2-15,18-31
    ...
For more information, see Configuring CPU pinning on the Compute nodes.
- To reserve physical CPU cores for the emulator threads, configure the NovaComputeCpuSharedSet parameter in the Compute environment file. For example, the following configuration sets the shared CPUs on a Compute node with a 32-core CPU:

  parameter_defaults:
    ...
    NovaComputeCpuSharedSet: 0,1,16,17
    ...

  Note: The Compute scheduler also uses the CPUs in the shared set for instances that run on shared, or floating, CPUs. For more information, see Configuring CPU pinning on Compute nodes.
- Add the Compute scheduler filter NUMATopologyFilter to the NovaSchedulerEnabledFilters parameter, if not already present.
- Add your Compute environment file to the stack with your other environment files and deploy the overcloud:

  (undercloud)$ openstack overcloud deploy --templates \
   -e [your environment files] \
   -e /home/stack/templates/<compute_environment_file>.yaml

- Configure a flavor that runs emulator threads for the instance on a dedicated CPU, which is selected from the shared CPUs configured using NovaComputeCpuSharedSet:

  (overcloud)$ openstack flavor set --property hw:cpu_policy=dedicated \
   --property hw:emulator_threads_policy=share \
   dedicated_emulator_threads

  For more information about configuration options for hw:emulator_threads_policy, see Emulator threads policy in Flavor metadata.
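If you want to confirm where the emulator threads are pinned after launching an instance with this flavor, the following is a hedged spot check on the Compute node. The container name (nova_libvirt) is an assumption that depends on your release, so adjust it as needed:

  $ sudo podman exec nova_libvirt virsh dumpxml <instance_name> | grep emulatorpin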
3.3. Configuring huge pages on Compute nodes
As a cloud administrator, you can configure Compute nodes to enable instances to request huge pages.
Procedure
- Open your Compute environment file.
- Configure the amount of huge page memory to reserve on each NUMA node for processes that are not instances:

  parameter_defaults:
    ComputeParameters:
      NovaReservedHugePages: ["node:0,size:1GB,count:1","node:1,size:1GB,count:1"]

  - Replace the size value for each node with the size of the allocated huge page. Set to one of the following valid values:
    - 2048 (for 2MB)
    - 1GB
  - Replace the count value for each node with the number of huge pages used by OVS per NUMA node. For example, for 4096 of socket memory used by Open vSwitch, set this to 2.
- Configure huge pages on the Compute nodes:

  parameter_defaults:
    ComputeParameters:
      ...
      KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32"

  Note: If you configure multiple huge page sizes, you must also mount the huge page folders during first boot. For more information, see Mounting multiple huge page folders during first boot.
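After the nodes reboot with the new kernel arguments, you can check the huge page pool on a Compute node. This is a simple sketch that uses standard kernel interfaces; the sysfs path shown is for 1 GB pages:

  $ grep Huge /proc/meminfo
  $ cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages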
- Optional: To allow instances to allocate 1GB huge pages, configure the CPU feature flags, NovaLibvirtCPUModelExtraFlags, to include pdpe1gb:

  parameter_defaults:
    ComputeParameters:
      NovaLibvirtCPUMode: 'custom'
      NovaLibvirtCPUModels: 'Haswell-noTSX'
      NovaLibvirtCPUModelExtraFlags: 'vmx, pdpe1gb'

  Note
  - CPU feature flags do not need to be configured to allow instances to only request 2 MB huge pages.
  - You can only allocate 1G huge pages to an instance if the host supports 1G huge page allocation.
  - You only need to set NovaLibvirtCPUModelExtraFlags to pdpe1gb when NovaLibvirtCPUMode is set to host-model or custom.
  - If the host supports pdpe1gb, and host-passthrough is used as the NovaLibvirtCPUMode, then you do not need to set pdpe1gb as a NovaLibvirtCPUModelExtraFlags. The pdpe1gb flag is only included in the Opteron_G4 and Opteron_G5 CPU models; it is not included in any of the Intel CPU models supported by QEMU.
  - To mitigate CPU hardware issues, such as Microarchitectural Data Sampling (MDS), you might need to configure other CPU flags. For more information, see RHOS Mitigation for MDS ("Microarchitectural Data Sampling") Security Flaws.
- To avoid loss of performance after applying Meltdown protection, configure the CPU feature flags, NovaLibvirtCPUModelExtraFlags, to include +pcid:

  parameter_defaults:
    ComputeParameters:
      NovaLibvirtCPUMode: 'custom'
      NovaLibvirtCPUModels: 'Haswell-noTSX'
      NovaLibvirtCPUModelExtraFlags: 'vmx, pdpe1gb, +pcid'

  Tip: For more information, see Reducing the performance impact of Meltdown CVE fixes for OpenStack guests with "PCID" CPU feature flag.
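Before adding CPU feature flags, you might want to confirm that the host CPUs actually expose them. This hedged check reads the flags from /proc/cpuinfo on the Compute node:

  $ grep -qw pdpe1gb /proc/cpuinfo && echo "pdpe1gb supported"
  $ grep -qw pcid /proc/cpuinfo && echo "pcid supported"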
- Add NUMATopologyFilter to the NovaSchedulerEnabledFilters parameter, if not already present.
- Add your Compute environment file to the stack with your other environment files and deploy the overcloud:

  (undercloud)$ openstack overcloud deploy --templates \
   -e [your environment files] \
   -e /home/stack/templates/<compute_environment_file>.yaml
3.3.1. Creating a huge pages flavor for instances
To enable your cloud users to create instances that use huge pages, you can create a flavor with the hw:mem_page_size extra spec key for launching instances.
Prerequisites
- The Compute node is configured for huge pages. For more information, see Configuring huge pages on Compute nodes.
Procedure
- Create a flavor for instances that require huge pages:

  $ openstack flavor create --ram <size_mb> --disk <size_gb> \
   --vcpus <no_reserved_vcpus> huge_pages

- To request huge pages, set the hw:mem_page_size property of the flavor to the required size:

  $ openstack flavor set huge_pages --property hw:mem_page_size=1GB

  Set hw:mem_page_size to one of the following valid values:

  - large - Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems.
  - small - (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages).
  - any - Selects the largest available huge page size, as determined by the libvirt driver.
  - <pagesize> - (String) Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB.
- To verify the flavor creates an instance with huge pages, use your new flavor to launch an instance:

  $ openstack server create --flavor huge_pages \
   --image <image> huge_pages_instance

  The Compute scheduler identifies a host with enough free huge pages of the required size to back the memory of the instance. If the scheduler is unable to find a host and NUMA node with enough pages, then the request will fail with a NoValidHost error.
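If the request fails with NoValidHost, one hedged troubleshooting step is to check how many free huge pages each NUMA node on the candidate Compute nodes still has. The path below is for 1 GB pages; use hugepages-2048kB instead for 2 MB pages:

  $ cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/free_hugepages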
3.3.2. Mounting multiple huge page folders during first boot
You can configure the Compute service (nova) to handle multiple page sizes as part of the first boot process. The first boot process adds the heat template configuration to all nodes the first time you boot the nodes. Subsequent inclusion of these templates, such as updating the overcloud stack, does not run these scripts.
Procedure
- Create a first boot template file, hugepages.yaml, that runs a script to create the mounts for the huge page folders. You can use the OS::Heat::MultipartMime resource type to send the configuration script:

  heat_template_version: <version>

  description: >
    Huge pages configuration

  resources:
    userdata:
      type: OS::Heat::MultipartMime
      properties:
        parts:
        - config: {get_resource: hugepages_config}

    hugepages_config:
      type: OS::Heat::SoftwareConfig
      properties:
        config: |
          #!/bin/bash
          hostname | grep -qiE 'co?mp' || exit 0
          systemctl mask dev-hugepages.mount || true
          for pagesize in 2M 1G;do
            if ! [ -d "/dev/hugepages${pagesize}" ]; then
              mkdir -p "/dev/hugepages${pagesize}"
              cat << EOF > /etc/systemd/system/dev-hugepages${pagesize}.mount
          [Unit]
          Description=${pagesize} Huge Pages File System
          Documentation=https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt
          Documentation=https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
          DefaultDependencies=no
          Before=sysinit.target
          ConditionPathExists=/sys/kernel/mm/hugepages
          ConditionCapability=CAP_SYS_ADMIN
          ConditionVirtualization=!private-users

          [Mount]
          What=hugetlbfs
          Where=/dev/hugepages${pagesize}
          Type=hugetlbfs
          Options=pagesize=${pagesize}

          [Install]
          WantedBy = sysinit.target
          EOF
            fi
          done

          systemctl daemon-reload
          for pagesize in 2M 1G;do
            systemctl enable --now dev-hugepages${pagesize}.mount
          done

  outputs:
    OS::stack_id:
      value: {get_resource: userdata}
  The config script in this template performs the following tasks:

  - Filters the hosts to create the mounts for the huge page folders on, by specifying hostnames that match 'co?mp'. You can update the filter grep pattern for specific computes as required.
  - Masks the default dev-hugepages.mount systemd unit file to enable new mounts to be created using the page size.
  - Ensures that the folders are created first.
  - Creates systemd mount units for each pagesize.
  - Runs systemd daemon-reload after the first loop, to include the newly created unit files.
  - Enables each mount for the 2M and 1G pagesizes. You can update this loop to include additional pagesizes, as required.
- Optional: The /dev folder is automatically bind mounted to the nova_compute and nova_libvirt containers. If you have used a different destination for the huge page mounts, then you need to pass the mounts to the nova_compute and nova_libvirt containers:

  parameter_defaults:
    NovaComputeOptVolumes:
      - /opt/dev:/opt/dev
    NovaLibvirtOptVolumes:
      - /opt/dev:/opt/dev
- Register your heat template as the OS::TripleO::NodeUserData resource type in your ~/templates/firstboot.yaml environment file:

  resource_registry:
    OS::TripleO::NodeUserData: ./hugepages.yaml

  Important: You can only register the NodeUserData resources to one heat template for each resource. Subsequent usage overrides the heat template to use.

- Add your first boot environment file to the stack with your other environment files and deploy the overcloud:

  (undercloud)$ openstack overcloud deploy --templates \
   -e [your environment files] \
   -e /home/stack/templates/firstboot.yaml \
   ...
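After the nodes boot for the first time, you can optionally confirm on a Compute node that the mount units created by the script are active. The unit names below follow the 2M and 1G page sizes used in the example template:

  $ systemctl status dev-hugepages2M.mount dev-hugepages1G.mount
  $ mount | grep hugetlbfs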
3.4. Configuring Compute nodes to use file-backed memory for instances
You can use file-backed memory to expand your Compute node memory capacity, by allocating files within the libvirt memory backing directory as instance memory. You can configure the amount of host disk that is available for instance memory, and the location on the disk of the instance memory files.
The Compute service reports the capacity configured for file-backed memory to the Placement service as the total system memory capacity. This allows the Compute node to host more instances than would normally fit within the system memory.
To use file-backed memory for instances, you must enable file-backed memory on the Compute node.
Limitations
- You cannot live migrate instances between Compute nodes that have file-backed memory enabled and Compute nodes that do not have file-backed memory enabled.
- File-backed memory is not compatible with huge pages. Instances that use huge pages cannot start on a Compute node with file-backed memory enabled. Use host aggregates to ensure that instances that use huge pages are not placed on Compute nodes with file-backed memory enabled.
- File-backed memory is not compatible with memory overcommit.
- You cannot reserve memory for host processes using NovaReservedHostMemory. When file-backed memory is in use, reserved memory corresponds to disk space not set aside for file-backed memory. File-backed memory is reported to the Placement service as the total system memory, with RAM used as cache memory.
Prerequisites
- NovaRAMAllocationRatio must be set to "1.0" on the node and any host aggregate the node is added to.
- NovaReservedHostMemory must be set to "0".
Procedure
- Open your Compute environment file.
- Configure the amount of host disk space, in MiB, to make available for instance RAM, by adding the following parameter to your Compute environment file:

  parameter_defaults:
    NovaLibvirtFileBackedMemory: 102400

- Optional: To configure the directory to store the memory backing files, set the QemuMemoryBackingDir parameter in your Compute environment file. If not set, the memory backing directory defaults to /var/lib/libvirt/qemu/ram/.

  Note: You must locate your backing store in a directory at or above the default directory location, /var/lib/libvirt/qemu/ram/.

  You can also change the host disk for the backing store. For more information, see Changing the memory backing directory host disk.
- Save the updates to your Compute environment file.
- Add your Compute environment file to the stack with your other environment files and deploy the overcloud:

  (undercloud)$ openstack overcloud deploy --templates \
   -e [your environment files] \
   -e /home/stack/templates/<compute_environment_file>.yaml
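Optionally, you can confirm that the Placement service now reports the file-backed capacity as MEMORY_MB for the Compute node. This sketch assumes the osc-placement client plugin is available; <compute_fqdn> and <provider_uuid> are placeholders:

  (overcloud)$ openstack resource provider list --name <compute_fqdn>
  (overcloud)$ openstack resource provider inventory list <provider_uuid>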
3.4.1. Changing the memory backing directory host disk
You can move the memory backing directory from the default primary disk location to an alternative disk.
Procedure
- Create a file system on the alternative backing device. For example, enter the following command to create an ext4 filesystem on /dev/sdb:

  # mkfs.ext4 /dev/sdb

- Mount the backing device. For example, enter the following command to mount /dev/sdb on the default libvirt memory backing directory:

  # mount /dev/sdb /var/lib/libvirt/qemu/ram

  Note: The mount point must match the value of the QemuMemoryBackingDir parameter.