SAP HANA on Red Hat Virtualization: Best Practices Deployment Guide



Virtualization can vastly improve efficiency, free up resources, cut costs, and optimize existing investments. Use cases range from small server consolidations to large-scale optimization projects that require the highest levels of application-to-server density. To this end, the hypervisors that power virtualization play a key role in how data-driven applications are managed today and, as hybrid-cloud technologies continue to evolve, in the next generation of IT.

Red Hat Virtualization (RHV) is a tested, tried, and trusted virtualization solution from Red Hat that addresses the management of virtualized applications and environments. For virtualization, RHV provides a stable, scalable, and fully supported version of the Kernel-based Virtual Machine (KVM) hypervisor by way of either a full Red Hat Enterprise Linux host or the lightweight RHV host. Both host options can be used to run and manage business-critical applications and workloads across public sector, financial services, insurance, retail, and enterprise use cases that lend themselves well to consolidated, hybrid-cloud deployments. Many of these same deployments also have SLAs and requirements for transactional and analytic throughput that can only be serviced by high-powered database engines that perform equally well in bare-metal or virtualized deployments.

This guide provides an instructional path for those deploying SAP HANA as a supported workload on RHV. It includes information on SAP HANA hardware requirements and best practices and examples of SAP HANA and RHV specific configuration settings and deployment options to consider when using the two products together.

To install the RHV environment, refer to the Red Hat Virtualization Installation Guide.

SAP HANA Hardware Requirements

SAP HANA on Red Hat Virtualization has been tested by Red Hat for deployment on SAP Certified and Supported SAP HANA Hardware platforms. Before deploying SAP HANA for use with Red Hat Virtualization, check the current list of certified platforms to ensure the planned deployment will be supported according to SAP Notes. For more information, see SAP Note 2399995.

System Requirements for the RHV Hosts

Installing the RHV Host

The RHV 4.1 host includes the qemu-kvm-rhev package version 2.6. To install the RHV host, follow the instructions in the “Installing Hosts” section of the Red Hat Virtualization Installation Guide.

Verifying the RHV Host is Fully Enabled for Virtualization

The RHV host requires that specific hardware extensions be enabled to provide full virtualization functionality and performance. To determine whether a RHV host has the required hardware virtualization extensions enabled, run the checks described in the “KVM Hypervisor Requirements” section of the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide.
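
Those checks ultimately look for the Intel VT-x (vmx) or AMD-V (svm) flag in /proc/cpuinfo. As a minimal illustrative sketch, not a substitute for the official procedure, the flag can be detected like this (the helper name is invented here):

```python
def has_virt_extensions(cpuinfo_text):
    """Return 'vmx' (Intel VT-x) or 'svm' (AMD-V) if the flag appears in
    the given /proc/cpuinfo content, or None if neither is present."""
    for line in cpuinfo_text.splitlines():
        if line.startswith('flags'):
            flags = line.split(':', 1)[1].split()
            if 'vmx' in flags:
                return 'vmx'
            if 'svm' in flags:
                return 'svm'
    return None

# Example: has_virt_extensions(open('/proc/cpuinfo').read())
```

Note that the flag only shows hardware capability; the extensions must also be enabled in the server BIOS.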

Verifying System Requirements for the RHV Host

See the “Host Requirements” section of the Planning and Prerequisites Guide for the dedicated disk space and RAM specifications needed to run the RHV host.

Setting Kernel Boot Options

For optimal network performance, Red Hat recommends the use of SR-IOV, which requires specific IOMMU settings for the kernel. Accordingly, you must ensure that IOMMU functionality is enabled in your server’s BIOS. If you are unsure how to enable IOMMU in the BIOS, contact your hardware vendor for support.

Because qemu-kvm-rhev is a userspace process, it benefits from the use of static hugepages. Using static hugepages will reduce TLB misses and speed up the virtual machine (VM) memory management, which is essential for SAP HANA performance. Using 1GB hugepages is required for optimal performance of SAP HANA VMs running on RHV.

In addition, it is recommended that CPU power management state be disabled to improve overall CPU performance.


  1. Calculate the number of hugepages required based on the amount of memory you want to use for the SAP HANA VM. You must be able to evenly divide the number of 1 GB hugepages by the number of sockets or NUMA nodes on the system you will be using for the guest.

    For example, to run a 128GB SAP HANA VM, you must configure at least 128 1GB static hugepages. If you have four sockets or NUMA nodes on your RHV hypervisor, each virtual NUMA node for the guest would have 32GB of memory.

  2. Add the following parameters to the RHV host kernel command line, adjusting the hugepage count as required for your configuration:

    default_hugepagesz=1GB hugepagesz=1GB hugepages=[# hugepages]
  3. To limit the CPU to cstate 1, thus effectively disabling the deeper CPU power management states, add the following parameter to the host kernel command line:

    intel_idle.max_cstate=1
    For more information about CPU power management states, see

  4. To enable IOMMU, add the following parameter to the RHV host kernel command line:

    intel_iommu=on

  5. Add the parameters to the “kernel” tab during host deploy. (If added or changed later, the RHV host needs to be redeployed.) See “Adding a Host to the Red Hat Virtualization Manager” in the Installation Guide.

    To deploy a new RHV host, do the following in the RHV Manager:

    a. Click Hosts.
    b. Click New.
    c. Navigate to the Kernel tab.
    d. Add the parameters, separated by spaces, to the kernel-command-line, e.g.: default_hugepagesz=1GB hugepagesz=1GB hugepages=128 intel_idle.max_cstate=1 intel_iommu=on
    e. Continue with deployment of the RHV host.
    f. After deployment has finished, select Management -> Maintenance to put the host into maintenance.
    g. Once in maintenance, reboot the RHV host by selecting Management -> SSH Management -> Restart.
    h. After the reboot has finished, activate the RHV host by selecting Management -> Activate.

    To change an existing RHV host, do the following in the RHV Manager:

    a. Click Hosts.
    b. Highlight the relevant RHV host.
    c. Click Edit.
    d. Navigate to the Kernel tab.
    e. Add the parameters, separated by spaces, to the kernel command-line, e.g.: default_hugepagesz=1GB hugepagesz=1GB hugepages=128 intel_idle.max_cstate=1 intel_iommu=on
    f. Click OK.
    See the example below:

    Editing kernel parameters

    g. Select Management -> Maintenance to put the RHV host into maintenance.
    h. Once in maintenance, select Installation -> Reinstall to apply the new kernel parameters to the RHV host.
    i. After the re-installation has finished, set the RHV host into maintenance again by selecting Management -> Maintenance.
    j. Once in maintenance, reboot the RHV host by selecting Management -> SSH Management -> Restart.
    k. After the reboot has finished, activate the RHV host by selecting Management -> Activate.
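
The sizing rule from step 1 can be expressed as a small helper. This is an illustrative sketch (the function name is invented), shown with the 128GB, four-node example from above:

```python
def hugepages_for_vm(vm_memory_gb, numa_nodes):
    """Return (total 1GB hugepages, hugepages per virtual NUMA node) for a
    VM of vm_memory_gb gigabytes on a host with numa_nodes NUMA nodes.
    Raises if the memory cannot be divided evenly across the nodes."""
    if vm_memory_gb % numa_nodes:
        raise ValueError('%d GB does not divide evenly across %d NUMA nodes'
                         % (vm_memory_gb, numa_nodes))
    return vm_memory_gb, vm_memory_gb // numa_nodes

# 128 GB VM on a 4-socket host: 128 hugepages total, 32 GB per virtual node.
```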

As the RHV 4.1 hypervisor is based on RHEL 7, please refer to the RHEL 7 Administration Guide for more details on kernel command line parameters. You can verify that the parameters have been correctly applied by checking the current kernel command line on the RHV host:

[root@rhvh01 ~]# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-693.11.1.el7.x86_64 root=/dev/mapper/rhel_rhvh01-root ro crashkernel=auto rhgb quiet LANG=en_US.UTF-8 default_hugepagesz=1GB hugepagesz=1GB hugepages=128 intel_idle.max_cstate=1 intel_iommu=on
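
A quick way to confirm the parameters were applied is to compare /proc/cmdline against the expected list. The following hypothetical helper (not part of RHV) sketches that check; the hugepages count is deployment specific, so only its presence is tested:

```python
REQUIRED = ('default_hugepagesz=1GB', 'hugepagesz=1GB',
            'intel_idle.max_cstate=1', 'intel_iommu=on')

def missing_kernel_params(cmdline, required=REQUIRED):
    """Return the required kernel parameters absent from a /proc/cmdline
    string; also flags a missing hugepages=<count> entry."""
    present = set(cmdline.split())
    missing = [p for p in required if p not in present]
    if not any(p.startswith('hugepages=') for p in present):
        missing.append('hugepages=<count>')
    return missing
```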

Configuring the SAP HANA VM storage pool

The appliance hardware vendors provide storage within their pre-built SAP HANA systems. If you use the Tailored Data Center Integration (TDI) approach, SAP HANA requires a SAP HANA TDI certified storage subsystem (see SAP Certified and Supported SAP HANA Hardware for the list of certified storage systems).

Apply the file system layout and partitioning outlined in the SAP HANA Server Installation and Update Guide and the SAP HANA – Storage Requirements Guide, and check your hardware vendor’s TDI documentation for vendor-specific storage system setup requirements. Choose the chapters appropriate for your deployment. The mount points described in this document apply to scale-up and shared-file-system deployments; mount points for shared-disk deployments can be found in the SAP HANA Server Installation and Update Guide or in the hardware vendor’s documentation.

For more information about file system layout, partitioning, and sizing, see the “Recommended File System Layout” section in the SAP HANA Server Installation and Update Guide and the SAP HANA – Storage Requirements Guide.

For the chosen deployment option, ensure that the underlying storage meets the performance requirements or Key Performance Indicators (KPIs) as described in SAP Note 1943937. The Hardware Configuration Check Tool (HWCCT) described in SAP Note 1943937 provides tests and reports for additional scenarios to help you determine whether the hardware you plan to use meets the minimum performance criteria required to run SAP HANA in production deployments. SAP requires that you use HWCCT to check the hardware infrastructure setup according to the SAP HANA TDI approach.

Proper configuration of the SAP HANA VM storage pool can greatly improve performance. To configure the storage pool, do the following:

Create and add direct LUNs for /hana/data, /hana/log and /hana/shared to the SAP HANA VM or use NFS-shares for /hana/data, /hana/log and /hana/shared.

For the recommended size of disk space for every logical volume, refer to the SAP HANA TDI-Storage Requirements.

To attach storage to RHV, refer to the “Attaching Storage” section in the Red Hat Virtualization Installation Guide. To extend a storage domain and add direct attached LUNs, refer to the “Storage” chapter in the Red Hat Virtualization Administration Guide.

For best performance, also apply SAP Note 2538561 - High Amount of Disk Write I/O in SAP HANA 2.0 SPS00 - SPS02, as this will significantly reduce disk writes.
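
As a sanity check from inside the guest, you can verify that the three SAP HANA mount points described above are actually mounted. This sketch (the helper name is invented) parses /proc/mounts content:

```python
HANA_MOUNTS = ('/hana/data', '/hana/log', '/hana/shared')

def missing_hana_mounts(proc_mounts_text, required=HANA_MOUNTS):
    """Return the required SAP HANA mount points that do not appear as
    mount targets in the given /proc/mounts content."""
    mounted = set()
    for line in proc_mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 2:
            mounted.add(fields[1])  # second field is the mount point
    return [m for m in required if m not in mounted]

# Example: missing_hana_mounts(open('/proc/mounts').read())
```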

Configuring a RHV Cluster running SAP HANA

The RHV Manager allows users to administer and manage all RHV hosts and guests deployed on a virtualized infrastructure from a centralized, web browser-based console. All hosts belonging to a RHV cluster share some settings.

To avoid over committing memory resources, disable Memory Overcommit, Memory Balloon Optimization, and Kernel Samepage Merging (KSM) as follows:

  1. Click Clusters.
  2. Select the cluster where the RHV hypervisor running the SAP HANA VM is located.
  3. Click Edit.
  4. Select the Optimization tab.
  5. Click the None radio button for Memory Optimization.
  6. Uncheck the Enable Memory Balloon Optimization checkbox.
  7. Uncheck the Enable KSM checkbox.
  8. Click OK.

Memory options

Because RHV 4.1 does not recognize static hugepages, you must disable the Memory filter, which is enabled by default, in order to start the VM. The Memory filter is responsible for monitoring the following on a scheduled interval:

  • Free reserved memory: All started VMs must fit into this formula:
    (physical memory - host overhead) * overcommit ratio.
  • Free physical memory: qemu-kvm-rhev must be able to call malloc with the
    full requested memory.
  • Swap usage: RHV Manager will not use a host if it is swapping.

The Memory filter ensures that memory is not overcommitted.
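
The free-reserved-memory condition above can be illustrated with a few lines of arithmetic. This is a simplified sketch of the filter's formula, not RHV Manager code, and all parameter names are invented:

```python
def fits_free_reserved_memory(new_vm_gb, running_vms_gb,
                              physical_gb, host_overhead_gb,
                              overcommit_ratio=1.0):
    """Return True if all started VMs, including the new one, fit into
    (physical memory - host overhead) * overcommit ratio."""
    budget = (physical_gb - host_overhead_gb) * overcommit_ratio
    return new_vm_gb + running_vms_gb <= budget

# With overcommit disabled (ratio 1.0), a 128 GB SAP HANA VM on a 512 GB
# host with 8 GB overhead fits only if other VMs use at most 376 GB.
```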

If the memory filter is not disabled when starting the SAP HANA VM, an error message similar to the following will pop up when the VM is started:

Operation Canceled. Error while executing action.

The Memory filter is part of a scheduling policy, which must be cloned and modified from an existing policy as follows:


  1. Click Configure in the top right corner.

    The Configure button

  2. Click the Scheduling Policies tab.

  3. Select the current policy that your cluster is using (typically evenly_distributed, which ensures evenly distributed memory and CPU resource usage).
  4. Click Copy.

    Setting scheduling policies

  5. Give the new scheduling policy a descriptive name (for example, evenly_distributed_for_SAP_HANA).

  6. In the Filter Modules section, locate the Memory filter in the Enabled Filters list, then move it to the Disabled Filters area as shown below:

    Clone scheduling policy

  7. Click OK.

To apply this scheduling policy to your cluster:

  1. Click Clusters and select the cluster your SAP HANA systems will be running in.
  2. Click Edit.
  3. Click the Scheduling Policy tab.
  4. Select the newly created scheduling policy from the drop down list.
  5. Click OK.

Add scheduling policy

For more information about scheduling policies, see the “Scheduling Policies” section of the Red Hat Virtualization Administration Guide.

NOTE: Once the SAP HANA VM is started, the memory filter must be re-enabled so that other VMs in that Cluster are scheduled appropriately.

Enabling SAP HANA VMs to use hugepages

Because RHV 4.1 cannot create VMs using hugepages, the following configuration hook -- written, tested, and supported by Red Hat -- is needed to add the hugepages configuration to the VM at startup. RHV executes the existing hooks at the events you specify -- in this case before the VM starts -- to adjust the configuration generated by the RHV Manager.


  1. Log in to the RHV host and run the following commands as root to create the hook (if the indentation is not copied correctly, use the attached script instead):

    cd /usr/libexec/vdsm/hooks/before_vm_start
    cat > 50_highperf << 'EOF'
    import os
    import sys
    import traceback

    import hooking

    # highperf=1 (value doesn't matter)
    #
    # The 1GB hugepages must already be defined during boot-time of the
    # hypervisor, e.g. like
    # "default_hugepagesz=1GB hugepagesz=1GB hugepages=[# hugepages needed]"
    #
    # The VM also needs to have iothreads enabled in the RHV-M Web-UI.
    # The number of threads needs to be set to "1".
    #
    # As invariant TSC is needed, this flag is explicitly passed through to
    # the guest. Therefore CPU host-passthrough needs to be enabled in the
    # RHV-M Web-UI.

    if 'highperf' in os.environ:
        try:
            domxml = hooking.read_domxml()
            domain = domxml.getElementsByTagName('domain')[0]

            # back the VM memory with the statically allocated 1GB hugepages
            if len(domain.getElementsByTagName('memoryBacking')):
                sys.stderr.write('hugepages: VM already has hugepages\n')
            else:
                memoryBacking = domxml.createElement('memoryBacking')
                hugepages = domxml.createElement('hugepages')
                page = domxml.createElement('page')
                page.setAttribute('size', '1048576')
                hugepages.appendChild(page)
                memoryBacking.appendChild(hugepages)
                domain.appendChild(memoryBacking)
                sys.stderr.write('hugepages: adding hugepages tag\n')

            # pin the I/O thread and the emulator to host CPU 0
            if len(domain.getElementsByTagName('iothreads')):
                iothreadids = domxml.createElement('iothreadids')
                ids = domxml.createElement('iothread')
                ids.setAttribute('id', '1')
                iothreadids.appendChild(ids)
                domain.appendChild(iothreadids)

                if len(domain.getElementsByTagName('cputune')):
                    cputune = domain.getElementsByTagName('cputune')[0]
                else:
                    cputune = domxml.createElement('cputune')
                iothreadpin = domxml.createElement('iothreadpin')
                iothreadpin.setAttribute('iothread', '1')
                iothreadpin.setAttribute('cpuset', '0')
                cputune.appendChild(iothreadpin)
                emulatorpin = domxml.createElement('emulatorpin')
                emulatorpin.setAttribute('cpuset', '0')
                cputune.appendChild(emulatorpin)
                if not len(domain.getElementsByTagName('cputune')):
                    domain.appendChild(cputune)

            # pass invtsc and rdtscp through to the guest and emulate the
            # L3 cache
            if len(domain.getElementsByTagName('cpu')):
                cpu = domain.getElementsByTagName('cpu')[0]
                feature_tsc = domxml.createElement('feature')
                feature_tsc.setAttribute('policy', 'require')
                feature_tsc.setAttribute('name', 'invtsc')
                cpu.appendChild(feature_tsc)
                feature_rdt = domxml.createElement('feature')
                feature_rdt.setAttribute('policy', 'require')
                feature_rdt.setAttribute('name', 'rdtscp')
                cpu.appendChild(feature_rdt)
                feature_l3 = domxml.createElement('cache')
                feature_l3.setAttribute('level', '3')
                feature_l3.setAttribute('mode', 'emulate')
                cpu.appendChild(feature_l3)

            hooking.write_domxml(domxml)
        except Exception:
            sys.stderr.write('highperf hook: [unexpected error]: %s\n' %
                             traceback.format_exc())
            sys.exit(2)
    EOF
    chmod +x 50_highperf
  2. Log into the system where the RHV Manager is running and run the following command to trigger the hook:

    [root@rhv-m ~]# engine-config -m UserDefinedVMProperties='highperf=^[0-9]+$' --cver=4.1
  3. Optionally, verify that the property is set:

    [root@rhv-m ~]# engine-config -g UserDefinedVMProperties
    UserDefinedVMProperties: version: 3.6
    UserDefinedVMProperties: version: 4.0
    UserDefinedVMProperties: highperf=^[0-9]+$ version: 4.1
  4. Restart the ovirt-engine service for the changes to take effect:

    [root@rhv-m ~]# systemctl restart ovirt-engine.service

Virtual Machine Requirements

Configuring Virtual NUMA and Virtual CPU Pinning

For optimal performance, the VM configuration must be aligned to the CPU and memory resources of the underlying physical hardware. Virtual CPU pinning ensures that the NUMA configuration shown in the VM is identical to the one on the RHV host, which avoids any performance degradation caused by cross NUMA node calls.

CPU pinning ensures a virtual CPU thread is assigned to a specific physical CPU. To align the SAP HANA VM with the hardware it is running on, configure virtual CPU pinning and virtual NUMA as described below.

Disabling unneeded virtual devices

Minimize the memory overhead by disabling virtual devices that are not required as described below. If you require a graphical console, use VNC rather than SPICE.

Creating an SAP HANA VM

To create an SAP HANA VM configured to leverage the CPU pinning and virtual NUMA settings described previously in this section, follow the steps described below:


  1. Log in to the RHV host and run the following command to obtain the hardware topology:

    # lscpu

    Note the following parameters from the ‘lscpu’ command output:

    • CPU(s)
    • Thread(s) per core
    • Core(s) per socket
    • Socket(s)
    • Numa node(s)
    • CPUs per NUMA node

    The following example shows the output from a 4-socket Intel E5-8890 server with 1.5TB of total memory:

    # lscpu
    Architecture:         x86_64
    CPU op-mode(s):       32-bit, 64-bit
    Byte Order:           Little Endian
    CPU(s):               144
    On-line CPU(s) list:  0-143
    Thread(s) per core:   2
    Core(s) per socket:   18
    Socket(s):            4
    NUMA node(s):         4
    Vendor ID:            GenuineIntel
    CPU family:           6
    Model:                63
    Model name:           Intel(R) Xeon(R) CPU E5-8890 @ 2.50GHz
    Stepping:             3
    CPU MHz:              2899.902
    BogoMIPS:             5020.78
    Virtualization:       VT-x
    L1d cache:            32K
    L1i cache:            32K
    L2 cache:             256K
    L3 cache:             46080K
    NUMA node0 CPU(s):    0-17,72-89
    NUMA node1 CPU(s):    18-35,90-107
    NUMA node2 CPU(s):    36-53,108-125
    NUMA node3 CPU(s):    54-71,126-143

    NOTE: The output of the lscpu command varies depending on the hardware vendor. Users will need an understanding of the format used by their hardware vendor to properly interpret the output of this query.

  2. In the RHV Manager, click the Virtual Machines tab.

  3. Click New VM.
  4. In the lower left corner, click Show Advanced Options as shown below.

    Show advanced options

  5. At a minimum, fill in the following parameters in the General tab as follows:

    a. Operating System: Red Hat Enterprise Linux 7.x x86_64.
    b. Optimized for: Server.
    c. Name: < Name of your SAP HANA VM >.


  6. Click the System tab and set the following parameters:

    a. Enter the Memory Size for your SAP HANA VM.

    NOTE: The memory allocated for the SAP HANA VM will be distributed equally across the virtual NUMA nodes. Accordingly, the total memory in gigabytes needs to be evenly divisible by the number of virtual NUMA Nodes.

    b. Enter the Maximum Memory equal to Memory Size.
    c. Enter the number of Total Virtual CPUs, which needs to be lower than or equal to the CPU(s) from the lscpu command output obtained earlier from the RHV host.
    d. Click Advanced Parameters.
    e. Adjust the CPU topology according to the lscpu command output:

    • Virtual Sockets equal to Socket(s) from the lscpu command output.
    • Cores per Virtual Socket equal to Core(s) per socket from the lscpu command output minus one, reserving one core per socket for the RHV host.
    • Threads per Core equal to Thread(s) per core from the lscpu command output.

    VM system settings

  7. Click the Console tab and set the following:

    a. Check Headless Mode.
    b. Check Enable VirtIO serial console.

    Or, if you prefer a graphical console:

    a. Make sure Headless Mode is unchecked.
    b. Select VNC for Graphics protocol.
    c. Uncheck Soundcard enabled.
    d. Uncheck Enable SPICE file transfer.

  8. Click the Host tab and set the following parameters:

    a. Select the Specific host radio button in the Start running on section.
    b. Select the RHV host from the dropdown list that appears.
    c. Select Do not allow migration from the dropdown list for Migration Mode.
    d. Check Pass-Through Host CPU.
    e. Set the NUMA Node Count equal to the Numa node(s) from the lscpu output.
    f. Select Strict from the dropdown list for Tune Mode.

    New VM host settings

    g. Click NUMA Pinning.
    h. Drag and drop the virtual NUMA nodes onto the corresponding physical NUMA nodes. As shown below, the NUMA node numbers for physical and virtual NUMA nodes must match.
    i. Click OK.

    Arrows directing NUMA pinning

    NUMA nodes pinned

  9. Click the Resource Allocation tab and set the following:

    a. Ensure Disabled is selected from the dropdown list for CPU shares.
    b. Enter the virtual CPU pinning topology as pairs in the form <vcpu>#<pcpu>, separated by underscores:

    0#1_1#2_2#3 ... etc.



    NUMA node0 CPU(s):     0-17,72-89
    NUMA node1 CPU(s):     18-35,90-107
    NUMA node2 CPU(s):     36-53,108-125
    NUMA node3 CPU(s):     54-71,126-143

    Corresponding CPU pinning topology line:


    NOTE: Be sure to reserve the first core/thread pair of each physical socket for the RHV host as shown in the previous example.

    NOTE: See the script in the Appendix that can calculate the required topology.

    c. Ensure the Memory Balloon Device Enabled checkbox is not checked.
    d. Check the IO Threads Enabled checkbox.

    Resource Allocation

  10. Click the Custom Properties tab and set the following:

    a. Select highperf from the Dropdown list.
    b. Enter 1 in the textbox that appears right of the dropdown list.

    Custom Properties

  11. Click OK to create the SAP HANA VM.

Attach required storage as described in Creating a Linux Virtual Machine in the Virtual Machine Management Guide.
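
The appendix script mentioned in step 9 is not reproduced in this excerpt. The following is a hypothetical sketch of how such a pinning line could be computed from the host's NUMA CPU lists, reserving the first core/thread pair of each node for the RHV host as the note recommends; it assumes thread siblings are listed as two equal halves of each node's CPU range (e.g. 0-17,72-89):

```python
def pinning_line(numa_nodes, threads_per_core=2):
    """Build an RHV CPU pinning string of "vcpu#pcpu" pairs joined by "_".

    numa_nodes is a list of per-node physical CPU lists in lscpu order,
    e.g. list(range(0, 18)) + list(range(72, 90)) for "0-17,72-89".
    The first core/thread pair of each node is skipped, reserving it for
    the RHV host; consecutive vCPUs land on thread siblings.
    """
    pairs = []
    vcpu = 0
    for cpus in numa_nodes:
        cores = len(cpus) // threads_per_core
        for core in range(1, cores):          # core 0 reserved for the host
            for sibling in range(threads_per_core):
                # sibling threads of one core sit `cores` entries apart
                pairs.append('%d#%d' % (vcpu, cpus[sibling * cores + core]))
                vcpu += 1
    return '_'.join(pairs)
```

For NUMA node0 (0-17,72-89) this yields 0#1_1#73_2#2_3#74 and so on, leaving CPUs 0 and 72 free for the host.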

Setting up the Network for the VM

To achieve optimum network performance, the SAP HANA VM requires SR-IOV enabled network interfaces. With SR-IOV devices you can achieve performance levels close to bare metal, because SR-IOV does not require virtual bridges to communicate with the network.


The following are required for SR-IOV:

  • SR-IOV capable network cards
  • IOMMU enabled in the kernel, as described earlier in this document
  • IOMMU enabled in the BIOS of the server

The following procedure describes how to create a SR-IOV enabled virtual network. For more information about how to do this, see the “Enabling Passthrough on a vNIC Profile” section of the Red Hat Virtualization Administration Guide.


  1. Click the Networks tab.
  2. Click New to create a new virtual network.
  3. Enter the network name.
  4. Ensure that VM-Network is checked.
  5. Click OK.

    VM Network

  6. Highlight the just-created virtual network.

  7. Click vNIC Profile in the lower tab section.
  8. Click New.

    NOTE: You cannot change the automatically created vNIC profile.

  9. Enter a descriptive name into the Name field.

  10. Select No Network Filter for Network Filter.
  11. Check Passthrough.
  12. Check Migratable.
  13. Click OK.

New vNIC profile

After the virtual network has been created, it needs to be added to an SR-IOV device. To do this, virtual functions must be created on the network card to which the new virtual network will be attached, as described in the following procedure:


  1. Click the Hosts tab.
  2. Highlight the SAP HANA host.
  3. Click the Network Interfaces tab in the lower tab section.
  4. Click Setup Host Networks.
  5. Click the pencil icon for the required SR-IOV network card.
  6. Enter the number of required virtual functions in Number of VFs.
  7. Click OK.

    NOTE: SR-IOV capable network cards have an “SR-IOV” icon next to them.

  8. Click OK in the Network Interface dialog box.

    NOTE: The virtual functions, and supporting interfaces, will appear for the host.

    Edit number of VFs

  9. Select Setup Host Networks again.

  10. Drag and drop the previously created virtual network to one of the newly created virtual functions of the network card.

    NOTE: Virtual Functions have a “VFunc” icon next to them.

  11. Click OK after you are finished setting up the network for the host.

Set up host network

After the virtual network for SR-IOV has been assigned to the host, the last step is to add the virtual function to the VM. To achieve this, follow this procedure:


  1. Click the Virtual Machines tab.
  2. Highlight the previously created SAP HANA VM.
  3. Click the Network Interfaces tab in the lower tab section.
  4. Click New to add a new interface.
  5. In the popup window specify the following settings:

    a. Profile: The Virtual NIC Profile, created earlier.

    NOTE: Be aware that the default profile for this network is also listed. Please make sure you select the profile you added earlier.

    b. Type: PCI-Passthrough

  6. Click OK.

New network interface

If required, repeat the steps above to add more virtual network interfaces to your environment.

After attaching the network, start the VM and install the Red Hat Enterprise Linux (RHEL) operating system on it.

SAP HANA on Red Hat Enterprise Linux Installation Requirements

This chapter describes how to configure and optimize a RHEL guest on RHV for SAP HANA.

Installing SAP HANA on Red Hat Enterprise Linux

Review the required documentation and guides at the links below before starting an SAP HANA deployment. The documentation contains information about supportability, configuration, recommended operating system settings, and guidelines for running SAP HANA on Red Hat Enterprise Linux.

SAP HANA notes, settings and required information

  • SAP Note 2009879 - SAP HANA Guidelines for Red Hat Enterprise Linux Operating System (includes the Installation Guide in the attachment)
  • SAP Note 2292690 - SAP HANA DB: Recommended OS settings for Red Hat Enterprise Linux 7.2
  • SAP Note 1943937 - Hardware Configuration Check Tool - Central Note (contains the user guide for HWCCT)
  • SAP Note 1788665 - SAP HANA Support for virtualized / partitioned (multi-tenant) environments
  • SAP Note 1943937 - Hardware Configuration Check Tool - Central Note

Red Hat KnowledgeBase Articles

Performance Optimization for SAP HANA running on a guest

To ensure optimal integration of the SAP HANA VM with RHV, install and enable the ovirt-guest-agent from the rhel-7-server-rh-common-rpms repository:

yum install --enablerepo rhel-7-server-rh-common-rpms -y ovirt-guest-agent-common
systemctl start ovirt-guest-agent
systemctl enable ovirt-guest-agent

Activating sap-hana Tuned Profile on SAP HANA VM

The virtual-guest profile is automatically selected when you install Red Hat Enterprise Linux 7 in a RHV guest, as it is generally recommended for virtual machines. For SAP HANA, however, the sap-hana profile is required. To enable this profile, run the following command:

tuned-adm profile sap-hana

For more information about tuned profiles, see the “tuned and tuned-adm” section of the Red Hat Enterprise Linux Virtualization Tuning and Optimization Guide.

RHV Hypervisor/KVM Guest Timing Management

SAP HANA performance benefits from the use of the RDTSC hardware timer. To verify that RDTSC is being used, follow the procedure described below.


  1. Switch to the following directory: /hana/shared/<SID>/HDB00/<systemname>/trace/DB_<TenantSID>/
  2. Check the indexserver trace file for the timer as described below:

    # grep Timer.cpp indexserver_<system name>.<30003>.<001>.trc    

If the hardware timer is in use, output similar to the following will be displayed:

[4548]{-1}[-1/-1] 2017-03-16 08:43:36.137096 i Basis Timer.cpp(00642) : Using RDTSC for HR timer

If the hardware timer is not in use, the log will indicate that a software fallback is being used, resulting in a significant performance impact:

[4330]{-1}[-1/-1] 2017-04-18 10:50:59.068073 w Basis Timer.cpp(00718) : Fallback to system call for HR timer

For more details on guest timing management, see the “KVM Guest Timing Management” section of the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide.

Verify CPU/NUMA Settings

To verify the vCPU/vNUMA topology, run the lscpu command on both the RHV host and the SAP HANA guest.

On the RHV host, the output should look similar to the following:


# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                144
On-line CPU(s) list:   0-143
Thread(s) per core:    2
Core(s) per socket:    18
Socket(s):             4
NUMA node(s):          4

L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              46080K
NUMA node0 CPU(s):     0-17,72-89
NUMA node1 CPU(s):     18-35,90-107
NUMA node2 CPU(s):     36-53,108-125
NUMA node3 CPU(s):     54-71,126-143

Inside the SAP HANA guest, the output should look similar to the following:

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                136
On-line CPU(s) list:   0-135
Thread(s) per core:    2
Core(s) per socket:    17  (NOTE: 1 core per socket is used for IO, admin)
Socket(s):             4
NUMA node(s):          4

L1d cache:             32K
L1i cache:             32K
L2 cache:              4096K
L3 cache:              16384K
NUMA node0 CPU(s):     0-33
NUMA node1 CPU(s):     34-67
NUMA node2 CPU(s):     68-101
NUMA node3 CPU(s):     102-135

Check the following values from the output:

  1. CPU(s) in the VM must be lower or equal to the host (in this example 136<144).
  2. Thread(s) per Core must be identical (in this case 2=2).
  3. Core(s) per socket in the VM must be lower or equal to the host (in this example 17<18).
  4. Socket(s) must be identical (in this case 4=4).
  5. NUMA node(s) must be identical (in this case 4=4).
  6. L3 cache must be present in the VM (in this example 16384K)
  7. NUMA node# CPU(s) need to match the CPU Pinning done earlier (each vCPU pinned to a physical CPU must reside in the same NUMA node). In the above example, vCPUs #0-33 have been pinned to physical CPUs #1-17 and #73-89. If the CPUs are not pinned as described in this step, recheck the NUMA and CPU pinning outlined in “Configuring Virtual NUMA and Virtual CPU Pinning”.
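
The checklist above can be automated. This illustrative helper (not part of any SAP or Red Hat tool) compares topology values taken from the two lscpu outputs:

```python
def check_vm_topology(host, guest):
    """Compare lscpu-derived topology dicts (keys: cpus, threads_per_core,
    cores_per_socket, sockets, numa_nodes) and return a list of problems
    following the checklist above; an empty list means the topology is OK."""
    problems = []
    if guest['cpus'] > host['cpus']:
        problems.append('VM has more CPUs than the host')
    if guest['threads_per_core'] != host['threads_per_core']:
        problems.append('threads per core differ')
    if guest['cores_per_socket'] > host['cores_per_socket']:
        problems.append('VM has more cores per socket than the host')
    if guest['sockets'] != host['sockets']:
        problems.append('socket counts differ')
    if guest['numa_nodes'] != host['numa_nodes']:
        problems.append('NUMA node counts differ')
    return problems
```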

Further Considerations

Virtualization Limits for RHV

Refer to the guidelines detailed in the previous sections of this guide and in the following document for resource planning, sizing and dedicated memory for the RHV host and guest VMs:

Virtualization limits for Red Hat Virtualization


Example libvirt XML file for a SAP HANA VM running on a RHV host

The example below illustrates a libvirt XML file generated by RHV for a 32 vCPU based SAP HANA VM on a 64 CPU RHV host. This can be obtained by running virsh -r dumpxml on the RHV host that is running the SAP HANA VM:

<domain type='kvm' id='68'>
  <metadata xmlns:ovirt=""/>
  <memory unit='KiB'>134217728</memory>
  <currentMemory unit='KiB'>134217728</currentMemory>
  <memoryBacking>
    <hugepages>
      <page size='1048576' unit='KiB'/>
    </hugepages>
  </memoryBacking>
  <vcpu placement='static' current='32'>128</vcpu>
  <iothreadids>
    <iothread id='1'/>
  </iothreadids>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='33'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='34'/>
    <vcpupin vcpu='4' cpuset='3'/>
    <vcpupin vcpu='5' cpuset='35'/>
    <vcpupin vcpu='6' cpuset='4'/>
    <vcpupin vcpu='7' cpuset='36'/>
    <vcpupin vcpu='8' cpuset='9'/>
    <vcpupin vcpu='9' cpuset='41'/>
    <vcpupin vcpu='10' cpuset='10'/>
    <vcpupin vcpu='11' cpuset='42'/>
    <vcpupin vcpu='12' cpuset='11'/>
    <vcpupin vcpu='13' cpuset='43'/>
    <vcpupin vcpu='14' cpuset='12'/>
    <vcpupin vcpu='15' cpuset='44'/>
    <vcpupin vcpu='16' cpuset='17'/>
    <vcpupin vcpu='17' cpuset='49'/>
    <vcpupin vcpu='18' cpuset='18'/>
    <vcpupin vcpu='19' cpuset='50'/>
    <vcpupin vcpu='20' cpuset='19'/>
    <vcpupin vcpu='21' cpuset='51'/>
    <vcpupin vcpu='22' cpuset='20'/>
    <vcpupin vcpu='23' cpuset='52'/>
    <vcpupin vcpu='24' cpuset='25'/>
    <vcpupin vcpu='25' cpuset='57'/>
    <vcpupin vcpu='26' cpuset='26'/>
    <vcpupin vcpu='27' cpuset='58'/>
    <vcpupin vcpu='28' cpuset='27'/>
    <vcpupin vcpu='29' cpuset='59'/>
    <vcpupin vcpu='30' cpuset='28'/>
    <vcpupin vcpu='31' cpuset='60'/>
    <emulatorpin cpuset='0'/>
    <iothreadpin iothread='1' cpuset='0'/>
  </cputune>
  <numatune>
    <memnode cellid='0' mode='strict' nodeset='0'/>
    <memnode cellid='1' mode='strict' nodeset='1'/>
    <memnode cellid='2' mode='strict' nodeset='2'/>
    <memnode cellid='3' mode='strict' nodeset='3'/>
  </numatune>
  <sysinfo type='smbios'>
      <entry name='manufacturer'>Red Hat</entry>
      <entry name='product'>RHEV Hypervisor</entry>
      <entry name='version'>7.4-25.el7rhgs</entry>
      <entry name='serial'>80CE89B1-719D-E211-8E94-001E673F08F8</entry>
      <entry name='uuid'>75114a01-8bbb-4886-99d6-a478f295f93b</entry>
    <type arch='x86_64' machine='pc-i440fx-rhel7.3.0'>hvm</type>
    <bios useserial='yes'/>
    <smbios mode='sysinfo'/>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='16' cores='4' threads='2'/>
    <cache level='3' mode='emulate'/>
    <feature policy='require' name='invtsc'/>
    <feature policy='require' name='rdtscp'/>
      <cell id='0' cpus='0-7' memory='33554432' unit='KiB'/>
      <cell id='1' cpus='8-15' memory='33554432' unit='KiB'/>
      <cell id='2' cpus='16-23' memory='33554432' unit='KiB'/>
      <cell id='3' cpus='24-31' memory='33554432' unit='KiB'/>
  <clock offset='variable' adjustment='0' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source startupPolicy='optional'/>
      <target dev='hdc' bus='ide'/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/00000001-0001-0001-0001-00000000032b/9183be0d-4a07-4ac0-88de-3abbbd82bfff/images/bf503613-335e-4a1f-9268-83570d9eb187/2bea51b0-56b7-4075-af0e-ce25e9c3a8d8'/>
      <target dev='sda' bus='scsi'/>
      <boot order='1'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    <controller type='usb' index='0' model='piix3-uhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver iothread='1'/>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    <controller type='virtio-serial' index='0' ports='16'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    <interface type='hostdev'>
      <mac address='00:1a:4a:60:00:92'/>
      <driver name='vfio'/>
        <address type='pci' domain='0x0000' bus='0x04' slot='0x10' function='0x6'/>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    <serial type='unix'>
      <source mode='bind' path='/var/run/ovirt-vmconsole-console/75114a01-8bbb-4886-99d6-a478f295f93b.sock'/>
      <target port='0'/>
      <alias name='serial0'/>
    <console type='unix'>
      <source mode='bind' path='/var/run/ovirt-vmconsole-console/75114a01-8bbb-4886-99d6-a478f295f93b.sock'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/'/>
      <target type='virtio' name='com.redhat.rhevm.vdsm'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    <memballoon model='none'>
      <alias name='balloon0'/>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <alias name='rng0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
  <seclabel type='dynamic' model='dac' relabel='yes'>
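A saved copy of such a dump can be sanity-checked with standard tools. The sketch below (not part of the original guide; the file name sapvm.xml is arbitrary) runs two checks against a short inline sample; on a live host you would instead create the file with virsh -r dumpxml <vm-name> > sapvm.xml.

```shell
# Inline sample standing in for a real dump obtained via virsh -r dumpxml.
cat > sapvm.xml <<'EOF'
<domain type='kvm'>
  <vcpu placement='static' current='4'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='33'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='34'/>
  </cputune>
</domain>
EOF

# The number of <vcpupin> entries should equal the VM's current vCPU count:
pins=$(grep -c '<vcpupin ' sapvm.xml)
vcpus=$(sed -n "s/.*current='\([0-9]*\)'.*/\1/p" sapvm.xml)
echo "pinned=$pins current=$vcpus"
```

If the two numbers differ, some vCPUs are floating rather than pinned, and the NUMA/CPU pinning configuration should be rechecked.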

Calculate CPU Pinning

The following script will assist you in generating the appropriate vCPU pinning. Copy and paste the output of this script into the CPU pinning field in the Resource Allocation tab.


#!/bin/bash

usage() {
cat << EOR
Usage: $0 [num_vcpu]

This script creates CPU pinning according to the host CPU topology up to the max
number of CPUs, always leaving the first core/thread pair of each physical
socket to the hypervisor for iothreads and management tasks.
EOR
}

if [[ "$1" = "--help" || "$1" = "-h" ]]; then
  usage
  exit 0
fi

num_sockets=$(lscpu | awk -F: '/^Socket\(s\)/ { print $2 }' | tr -d " ")
num_threads=$(lscpu | awk -F: '/^Thread\(s\) per core/ { print $2 }' | tr -d " ")
num_cpus=$(lscpu | awk -F: '/^CPU\(s\)/ { print $2 }' | tr -d " ")

if [[ -n "$1" ]]; then
  if [[ $1 -gt $num_cpus ]]; then
    echo "Requested vCPUs ($1) are higher than physical CPU count ($num_cpus)!"
    exit 2
  fi
  max_cnt=$(( $1 / $num_sockets / $num_threads ))
  PARM="-v CNT=$max_cnt"
fi

numactl --hardware | awk -F: -v THREADS=$num_threads -v SOCKETS=$num_sockets $PARM '
    BEGIN {
        num_threads=THREADS;
        if ( CNT ) {
            cnt = CNT;
            print ("limiting to "cnt" cpus/numa node. ");
        } else {
            cnt = 0;
            print ("Using all cpus/numa node. ");
        }
    }
    /node [0-9]+ cpus/ {
        # split the CPU list of this NUMA node into array threadnum
        num_vcpu=split($2,threadnum," ");
        num_core=num_vcpu / num_threads;
        # leave the first core/thread pair of each node to the hypervisor
        if ( cnt == 0 ) { cnt=num_core-1 }
        for (i=2; i<=cnt+1; i++) {
            for (t=0; t<num_threads; t+=2) {
                # pin each vCPU 1:1 to a physical CPU and its hyperthread
                printf ("%d#%s_", vcpu++, threadnum[i]);
                printf ("%d#%s_", vcpu++, threadnum[i+num_core]);
            }
        }
    }
    END { printf("\n"); }' | sed 's/.$//'
exit $?


  • All CPUs in use for the SAP HANA VM:

    # ./print_cpu_pinning
    Using all cpus/numa node.

  • This example uses 32 vCPUs (the RHV host needs to have at least 32 physical cores):

    # ./print_cpu_pinning 32
    limiting to 4 cpus/numa node.

NOTE: The first physical CPU core of each physical socket is not pinned and as a result is available for the hypervisor and the iothreads.

Network Configuration and Monitoring

The network configuration can have a large effect on overall system performance. Generally speaking, poor network performance will cause increased latency between the application server(s) and the HANA database server and can result in poor response time and/or longer time to complete a given transaction.

For example, overall system performance can be affected if the physical distance between the database server and application server(s) is too far or there are too many hops between these systems. Poor performance can also result from other network traffic consuming bandwidth needed by SAP HANA.

To achieve consistently good performance, the network segment between the application server(s) and database server should not exceed 50% of theoretical capacity. For example, a 1Gbit NIC and related network segment should not exceed 500 Megabits per second. If the amount of network traffic exceeds that value, a more capable network (10Gbit or more) is required for best performance.

Monitoring the total network traffic on a given network segment is typically done with tools provided by the network switch vendor or a network sniffer.

One tool that can be used to monitor network traffic for your system is sar. Running

sar -n DEV

provides per-interface utilization data (see the rxkB/s and txkB/s columns for your NIC) that can help you decide whether the network traffic to your server is causing a performance bottleneck. See the man page for sar for a description of the fields in its output.
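The arithmetic behind the 50% capacity rule can be sketched as follows. The throughput values are illustrative stand-ins for the rxkB/s and txkB/s averages that sar reports; on a live system you would substitute the measured numbers for your interface.

```shell
rx_kbs=28000          # average rxkB/s reported by sar (sample value)
tx_kbs=35000          # average txkB/s reported by sar (sample value)
link_mbit=1000        # NIC speed in Mbit/s (here: 1 Gbit)

# kB/s -> Mbit/s: multiply by 8 bits per byte, divide by 1000
mbit=$(awk -v rx=$rx_kbs -v tx=$tx_kbs 'BEGIN { printf "%.0f", (rx + tx) * 8 / 1000 }')
limit=$((link_mbit / 2))
if [ "$mbit" -gt "$limit" ]; then
    echo "segment over budget: ${mbit} Mbit/s > ${limit} Mbit/s - consider a faster network"
else
    echo "segment within budget: ${mbit} Mbit/s <= ${limit} Mbit/s"
fi
```

With these sample numbers the segment carries 504 Mbit/s, just over the 500 Mbit/s budget of a 1 Gbit link, so a 10 Gbit network would be advisable.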


Is there a mailing list (or something similar) where one gets updates about SAP on RHEL/RHV?

Hi Klaas,

unfortunately not. The only way is checking the Recommended Setup document, the News and of course the SAP Notes. Nevertheless, I will bring this up internally to see if a sort of announce-list would be reasonable.

Thanks! Martin

The indentation of the 50_highperf snippet is not right. For example, "if len(domain.getElementsByTagName('memoryBacking')):" and "sys.stderr.write('hugepages: VM already have hugepages\n')" have the same indentation, but the part that's inside that if needs to be indented further.

same problem with many more ifs in that snippet

Hi Klaas,

that is probably due to the Markup. It should be correct. We will add a link to download the hook directly.

Thanks for the feedback, Martin

I have some more questions about the CPU setup: The CPU pinning topology of the script does not reflect the one in #9, vcpu0#pcpu1_vcpu1#pcpu2_vcpu2#pcpu3 vs. vcpu0#pcpu1,htcpu1_..., which is also what you entered into the screenshot. Are they both correct, or does the script need fixing?

And another note: the example doesn't reflect your test setup; the NUMA node count should be 4.

I also noticed that in the earlier version of the cpu pinning script this happens: 1#1,41_2#2,42_3#3,43_4#4,44_5#5,45_6#6,46_7#7,47_8#8,48_9#9,49_10#10,50_11#11,51_12#12,52_13#13,53_14#14,54_15#15,55_16#16,56_17#17,57_18#18,58_19#19,59_20#21,61_21#22,62_22#23,63_23#24,64_24#25,65_25#26,66_26#27,67_27#28,68_28#29,69_29#30,70_30#31,71_31#32,72_32#33,73_33#34,74_34#35,75_35#36,76_36#37,77_37#38,78_38#39,79 and the new cpu pinning script shows this for the same host: 0#1,41_1#1,41_2#2,42_3#2,42_4#3,43_5#3,43_6#4,44_7#4,44_8#5,45_9#5,45_10#6,46_11#6,46_12#7,47_13#7,47_14#8,48_15#8,48_16#9,49_17#9,49_18#10,50_19#10,50_20#11,51_21#11,51_22#12,52_23#12,52_24#13,53_25#13,53_26#14,54_27#14,54_28#15,55_29#15,55_30#16,56_31#16,56_32#17,57_33#17,57_34#18,58_35#18,58_36#19,59_37#19,59_38#21,61_39#21,61_40#22,62_41#22,62_42#23,63_43#23,63_44#24,64_45#24,64_46#25,65_47#25,65_48#26,66_49#26,66_50#27,67_51#27,67_52#28,68_53#28,68_54#29,69_55#29,69_56#30,70_57#30,70_58#31,71_59#31,71_60#32,72_61#32,72_62#33,73_63#33,73_64#34,74_65#34,74_66#35,75_67#35,75_68#36,76_69#36,76_70#37,77_71#37,77_72#38,78_73#38,78_74#39,79_75#39,79 is the old way incorrect and I need to change it on my running vms?

Hi Klaas,

as stated in my previous update, both should work fine. Please also note that this is the setup that achieves the best performance. Having no pinning will reduce the overall performance for SAP HANA, but it will have no further effect.

Thanks! Martin

Hi, I looked at the libvirt docs and the 'old' script is definitely wrong. Edit: it also starts with vcpu1 instead of vcpu0 :) /proc/cpuinfo within the VM shows that CPUs and their hyperthreads are ordered like this:

cat /proc/cpuinfo |egrep "processor|physical id|core id" | sed 's/^processor/\nprocessor/g'

processor       : 0
physical id     : 0
core id         : 0

processor       : 1
physical id     : 0
core id         : 0

processor       : 2
physical id     : 0
core id         : 1

processor       : 3
physical id     : 0
core id         : 1

This means I need to pin vcpu0 and vcpu1 to the same physical core; the old script did not do that (1#cpu1,htcpu1_2#cpu2,htcpu2).

so I asked in the libvirt irc channel and I was told a 1:1 pin would be advisable. This means I would pin like this: 0#cpu1_1#htcpu1_...

That is what you use in the text (but it is wrong in the screenshot and in the print_cpu_pinning bash script). Maybe you could correct those two for future reference :)

For me this means I have to change all VMs I built using the old guide :)

Greetings Klaas

Hi Klaas,

thanks for the valuable feedback. We will check the script and add a 1-on-1 mapping of vCPUs to hyperthreads. As said, this is for optimum performance, so I agree a 1-on-1 mapping might gain another fraction of a percent. We ran several tests with different kinds of pinnings, and there was no real measurable difference in performance between them.

Anyways the changed script (I will update here) has a 1-on-1 mapping:

[root@inf21 ~]# ./print_cpu_pinning2 
Using all cpus/numa node.
[root@inf21 ~]# 

Does this satisfy your observations? Thanks! Martin

Hello Martin, that seems like the output I was expecting :) that's the output for a 4*8core system, right?


PS: Will you publish your performance testing? did you also compare sr-iov network performance vs virtio network performance?

Hi Klaas,

Indeed. The system is a 4 x 8 core system.

Unfortunately, we are contractually not allowed by SAP to publish these tests. We could do an official SAPS test, but we haven't done this yet. All the other performance testing needed for certification is not allowed to be published.

Thanks! Martin

Hi Klaas,

thanks for the detailed review here. Regarding the CPU pinning: the important thing is that you do not pin any vCPU to the pCPU that is doing the iothread work, and the vCPU/pCPU pinning should also reflect the NUMA boundaries.

So in short: Both are fine.

Hope that explains it. Thanks! Martin
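For reference, the two pinning notations discussed in this thread differ only mechanically. A hypothetical helper (the function name is made up, not from the guide) to rewrite a pair-style string such as "0#1,41_1#2,42" into the strict 1:1 form could look like this:

```shell
# Rewrite pair-style pinning (each vCPU pinned to a core plus its hyperthread)
# into 1:1 pinning (consecutive vCPUs pinned to core and hyperthread separately).
to_one_on_one() {
    echo "$1" | tr '_' '\n' | awk -F'[#,]' '
        NF == 3 { printf "%d#%s_", v++, $2; printf "%d#%s_", v++, $3 }
        END     { print "" }' | sed 's/_$//'
}

to_one_on_one "0#1,41_1#2,42"
# prints 0#1_1#41_2#2_3#42
```

As noted above, both notations are fine as long as the NUMA boundaries and the reserved hypervisor CPUs are respected; this helper only changes the representation.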

I have another question: I'm trying to figure out how I could stop the Manager from spamming "used memory exceeded threshold" messages. I have a hypervisor with 1.5 TB of memory, and about 98% of that is used by hugepages for SAP HANA VMs. It still has around 40-50 GB of memory left for the hypervisor, which should be plenty.

So the option I could figure out on my own: I can adjust the global limit of LogMaxPhysicalMemoryUsedThresholdInPercentage but I'm not sure if that is a good idea for my "normal" hypervisors that often have way less memory.

My questions are:

  • Can I adjust the limit on a per-hypervisor basis?
  • Can I skip the memory check for certain hypervisors, or skip all checks for certain hypervisors?
  • Would a check for free memory in GB make more sense than a percentage check?

Greetings Klaas

Hi Klaas,

unfortunately we do not have such a feature (yet). I believe this makes sense, but probably rather on a per-cluster basis than on a per-host basis.

Would you mind contacting Support to get this RFE filed?

Thanks, Martin

Hi, I'm currently planning my move to RHV 4.2. Are there any plans to update this guide to 4.2 (and use the new "High Performance" VMs)? Is there an upgrade path for existing VMs/hypervisors configured to fit this guide?

Side note: I think this document is missing a link to "2599726 - SAP HANA on Red Hat Virtualization in production"

Greetings Klaas

Hi Klaas,

we are currently planning the certification for RHV 4.2 with SAP. Indeed, the idea is to use the High Performance profile, which will make the complete setup simpler. During that process the guide will be updated, or there will be a new version specifically for RHV 4.2 onwards.

Our current goal is to have the single-VM certification for RHV 4.2 completed by the end of this year. Please note that this is completely tentative and subject to change.

Cheers, Martin

If you need a beta tester reach out to me - I don't have any production systems in rhv so I don't need to wait for the certification :)

Any update on a new version of this best practices document for rhv 4.2?

Not really yet. There is the High Performance profile in RHV 4.2, which mostly replaces the hook. The setup for hugepages and pinning will probably remain the same.

To be able to use the invtsc CPU flag (note that this currently disables live migration), you can use the "new" hook from here:

That should be the main changes for running it on RHV 4.2.

Please note that this is not final yet and things might change, but this is the best guidance I can give for now. Also note that RHV 4.2 is not yet officially supported by SAP for SAP HANA workloads.

I have another point that I would add to this guide (or even give as feedback to the kernel team). This guide asks you to allocate hugepages at boot time. If you allocate a lot of hugepages, booting takes a long time, and nothing is shown on screen during that time (example: I allocate 1450 hugepages on one server, which takes around 6 minutes). We had a technician change hardware, and he thought the system was hanging at boot because of this. So maybe add a warning to the guide :)

It would be even nicer if the kernel showed a message like "allocating xxx 1GB hugepages" at boot, so one knows it will take some time :)

During a current support case I've noticed that the L3 cache is not working. Support traced the problem to the emulated machine type: the default is pc-i440fx-rhel7.3.0 for a 4.1 cluster, but the L3 cache needs pc-i440fx-rhel7.4.0.
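The machine type of a running VM is visible in its libvirt XML. A sketch of the check (using an inline sample line instead of a live virsh -r dumpxml, so it can be run anywhere):

```shell
# Extract the machine= attribute from libvirt domain XML on stdin.
machine_type() {
    sed -n "s/.*machine='\([^']*\)'.*/\1/p" | head -1
}

# Demo with a sample <type> line; on a live host pipe in:
#   virsh -r dumpxml <vm-name> | machine_type
echo "<type arch='x86_64' machine='pc-i440fx-rhel7.3.0'>hvm</type>" | machine_type
```

If the result is older than pc-i440fx-rhel7.4.0, the emulated L3 cache will not be available to the guest, as described in the comment above.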

So RHV 4.3 is going to be released any day now -- what about SAP HANA on 4.3? :)

Ideally we will have 4.3 certification soon after the release, as the certification itself should not be much different in this case to 4.2.

As RHV 4.2 will have EUS support and RHV 4.3 probably won't, we expect that RHV 4.2 and RHV 4.4 will mostly be used for SAP HANA.

I thought RHV does not have EUS for minor releases? "Support for Extended Life Phase is provided only for the latest minor release of the product (example 3.6.x where x stands for the latest version)." (also url is still using rhev :D )

That'll change with 4.2 (hypervisors will have EUS, while Engine will not). Updates to are in the works and will be released together with RHV 4.3 GA.

Does that also mean new licenses? My current rhv licenses don't include eus as far as I know

That depends on the subscription you are currently using. Premium subscriptions include EUS while Standard subscriptions don't. In case of a Standard subscription for RHV, you could get an add-on to be able to access EUS content. Note that if you are not using RHVH, you also need RHEL EUS to stay on the correct minor release for the hypervisors. Also note that EUS only includes the RHV 4.2 hypervisor / management agents but not RHV-M itself, as the management piece can manage 4.2 clusters the same way as before.


Sockets and NUMA Nodes should be the same on Hypervisor and VM. My system (Fujitsu Primequest 3800B) has 8 Sockets and 16 NUMA Nodes. What do I have to configure in my VM? 16 Sockets and 16 NUMA Nodes?

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                448
On-line CPU(s) list:   0-447
Thread(s) per core:    2
Core(s) per socket:    28
Socket(s):             8
NUMA node(s):          16
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz
Stepping:              4
CPU MHz:               3502.807
CPU max MHz:           3800.0000
CPU min MHz:           1000.0000
BogoMIPS:              5000.00
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
L3 cache:              39424K
NUMA node0 CPU(s):     0-3,7-9,14-17,21-23,224-227,231-233,238-241,245-247
NUMA node1 CPU(s):     4-6,10-13,18-20,24-27,228-230,234-237,242-244,248-251
NUMA node2 CPU(s):     28-31,35-37,42-45,49-51,252-255,259-261,266-269,273-275
NUMA node3 CPU(s):     32-34,38-41,46-48,52-55,256-258,262-265,270-272,276-279
NUMA node4 CPU(s):     56-59,63-65,70-73,77-79,280-283,287-289,294-297,301-303
NUMA node5 CPU(s):     60-62,66-69,74-76,80-83,284-286,290-293,298-300,304-307
NUMA node6 CPU(s):     84-87,91-93,98-101,105-107,308-311,315-317,322-325,329-331
NUMA node7 CPU(s):     88-90,94-97,102-104,108-111,312-314,318-321,326-328,332-335
NUMA node8 CPU(s):     112-115,119-121,126-129,133-135,336-339,343-345,350-353,357-359
NUMA node9 CPU(s):     116-118,122-125,130-132,136-139,340-342,346-349,354-356,360-363
NUMA node10 CPU(s):    140-143,147-149,154-157,161-163,364-367,371-373,378-381,385-387
NUMA node11 CPU(s):    144-146,150-153,158-160,164-167,368-370,374-377,382-384,388-391
NUMA node12 CPU(s):    168-171,175-177,182-185,189-191,392-395,399-401,406-409,413-415
NUMA node13 CPU(s):    172-174,178-181,186-188,192-195,396-398,402-405,410-412,416-419
NUMA node14 CPU(s):    196-199,203-205,210-213,217-219,420-423,427-429,434-437,441-443
NUMA node15 CPU(s):    200-202,206-209,214-216,220-223,424-426,430-433,438-440,444-447

Regards, Daniel

Hi Daniel,

you should configure the VM similarly to the hardware, meaning the same CPU architecture/layout and the same number of NUMA nodes with the corresponding vCPU-to-CPU pinning.

The pinning for the (v)CPUs can be obtained from the script in the addendum; the CPU architecture should be set up in the UI. The same applies to the NUMA mapping.

In case you don't want to use the complete host, you can even specify that fewer CPUs will be used. I would suggest checking the XML file once you are finished with the setup. For the complete host, it should look like this (assuming hyperthreading stays on):

  <vcpu placement='static' current='440'>448</vcpu>
    <vcpupin vcpu='0' cpuset='1,225'/>
    <vcpupin vcpu='1' cpuset='1,225'/>
    <vcpupin vcpu='2' cpuset='2,226'/>
    <emulatorpin cpuset='0,224'/>
    <iothreadpin iothread='1' cpuset='0,224'/>
    <memnode cellid='0' mode='strict' nodeset='0'/>
    <memnode cellid='1' mode='strict' nodeset='1'/>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='8' cores='28' threads='2'/>
      <cell id='0' cpus='0-31' memory='$AMOUNT' unit='KiB'/>
      <cell id='1' cpus='32-63' memory='$AMOUNT' unit='KiB'/>

If unsure feel free to open a Support Ticket and ask for verification of the setup.

Cheers, Martin