How to use, monitor, and disable transparent hugepages in Red Hat Enterprise Linux 6, 7, and 8?

Solution Verified - Updated -


Environment

  • Red Hat Enterprise Linux (RHEL) 6
  • Red Hat Enterprise Linux (RHEL) 7
  • Red Hat Enterprise Linux (RHEL) 8


Issue

  • How do transparent hugepages work in RHEL 6?
  • How are transparent hugepages activated by a process?
  • Do we still need to preallocate some amount of memory for use as (transparent) hugepages?
  • How can I see the number of transparent hugepages that are actually in use on the system (either globally or by individual processes)?
  • How do I enable HugePages on Red Hat Enterprise Linux?
  • How do I disable HugePages on Red Hat Enterprise Linux?
  • Freezing issue while streaming out from a RHEL 6.2 server. We provide a streaming platform (RTSP, HLS) to our customers, based on RHEL 6.2 x86_64.
    While streaming out, we experience delays (from one to more than ten seconds) because a process freezes, and at the same point in time a large amount of memory is freed.
  • Need assistance to Disable Transparent Huge Pages (THP) in RHEL 7.4


Resolution

Transparent Huge Pages are not available on the 32-bit version of RHEL 6.
For RHEL 7 see How to disable transparent hugepages (THP) on Red Hat Enterprise Linux 7
For RHEL 8 see How to disable transparent hugepages (THP) on Red Hat Enterprise Linux 8

Transparent Huge Pages (THP) are enabled by default in RHEL 6 for all applications. The kernel attempts to allocate hugepages whenever possible and any Linux process will receive 2MB pages if the mmap region is 2MB naturally aligned. The main kernel address space itself is mapped with hugepages, reducing TLB pressure from kernel code. For general information on Hugepages, see: What are Huge Pages and what are the advantages of using them?

The kernel will always attempt to satisfy a memory allocation using hugepages. If no hugepages are available (for example, due to the unavailability of physically contiguous memory), the kernel will fall back to regular 4KB pages. THP are also swappable (unlike hugetlbfs). This is achieved by breaking the huge page into smaller 4KB pages, which are then swapped out normally.

But to use hugepages effectively, the kernel must find physically contiguous areas of memory big enough to satisfy the request, and also properly aligned. For this, a khugepaged kernel thread has been added. This thread occasionally attempts to substitute a hugepage for a range of smaller pages currently in use, thus maximizing THP usage.

In userland, no modifications to applications are necessary (hence "transparent"), but there are ways to optimize their use of THP. For applications that want to use hugepages, posix_memalign() can help ensure that large allocations are aligned to huge page (2MB) boundaries.

Also, THP is only enabled for anonymous memory regions. There are plans to add support for tmpfs and the page cache. THP tunables are found in the /sys tree under /sys/kernel/mm/transparent_hugepage in RHEL 7 and later RHEL 6 releases. For earlier RHEL 6 releases, the directory is /sys/kernel/mm/redhat_transparent_hugepage. The first location is used in the remainder of this document.
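The enabled file lists all modes with the currently active one in brackets, e.g. "always madvise [never]". The following sketch extracts just the active mode; the thp_state helper and the sample file are illustrative, not part of any stock tooling — on a live system point it at the sysfs path above:

```shell
# Extract the bracketed (active) value from a THP sysfs file.
thp_state() {
    sed -n 's/.*\[\(.*\)\].*/\1/p' "$1"
}

# Demonstrated against a sample file; on a live system use
# /sys/kernel/mm/transparent_hugepage/enabled (or the redhat_ path).
printf 'always madvise [never]\n' > /tmp/thp_enabled_sample
thp_state /tmp/thp_enabled_sample
```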

The values for /sys/kernel/mm/transparent_hugepage/enabled can be one of the following:

always   -  always use THP
madvise  -  use THP only inside regions marked with madvise(MADV_HUGEPAGE)
never    -  disable THP

khugepaged is started automatically when transparent_hugepage/enabled is set to "always" or "madvise", and shut down automatically when it is set to "never". The transparent_hugepage/defrag parameter takes the same values and controls whether the kernel should make aggressive use of memory compaction to make more hugepages available.

Check system-wide THP usage

Run the following command to check system-wide THP usage:

# grep AnonHugePages /proc/meminfo 
AnonHugePages:    632832 kB
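Since AnonHugePages is reported in kB, dividing by the 2048 kB hugepage size gives the number of 2MB transparent hugepages in use. The sample line below reuses the output above; on a live system pipe from `grep AnonHugePages /proc/meminfo` instead:

```shell
# Convert the AnonHugePages figure (kB) into a count of 2 MB hugepages.
echo 'AnonHugePages:    632832 kB' |
awk '/AnonHugePages/ { printf "%d kB = %d hugepages of 2048 kB\n", $2, $2/2048 }'
# -> 632832 kB = 309 hugepages of 2048 kB
```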

Note: Red Hat Enterprise Linux 6.2 or later publishes additional THP monitoring via /proc/vmstat:

# egrep 'trans|thp' /proc/vmstat
nr_anon_transparent_hugepages 2018
thp_fault_alloc 7302
thp_fault_fallback 0
thp_collapse_alloc 401
thp_collapse_alloc_failed 0
thp_split 21

thp_fault_alloc: is incremented every time a huge page is successfully allocated to handle a page fault.

thp_collapse_alloc: is incremented by khugepaged when it has found a range of pages to collapse into one huge page and has successfully allocated a new huge page to store the data.

thp_fault_fallback: is incremented if a page fault fails to allocate a huge page and instead falls back to using small pages.

thp_collapse_alloc_failed: is incremented if khugepaged found a range of pages that should be collapsed into one huge page but failed the allocation.

thp_split: is incremented every time a huge page is split into base pages. This can happen for a variety of reasons, but a common reason is that a huge page is old and is being reclaimed.

thp_zero_page_alloc: is incremented every time a huge zero page is successfully allocated. It includes allocations that were dropped due to a race with another allocation.
Note that it does not count every mapping of the huge zero page, only its allocation.

thp_zero_page_alloc_failed: is incremented if kernel fails to allocate huge zero page and falls back to using small pages.
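Comparing thp_fault_fallback against thp_fault_alloc shows how often the kernel had to fall back to 4KB pages at fault time. The sketch below uses the sample counters from the output above so it is self-contained; on a live system feed it with `egrep 'thp_fault' /proc/vmstat` instead:

```shell
# Compute the fault-time THP fallback rate from the two vmstat counters.
printf 'thp_fault_alloc 7302\nthp_fault_fallback 0\n' |
awk '$1 == "thp_fault_alloc"    { a = $2 }
     $1 == "thp_fault_fallback" { f = $2 }
     END { printf "fallback rate: %.1f%% (%d of %d faults)\n",
           (a + f) ? 100 * f / (a + f) : 0, f, a + f }'
# -> fallback rate: 0.0% (0 of 7302 faults)
```

A persistently high fallback rate suggests memory is too fragmented for the kernel to find aligned 2MB regions.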

Check THP usage per process

Run the following command to check which processes are using THP:

# awk '/AnonHugePages/ { if($2>4){print FILENAME " " $0; system("ps -fp " gensub(/.*\/([0-9]+).*/, "\\1", "g", FILENAME))}}' /proc/*/smaps
/proc/7519/smaps:AnonHugePages:    305152 kB
qemu      7519     1  1 08:53 ?        00:00:48 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name rhel7 -S -machine pc-i440fx-1.6,accel=kvm,usb=of
/proc/7610/smaps:AnonHugePages:    491520 kB
qemu      7610     1  2 08:53 ?        00:01:30 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name util6vm -S -machine pc-i440fx-1.6,accel=kvm,usb=
/proc/7788/smaps:AnonHugePages:    389120 kB
qemu      7788     1  1 08:54 ?        00:00:55 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name rhel64eus -S -machine pc-i440fx-1.6,accel=kvm,us
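A process's total THP usage is the sum of the AnonHugePages lines across all of its mappings in smaps. The sample lines below keep the sketch self-contained; on a live system run the same awk program directly against /proc/<pid>/smaps for the process of interest:

```shell
# Sum the per-mapping AnonHugePages values (kB) for one process.
printf 'AnonHugePages:   2048 kB\nAnonHugePages:      0 kB\nAnonHugePages:   4096 kB\n' |
awk '/AnonHugePages/ { sum += $2 } END { print sum " kB" }'
# -> 6144 kB
```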

To disable THP at boot time

Append the following parameter to the kernel command line in grub.conf:

transparent_hugepage=never

Note: Certain ktune and/or tuned profiles enable THP when they are applied. If the transparent_hugepage=never parameter is set at boot time but THP does not appear to be disabled after the system is fully booted, refer to the following articles:

Disabling transparent hugepages (THP) on Red Hat Enterprise Linux 6 is not taking effect

Disable Transparent Huge Pages(THP) persistently not working on RHEL7

To disable THP at run time

Run the following commands to disable THP on-the-fly:

# echo never > /sys/kernel/mm/transparent_hugepage/enabled
# echo never > /sys/kernel/mm/transparent_hugepage/defrag
  • NOTE: Running the above commands stops only the creation and use of new THP. Huge pages that were already in use when the commands were run are not broken back down into regular pages. To get rid of THP completely, reboot the system with THP disabled at boot time.
  • NOTE: Some third-party application install scripts check the value of the above files and complain even if THP is disabled at boot time with transparent_hugepage=never. This is because disabling THP at boot does not change the value of /sys/kernel/mm/transparent_hugepage/defrag. This is expected: the system never enters the THP defragmentation code path when THP is disabled at boot, so THP defrag does not need to be disabled separately.
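To confirm the runtime change took effect, check that the files report [never] as the active value. The sample files below make the snippet self-contained; on a real system loop over /sys/kernel/mm/transparent_hugepage/enabled and .../defrag instead:

```shell
# Check that each THP knob reports [never] as its active value.
mkdir -p /tmp/thp_demo
printf 'always madvise [never]\n' > /tmp/thp_demo/enabled
printf 'always madvise [never]\n' > /tmp/thp_demo/defrag
for f in /tmp/thp_demo/enabled /tmp/thp_demo/defrag; do
    if grep -q '\[never\]' "$f"; then
        echo "$f: THP disabled"
    else
        echo "$f: THP still enabled"
    fi
done
```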

How to tell if Explicit HugePages is enabled or disabled

There can be two types of HugePages in the system: Explicit Huge Pages which are allocated explicitly by vm.nr_hugepages sysctl parameter and Transparent Huge Pages which are allocated automatically by the kernel. See below on how to tell if Explicit HugePages is enabled or disabled.

  • Explicit HugePages DISABLED:

    • If the value of HugePages_Total is "0" it means HugePages is disabled on the system.

      # grep -i HugePages_Total /proc/meminfo 
      HugePages_Total:       0
    • Similarly, if the value in /proc/sys/vm/nr_hugepages file or vm.nr_hugepages sysctl parameter is "0" it means HugePages is disabled on the system:

      # cat /proc/sys/vm/nr_hugepages 
      # sysctl vm.nr_hugepages
      vm.nr_hugepages = 0
  • Explicit HugePages ENABLED:

    • If the value of HugePages_Total is greater than "0", it means HugePages is enabled on the system:

      # grep -i HugePages_Total /proc/meminfo 
      HugePages_Total:       1024
    • Similarly if the value in /proc/sys/vm/nr_hugepages file or vm.nr_hugepages sysctl parameter is greater than "0", it means HugePages is enabled on the system:

      # cat /proc/sys/vm/nr_hugepages 
      # sysctl vm.nr_hugepages
      vm.nr_hugepages = 1024
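The HugePages_* counters in /proc/meminfo can be combined into a one-line summary of explicit hugepage usage. Sample input is used below so the sketch is self-contained; on a live system pipe from `grep -i hugepages /proc/meminfo` instead:

```shell
# Summarize explicit hugepage usage from the meminfo counters.
printf 'HugePages_Total:    1024\nHugePages_Free:      512\nHugepagesize:       2048 kB\n' |
awk '/HugePages_Total/ { t = $2 }
     /HugePages_Free/  { f = $2 }
     /Hugepagesize/    { s = $2 }
     END { printf "explicit hugepages: %d total, %d free, %d in use (%d kB each)\n",
           t, f, t - f, s }'
# -> explicit hugepages: 1024 total, 512 free, 512 in use (2048 kB each)
```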


  • RHEL 6 disables THP on systems with less than 1 GB of RAM. Refer to Red Hat Bug 618444 - disable transparent hugepages by default on small systems - for more information.
  • Disadvantages of using the explicit hugepages (libhugetlbfs): Using hugetlbfs requires significant work from both application developers and system administrators; explicit hugepages must be set aside at boot time, and applications must map them explicitly. The process is fiddly enough that use of hugetlbfs is restricted to those who really care and who have the time to mess with it. Hugetlbfs is often seen as a feature for large, proprietary database management systems and little else.
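As a rough illustration of the manual sizing work involved with explicit hugepages, the figures below are hypothetical (a 4 GB shared area on a system with 2 MB hugepages); the commented commands are the kind of root-only setup steps the text refers to and are not run here:

```shell
# How many 2 MB explicit hugepages would a hypothetical 4 GB region need?
awk 'BEGIN { want_mb = 4096; page_kb = 2048; print int(want_mb * 1024 / page_kb) }'
# -> 2048
# A root user would then reserve and mount them (illustrative, not run here):
#   sysctl -w vm.nr_hugepages=2048
#   mount -t hugetlbfs none /dev/hugepages
```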

Diagnostic Steps

Links contained herein to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.


  1. Transparent huge pages in 2.6.38
  2. Documentation/vm/transhuge.txt
  3. Andrea Arcangeli's presentation "Transparent Hugepage Support" : KVM Forum 2010 - Presentations

This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.