Appendix B. Partitioning reference
B.1. Supported device types
- Standard partition
- A standard partition can contain a file system or swap space. Standard partitions are most commonly used for EFI System Partitions. LVM logical volumes are recommended for most other uses.
- LVM
- Selecting LVM (Logical Volume Management) as the device type creates an LVM logical volume. LVM can improve performance when using physical disks, and it allows for advanced setups such as using multiple physical disks for one mount point, and setting up software RAID for increased performance, reliability, or both.
- LVM thin provisioning
- Using thin provisioning, you can manage a storage pool of free space, known as a thin pool, which can be allocated to an arbitrary number of devices when needed by applications. You can dynamically expand the pool when needed for cost-effective allocation of storage space.
The installation program does not support overprovisioned LVM thin pools.
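On an installed system, a thin pool and thin volumes can be created with LVM's command-line tools. The following is a minimal sketch; the volume group name vg0, the pool and volume names, and all sizes are assumptions for illustration:

```shell
# Create a 100 GiB thin pool inside the existing volume group vg0
lvcreate --size 100G --thinpool pool0 vg0

# Create two thin volumes whose virtual sizes together exceed the pool;
# blocks are taken from the pool only as data is actually written
lvcreate --virtualsize 80G --thin vg0/pool0 --name data1
lvcreate --virtualsize 80G --thin vg0/pool0 --name data2

# Dynamically expand the pool later, as it starts to fill up
lvextend --size +50G vg0/pool0
```

Note that the overcommit shown above is only possible after installation; as stated, the installation program itself does not support overprovisioned thin pools.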
B.2. Supported file systems
This section describes the file systems available in Red Hat Enterprise Linux.
- XFS
- XFS is a highly scalable, high-performance file system that supports file systems up to 16 exabytes (approximately 16 million terabytes), files up to 8 exabytes (approximately 8 million terabytes), and directory structures containing tens of millions of entries. XFS also supports metadata journaling, which facilitates quicker crash recovery. The maximum supported size of a single XFS file system is 500 TB. XFS is the default and recommended file system on Red Hat Enterprise Linux. The XFS file system cannot be shrunk to free up space.
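Because XFS can only grow, resizing is one-directional: you can extend a mounted XFS file system in place, but making it smaller requires a backup-and-recreate cycle. A sketch of both operations; the volume and mount point names are assumptions:

```shell
# Growing: extend the underlying logical volume, then grow XFS online
lvextend --size +10G /dev/vg0/home
xfs_growfs /home

# "Shrinking": there is no shrink operation, so back up, recreate smaller, restore
xfsdump -l 0 -L home_backup -M media0 -f /tmp/home.dump /home
umount /home
lvreduce --size 20G /dev/vg0/home
mkfs.xfs -f /dev/vg0/home
mount /dev/vg0/home /home
xfsrestore -f /tmp/home.dump /home
```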
- ext4
- The ext4 file system is based on the ext3 file system and features a number of improvements. These include support for larger file systems and larger files, faster and more efficient allocation of disk space, no limit on the number of subdirectories within a directory, faster file system checking, and more robust journaling. The maximum supported size of a single ext4 file system is 50 TB.
- ext3
- The ext3 file system is based on the ext2 file system and has one main advantage: journaling. Using a journaling file system reduces the time spent recovering a file system after it terminates unexpectedly, as there is no need to check the file system for metadata consistency by running the fsck utility every time.
- ext2
- The ext2 file system supports standard Unix file types, including regular files, directories, and symbolic links. It supports long file names, up to 255 characters.
- swap
- Swap partitions are used to support virtual memory. In other words, data is written to a swap partition when there is not enough RAM to store the data your system is processing.
- vfat
- The VFAT file system is a Linux file system that is compatible with Microsoft Windows long file names on the FAT file system.
Note
The VFAT file system is not available for Linux system partitions, such as /usr.
- BIOS Boot
- A very small partition required for booting from a device with a GUID partition table (GPT) on BIOS systems and UEFI systems in BIOS compatibility mode.
- EFI System Partition
- A small partition required for booting a device with a GUID partition table (GPT) on a UEFI system.
- PReP boot
- A small boot partition located on the first partition of the hard drive. The PReP boot partition contains the GRUB2 boot loader, which allows other IBM Power Systems servers to boot Red Hat Enterprise Linux.
B.3. Supported RAID types
RAID stands for Redundant Array of Independent Disks, a technology which allows you to combine multiple physical disks into logical units. Some setups are designed to enhance performance at the cost of reliability, while others improve reliability at the cost of requiring more disks for the same amount of available space.
This section describes supported software RAID types which you can use with LVM and LVM Thin Provisioning to set up storage on the installed system.
- RAID 0
- Performance: Distributes data across multiple disks. RAID 0 offers increased performance over standard partitions and can be used to pool the storage of multiple disks into one large virtual device. Note that RAID 0 offers no redundancy and that the failure of one device in the array destroys data in the entire array. RAID 0 requires at least two disks.
- RAID 1
- Redundancy: Mirrors all data from one partition onto one or more other disks. Additional devices in the array provide increasing levels of redundancy. RAID 1 requires at least two disks.
- RAID 4
- Error checking: Distributes data across multiple disks and uses one disk in the array to store parity information which safeguards the array in case any disk in the array fails. As all parity information is stored on one disk, access to this disk creates a "bottleneck" in the array’s performance. RAID 4 requires at least three disks.
- RAID 5
- Distributed error checking: Distributes data and parity information across multiple disks. RAID 5 offers the performance advantages of distributing data across multiple disks, but does not share the performance bottleneck of RAID 4 as the parity information is also distributed through the array. RAID 5 requires at least three disks.
- RAID 6
- Redundant error checking: RAID 6 is similar to RAID 5, but instead of storing only one set of parity data, it stores two sets. RAID 6 requires at least four disks.
- RAID 10
- Performance and redundancy: RAID 10 is nested or hybrid RAID. It is constructed by distributing data over mirrored sets of disks. For example, a RAID 10 array constructed from four RAID partitions consists of two mirrored pairs of striped partitions. RAID 10 requires at least four disks.
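The installation program sets up software RAID through its storage screens; on a running system, the same levels can be created directly with the mdadm utility. A sketch, with the array names and member disks as assumptions:

```shell
# RAID 1 (mirror) from two disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# RAID 10 from four disks (two mirrored pairs, striped)
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[d-g]

# Inspect the resulting arrays
cat /proc/mdstat
mdadm --detail /dev/md0
```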
B.4. Recommended partitioning scheme
Red Hat recommends that you create separate file systems at the following mount points. However, if required, you can also create separate file systems at additional mount points, such as /tmp.
This partition scheme is recommended for bare metal deployments and it does not apply to virtual and cloud deployments.
/boot partition - recommended size at least 1 GiB
The partition mounted on /boot contains the operating system kernel, which allows your system to boot Red Hat Enterprise Linux 9, along with files used during the bootstrap process. Due to the limitations of most firmware, creating a small partition to hold these files is recommended. In most scenarios, a 1 GiB boot partition is adequate. Unlike other mount points, using an LVM volume for /boot is not possible - /boot must be located on a separate disk partition.
Warning
The /boot partition is created automatically by the installation program. However, if the / (root) partition is larger than 2 TiB and (U)EFI is used for booting, you need to create a separate /boot partition that is smaller than 2 TiB to boot the machine successfully.
Note
If you have a RAID card, be aware that some BIOS types do not support booting from the RAID card. In such a case, the /boot partition must be created on a partition outside of the RAID array, such as on a separate hard drive.
root partition - recommended size of 10 GiB
This is where "/", the root directory, is located. The root directory is the top level of the directory structure. By default, all files are written to this file system unless a different file system is mounted in the path being written to.
While a 5 GiB root file system allows you to install a minimal installation, it is recommended to allocate at least 10 GiB so that you can install as many package groups as you want.
Important
Do not confuse the / directory with the /root directory. The /root directory is the home directory of the root user. The /root directory is sometimes referred to as slash root to distinguish it from the root directory.
/home partition - recommended size at least 1 GiB
To store user data separately from system data, create a dedicated file system for the /home directory. Base the file system size on the amount of data that is stored locally, the number of users, and so on. You can upgrade or reinstall Red Hat Enterprise Linux 9 without erasing user data files. If you select automatic partitioning, it is recommended to have at least 55 GiB of disk space available for the installation, to ensure that the /home file system is created.
swap partition - recommended size at least 1 GB
Swap file systems support virtual memory; data is written to a swap file system when there is not enough RAM to store the data your system is processing. Swap size is a function of system memory workload, not total system memory, and therefore is not equal to the total system memory size. It is important to analyze what applications a system will be running and the load those applications will serve in order to determine the system memory workload. Application providers and developers can provide guidance.
When the system runs out of swap space, the kernel terminates processes as the system RAM memory is exhausted. Configuring too much swap space results in storage devices being allocated but idle, and is a poor use of resources. Too much swap space can also hide memory leaks. The maximum size for a swap partition and other additional information can be found in the
The following table provides the recommended size of a swap partition depending on the amount of RAM in your system and whether you want sufficient memory for your system to hibernate. If you let the installation program partition your system automatically, the swap partition size is established using these guidelines. Automatic partitioning setup assumes hibernation is not in use. The maximum size of the swap partition is limited to 10 percent of the total size of the hard drive, and the installation program cannot create swap partitions larger than 1 TiB. To set up enough swap space to allow for hibernation, or if you want to set the swap partition size to more than 10 percent of the system's storage space, or to more than 1 TiB, you must edit the partitioning layout manually.
Table B.1. Recommended system swap space

| Amount of RAM in the system | Recommended swap space | Recommended swap space if allowing for hibernation |
|---|---|---|
| Less than 2 GB | 2 times the amount of RAM | 3 times the amount of RAM |
| 2 GB - 8 GB | Equal to the amount of RAM | 2 times the amount of RAM |
| 8 GB - 64 GB | 4 GB to 0.5 times the amount of RAM | 1.5 times the amount of RAM |
| More than 64 GB | Workload dependent (at least 4 GB) | Hibernation not recommended |
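Table B.1's no-hibernation column can be reduced to a small helper function. A sketch in POSIX shell; it works at whole-GB granularity, treats the range borders (2 GB, 8 GB) as belonging to the higher range (the table leaves borders to your discretion), and uses the 4 GB lower bound for everything above 8 GB:

```shell
# Recommended swap size in GB from Table B.1, assuming hibernation is not in use.
recommended_swap() {
    ram=$1
    if [ "$ram" -lt 2 ]; then
        echo $((ram * 2))      # less than 2 GB: twice the amount of RAM
    elif [ "$ram" -le 8 ]; then
        echo "$ram"            # 2 GB - 8 GB: equal to the amount of RAM
    else
        echo 4                 # above 8 GB: at least 4 GB, workload dependent
    fi
}

recommended_swap 16    # prints 4
```

Remember that the installation program additionally caps swap at 10 percent of the drive size and at 1 TiB, which this sketch ignores.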
/boot/efi partition - recommended size of 200 MiB
UEFI-based AMD64, Intel 64, and 64-bit ARM systems require a 200 MiB EFI system partition. The recommended minimum size is 200 MiB, the default size is 600 MiB, and the maximum size is 600 MiB. BIOS systems do not require an EFI system partition.
At the border between each range, for example, a system with 2 GB, 8 GB, or 64 GB of system RAM, discretion can be exercised with regard to chosen swap space and hibernation support. If your system resources allow for it, increasing the swap space can lead to better performance.
Distributing swap space over multiple storage devices - particularly on systems with fast drives, controllers and interfaces - also improves swap space performance.
Many systems have more partitions and volumes than the minimum required. Choose partitions based on your particular system needs.
- Only assign storage capacity to those partitions you require immediately. You can allocate free space at any time, to meet needs as they occur.
- If you are unsure about how to configure partitions, accept the automatic default partition layout provided by the installation program.
PReP boot partition - recommended size of 4 to 8 MiB
When installing Red Hat Enterprise Linux on IBM Power Systems servers, the first partition of the hard drive should include a PReP boot partition. This partition contains the GRUB2 boot loader, which allows other IBM Power Systems servers to boot Red Hat Enterprise Linux.
B.5. Advice on partitions
There is no best way to partition every system; the optimal setup depends on how you plan to use the system being installed. However, the following tips may help you find the optimal layout for your needs:
- Create partitions that have specific requirements first, for example, if a particular partition must be on a specific disk.
- Consider encrypting any partitions and volumes which might contain sensitive data. Encryption prevents unauthorized people from accessing the data on the partitions, even if they have access to the physical storage device. In most cases, you should at least encrypt the /home partition, which contains user data.
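The installation program can encrypt partitions for you; on a running system, an existing partition can be encrypted with LUKS via cryptsetup. A sketch; the device and mapper names are assumptions, and note that formatting destroys any existing data on the partition:

```shell
# Encrypt the partition (WIPES it) and set a passphrase interactively
cryptsetup luksFormat /dev/sdb2

# Open the encrypted device under a mapper name, then create and mount a file system
cryptsetup open /dev/sdb2 home_crypt
mkfs.xfs /dev/mapper/home_crypt
mount /dev/mapper/home_crypt /home
```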
- In some cases, creating separate mount points for directories other than /home may be useful; for example, on a server running a MySQL database, having a separate mount point for /var/lib/mysql allows you to preserve the database during a re-installation without having to restore it from backup afterward. However, having unnecessary separate mount points will make storage administration more difficult.
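In practice, that MySQL example amounts to one extra /etc/fstab entry; the device path and options below are assumptions:

```
# /etc/fstab - a dedicated volume for the database survives a reinstall,
# as long as the volume is left untouched and remounted afterward
/dev/vg0/mysql  /var/lib/mysql  xfs  defaults  0 0
```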
- Some special restrictions apply to certain directories with regard to which partitioning layouts they can be placed on. Notably, the /boot directory must always be on a physical partition (not on an LVM volume).
- If you are new to Linux, consider reviewing the Linux Filesystem Hierarchy Standard for information about various system directories and their contents.
When sizing the /boot partition, consider the following:
- Each kernel requires approximately 60 MB (initrd 34 MB, vmlinuz 11 MB, and System.map 5 MB)
- For rescue mode: approximately 100 MB (initrd 76 MB, vmlinuz 11 MB, and System.map 5 MB)
- If kdump is enabled on the system, it takes approximately another 40 MB (another initrd of 33 MB)
The default partition size of 1 GB for /boot should suffice for most common use cases. However, it is recommended that you increase the size of this partition if you are planning on retaining multiple kernel releases or errata kernels.
- The /var directory holds content for a number of applications, including the Apache web server, and is used by the YUM package manager to temporarily store downloaded package updates. Make sure that the partition or volume containing /var has at least 3 GB.
- The /usr directory holds the majority of software on a typical Red Hat Enterprise Linux installation. The partition or volume containing this directory should therefore be at least 5 GB for minimal installations, and at least 10 GB for installations with a graphical environment.
- If /usr or /var is partitioned separately from the rest of the root volume, the boot process becomes much more complex because these directories contain boot-critical components. In some situations, such as when these directories are placed on an iSCSI drive or an FCoE location, the system may either be unable to boot, or it may hang with a Device is busy error when powering off or rebooting.
This limitation only applies to /usr and /var, not to directories under them. For example, a separate partition for /var/www works without issues.
Important
Some security policies require the separation of /usr and /var, even though it makes administration more complex.
- Consider leaving a portion of the space in an LVM volume group unallocated. This unallocated space gives you flexibility if your space requirements change but you do not wish to remove data from other volumes. You can also select the LVM Thin Provisioning device type for the partition to have the unused space handled automatically by the volume.
- The size of an XFS file system cannot be reduced - if you need to make a partition or volume with this file system smaller, you must back up your data, destroy the file system, and create a new, smaller one in its place. Therefore, if you plan to alter your partitioning layout later, you should use the ext4 file system instead.
- Use Logical Volume Management (LVM) if you anticipate expanding your storage by adding more hard drives or expanding virtual machine hard drives after the installation. With LVM, you can create physical volumes on the new drives, and then assign them to any volume group and logical volume as you see fit - for example, you can easily expand your system's /home (or any other directory residing on a logical volume).
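The expansion workflow just described looks like this in sketch form; the disk, volume group, and logical volume names are assumptions:

```shell
# Turn the new disk into a physical volume and add it to the volume group
pvcreate /dev/sdc
vgextend vg0 /dev/sdc

# Grow the logical volume holding /home and resize its file system in one step
lvextend --extents +100%FREE --resizefs /dev/vg0/home
```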
- Creating a BIOS Boot partition or an EFI System Partition may be necessary, depending on your system’s firmware, boot drive size, and boot drive disk label. Note that you cannot create a BIOS Boot or EFI System Partition in graphical installation if your system does not require one - in that case, they are hidden from the menu.
B.6. Supported hardware storage
It is important to understand how storage technologies are configured and how support for them may have changed between major versions of Red Hat Enterprise Linux.
Any RAID functions provided by the mainboard of your computer, or attached controller cards, need to be configured before you begin the installation process. Each active RAID array appears as one drive within Red Hat Enterprise Linux.
On systems with more than one hard drive, you can use the Red Hat Enterprise Linux installation program to operate several of the drives as a Linux software RAID array. With a software RAID array, RAID functions are controlled by the operating system rather than the dedicated hardware.
When a pre-existing RAID array’s member devices are all unpartitioned disks/drives, the installation program treats the array as a disk and there is no method to remove the array.
You can connect and configure external USB storage after installation. Most devices are recognized by the kernel, but some devices may not be recognized. If it is not a requirement to configure these disks during installation, disconnect them to avoid potential problems.
To use a Non-Volatile Dual In-line Memory Module (NVDIMM) device as storage, the following conditions must be satisfied:
- The architecture of the system is Intel 64 or AMD64.
- The device is configured to sector mode. Anaconda can reconfigure NVDIMM devices to this mode.
- The device must be supported by the nd_pmem driver.
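Besides letting Anaconda reconfigure the device, sector mode can be set manually with the ndctl utility; a sketch, where the namespace name is an assumption:

```shell
# List NVDIMM namespaces and their current modes
ndctl list --namespaces

# Reconfigure an existing namespace to sector mode (exposes it as a block device)
ndctl create-namespace --force --reconfig=namespace0.0 --mode=sector
```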
Booting from an NVDIMM device is possible under the following additional conditions:
- The system uses UEFI.
- The device must be supported by firmware available on the system, or by a UEFI driver. The UEFI driver may be loaded from an option ROM of the device itself.
- The device must be made available under a namespace.
To take advantage of the high performance of NVDIMM devices during booting, place the /boot and /boot/efi directories on the device.
The Execute-in-place (XIP) feature of NVDIMM devices is not supported during booting and the kernel is loaded into conventional memory.
Considerations for Intel BIOS RAID Sets
Red Hat Enterprise Linux uses mdraid for installing on Intel BIOS RAID sets. These sets are automatically detected during the boot process and their device node paths can change across several boot processes. It is recommended that you replace device node paths (such as /dev/sda) with file system labels or device UUIDs. You can find the file system labels and device UUIDs using the
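Two common utilities report these identifiers (the choice here is an assumption; the truncated sentence above does not name one):

```shell
# Print the LABEL= and UUID= of every block device
blkid

# Or show them as a device tree with file system details
lsblk --fs
```

Either identifier can then be used in /etc/fstab in place of a /dev/sdX path.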