What are the advantages and disadvantages to using partitioning on LUNs, either directly or with LVM in between?


Environment

  • Red Hat Enterprise Linux (RHEL) 5, 6, 7, 8

Issue

  • What are the advantages and disadvantages of using partitioning on LUNs, either directly or with LVM in between? What should be taken into consideration in terms of administration, performance, and resilience in this context?
  • Is it supported if we create Physical Volumes directly on disks, instead of creating a single partition on the disk?
    For example, is the below configuration supported?

    # pvcreate /dev/sdb
      OR
    # pvcreate /dev/mapper/mpatha
    

Resolution

Creating a physical volume on an entire disk, rather than on a partition, is fully supported when you intend to use the whole disk. The following are the advantages and disadvantages of creating the physical volume on an entire disk versus on a partition of the disk.

Administrative Considerations

Partitioning reduces the likelihood of administrative oversights. Without a partition (and therefore without a partition entry in /proc/partitions), it is more likely that a systems administrator will overlook a Physical Volume (PV) while searching for unused drives.

Partitioned drives are recognized by the system's partitioning interfaces and tools (e.g. /proc/partitions, parted, etc.) and are therefore less prone to administrative errors in which the drive is mistaken for an unused one. This is not the case when whole drives are used as PVs.
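As a minimal illustration (the device name /dev/sdb is an assumption), the following commands can help confirm whether a whole disk already carries LVM metadata before it is treated as unused:

    # pvs                      # list all physical volumes, including whole-disk PVs
    # blkid /dev/sdb           # reports TYPE="LVM2_member" if the disk is a PV
    # lsblk -f /dev/sdb        # show filesystem/LVM signatures on the device
    # wipefs -n /dev/sdb       # dry run: list on-disk signatures without erasing anything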

Performance Considerations

Alignment

If partitioning is used to reduce this administrative risk, the partitions need to be created with a full understanding of how to set partition alignment effectively. Otherwise, introducing partitioning can cause alignment (and therefore performance) issues that would not be seen in configurations where, for example, no partition is created before pvcreate is executed.

This is especially true of kernel versions that lack support for automatic alignment; getting alignment right by hand can be difficult, depending on which version of the Linux kernel is running. Storage I/O alignment support has been added to recent kernels and tools to help with this, with the aim of avoiding the need to hand-craft alignment.
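As a rough sketch (again assuming a device named /dev/sdb), the I/O alignment hints that recent kernels export can be inspected directly before deciding how to partition:

    # cat /sys/block/sdb/queue/minimum_io_size   # smallest preferred I/O unit, e.g. the physical sector size
    # cat /sys/block/sdb/queue/optimal_io_size   # preferred I/O size, e.g. RAID stripe width; 0 if not reported
    # cat /sys/block/sdb/alignment_offset        # non-zero indicates a misaligned device start
    # lsblk -t /dev/sdb                          # topology summary, including alignment attributes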

The following chapter of the RHEL 6 Storage Administration Guide provides more details on auto-alignment:

Chapter 23. Storage I/O Alignment and Size

"Recent enhancements to the SCSI and ATA standards allow storage devices to indicate their preferred (and in some cases, required) I/O alignment and I/O size. This information is particularly useful with newer disk drives that increase the physical sector size from 512 bytes to 4k bytes. This information may also be beneficial for RAID devices, where the chunk size and stripe size may impact performance.

The Linux I/O stack has been enhanced to process vendor-provided I/O alignment and I/O size information, allowing storage management tools (parted, lvm, mkfs.*, and the like) to optimize data placement and access. If a legacy device does not export I/O alignment and size data, then storage management tools in Red Hat Enterprise Linux 6 will conservatively align I/O on a 4k (or larger power of 2) boundary. This will ensure that 4k-sector devices operate correctly even if they do not indicate any required/preferred I/O alignment and size."

The following section of the RHEL 6.0 Release Notes also provides some detail on that topic: RHEL 6.0 Release Notes - 4.1. Storage Input/Output Alignment and Size

Depending on the properties of the drive (SSDs, high-end storage with a cache front end, etc.), it is necessary to understand the first-level storage alignment properties that the device exposes to the operating system. Where alignment support in the kernel is not available, it is therefore generally easier to align correctly when partitioning is not used.
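If a partition is used anyway, a hedged example of creating one that honors the reported alignment (a GPT label and a 1MiB starting offset are assumptions that suit most modern storage) could look like this:

    # parted /dev/sdb mklabel gpt
    # parted /dev/sdb mkpart primary 1MiB 100%   # 1MiB start stays aligned to 4k sectors and common stripe sizes
    # parted /dev/sdb align-check optimal 1      # verify partition 1 against the device's reported optimal alignment
    # pvcreate /dev/sdb1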

Influence from frequent open/close operations

Under certain circumstances, when applications repeatedly open (in read-write mode) and then close disk devices that have no partitions, invalidation of the whole block device's cache has been observed, leading to performance degradation.

In reproducers for this scenario, a udev event rescans the partition table and drops the device's cache, which is what makes the reproducer slower.

A real-world scenario where this occurs requires:

  • an application that uses the whole block device to read and write data; this application intermittently experiences I/O performance degradation, and
  • another application that repeatedly opens and closes the whole block device in read-write mode without updating its contents.

This is not a typical situation. A more common scenario is an application, such as certain databases, opening and closing devices directly. In that case, however, the problem is usually not the cache flush, since such applications typically use direct I/O, but rather the extra udev processing.

To resolve the performance degradation in this situation, partitions can be used. Alternatively, udev's nowatch option can be applied to the affected device, as sketched below.
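As a sketch only (the rules file name and the KERNEL match are assumptions; in production, match on a stable identifier such as the device WWN or device-mapper name), a udev rule disabling the watch on a specific device could look like:

    # cat /etc/udev/rules.d/90-nowatch-sdb.rules
    ACTION=="add|change", KERNEL=="sdb", OPTIONS+="nowatch"
    # udevadm control --reload
    # udevadm trigger /dev/sdb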

Overall Considerations

Using a single, properly aligned partition on each drive reduces the risk of administrative oversight and helps maintain acceptable performance levels.
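Continuing the earlier partitioning sketch (the volume group and logical volume names are hypothetical), the partition-backed PV is then visible to the standard partitioning interfaces and can be used like any other PV:

    # grep sdb /proc/partitions              # the partition (sdb1) is now listed alongside the disk
    # vgcreate datavg /dev/sdb1              # hypothetical volume group name
    # lvcreate -n datalv -l 100%FREE datavg  # hypothetical LV spanning the whole VG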

This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.
