Chapter 8. Configuring LVM on shared storage

Shared storage is storage that can be accessed by multiple nodes at the same time. You can use LVM to manage shared storage. Shared storage is commonly used in cluster and high-availability setups. There are two common scenarios for how shared storage appears on the system:

  • LVM devices are attached to a host and passed to a guest VM to use. In this case, the devices are never intended to be used by the host, only by the guest VM.
  • Machines are attached to a storage area network (SAN), for example using Fibre Channel, and the SAN LUNs are visible to multiple machines.

8.1. Configuring LVM for VM disks

To prevent VM storage from being exposed to the host, you can configure LVM device access and the LVM system ID. Excluding the devices in question from the host ensures that LVM on the host does not see or use the devices passed to the guest VM. You can also protect against accidental use of the VM’s VG on the host by setting the LVM system ID in the VG to match the guest VM.

Procedure

  1. In the lvm.conf file, verify that the system.devices file is enabled:

    use_devicesfile=1
  2. Exclude the devices in question from the host’s devices file:

    $ lvmdevices --deldev <device>
  3. Optional: You can further protect the VM’s VG by using the LVM system ID:

    1. Set the LVM system ID feature in the lvm.conf file on both the host and the VM:

      system_id_source = "uname"
    2. Set the VG’s system ID to match the VM’s system ID. This ensures that only the guest VM can activate the VG:

      $ vgchange --systemid <VM_system_id> <VM_vg_name>
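
For example, a minimal sketch of the host-side part of this procedure, assuming a hypothetical VM disk /dev/vdb, might look like the following. The lvmdevices and lvm systemid commands can be used to confirm the result:

    # Hypothetical device name used for illustration only
    $ lvmdevices --deldev /dev/vdb    # remove the VM disk from the host's devices file
    $ lvmdevices                      # list the remaining entries to confirm the removal
    $ lvm systemid                    # print the host's own system ID for comparison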

8.2. Configuring LVM to use SAN disks on one machine

To prevent the SAN LUNs from being used by the wrong machine, exclude the LUNs from the devices file on all machines except the one machine that is meant to use them.

You can also protect the VG from being used by the wrong machine by configuring a system ID on all machines, and setting the system ID in the VG to match the machine using it.

Procedure

  1. In the lvm.conf file, verify that the system.devices file is enabled:

    use_devicesfile=1
  2. Exclude the devices in question from the devices file on every machine that must not use them:

    $ lvmdevices --deldev <device>
  3. Set the LVM system ID feature in the lvm.conf file on all machines:

    system_id_source = "uname"
  4. Set the VG’s system ID to match the system ID of the machine using this VG:

    $ vgchange --systemid <system_id> <vg_name>
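
For example, with system_id_source set to "uname", the system ID is the machine’s host name. A minimal sketch, assuming a hypothetical LUN /dev/mapper/mpatha, a VG named sanvg, and a machine named host1 that is meant to use it:

    # On every machine except host1: hide the LUN from LVM
    $ lvmdevices --deldev /dev/mapper/mpatha

    # On host1: assign the VG to this machine and confirm the assignment
    $ vgchange --systemid host1 sanvg
    $ vgs -o vg_name,vg_systemid sanvg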

8.3. Configuring LVM to use SAN disks for failover

You can configure LUNs to be moved between machines, for example for failover purposes. To set up LVM for this, include the LUNs in the LVM devices file on all machines that might use the devices, and configure the LVM system ID on each machine.

The following procedure describes only the initial LVM configuration. To finish setting up LVM for failover and to move the VG between machines, you need to configure Pacemaker and the LVM-activate resource agent, which automatically changes the VG’s system ID to match the system ID of the machine where the VG can be used. For more information, see Configuring and managing high availability clusters.
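
As an illustration only, the Pacemaker resource for such a VG is typically an LVM-activate resource with vg_access_mode=system_id. The resource and VG names below are hypothetical, and the authoritative steps are in the high availability documentation:

    # Hypothetical example of an LVM-activate resource managed by Pacemaker
    $ pcs resource create failover_vg ocf:heartbeat:LVM-activate vgname=sanvg vg_access_mode=system_id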

Procedure

  1. In the lvm.conf file, verify that the system.devices file is enabled:

    use_devicesfile=1
  2. Include the devices in question in the devices file on each machine:

    $ lvmdevices --adddev <device>
  3. Set the LVM system ID feature in the lvm.conf file on all machines:

    system_id_source = "uname"
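
A minimal sketch of this initial configuration, assuming a hypothetical multipath LUN /dev/mapper/mpatha shared by the failover nodes; run the same commands on each machine:

    # Run on every machine that may take over the VG (hypothetical device name)
    $ lvmdevices --adddev /dev/mapper/mpatha   # include the shared LUN in the devices file
    $ lvm systemid                             # each machine reports its own system ID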

8.4. Configuring LVM to share SAN disks among multiple machines

Using the lvmlockd daemon and a lock manager such as dlm or sanlock, you can enable access to a shared VG on the SAN disks from multiple machines. The specific commands can differ based on the lock manager and operating system used. The following procedure provides an overview of the steps required to configure LVM to share SAN disks among multiple machines.

Warning

When using Pacemaker, the system must instead be configured and started by following the steps described in Configuring and managing high availability clusters.

Procedure

  1. In the lvm.conf file, verify that the system.devices file is enabled:

    use_devicesfile=1
  2. For each machine that will use the shared LUN, add the LUN to the machine’s devices file:

    $ lvmdevices --adddev <device>
  3. Configure the lvm.conf file to use the lvmlockd daemon on all machines:

    use_lvmlockd=1
  4. Start the lvmlockd daemon on all machines.
  5. Start a lock manager to use with lvmlockd, such as dlm or sanlock, on all machines.
  6. Create a new shared VG using the command vgcreate --shared.
  7. Start and stop access to existing shared VGs using the commands vgchange --lockstart and vgchange --lockstop on all machines.
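
A minimal sketch of these steps on one machine, assuming a hypothetical LUN /dev/mapper/mpatha and the sanlock lock manager; repeat the device, daemon, and lock-start steps on every machine, but create the VG only once. Additional lock-manager-specific configuration, such as a unique host ID for sanlock, may also be required:

    # Hypothetical device and VG names used for illustration
    $ lvmdevices --adddev /dev/mapper/mpatha          # include the shared LUN
    $ systemctl start lvmlockd sanlock                # start lvmlockd and the lock manager
    $ vgcreate --shared sharedvg /dev/mapper/mpatha   # run on one machine only
    $ vgchange --lockstart                            # start lock access on each machine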

Additional resources

  • lvmlockd(8) man page

8.5. Creating shared LVM devices using the storage RHEL system role

You can use the storage RHEL system role to create shared LVM devices if you want multiple systems to access the same storage at the same time.

This provides the following notable benefits:

  • Resource sharing
  • Flexibility in managing storage resources
  • Simplification of storage management tasks

Prerequisites

Procedure

  1. Create a playbook file, for example ~/playbook.yml, with the following content:

    ---
    - name: Create shared LVM device
      hosts: managed-node-01.example.com
      become: true
      tasks:
        - name: Create shared LVM device
          ansible.builtin.include_role:
            name: rhel-system-roles.storage
          vars:
            storage_pools:
              - name: vg1
                disks: /dev/vdb
                type: lvm
                shared: true
                state: present
                volumes:
                  - name: lv1
                    size: 4g
                    mount_point: /opt/test1
            storage_safe_mode: false
            storage_use_partitions: true
  2. Validate the playbook syntax:

    $ ansible-playbook --syntax-check ~/playbook.yml

    Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

  3. Run the playbook:

    $ ansible-playbook ~/playbook.yml
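
After the play finishes, you can, for example, confirm the result on the managed node with an ad hoc Ansible command. The host name below matches the playbook above, and the exact output depends on your LVM version:

    $ ansible managed-node-01.example.com -m ansible.builtin.command -a 'vgs -o vg_name,vg_attr,lv_count' -b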

Additional resources

  • /usr/share/ansible/roles/rhel-system-roles.storage/README.md file
  • /usr/share/doc/rhel-system-roles/storage/ directory