Chapter 8. Red Hat Storage Volumes

A Red Hat Storage volume is a logical collection of bricks, where each brick is an export directory on a server in the trusted storage pool. Most of the Red Hat Storage Server management operations are performed on the volume.

Warning

Red Hat does not support writing data directly into the bricks. Read and write data only through the Native Client, or through NFS or SMB mounts.

Note

Red Hat Storage supports IP over Infiniband (IPoIB). Install the Infiniband packages on all Red Hat Storage servers and clients to support this feature. Run the following command to install the Infiniband packages:
# yum groupinstall "Infiniband Support"
Red Hat Storage support for RDMA over Infiniband is a technology preview feature.

Volume Types

Distributed
Distributes files across bricks in the volume.
Use this volume type where scaling and redundancy requirements are not important, or are provided by other hardware or software layers.
See Section 8.3, “Creating Distributed Volumes” for additional information about this volume type.
Replicated
Replicates files across bricks in the volume.
Use this volume type in environments where high availability and high reliability are critical.
See Section 8.4, “Creating Replicated Volumes” for additional information about this volume type.
Distributed Replicated
Distributes files across replicated bricks in the volume.
Use this volume type in environments where high-reliability and scalability are critical. This volume type offers improved read performance in most environments.
See Section 8.5, “Creating Distributed Replicated Volumes” for additional information about this volume type.

Important

Striped, Striped-Replicated, Distributed-Striped, and Distributed-Striped-Replicated volume types are under technology preview. Technology Preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.

Striped
Stripes data across bricks in the volume.
Use this volume type only in high-concurrency environments where accessing very large files is required.
See Section 8.6, “Creating Striped Volumes” for additional information about this volume type.
Striped Replicated
Stripes data across replicated bricks in the trusted storage pool.
Use this volume type only in highly concurrent environments where there is parallel access to very large files and performance is critical.
This volume type is supported for Map Reduce workloads only. See Section 8.8, “Creating Striped Replicated Volumes” for additional information about this volume type and its restrictions.
Distributed Striped
Stripes data across two or more nodes in the trusted storage pool.
Use this volume type where storage must be scalable, and in high-concurrency environments where accessing very large files is critical.
See Section 8.7, “Creating Distributed Striped Volumes” for additional information about this volume type.
Distributed Striped Replicated
Distributes striped data across replicated bricks in the trusted storage pool.
Use this volume type only in highly concurrent environments where performance and parallel access to very large files are critical.
This volume type is supported for Map Reduce workloads only. See Section 8.9, “Creating Distributed Striped Replicated Volumes” for additional information about this volume type.

8.1. Formatting and Mounting Bricks

To create a Red Hat Storage volume, specify the bricks that comprise it. After the volume has been created, it must be started before it can be mounted.

Important

Red Hat supports formatting a Logical Volume using the XFS file system on the bricks. Bricks created by formatting a raw disk partition with the XFS file system are not supported by Red Hat.

Formatting and Mounting Bricks

Format bricks using the supported XFS configuration, mount the bricks, and verify the bricks are mounted correctly.

Important

Allocate 15% to 20% of free space to take advantage of the Red Hat Storage volume snapshot feature, which is planned for a future release of Red Hat Storage.
  1. Run # mkfs.xfs -i size=512 DEVICE to format the bricks to the supported XFS file system format. The inode size is set to 512 bytes to accommodate the extended attributes used by Red Hat Storage.
  2. Run # blkid DEVICE to obtain the Universally Unique Identifier (UUID) of the device.
  3. Run # mkdir /mountpoint to create a directory to serve as the mount point for the brick.
  4. Add an entry in /etc/fstab using the obtained UUID from the blkid command:
    UUID=uuid    /mountpoint      xfs     defaults   1  2
  5. Run # mount /mountpoint to mount the brick.
  6. Run the df -h command to verify the brick is successfully mounted:
    # df -h
    /dev/vg_bricks/lv_exp1   16G  1.2G   15G   7% /exp1
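
The fstab entry in step 4 can be generated directly from the blkid output. A minimal sketch, assuming bash; the fstab_entry helper is hypothetical (not part of Red Hat Storage), and the device /dev/sdb1 and mount point /exp1 are illustrative examples:

```shell
# Build the /etc/fstab line for a brick from its UUID and mount point,
# using the field layout shown in step 4 (xfs, defaults, dump 1, fsck pass 2).
fstab_entry() {
    printf 'UUID=%s    %s      xfs     defaults   1  2\n' "$1" "$2"
}

# On a real server the UUID comes from blkid, for example:
#   UUID=$(blkid -s UUID -o value /dev/sdb1)
fstab_entry "6d9f4b42-demo-uuid" /exp1
```

Appending the generated line to /etc/fstab (as root) completes step 4; running mount /mountpoint then picks the entry up.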

Using Subdirectory as the Brick for Volume

You can create an XFS file system, mount it, and use the mount point as a brick when creating a Red Hat Storage volume. However, if the mount point is unavailable, the data is written directly to the root file system in the unmounted directory.
For example, the /exp directory is the mounted file system and is used as the brick for volume creation. If the mount point becomes unavailable for any reason, writes continue to go to the /exp directory, but they now land on the root file system.
To avoid this issue, perform the following procedure.
During Red Hat Storage setup, create an XFS file system and mount it. After mounting it, create a subdirectory and use this subdirectory as the brick for volume creation. Here, the XFS file system is mounted at /bricks. After the file system is available, create a directory called /bricks/bricksrv1 and use it for volume creation. This approach has the following advantages:
  • When the /bricks file system is unavailable, the /bricks/bricksrv1 directory is no longer available in the system. Hence, there is no data loss from writes going to a different location.
  • This does not require any additional file system for nesting.
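The safeguard described above can also be verified from a script before a brick is used. A minimal sketch, assuming bash and the util-linux mountpoint utility; the brick_ready function is a hypothetical helper, not part of Red Hat Storage:

```shell
# Succeed only when the given path is a real mount point and the brick
# subdirectory exists inside it. If the file system is not mounted, the
# subdirectory is absent, so the check fails instead of allowing writes
# to land on the root file system.
brick_ready() {
    mountpoint -q "$1" && [ -d "$1/$2" ]
}

# "/" is always a mount point, so this demonstrates the check itself;
# on a storage server you would test e.g. brick_ready /bricks bricksrv1.
if brick_ready / tmp; then
    echo "brick directory ready"
fi
```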
Perform the following to use subdirectories as bricks for creating a volume:
  1. Run the pvcreate command to initialize the partition.
    The following example initializes the partition /dev/sdb1 as an LVM (Logical Volume Manager) physical volume for later use as part of an LVM logical volume.
    # pvcreate /dev/sdb1
  2. Run the following command to create the volume group:
    # vgcreate datavg /dev/sdb1
  3. Create the logical volume lvol1 from the volume group datavg.
    # lvcreate -l <number of extents> --name lvol1 datavg
  4. Create an XFS file system on the logical volume.
    # mkfs -t xfs -i size=512 /dev/mapper/datavg-lvol1
  5. Create /bricks mount point using mkdir command.
    # mkdir /bricks
  6. Mount the XFS file system.
    # mount -t xfs /dev/datavg/lvol1 /bricks
  7. Create the bricksrv1 subdirectory in the mounted file system.
    # mkdir /bricks/bricksrv1
    Repeat the above steps on all nodes.
  8. Create the Red Hat Storage volume using the subdirectories as bricks.
    # gluster volume create distdata01 ad-rhs-srv1:/bricks/bricksrv1 ad-rhs-srv2:/bricks/bricksrv2
  9. Start the Red Hat Storage volume.
    # gluster volume start distdata01
  10. Verify the status of the volume.
    # gluster volume status distdata01
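
The steps above can be collected into a single script. A dry-run sketch is shown here (each command is echoed rather than executed) so the sequence is visible without root privileges; the device /dev/sdb1, the names datavg, lvol1, and distdata01, and the servers ad-rhs-srv1/ad-rhs-srv2 are the examples from the text, and the extent argument 100%FREE is a hypothetical substitution for <number of extents>:

```shell
# Dry-run sketch of the subdirectory-brick procedure. run() echoes each
# command; remove the echo to execute for real (as root, on each node).
run() { echo "# $*"; }

run pvcreate /dev/sdb1                                # 1. initialize the partition
run vgcreate datavg /dev/sdb1                         # 2. create the volume group
run lvcreate -l 100%FREE --name lvol1 datavg          # 3. hypothetical extent count
run mkfs -t xfs -i size=512 /dev/mapper/datavg-lvol1  # 4. XFS with 512-byte inodes
run mkdir /bricks                                     # 5. create the mount point
run mount -t xfs /dev/datavg/lvol1 /bricks            # 6. mount the file system
run mkdir /bricks/bricksrv1                           # 7. subdirectory used as brick
# Steps 8-10 run once, after all nodes have their bricks in place.
run gluster volume create distdata01 ad-rhs-srv1:/bricks/bricksrv1 ad-rhs-srv2:/bricks/bricksrv2
run gluster volume start distdata01
run gluster volume status distdata01
```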

Reusing a Brick from a Deleted Volume

Bricks from deleted volumes can be reused; however, some steps are required to make the brick reusable.
Brick with a File System Suitable for Reformatting (Optimal Method)
Run # mkfs.xfs -f -i size=512 DEVICE to reformat the brick to the supported requirements, making it available for immediate reuse in a new volume.

Note

All data will be erased when the brick is reformatted.
File System on a Parent of a Brick Directory
If the file system cannot be reformatted, remove the whole brick directory and create it again.

Procedure 8.1. Cleaning An Unusable Brick

If the file system associated with the brick cannot be reformatted, and the brick directory cannot be removed, perform the following steps:
  1. Delete all previously existing data in the brick, including the .glusterfs subdirectory.
  2. Run # setfattr -x trusted.glusterfs.volume-id brick and # setfattr -x trusted.gfid brick to remove the attributes from the root of the brick.
  3. Run # getfattr -d -m . brick to examine the attributes set on the volume. Take note of the attributes.
  4. Run # setfattr -x attribute brick to remove the attributes relating to the glusterFS file system.
    The trusted.glusterfs.dht attribute for a distributed volume is one such example of attributes that need to be removed.
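
The attribute handling in steps 2 through 4 can be exercised safely on a scratch directory. A minimal sketch, assuming bash with the attr tools (getfattr/setfattr) installed on a file system that supports extended attributes; the user.* namespace is used here because, unlike the trusted.* namespace in the procedure, it does not require root:

```shell
# Simulate a stale attribute on a scratch directory, then remove it with
# setfattr -x, exactly as Procedure 8.1 does for trusted.glusterfs.volume-id.
BRICK=$(mktemp -d)

setfattr -n user.demo.volume-id -v stale "$BRICK"  # plant a fake attribute
getfattr -d -m . "$BRICK"                          # step 3: list what is set
setfattr -x user.demo.volume-id "$BRICK"           # steps 2/4: remove it
getfattr -d -m . "$BRICK"                          # prints nothing: attributes gone

rm -r "$BRICK"
```

On a real brick, run the same setfattr -x and getfattr commands as root against the trusted.* attribute names listed in the procedure.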