5.3. Formatting and Mounting Bricks
5.3.1. Creating Bricks Manually
- Red Hat supports formatting a Logical Volume using the XFS file system on the bricks.
- Red Hat supports heterogeneous subvolume sizes for distributed volumes (pure distributed, distributed-replicated, or distributed-dispersed). Red Hat does not support heterogeneous brick sizes for bricks of the same subvolume. For example, you can have a 3x3 distributed-replicated volume with 3 bricks of 10GiB, 3 bricks of 50GiB, and 3 bricks of 100GiB, as long as the three 10GiB bricks belong to the same replica set, and similarly the three 50GiB bricks and the three 100GiB bricks each belong to their own replica set. In this way you will have one subvolume of 10GiB, another of 50GiB, and another of 100GiB. The distributed hash table balances the number of files assigned to each subvolume so that the subvolumes are filled proportionally to their size.
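As a rough illustration of the proportional behavior described above (shell arithmetic only, not a gluster command): with subvolumes of 10GiB, 50GiB, and 100GiB, each subvolume's share of new files tracks its share of the 160GiB total.

```shell
# Illustration of proportional placement: each subvolume's share of
# files tracks its share of the total capacity (10 + 50 + 100 = 160 GiB).
total=$((10 + 50 + 100))
for size in 10 50 100; do
    pct=$((size * 100 / total))      # integer percent, rounded down
    echo "subvolume of ${size}GiB receives ~${pct}% of new files"
done
```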
5.3.1.1. Creating a Thinly Provisioned Logical Volume
- Create a physical volume (PV) by using the pvcreate command:
# pvcreate --dataalignment alignment_value device
For example:
# pvcreate --dataalignment 1280K /dev/sdb
Here, /dev/sdb is a storage device. Use the correct dataalignment option based on your device. For more information, see Section 19.2, “Brick Configuration”.
Note
The device name and the alignment value will vary based on the device you are using.
- Create a volume group (VG) from the PV by using the vgcreate command:
# vgcreate --physicalextentsize alignment_value volgroup device
For example:
# vgcreate --physicalextentsize 1280K rhs_vg /dev/sdb
- Create a thin pool using the following command:
# lvcreate --thin volgroup/poolname --size pool_sz --chunksize chunk_sz --poolmetadatasize metadev_sz --zero n
For example:
# lvcreate --thin rhs_vg/rhs_pool --size 2T --chunksize 1280K --poolmetadatasize 16G --zero n
Ensure you read Chapter 19, Tuning for Performance to select appropriate values for these parameters.
- Create a thinly provisioned volume that uses the previously created pool by running the lvcreate command with the --virtualsize and --thin options:
# lvcreate --virtualsize size --thin volgroup/poolname --name volname
For example:
# lvcreate --virtualsize 1G --thin rhs_vg/rhs_pool --name rhs_lv
It is recommended that only one LV be created in a thin pool.
- Format bricks using the supported XFS configuration, mount the bricks, and verify the bricks are mounted correctly. To enhance the performance of Red Hat Gluster Storage, ensure you read Chapter 19, Tuning for Performance before formatting the bricks.
Important
Snapshots are not supported on bricks formatted with external log devices. Do not use the -l logdev=device option with the mkfs.xfs command when formatting Red Hat Gluster Storage bricks.
# mkfs.xfs -f -i size=512 -n size=8192 -d su=128k,sw=10 device
Here, device is the created thin LV. The inode size is set to 512 bytes to accommodate the extended attributes used by Red Hat Gluster Storage.
- Run # mkdir /mountpoint to create a directory to link the brick to.
# mkdir /rhgs
- Add an entry in /etc/fstab:
/dev/volgroup/volname /mountpoint xfs rw,inode64,noatime,nouuid,x-systemd.device-timeout=10min 1 2
For example:
/dev/rhs_vg/rhs_lv /rhgs xfs rw,inode64,noatime,nouuid,x-systemd.device-timeout=10min 1 2
- Run # mount /mountpoint to mount the brick.
- Run the df -h command to verify the brick is successfully mounted:
# df -h
/dev/rhs_vg/rhs_lv 16G 1.2G 15G 7% /rhgs
- If SELinux is enabled, the SELinux labels have to be set manually for the created bricks using the following commands:
# semanage fcontext -a -t glusterd_brick_t /rhgs/brick1
# restorecon -Rv /rhgs/brick1
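The steps above can be collected into one sketch. The commands are printed rather than executed, since they require root and a real block device; the device and volume names are the example's, and the 1280K alignment is derived from the RAID geometry implied by the mkfs.xfs options (su=128k, sw=10, i.e. a stripe unit of 128KiB across 10 data disks).

```shell
# Sketch of the full brick-provisioning sequence from the steps above.
# Commands are printed, not executed: they need root and a real device.
DEVICE=/dev/sdb
STRIPE_UNIT_KIB=128                        # per-disk stripe unit (su=128k)
DATA_DISKS=10                              # data disks in the RAID set (sw=10)
ALIGN="$((STRIPE_UNIT_KIB * DATA_DISKS))K" # 1280K, as in the example
VG=rhs_vg
POOL=rhs_pool
LV=rhs_lv
MNT=/rhgs

echo "pvcreate --dataalignment $ALIGN $DEVICE"
echo "vgcreate --physicalextentsize $ALIGN $VG $DEVICE"
echo "lvcreate --thin $VG/$POOL --size 2T --chunksize 1280K --poolmetadatasize 16G --zero n"
echo "lvcreate --virtualsize 1G --thin $VG/$POOL --name $LV"
echo "mkfs.xfs -f -i size=512 -n size=8192 -d su=${STRIPE_UNIT_KIB}k,sw=$DATA_DISKS /dev/$VG/$LV"
echo "mkdir $MNT"
echo "echo '/dev/$VG/$LV $MNT xfs rw,inode64,noatime,nouuid,x-systemd.device-timeout=10min 1 2' >> /etc/fstab"
echo "mount $MNT"
```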
5.3.2. Using Subdirectory as the Brick for Volume
The /rhgs directory is the mounted file system and is used as the brick for volume creation. However, if the mount point becomes unavailable for some reason, any write continues to happen in the /rhgs directory, but it now lands on the root file system.
To overcome this issue, mount the XFS file system at /rhgs but do not use the mount point itself as the brick. After the file system is available, create a directory called /rhgs/brick1 and use it for volume creation. Ensure that no more than one brick is created from a single mount. This approach has the following advantages:
- When the /rhgs file system is unavailable, there is no /rhgs/brick1 directory available in the system. Hence, there will be no data loss from writing to a different location.
- This does not require any additional file system for nesting.
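The first advantage can be demonstrated with plain directories (scratch paths under mktemp, no real mounts involved): while the file system is "mounted" the brick subdirectory exists and writes succeed; once it is gone, writes fail loudly instead of silently landing on the root file system.

```shell
# Scratch-directory illustration (no real mounts): while the file system
# is "mounted" the brick subdirectory exists and writes succeed; once it
# is gone, writes fail instead of silently hitting the root file system.
demo=$(mktemp -d)
mkdir -p "$demo/rhgs/brick1"               # simulated mounted state
touch "$demo/rhgs/brick1/file" && echo "write succeeded"
rm -r "$demo/rhgs"                         # simulated unmounted state
touch "$demo/rhgs/brick1/file" 2>/dev/null || echo "write refused"
```

Run as-is, this prints "write succeeded" followed by "write refused".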
- Create the brick1 subdirectory in the mounted file system:
# mkdir /rhgs/brick1
Repeat the above steps on all nodes.
- Create the Red Hat Gluster Storage volume using the subdirectories as bricks.
# gluster volume create distdata01 ad-rhs-srv1:/rhgs/brick1 ad-rhs-srv2:/rhgs/brick2
- Start the Red Hat Gluster Storage volume.
# gluster volume start distdata01
- Verify the status of the volume.
# gluster volume status distdata01
If multiple bricks are required on one server, mount each brick's file system on a separate mount point and create one subdirectory per mount. For example:
# df -h
/dev/rhs_vg/rhs_lv1 16G 1.2G 15G 7% /rhgs1
/dev/rhs_vg/rhs_lv2 16G 1.2G 15G 7% /rhgs2
# gluster volume create test-volume server1:/rhgs1/brick1 server2:/rhgs1/brick1 server1:/rhgs2/brick2 server2:/rhgs2/brick2
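Brick order on the gluster command line is significant for replicated layouts: consecutive bricks (per replica count) form one replica set, so each member of a set should live on a different server. A sketch assembling the brick list in that order, using the example's server and mount names; the replica 2 count is an assumption added here for illustration.

```shell
# Assemble the brick list so that, for each mount's subdirectory, the
# copies land on different servers. Consecutive bricks form one replica
# set; "replica 2" is an illustrative assumption, not from the example.
bricks=""
for mount in rhgs1/brick1 rhgs2/brick2; do   # one subdirectory per mount
    for srv in server1 server2; do           # replicas across servers
        bricks="$bricks $srv:/$mount"
    done
done
echo "gluster volume create test-volume replica 2$bricks"
```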
5.3.3. Reusing a Brick from a Deleted Volume
Run # mkfs.xfs -f -i size=512 device to reformat the brick to supported requirements and make it available for immediate reuse in a new volume.
5.3.4. Cleaning An Unusable Brick
- Delete all previously existing data in the brick, including the .glusterfs subdirectory.
- Run # setfattr -x trusted.glusterfs.volume-id brick and # setfattr -x trusted.gfid brick to remove the attributes from the root of the brick.
- Run # getfattr -d -m . brick to examine the attributes set on the volume. Take note of the attributes.
- Run # setfattr -x attribute brick to remove the attributes relating to the glusterFS file system. The trusted.glusterfs.dht attribute for a distributed volume is one such example of an attribute that needs to be removed.
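The examine-and-remove steps above can be sketched as a small script that turns each attribute name reported by getfattr into the matching setfattr -x command. The getfattr output below is canned and its values are placeholders, since inspecting a real brick requires root; the brick path is the example's.

```shell
# Turn attribute names from `getfattr -d -m . brick` output into the
# matching `setfattr -x` commands. The sample output is canned and its
# values are placeholders; a real brick must be queried as root.
brick=/rhgs/brick1
sample_output='trusted.gfid=0sAAAA
trusted.glusterfs.dht=0sAAAA
trusted.glusterfs.volume-id=0sAAAA'
printf '%s\n' "$sample_output" | while IFS='=' read -r attr _; do
    echo "setfattr -x $attr $brick"
done
```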