5.4. Formatting and Mounting Bricks
5.4.1. Creating Bricks Manually
- Red Hat supports formatting a Logical Volume using the XFS file system on the bricks.
5.4.1.1. Creating a Thinly Provisioned Logical Volume
- Create a physical volume (PV) by using the pvcreate command:
# pvcreate --dataalignment 1280K /dev/sdb
Here, /dev/sdb is a storage device. Use the correct dataalignment option based on your device. For more information, see Section 20.2, “Brick Configuration”.
Note: The device name and the alignment value will vary based on the device you are using.
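As a rough illustration of where the 1280K figure comes from, the alignment is typically the RAID stripe unit multiplied by the number of data disks. The values below are an assumption (a 128 KiB stripe unit across 10 data disks, matching the su=128k,sw=10 values used in the mkfs.xfs example later in this section); Section 20.2, “Brick Configuration” has the authoritative rules for your hardware.

```shell
stripe_unit_kib=128   # RAID stripe unit in KiB (hardware dependent, assumed here)
data_disks=10         # number of data-bearing disks, excluding parity (assumed here)

# alignment = stripe unit * number of data disks
dataalignment_kib=$((stripe_unit_kib * data_disks))
echo "pvcreate --dataalignment ${dataalignment_kib}K /dev/sdb"
```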
- Create a Volume Group (VG) from the PV using the vgcreate command:
# vgcreate --physicalextentsize 1280K rhs_vg /dev/sdb
- Create a thin-pool using the following commands:
# lvcreate --thin VOLGROUP/thin_pool --size pool_sz --chunksize chunk_sz --poolmetadatasize metadev_sz --zero n
For example:
# lvcreate --thin rhs_vg/rhs_pool --size 2T --chunksize 1280K --poolmetadatasize 16G --zero n
To enhance the performance of Red Hat Gluster Storage, ensure you read Chapter 20, Tuning for Performance.
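As a hedged sketch of one common sizing rule of thumb for --poolmetadatasize: roughly 0.5% of the thin pool size, capped at 16 GiB (the LVM thin pool metadata maximum is just under 16 GiB). This is an assumption for illustration, not the recommendation from this guide; the example above simply uses the 16G maximum for a 2T pool, and Chapter 20 has the authoritative guidance.

```shell
# 0.5%-of-pool rule of thumb (assumed), capped at the ~16 GiB LVM limit
pool_size_mib=$((2 * 1024 * 1024))   # 2 TiB pool expressed in MiB
meta_mib=$((pool_size_mib / 200))    # 0.5% of the pool size
max_meta_mib=$((16 * 1024))          # 16 GiB cap
if [ "$meta_mib" -gt "$max_meta_mib" ]; then
  meta_mib=$max_meta_mib
fi
echo "suggested --poolmetadatasize: ${meta_mib}M"
```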
- Create a thinly provisioned volume that uses the previously created pool by running the lvcreate command with the --virtualsize and --thin options:
# lvcreate --virtualsize size --thin volgroup/poolname --name volname
For example:
# lvcreate --virtualsize 1G --thin rhs_vg/rhs_pool --name rhs_lv
It is recommended that only one LV should be created in a thin pool.
5.4.1.2. Creating a Thickly Provisioned Logical Volume
- Use the -l logdev=device option with the mkfs.xfs command to specify a separate log device when formatting the Red Hat Gluster Storage bricks.
- Run
# mkfs.xfs -f -i size=512 -n size=8192 -d su=128k,sw=10 DEVICE
to format the bricks to the supported XFS file system format. Here, DEVICE is the created thin LV. The inode size is set to 512 bytes to accommodate the extended attributes used by Red Hat Gluster Storage.
- Run
# mkdir /mountpoint
to create a directory to link the brick to.
- Add an entry in /etc/fstab:
/dev/rhs_vg/rhs_lv /mountpoint xfs rw,inode64,noatime,nouuid 1 2
- Run
# mount /mountpoint
to mount the brick.
- Run the df -h command to verify the brick is successfully mounted:
# df -h
/dev/rhs_vg/rhs_lv   16G  1.2G   15G   7% /rhgs
- If SELinux is enabled, the SELinux labels have to be set manually for the bricks by using the following commands:
# semanage fcontext -a -t glusterd_brick_t /rhgs/brick1
# restorecon -Rv /rhgs/brick1
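When labelling several bricks, it is common semanage practice to register the file context with a "(/.*)?" regex suffix so that restorecon relabels the directory and everything beneath it. The snippet below only builds the command strings for the /rhgs/brick1 example path; the regex-suffix convention is an assumption based on general SELinux usage, not a statement of this guide's requirements.

```shell
brick=/rhgs/brick1
# "(/.*)?" matches the directory itself plus any file or subdirectory under it
pattern="${brick}(/.*)?"
echo "semanage fcontext -a -t glusterd_brick_t \"$pattern\""
echo "restorecon -Rv $brick"
```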
5.4.2. Using Subdirectory as the Brick for Volume
For example, the /rhgs directory is the mounted file system and is used as the brick for volume creation. However, if for some reason the mount point becomes unavailable, any write continues to happen in the /rhgs directory, but this is now under the root file system.
To overcome this issue, mount the file system (at /rhgs, or another path such as /bricks). After the file system is available, create a directory called
/rhgs/brick1 and use it for volume creation. Ensure that no more than one brick is created from a single mount. This approach has the following advantages:
- When the /rhgs file system is unavailable, the /rhgs/brick1 directory is no longer available in the system. Hence, there is no data loss from writes going to a different location.
- This does not require any additional file system for nesting.
- Create the brick1 subdirectory in the mounted file system:
# mkdir /rhgs/brick1
Repeat the above steps on all nodes.
- Create the Red Hat Gluster Storage volume using the subdirectories as bricks.
# gluster volume create distdata01 ad-rhs-srv1:/rhgs/brick1 ad-rhs-srv2:/rhgs/brick2
- Start the Red Hat Gluster Storage volume.
# gluster volume start distdata01
- Verify the status of the volume.
# gluster volume status distdata01
# df -h
/dev/rhs_vg/rhs_lv1   16G  1.2G   15G   7% /rhgs1
/dev/rhs_vg/rhs_lv2   16G  1.2G   15G   7% /rhgs2
# gluster volume create test-volume server1:/rhgs1/brick1 server2:/rhgs1/brick1 server1:/rhgs2/brick2 server2:/rhgs2/brick2
5.4.3. Reusing a Brick from a Deleted Volume
- Run
# mkfs.xfs -f -i size=512 device
to reformat the brick to supported requirements, and make it available for immediate reuse in a new volume.
5.4.4. Cleaning An Unusable Brick
- Delete all previously existing data in the brick, including the .glusterfs subdirectory.
- Run
# setfattr -x trusted.glusterfs.volume-id brick
and
# setfattr -x trusted.gfid brick
to remove the attributes from the root of the brick.
- Run
# getfattr -d -m . brick
to examine the attributes set on the volume. Take note of the attributes.
- Run
# setfattr -x attribute brick
to remove the attributes relating to the glusterFS file system. The trusted.glusterfs.dht attribute for a distributed volume is one such example of an attribute that needs to be removed.
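The steps above can be sketched as a small script: parse the names of the trusted.* attributes out of getfattr output and emit one setfattr -x command per attribute. The sample getfattr output below is an illustrative assumption (the base64 values are placeholders); run getfattr against your own brick instead of using it verbatim.

```shell
# Sample `getfattr -d -m . brick` output (assumed for illustration only)
sample_getfattr_output='# file: brick
trusted.gfid=0sAAAAAAAAAAAAAAAAAAAAAQ==
trusted.glusterfs.dht=0sAAAAAQAAAAAAAAAA/////w==
trusted.glusterfs.volume-id=0sdGVzdC12b2x1bWUtaWQAAAA='

# Keep only the attribute names (text before the first "=") on trusted.* lines
attrs=$(printf '%s\n' "$sample_getfattr_output" | sed -n 's/^\(trusted\.[^=]*\)=.*/\1/p')

# Emit one removal command per attribute; pipe to sh to actually run them
for attr in $attrs; do
  echo "setfattr -x $attr brick"
done
```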