Chapter 4. Hosting Virtual Machine Images on Red Hat Gluster Storage volumes
4.1. Configuring Volumes Using the Command Line Interface
Procedure 4.1. To Configure Volumes Using the Command Line Interface
Configure the rhgs-random-io tuned profile

Install the tuned tuning daemon and configure the Red Hat Gluster Storage servers to use the rhgs-random-io profile:

# yum install tuned
# tuned-adm profile rhgs-random-io

For more information on available tuning profiles, refer to the tuned-adm man page, or see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/administration_guide/.
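The tuning step above can be sketched as a small helper. The helper name and the preview mechanism are assumptions, not part of this guide: passing echo prints the commands for review, since yum and tuned-adm only make sense on a real Red Hat Gluster Storage server.

```shell
# Hedged sketch: preview or run the rhgs-random-io tuning step.
# Pass "echo" to preview the commands; pass "" on a real server.
tune_rhgs() {
  run="$1"
  $run yum install -y tuned              # install the tuning daemon
  $run tuned-adm profile rhgs-random-io  # switch to the random-io profile
  $run tuned-adm active                  # confirm the active profile
}
tune_rhgs echo
```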
Review virt volume group configuration details

The settings stored in the /var/lib/glusterd/groups/virt file are used to configure volumes in the virt group.

Important
When you upgrade, a new virt file may be created in /var/lib/glusterd/groups/virt.rpmnew. Ensure that the new settings are applied to existing volumes by renaming the virt.rpmnew file to virt, and then reapplying any customized settings.

By default, the /var/lib/glusterd/groups/virt file contains the following recommended settings.
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.stat-prefetch=off
performance.low-prio-threads=32
network.remote-dio=enable
cluster.eager-lock=enable
cluster.quorum-type=auto
cluster.server-quorum-type=server
cluster.data-self-heal-algorithm=full
cluster.locking-scheme=granular
cluster.shd-max-threads=8
cluster.shd-wait-qlength=10000
features.shard=on
user.cifs=off

With the exception of cluster.data-self-heal-algorithm, these settings prevent caching within the GlusterFS client stack, which is the preferred mode for attaching disks to a virtual machine. The cluster.eager-lock option optimizes write performance with synchronous replication when there is a single writer to a file. The features.shard option enables sharding behavior. The cluster.data-self-heal-algorithm option specifies how self-heal operations are performed. For more information about any of these settings, see the Red Hat Gluster Storage 3.2 Administration Guide: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.2/html/Administration_Guide/chap-Managing_Red_Hat_Storage_Volumes.html#Configuring_Volume_Options
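Applying a volume group is, in effect, equivalent to setting each key=value pair from the group file individually. The mapping can be sketched as follows; the snippet reads from an inline list instead of /var/lib/glusterd/groups/virt so it runs anywhere, and vmstore is a hypothetical volume name.

```shell
# Sketch: expand a group file's key=value lines into individual
# "gluster volume set" commands. Reads from stdin (instead of
# /var/lib/glusterd/groups/virt) so the sketch is self-contained;
# "vmstore" is a hypothetical volume name. Nothing is executed
# against a cluster -- the commands are only printed.
expand_group() {
  vol="$1"
  while IFS='=' read -r key value; do
    [ -n "$key" ] && echo "gluster volume set $vol $key $value"
  done
}
expand_group vmstore <<'EOF'
features.shard=on
user.cifs=off
EOF
```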
Note
Server-Side and Client-Side Quorum are enabled by default in the /var/lib/glusterd/groups/virt file to minimize split-brain scenarios. If Server-Side Quorum is not met, the Red Hat Gluster Storage volumes become unavailable and the virtual machines (VMs) move to a paused state. If Client-Side Quorum is not met, a replica pair in a Red Hat Gluster Storage volume remains available in read-only mode, and the VMs move to a paused state. Manual intervention is required to resume VM operation after quorum is restored. Consistency is achieved at the cost of fault tolerance. If fault tolerance is preferred over consistency, disable server-side and client-side quorum with the following commands:

# gluster volume reset <vol-name> server-quorum-type
# gluster volume reset <vol-name> quorum-type

For more information on these configuration settings, see the Red Hat Gluster Storage Administration Guide.
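The consistency-versus-fault-tolerance choice above can be sketched as a small helper that prints the appropriate commands for review. The helper name and the vmstore volume name are assumptions; the reset commands are the ones shown above, and nothing is executed against a cluster.

```shell
# Hedged sketch: print the commands for a chosen quorum trade-off.
# "fault-tolerance" emits the two reset commands from this step;
# anything else keeps the virt-group quorum defaults.
quorum_commands() {
  vol="$1"; prefer="$2"
  if [ "$prefer" = "fault-tolerance" ]; then
    echo "gluster volume reset $vol server-quorum-type"
    echo "gluster volume reset $vol quorum-type"
  else
    echo "# keeping server- and client-side quorum enabled for $vol"
  fi
}
quorum_commands vmstore fault-tolerance
```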
Assign volumes to virt group

Red Hat recommends assigning volumes that store virtual machine images to the virt volume group so that these volumes share the configuration details appropriate for this use case. This has the same effect as the Optimize for Virt Store option in the management console.
# gluster volume set VOLNAME group virt
Important
After tagging the volume with group virt, use the volume for storing virtual machine images only, and always access the volume through the glusterFS native client.
Allow KVM and VDSM brick access

Set the brick permissions so that the bricks can be accessed by KVM and VDSM. On Red Hat Virtualization hosts, UID 36 corresponds to the vdsm user and GID 36 to the kvm group. If you do not set the required brick permissions, creation of virtual machines fails.

- Set the user and group permissions using the following commands:

# gluster volume set VOLNAME storage.owner-uid 36
# gluster volume set VOLNAME storage.owner-gid 36
Configure granular healing

Red Hat recommends setting cluster.granular-entry-heal to enable for this use case. To configure granular healing, run the following command.

# gluster volume set VOLNAME cluster.granular-entry-heal enable
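The volume-side steps of this procedure can be collected into one sketch. The DRY_RUN preview and the vmstore default volume name are assumptions for illustration; with DRY_RUN=echo the commands are only printed, and setting DRY_RUN to an empty string on a real Red Hat Gluster Storage server would execute them.

```shell
# Hedged sketch: the volume-side steps of this procedure in one place.
# DRY_RUN=echo prints the commands instead of running them; set DRY_RUN=
# (empty) on a real server to execute. VOLNAME is hypothetical.
VOLNAME="${VOLNAME:-vmstore}"
DRY_RUN="${DRY_RUN:-echo}"
$DRY_RUN gluster volume set "$VOLNAME" group virt                          # apply virt group settings
$DRY_RUN gluster volume set "$VOLNAME" storage.owner-uid 36                # vdsm user
$DRY_RUN gluster volume set "$VOLNAME" storage.owner-gid 36                # kvm group
$DRY_RUN gluster volume set "$VOLNAME" cluster.granular-entry-heal enable  # granular healing
```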