2.5. Usage Considerations
2.5.1. Mount Options: noatime and nodiratime
It is generally recommended to mount GFS2 file systems with the noatime and nodiratime arguments. This allows GFS2 to spend less time updating disk inodes for every access. For more information on the effect of these arguments on GFS2 file system performance, see Section 2.9, “GFS2 Node Locking”.
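For example, a GFS2 file system could be mounted with these options as follows; this is a minimal sketch in which the device path /dev/vg_cluster/lv_gfs2 and the mount point /mnt/gfs2 are placeholders, and in a Pacemaker cluster the same options would normally be set on the Filesystem resource that manages the mount rather than on a manual mount command:
# mount -t gfs2 -o noatime,nodiratime /dev/vg_cluster/lv_gfs2 /mnt/gfs2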
2.5.2. VFS Tuning Options: Research and Experiment
Like most file systems, GFS2 sits on top of a layer called the virtual file system (VFS). You can tune the VFS layer to improve underlying GFS2 performance by using the sysctl(8) command. For example, the values for dirty_background_ratio and vfs_cache_pressure may be adjusted depending on your situation. To fetch the current values, use the following commands:
# sysctl -n vm.dirty_background_ratio
# sysctl -n vm.vfs_cache_pressure
The following commands adjust the values:
# sysctl -w vm.dirty_background_ratio=20
# sysctl -w vm.vfs_cache_pressure=500
To find the optimal values for your use case, research the various VFS options and experiment on a test cluster before deploying into full production.
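Note that settings made with sysctl -w do not survive a reboot. Once you have settled on values, they can be made persistent in a sysctl configuration file; the following is a minimal sketch in which the drop-in file name /etc/sysctl.d/90-gfs2.conf and the values shown are placeholders, not recommendations:
# cat /etc/sysctl.d/90-gfs2.conf
vm.dirty_background_ratio = 20
vm.vfs_cache_pressure = 500
# sysctl -p /etc/sysctl.d/90-gfs2.conf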
2.5.3. SELinux on GFS2
Using SELinux with GFS2 incurs a small performance penalty. To avoid this overhead, you may choose not to use SELinux with GFS2, even on a system with SELinux in enforcing mode. When mounting a GFS2 file system, you can ensure that SELinux will not attempt to read the seclabel element on each file system object by using one of the context options as described on the mount(8) man page; SELinux will assume that all content in the file system is labeled with the seclabel element provided in the context mount options. This will also speed up processing as it avoids another disk read of the extended attribute block that could contain seclabel elements.
For example, on a system with SELinux in enforcing mode, you can use the following mount command to mount the GFS2 file system if the file system is going to contain Apache content. This label will apply to the entire file system; it remains in memory and is not written to disk.
# mount -t gfs2 -o context=system_u:object_r:httpd_sys_content_t:s0 /dev/mapper/xyz /mnt/gfs2
If you are not sure whether the file system will contain Apache content, you can use the labels public_content_rw_t or public_content_t, or you could define a new label altogether and define a policy around it.
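A mount using one of those more general labels follows the same pattern as the example above; the device path and mount point below are the same placeholders carried over from that example:
# mount -t gfs2 -o context=system_u:object_r:public_content_t:s0 /dev/mapper/xyz /mnt/gfs2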
2.5.4. Setting Up NFS Over GFS2
Due to the added complexity of the GFS2 locking subsystem and its clustered nature, setting up NFS over GFS2 requires taking many precautions and careful configuration.
Every GFS2 file system that is exported with NFS must be mounted with the localflocks option. The effect of this is to force both POSIX locks and flocks from each server to be local: non-clustered, independent of each other. This is necessary because a number of problems exist if GFS2 attempts to implement POSIX locks from NFS across the nodes of a cluster. For applications running on NFS clients, localized POSIX locks mean that two clients can hold the same lock concurrently if the two clients are mounting from different servers. For this reason, when using NFS over GFS2, it is always safest to specify the -o localflocks mount option so that NFS can coordinate both the POSIX locks and the flocks among all clients mounting NFS.
For all other (non-NFS) GFS2 applications, do not mount your file system using localflocks, so that GFS2 will manage the POSIX locks and flocks between all the nodes in the cluster (on a cluster-wide basis). If you specify localflocks and do not use NFS, the other nodes in the cluster will not have knowledge of each other's POSIX locks and flocks, thus making them unsafe in a clustered environment.
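As a minimal sketch of the NFS-export case (the device path and mount point are placeholders), the file system on the node acting as the NFS server would be mounted along these lines; in a Pacemaker cluster this option would normally be set on the Filesystem resource rather than on a manual mount:
# mount -t gfs2 -o localflocks /dev/mapper/xyz /mnt/gfs2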
In addition to the locking considerations, take the following into account when configuring an NFS service over a GFS2 file system.
- Red Hat supports only Red Hat High Availability Add-On configurations using NFSv3 with locking in an active/passive configuration with the following characteristics. This configuration provides High Availability (HA) for the file system and reduces system downtime since a failed node does not result in the requirement to execute the fsck command when failing the NFS server from one node to another.
  - The back-end file system is a GFS2 file system running on a 2 to 16 node cluster.
  - An NFSv3 server is defined as a service exporting the entire GFS2 file system from a single cluster node at a time.
  - The NFS server can fail over from one cluster node to another (active/passive configuration).
  - No access to the GFS2 file system is allowed except through the NFS server. This includes both local GFS2 file system access as well as access through Samba or Clustered Samba.
  - There is no NFS quota support on the system.
- The fsid= NFS option is mandatory for NFS exports of GFS2 (see the example export entry after this list).
- If problems arise with your cluster (for example, the cluster becomes inquorate and fencing is not successful), the clustered logical volumes and the GFS2 file system will be frozen and no access is possible until the cluster is quorate. You should consider this possibility when determining whether a simple failover solution such as the one defined in this procedure is the most appropriate for your system.
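As a hedged sketch of the export itself (the export path, client network, and fsid value below are placeholders, and in a Pacemaker cluster the export would normally be managed as an exportfs resource rather than a static /etc/exports entry), an NFSv3 export of the GFS2 file system might look like this:
# cat /etc/exports
/mnt/gfs2 192.168.1.0/24(rw,sync,fsid=25)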
2.5.5. Samba (SMB or Windows) File Serving Over GFS2
You can use Samba (SMB or Windows) file serving from a GFS2 file system with CTDB, which allows active/active configurations. Simultaneous access to the data in the Samba share from outside of Samba is not supported; there is currently no support for GFS2 cluster leases, which slows Samba file serving.
2.5.6. Configuring Virtual Machines for GFS2
When using a GFS2 file system with a virtual machine, it is important that the VM storage settings on each node be configured properly in order to force the cache off. For example, including these settings for cache and io in the libvirt domain should allow GFS2 to behave as expected.
<driver name='qemu' type='raw' cache='none' io='native'/>
Alternatively, you can configure the shareable attribute within the device element. This indicates that the device is expected to be shared between domains (as long as hypervisor and OS support this). If shareable is used, cache='none' should be used for that device.
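The following is a hedged sketch combining these recommendations; the source device path and target name are placeholders, support for marking a disk shareable depends on your hypervisor and guest OS, and note that libvirt expresses shareable as a child element of the disk definition:
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/mapper/xyz'/>
  <target dev='vdb' bus='virtio'/>
  <shareable/>
</disk>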