5.3. GFS Configuration

GFS file systems are certified for use with specific versions of Oracle RAC. Oracle customers should see Oracle Support Document 329530.1 for all currently certified combinations.
Clustered GFS requires that the Distributed Lock Manager (DLM) and the Clustered LVM (CLVMD) services be configured and started. The DLM, if present, is started by CMAN. The RPM group should have installed all relevant components.
CLVMD requires only one change to the /etc/lvm/lvm.conf file: locking_type must be set to 3:
# Type of locking to use. Defaults to local file-based locking (1).
# Turn locking off by setting to 0 (dangerous: risks metadata corruption
# if LVM2 commands get run concurrently).
# Type 2 uses the external shared library locking_library.
# Type 3 uses built-in clustered locking.
locking_type = 3
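The lvmconf utility (shipped in the lvm2-cluster package) can make this edit for you; the following is a minimal sketch, assuming the default RHEL 5 init scripts:
$ sudo lvmconf --enable-cluster          # sets locking_type = 3 in /etc/lvm/lvm.conf
$ grep locking_type /etc/lvm/lvm.conf    # verify the change
$ sudo service clvmd restart             # restart CLVMD so it picks up the new locking type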

Note

The GFS service will not start up if the fenced service has not started.
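On RHEL 5, fenced is started as part of the cman init script, so a typical startup sequence (a sketch, assuming the default init scripts) is:
$ sudo service cman start        # starts ccsd, cman, and fenced
$ sudo service clvmd start       # clustered LVM; requires the DLM
$ sudo service gfs start         # mounts the GFS file systems listed in /etc/fstab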

Note

Host-based mirroring (or, more importantly, host-based RAID) is not recommended for use with RAC, especially for mission-critical databases. RAC requires a storage array, and any storage array worthy of running an Oracle RAC cluster will have superior RAID and RAIDSET management capabilities. Concatenating volumes does not involve RAID management, so it is less bug-prone than using multiple layers of RAID.

Warning

GFS volumes can be grown if the file system requires more capacity. Once the LUN has been expanded, the gfs_grow command is used to expand the file system. Keeping each file system mapped to a single LUN reduces the errors (or bugs) that might arise during gfs_grow operations. There is no performance difference between using the device mapper multipath device directly and using CLVMD-created logical volumes built on those devices. However, it must be stressed that you should back up your data before attempting this command, as it has the potential to render your data unusable.
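As a sketch (the mount point is taken from the /etc/fstab entries later in this section), growing a mounted file system after the underlying LUN has been expanded looks like this:
$ sudo gfs_grow -T /mnt/ohome    # trial run: report what would be done without changing anything
$ sudo gfs_grow /mnt/ohome       # expand the file system to fill the enlarged LUN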

5.3.1. GFS File System Creation

For RAC, the file system must be created with arguments specific to the locking mechanism (always DLM) and the name of the cluster (rac585, in our case).
$ sudo gfs_mkfs -r 512 -j 4 -p lock_dlm -t rac585:ohome /dev/mapper/ohome

$ sudo gfs_mkfs -j 4 -p lock_dlm -t rac585:db /dev/mapper/db
Oracle manages data files with transaction redo logs, and with Oracle configured in AIO/DIO mode, writes always go to disk, so the default journal size is usually sufficient. The increased resource-group size (-r 512) is recommended for ORACLE_HOME, where the $OH/diag directory can contain thousands of trace files spanning tens of GBs.
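To confirm the journal count and resource-group layout after creation, gfs_tool can report the details of a mounted file system (a sketch; the mount point assumes the /etc/fstab entries in the next section):
$ sudo gfs_tool df /mnt/ohome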

Note

Oracle Clusterware HOME is not supported on GFS clustered volumes at this time. For most installations, this will not be an imposition. There are several advantages (including asynchronous rolling upgrades) to placing ORA_CRS_HOME on the node’s local file system, and most customers follow this practice.

5.3.2. /etc/fstab Entries

/dev/mapper/ohome   /mnt/ohome      gfs     _netdev 0 0
/dev/mapper/db      /mnt/db         gfs     _netdev 0 0
The _netdev mount option ensures that these file systems are not mounted until the network has started and, conversely, that they are unmounted before cluster services shut down.
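Assuming the mount points above have been created, the file systems can then be mounted by mount point, with the remaining options taken from /etc/fstab (a sketch):
$ sudo mkdir -p /mnt/ohome /mnt/db
$ sudo mount /mnt/ohome
$ sudo mount /mnt/db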

5.3.3. Context Dependent Pathnames (CDPN)

When ORACLE_HOME ($OH) is located on a GFS clustered volume, certain directories need to appear the same to each node (including names of files, such as listener.ora), but have node-specific contents.
To enable CDPN for $OH/network/admin, perform the following steps.
  1. Change to the $OH/network directory:
    $ cd $OH/network
  2. Create directories that correspond to the hostnames:
    $ mkdir rac7
    $ mkdir rac8
  3. Create the admin directory in each directory:
    $ mkdir rac7/admin
    $ mkdir rac8/admin
  4. Create the CDPN link (from each host).
    On RAC7, in $OH/network:
    $ ln -s @hostname admin
    On RAC8, in $OH/network:
    $ ln -s @hostname admin
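Each node now resolves the admin link through its own hostname; a quick check on either node (a sketch) is:
$ ls -l admin           # the link target is the literal string @hostname
$ cd admin && pwd -P    # resolves to .../network/<hostname>/admin on each node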