Configuration Example - NFS Over GFS
Configuring NFS over GFS in a Red Hat Cluster
Edition 3
Abstract
Chapter 1. Introduction
1.1. About This Guide
1.2. Audience
1.3. Software Versions
Table 1.1. Software Versions
| Software | Description |
|---|---|
| RHEL5 | refers to RHEL5 and higher |
| GFS | refers to GFS for RHEL5 and higher |
1.4. Related Documentation
- Red Hat Enterprise Linux Installation Guide — Provides information regarding installation of Red Hat Enterprise Linux 5.
- Red Hat Enterprise Linux Deployment Guide — Provides information regarding the deployment, configuration and administration of Red Hat Enterprise Linux 5.
- Red Hat Cluster Suite Overview — Provides a high level overview of the Red Hat Cluster Suite.
- Configuring and Managing a Red Hat Cluster — Provides information about installing, configuring and managing Red Hat Cluster components.
- LVM Administrator's Guide: Configuration and Administration — Provides a description of the Logical Volume Manager (LVM), including information on running LVM in a clustered environment.
- Global File System: Configuration and Administration — Provides information about installing, configuring, and maintaining Red Hat GFS (Red Hat Global File System).
- Global File System 2: Configuration and Administration — Provides information about installing, configuring, and maintaining Red Hat GFS2 (Red Hat Global File System 2).
- Using Device-Mapper Multipath — Provides information about using the Device-Mapper Multipath feature of Red Hat Enterprise Linux 5.
- Using GNBD with Global File System — Provides an overview on using Global Network Block Device (GNBD) with Red Hat GFS.
- Linux Virtual Server Administration — Provides information on configuring high-performance systems and services with the Linux Virtual Server (LVS).
- Red Hat Cluster Suite Release Notes — Provides information about the current release of Red Hat Cluster Suite.
Chapter 2. NFS over GFS in a Red Hat Cluster
The configuration described in this document has the following characteristics:
- There are five nodes in the cluster.
- The NFS service runs over a GFS file system.
- There are five NFS clients.

Figure 2.1. NFS over GFS in a 5-Node Cluster
- Chapter 3, Prerequisite Configuration describes the prerequisite configuration components that have been set up before the procedure documented in this manual begins.
- Chapter 4, Components to Configure summarizes the cluster resources that this procedure configures.
- Chapter 5, Configuring the Cluster Resources provides the procedures for configuring the cluster resources needed for an NFS service.
- Chapter 6, Configuring an NFS Cluster Service provides the procedure for configuring an NFS service in a Red Hat Cluster Suite.
- Chapter 7, Testing the NFS Cluster Service provides a procedure to check that the NFS service is working and that it will continue to work as expected if one of the nodes goes down.
- Chapter 8, Troubleshooting provides some guidelines to follow when your configuration does not behave as expected.
- Chapter 9, The Cluster Configuration File shows the cluster configuration file as it appears before configuring the NFS service and after configuring the NFS service in a Red Hat Cluster Suite.
- Chapter 10, Configuration Considerations summarizes some general concerns to consider when configuring an NFS service over a GFS file system in a Red Hat Cluster Suite.
Chapter 3. Prerequisite Configuration
Table 3.1. Configuration Prerequisites
| Component | Name | Comment |
|---|---|---|
| cluster | nfsclust | five-node cluster |
| cluster node | clusternode1.example.com | node in cluster nfsclust configured with a fencing device of nfs-apc |
| cluster node | clusternode2.example.com | node in cluster nfsclust configured with a fencing device of nfs-apc |
| cluster node | clusternode3.example.com | node in cluster nfsclust configured with a fencing device of nfs-apc |
| cluster node | clusternode4.example.com | node in cluster nfsclust configured with a fencing device of nfs-apc |
| cluster node | clusternode5.example.com | node in cluster nfsclust configured with a fencing device of nfs-apc |
| LVM volume | /dev/myvg/myvol | The LVM device on which the GFS file system is created |
| GFS file system | mygfs | The GFS file system to export by means of NFS, built on LVM volume /dev/myvg/myvol, mounted at /mnt/gfs, and shared among the members of cluster nfsclust |
| IP address | 10.15.86.96 | The IP address for the NFS service |
| NFS Client | nfsclient1.example.com | System that will access the NFS service |
| NFS Client | nfsclient2.example.com | System that will access the NFS service |
| NFS Client | nfsclient3.example.com | System that will access the NFS service |
| NFS Client | nfsclient4.example.com | System that will access the NFS service |
| NFS Client | nfsclient5.example.com | System that will access the NFS service |
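The prerequisite LVM volume and GFS file system are assumed to be in place before the procedures in this document begin. As a rough sketch only, they could have been created from one cluster node (with clvmd running) using commands along the following lines; the shared device name /dev/sdb and the volume size are placeholders, not values taken from this example.

# Initialize the shared device and create a clustered volume group and logical volume.
[root@clusternode1 ~]# pvcreate /dev/sdb
[root@clusternode1 ~]# vgcreate -c y myvg /dev/sdb
[root@clusternode1 ~]# lvcreate -L 100G -n myvol myvg
# Make the GFS file system with the DLM lock manager, a lock table named for
# cluster nfsclust, and five journals (one per cluster node).
[root@clusternode1 ~]# gfs_mkfs -p lock_dlm -t nfsclust:mygfs -j 5 /dev/myvg/myvol
# Create the mount point on every node in the cluster.
[root@clusternode1 ~]# mkdir -p /mnt/gfs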
Chapter 4. Components to Configure
The procedures in this document configure the cluster resources summarized in Table 4.1, “Cluster Resources to Configure” and then group those resources into an NFS cluster service named nfssvc.
Table 4.1. Cluster Resources to Configure
| Resource Type | Resource Name | Description |
|---|---|---|
| IP Address | 10.15.86.96 | The IP address for the NFS service |
| GFS | mygfs | The GFS file system that will be exported through the NFS service |
| NFS Export | mynfs | The NFS export for the GFS file system |
| NFS Client | nfsclient1 | The NFS client system nfsclient1.example.com |
| NFS Client | nfsclient2 | The NFS client system nfsclient2.example.com |
| NFS Client | nfsclient3 | The NFS client system nfsclient3.example.com |
| NFS Client | nfsclient4 | The NFS client system nfsclient4.example.com |
| NFS Client | nfsclient5 | The NFS client system nfsclient5.example.com |
Table 4.2, “Parameters to Configure for NFS Cluster Service nfssvc” summarizes the resource configuration of nfssvc. The names of the resources are those that you assign when you define them, as noted in Table 4.1, “Cluster Resources to Configure”.
Table 4.2. Parameters to Configure for NFS Cluster Service nfssvc
| Resource | Name | Comment |
|---|---|---|
| IP Address Resource | 10.15.86.96 | The IP Address resource is added directly to the service; it is not a child of another resource. |
| GFS Resource | mygfs | GFS resource mygfs is the parent of the NFS Export resource. |
| NFS Export Resource | mynfs | NFS Export resource mynfs is a child of GFS resource mygfs. |
| NFS Client Resource | nfsclient1 | NFS Client resource nfsclient1 is a child of NFS Export resource mynfs. |
| NFS Client Resource | nfsclient2 | NFS Client resource nfsclient2 is a child of NFS Export resource mynfs. |
| NFS Client Resource | nfsclient3 | NFS Client resource nfsclient3 is a child of NFS Export resource mynfs. |
| NFS Client Resource | nfsclient4 | NFS Client resource nfsclient4 is a child of NFS Export resource mynfs. |
| NFS Client Resource | nfsclient5 | NFS Client resource nfsclient5 is a child of NFS Export resource mynfs. |
Chapter 5. Configuring the Cluster Resources
- The IP address for the NFS service, as described in Section 5.1, “Configuring an IP Address Resource”.
- The GFS file system, as described in Section 5.2, “Configuring a GFS Resource”.
- The NFS export, as described in Section 5.3, “Configuring an NFS Export Resource”.
- The NFS clients, as described in Section 5.4, “Configuring NFS Client Resources”.
- As an administrator of luci, select the cluster tab.
- From the Choose a cluster to administer screen, select the cluster to which you will add resources. In this example, that is the cluster with the name nfsclust.
- At the menu for cluster nfsclust (below the clusters menu), click Resources. This displays the menu items for resource configuration: Add a Resource and Configure a Resource.
- Click Add a Resource. This causes the Add a Resource page to be displayed.
5.1. Configuring an IP Address Resource
This section provides the procedure for adding an IP address resource to cluster nfsclust.
- At the Add a Resource page for cluster nfsclust, click the drop-down box under Select a Resource Type and select IP Address.
- For IP Address, enter 10.15.86.96.
- Leave the Monitor Link checkbox selected to enable link status monitoring of the IP address resource.
- Click Submit. Clicking Submit displays a verification page. Verifying that you want to add this resource displays a progress page followed by the display of the Resources page, which displays the resources that have been configured for the cluster.
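The IP address resource created by this procedure appears in the cluster configuration file as the following entry (see Chapter 9, The Cluster Configuration File).

<ip address="10.15.86.96" monitor_link="1"/>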
5.2. Configuring a GFS Resource
This section provides the procedure for adding GFS resource mygfs to cluster nfsclust.
- At the Add a Resource page for cluster nfsclust, click the drop-down box under Select a Resource Type and select GFS.
- For Name, enter mygfs.
- For Mount point, enter /mnt/gfs. This is the path at which the GFS file system is mounted.
- For Device, enter /dev/myvg/myvol. This is the LVM logical volume on which the GFS file system was created.
- The Options field specifies the mount options for the GFS file system. For this example, we are mounting the file system with the rw (read-write) and localflocks options.
- Leave the File System ID field blank. Leaving the field blank causes a file system ID to be assigned automatically after you submit the resource.
- Leave the Force unmount checkbox unchecked. Force unmount kills all processes using the mount point to free up the mount when it tries to unmount. With GFS resources, the mount point is not unmounted at service tear-down unless this box is checked.
- Click Submit and accept the verification screen.
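The GFS resource created by this procedure appears in the cluster configuration file as the following entry; the fsid value shown here was assigned automatically (see Chapter 9, The Cluster Configuration File).

<clusterfs device="/dev/myvg/myvol" force_unmount="0" fsid="39669"
           fstype="gfs" mountpoint="/mnt/gfs" name="mygfs"
           options="rw,localflocks"/>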
5.3. Configuring an NFS Export Resource
This section provides the procedure for adding NFS export resource mynfs to cluster nfsclust.
- At the Add a Resource page for cluster nfsclust, click the drop-down box under Select a Resource Type and select NFS Export.
- For Name, enter mynfs.
- Click Submit and accept the verification screen.
5.4. Configuring NFS Client Resources
This section provides the procedure for adding the NFS client resources to cluster nfsclust. The procedure for configuring the first two clients only is laid out explicitly.
Follow this procedure to add NFS client resource nfsclient1 to cluster nfsclust.
- At the Add a Resource page for cluster nfsclust, click the drop-down box under Select a Resource Type and select NFS Client.
- For Name, enter nfsclient1.
- For Target, enter nfsclient1.example.com. This is the first NFS client system.
- The Options field specifies additional client access rights. Specify rw (read-write) in this field. For more information, refer to the General Options section of the exports(5) man page.
- Check the Allow Recover checkbox. This indicates that if someone removes the export from the export list, the system will recover the export inline without taking down the NFS service.
- Click Submit and accept the verification screen.
Follow this procedure to add NFS client resource nfsclient2 to cluster nfsclust.
- At the Add a Resource page for cluster nfsclust, click the drop-down box under Select a Resource Type and select NFS Client.
- For Name, enter nfsclient2.
- For Target, enter nfsclient2.example.com. This is the second NFS client system.
- Leave the Options field blank.
- Check the Allow Recover checkbox.
- Click Submit and accept the verification screen.
Configure the remaining NFS client resources by following the same procedure, using nfsclient3, nfsclient4, and nfsclient5 as the names of the resources and nfsclient3.example.com, nfsclient4.example.com, and nfsclient5.example.com as the targets.
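The NFS export resource from Section 5.3 and the NFS client resources from this section appear in the cluster configuration file as the following entries; the remaining client resources follow the same pattern with their own names and targets (see Chapter 9, The Cluster Configuration File).

<nfsexport name="mynfs"/>
<nfsclient allow_recover="1" name="nfsclient1" options="rw"
           target="nfsclient1.example.com"/>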
Chapter 6. Configuring an NFS Cluster Service
- Add a service to the cluster and provide a name for the service, as described in Section 6.1, “Add a Service to the Cluster”.
- Add an IP address resource to service, as described in Section 6.2, “Adding an IP Address Resource to an NFS Service”.
- Add a GFS resource to the service, as described in Section 6.3, “Adding a GFS Resource to an NFS Service”.
- Add an NFS export resource to the service, as described in Section 6.4, “Adding an NFS Export Resource to an NFS Service”.
- Add the NFS client resources to the services, as described in Section 6.5, “Adding NFS Client Resources to an NFS Service”.
6.1. Add a Service to the Cluster
- As an administrator of luci, select the cluster tab.
- From the Choose a cluster to administer screen, select the cluster to which you will add resources. In this example, that is the cluster with the name nfsclust.
- At the menu for cluster nfsclust (below the clusters menu), click Services. This displays the menu items for service configuration: Add a Service and Configure a Service.
- Click Add a Service. This causes the Add a Service page to be displayed.
- For Service name, enter nfssvc.
- Leave the checkbox labeled Automatically start this service checked, which is the default setting. When the checkbox is checked, the service is started automatically when a cluster is started and running. If the checkbox is not checked, the service must be started manually any time the cluster comes up from the stopped state.
- Leave the Run exclusive checkbox unchecked. This checkbox sets a policy wherein the service only runs on nodes that have no other services running on them. Since an NFS service consumes few resources, two services could run together on the same node without contention for resources and you do not need to check this checkbox.
- For Failover Domain, leave the drop-down box at its default value of None. In this configuration, all of the nodes in the cluster may be used for failover.
- For Recovery policy, the drop-down box displays Restart by default. Click the drop-down box and select Relocate. This policy indicates that if the service fails, the system should relocate the service to another node rather than restarting it on the node where it is currently running.
- Add the NFS service resources to this service, as described in the following sections.
- After you have added the NFS resources to the service, click Submit. The system prompts you to verify that you want to create this service. Confirming that you want to create the service causes a progress page to be displayed, followed by the display of the Services page for the cluster. That page displays the services that have been configured for the cluster.
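The service definition created by this procedure appears in the cluster configuration file as the following element; the resources added in the following sections are nested inside it, and the ellipsis is a placeholder, not part of the file (see Chapter 9, The Cluster Configuration File).

<service autostart="1" exclusive="0" name="nfssvc" recovery="relocate">
    ...
</service>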
6.2. Adding an IP Address Resource to an NFS Service
This section provides the procedure for adding an IP address resource to the NFS service nfssvc.
- At the Add a Service page for cluster nfsclust, click Add a resource to this service. This causes the display of two drop-down boxes: one for selecting an existing global resource and one for adding a new local resource. For this example, we will use global resources, which are the resources that were previously added as global resources. Adding a new local resource would add a resource that is available only to this service.
- Click the drop-down box for existing global resources. This displays the resources that have been defined for this cluster.
- Select 10.15.86.96 (the IP Address resource). This returns you to the Add a Service page with the IP Address resource displayed. Leave the Monitor Link checkbox selected, which is the default value. This enables link status monitoring of the IP address resource.
6.3. Adding a GFS Resource to an NFS Service
This section provides the procedure for adding a GFS resource to the NFS service nfssvc.
- At the Add a Service page for cluster nfsclust, click Add a resource to this service.
- Click the drop-down box for existing global resources. This displays the resources that have been defined for this cluster.
- Select mygfs (the GFS resource). This returns you to the Add a Service page with the GFS resource and the parameters that you defined in Section 5.2, “Configuring a GFS Resource” displayed.
6.4. Adding an NFS Export Resource to an NFS Service
- At the Add a Service page for cluster nfsclust, below the mygfs GFS resource display, click Add a child. This causes the display of two drop-down boxes: one for selecting an existing global resource and one for adding a new local resource.
- Click the drop-down box for existing global resources. This displays the resources that have been defined for this cluster.
- Select mynfs (the NFS Export resource). This returns you to the Add a Service page with the NFS Export resource displayed.
6.5. Adding NFS Client Resources to an NFS Service
- At the Add a Service page for cluster nfsclust, below the mynfs NFS Export resource display, click Add a child. This causes the display of two drop-down boxes: one for selecting an existing global resource and one for adding a new local resource.
- Click the drop-down box for existing global resources. This displays the resources that have been defined for this cluster.
- Select nfsclient1 (the first NFS Client resource). This returns you to the Add a Service page with the NFS client resource and the parameters you defined in Section 5.4, “Configuring NFS Client Resources” displayed. Repeat this procedure for the remaining NFS client resources, nfsclient2 through nfsclient5; each NFS Client resource is a child of NFS Export resource mynfs.
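With all of the resources added, the complete service appears in the cluster configuration file as follows, with the parent/child relationships from Table 4.2, “Parameters to Configure for NFS Cluster Service nfssvc” reflected in the nesting (see Chapter 9, The Cluster Configuration File).

<service autostart="1" exclusive="0" name="nfssvc" recovery="relocate">
    <ip ref="10.15.86.96"/>
    <clusterfs ref="mygfs">
        <nfsexport ref="mynfs">
            <nfsclient ref="nfsclient1"/>
            <nfsclient ref="nfsclient2"/>
            <nfsclient ref="nfsclient3"/>
            <nfsclient ref="nfsclient4"/>
            <nfsclient ref="nfsclient5"/>
        </nfsexport>
    </clusterfs>
</service>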
Chapter 7. Testing the NFS Cluster Service
- If the GFS file system in the nfsclust cluster is currently empty, populate the file system with test data.
- Log in to one of the client systems you defined as a target.
- Mount the NFS file system on the client system, and check to see whether the data on that file system is available. (An example mount command is shown after this procedure.)
- On the luci server, select Nodes from the menu for nfsclust. This displays the nodes in nfsclust and indicates which node is running the nfssvc service.
- Each node has a drop-down box of tasks that can be performed on that node. For the node on which the nfssvc service is running, select the task that fences the node.
- Refresh the screen. The nfssvc service should now be running on a different node.
- On the client system, check whether the file system you mounted is still available. Even though the NFS service is now running on a different node in the cluster, the client system should detect no difference.
- Restore the system to its previous state:
- Unmount the file system from the client system.
- Delete any test data you created in the GFS file system.
- In the drop-down box for the node that you fenced, select the task that returns the node to normal operation in the cluster.
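The following is a minimal sketch of the client-side checks in this procedure, run on nfsclient1.example.com; the local mount point /mnt/nfstest and the test file name are assumptions made for this illustration, not part of the configuration.

# Mount the exported GFS file system and record a checksum of a test file.
[root@nfsclient1 ~]# mkdir -p /mnt/nfstest
[root@nfsclient1 ~]# mount -t nfs 10.15.86.96:/mnt/gfs /mnt/nfstest
[root@nfsclient1 ~]# md5sum /mnt/nfstest/testfile
# Fence the node running nfssvc from luci, then confirm that the data is still
# available and unchanged after the service relocates.
[root@nfsclient1 ~]# md5sum /mnt/nfstest/testfile
[root@nfsclient1 ~]# umount /mnt/nfstest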
Note
Red Hat Enterprise Linux provides the netconsole and kdump services for capturing diagnostic information from a system. You may find it useful to implement and test these tools before a system goes into production, to help in troubleshooting down the line.
Chapter 8. Troubleshooting
- Connect to one of the nodes in the cluster and execute the clustat(8) command. This command runs a utility that displays the status of the cluster. It shows membership information, quorum view, and the state of all configured user services. The following example shows the output of the clustat(8) command.

[root@clusternode4 ~]# clustat
Cluster Status for nfsclust @ Wed Dec  3 12:37:22 2008
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 clusternode5.example.com                   1 Online, rgmanager
 clusternode4.example.com                   2 Online, Local, rgmanager
 clusternode3.example.com                   3 Online, rgmanager
 clusternode2.example.com                   4 Online, rgmanager
 clusternode1.example.com                   5 Online, rgmanager

 Service Name             Owner (Last)                   State
 ------- ----             ----- ------                   -----
 service:nfssvc           clusternode2.example.com       starting

In this example, clusternode4 is the local node since it is the host from which the command was run. If rgmanager did not appear in the Status category, it could indicate that cluster services are not running on the node.
- Connect to one of the nodes in the cluster and execute the group_tool(8) command. This command provides information that you may find helpful in debugging your system. The following example shows the output of the group_tool(8) command.

[root@clusternode1 ~]# group_tool
type             level name       id       state
fence            0     default    00010005 none
[1 2 3 4 5]
dlm              1     clvmd      00020005 none
[1 2 3 4 5]
dlm              1     rgmanager  00030005 none
[3 4 5]
dlm              1     mygfs      007f0005 none
[5]
gfs              2     mygfs      007e0005 none
[5]

The state of the group should be none. The numbers in the brackets are the node ID numbers of the cluster nodes in the group. The output of clustat shows which node IDs are associated with which nodes. If you do not see a node number in the group, it is not a member of that group. For example, if a node ID is not in the dlm/rgmanager group, it is not using the rgmanager dlm lock space (and probably is not running rgmanager). The level of a group indicates the recovery ordering: 0 is recovered first, 1 is recovered second, and so forth.
- Connect to one of the nodes in the cluster and execute the cman_tool nodes -f command. This command provides information about the cluster nodes that you may want to look at. The following example shows the output of the cman_tool nodes -f command.

[root@clusternode1 ~]# cman_tool nodes -f
Node  Sts   Inc   Joined               Name
   1   M    752   2008-10-27 11:17:15  clusternode5.example.com
   2   M    752   2008-10-27 11:17:15  clusternode4.example.com
   3   M    760   2008-12-03 11:28:44  clusternode3.example.com
   4   M    756   2008-12-03 11:28:26  clusternode2.example.com
   5   M    744   2008-10-27 11:17:15  clusternode1.example.com

The Sts heading indicates the status of a node. A status of M indicates the node is a member of the cluster. A status of X indicates that the node is dead. The Inc heading indicates the incarnation number of a node, which is for debugging purposes only.
- Check whether the cluster.conf file is identical on each node of the cluster. If you configure your system with Conga, as in the example provided in this document, these files should be identical, but one of the files may have been accidentally deleted or altered.
cluster.confis identical in each node of the cluster. If you configure your system with Conga, as in the example provided in this document, these files should be identical, but one of the files may have accidentally been deleted or altered. - In addition to using Conga to fence a node in order to test whether failover is working properly as described in Chapter 7, Testing the NFS Cluster Service, you could disconnect the ethernet connection between cluster members. You might try disconnecting one, two, or three nodes, for example. This could help isolate where the problem is.
- If you are having trouble mounting or modifying an NFS volume, check whether the cause is one of the following:
- The network between server and client is down.
- The storage devices are not connected to the system.
- More than half of the nodes in the cluster have crashed, rendering the cluster inquorate. This stops the cluster.
- The GFS file system is not mounted on the cluster nodes.
- The GFS file system is not writable.
- The IP address you defined in the cluster.conf file is not bound to the correct interface/NIC (sometimes the ip.sh script does not perform as expected).
- Execute a showmount -e command on the node running the cluster service. If it shows the correct five exports, check your firewall configuration for all of the ports necessary for NFS. (A short checklist of commands follows this list.)
- If SELinux is currently in enforcing mode on your system, check your /var/log/audit/audit.log file for any relevant messages. If you are using NFS to serve home directories, check whether the correct SELinux boolean value for nfs_home_dirs has been set to 1; this is required if you want to use NFS-based home directories on a client that is running SELinux. If you do not set this value, you can mount the directories as root but cannot use them as home directories for your users.
- Check the /var/log/messages file for error messages from the NFS daemon.
- If you see the expected results locally at the cluster nodes and between the cluster nodes but not at the defined clients, check the firewall configuration at the clients.
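As a convenience, several of the checks above can be run as a quick checklist on one of the cluster nodes; this is only a sketch, and the choice of clusternode2 is arbitrary.

[root@clusternode2 ~]# clustat                  # cluster membership and service state
[root@clusternode2 ~]# cman_tool nodes -f       # node status and incarnation numbers
[root@clusternode2 ~]# group_tool               # fence, dlm, and gfs group membership
[root@clusternode2 ~]# showmount -e localhost   # verify that the five exports are visible
[root@clusternode2 ~]# getenforce               # current SELinux mode
[root@clusternode2 ~]# tail /var/log/messages   # recent messages, including NFS daemon errors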
Chapter 9. The Cluster Configuration File
The following shows the cluster configuration file (/etc/cluster/cluster.conf) for cluster nfsclust as it appears before the NFS service is configured.
<?xml version="1.0"?>
<cluster alias="nfsclust" config_version="1" name="nfsclust">
<fence_daemon post_fail_delay="0" post_join_delay="3"/>
<clusternodes>
<clusternode name="clusternode1.example.com" nodeid="1" votes="1">
<fence>
<method name="apc-nfs">
<device name="apc1" switch="3" port="1"/>
</method>
</fence>
</clusternode>
<clusternode name="clusternode2.example.com" nodeid="2" votes="1">
<fence>
<method name="apc-nfs">
<device name="apc1" switch="3" port="2"/>
</method>
</fence>
</clusternode>
<clusternode name="clusternode3.example.com" nodeid="3" votes="1">
<fence>
<method name="apc-nfs">
<device name="apc1" switch="3" port="3"/>
</method>
</fence>
</clusternode>
<clusternode name="clusternode4.example.com" nodeid="4" votes="1">
<fence>
<method name="apc-nfs">
<device name="apc1" switch="3" port="4"/>
</method>
</fence>
</clusternode>
<clusternode name="clusternode5.example.com" nodeid="5" votes="1">
<fence>
<method name="apc-nfs">
<device name="apc1" switch="3" port="5"/>
</method>
</fence>
</clusternode>
</clusternodes>
<cman/>
<fencedevices>
<fencedevice name="apc1" agent="fence_apc" ipaddr="link-apc" login="apc"
passwd="apc"/>
</fencedevices>
<rm/>
</cluster>
The following shows the cluster configuration file after the cluster resources and the NFS service have been configured.
<?xml version="1.0"?>
<cluster alias="nfsclust" config_version="10" name="nfsclust">
<fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
<clusternodes>
<clusternode name="clusternode1.example.com" nodeid="1" votes="1">
<fence>
<method name="apc-nfs">
<device name="apc1" switch="3" port="1"/>
</method>
</fence>
</clusternode>
<clusternode name="clusternode2.example.com" nodeid="2" votes="1">
<fence>
<method name="apc-nfs">
<device name="apc1" switch="3" port="2"/>
</method>
</fence>
</clusternode>
<clusternode name="clusternode3.example.com" nodeid="3" votes="1">
<fence>
<method name="apc-nfs">
<device name="apc1" switch="3" port="3"/>
</method>
</fence>
</clusternode>
<clusternode name="clusternode4.example.com" nodeid="4" votes="1">
<fence>
<method name="apc-nfs">
<device name="apc1" switch="3" port="4"/>
</method>
</fence>
</clusternode>
<clusternode name="clusternode5.example.com" nodeid="5" votes="1">
<fence>
<method name="apc-nfs">
<device name="apc1" switch="3" port="5"/>
</method>
</fence>
</clusternode>
</clusternodes>
<cman/>
<fencedevices>
<fencedevice name="apc1" agent="fence_apc" ipaddr="link-apc" login="apc"
passwd="apc"/>
</fencedevices>
<rm>
<failoverdomains/>
<resources>
<ip address="10.15.86.96" monitor_link="1"/>
<clusterfs device="/dev/myvg/myvol" force_unmount="0"
fsid="39669" fstype="gfs" mountpoint="/mnt/gfs"
name="mygfs" options="rw,localflocks"/>
<nfsexport name="mynfs"/>
<nfsclient allow_recover="1" name="nfsclient1" options="rw"
target="nfsclient1.example.com"/>
<nfsclient allow_recover="1" name="nfsclient2" options="rw"
target="nfsclient2.example.com"/>
<nfsclient allow_recover="1" name="nfsclient3" options="rw"
target="nfsclient3.example.com"/>
<nfsclient allow_recover="1" name="nfsclient4" options="rw"
target="nfsclient4.example.com"/>
<nfsclient allow_recover="1" name="nfsclient5" options="rw"
target="nfsclient5.example.com"/>
</resources>
<service autostart="1" exclusive="0" name="nfssvc" recovery="relocate">
<ip ref="10.15.86.96"/>
<clusterfs ref="mygfs">
<nfsexport ref="mynfs">
<nfsclient ref="nfsclient1"/>
<nfsclient ref="nfsclient2"/>
<nfsclient ref="nfsclient3"/>
<nfsclient ref="nfsclient4"/>
<nfsclient ref="nfsclient5"/>
</nfsexport>
</clusterfs>
</service>
</rm>
</cluster>
Chapter 10. Configuration Considerations
10.1. Locking Considerations
Warning
When a GFS or GFS2 file system is exported through NFS, the file system should be mounted with the localflocks option. The intended effect of this is to allow the NFS server to manage locks on the GFS or GFS2 file system without the extra overhead of passing through the GFS and GFS2 locking layers.
For more information on the localflocks mount option and when it may be required, see the Global File System and Global File System 2 manuals.
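In this example the localflocks option is set through the Options field of the GFS resource (options="rw,localflocks" in the configuration file shown in Chapter 9, The Cluster Configuration File). For illustration only, a manual mount of the same file system with this option would look like the following; in the clustered configuration the mount is performed by the cluster service rather than by hand.

[root@clusternode1 ~]# mount -t gfs -o rw,localflocks /dev/myvg/myvol /mnt/gfs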
10.2. Additional Configuration Considerations
- Red Hat supports only Red Hat Cluster Suite configurations using NFSv3 with locking in an active/passive configuration with the following characteristics:
- The backend file system is a GFS or GFS2 file system running on a 2 to 16 node cluster.
- An NFSv3 server is defined as a service exporting the entire GFS/GFS2 file system from a single cluster node at a time.
- The NFS server can fail over from one cluster node to another (active/passive configuration).
- No access to the GFS/GFS2 file system is allowed except through the NFS server. This includes both local GFS/GFS2 file system access as well as access through Samba or Clustered Samba.
- The GFS or GFS2 file system must be mounted with the localflocks option.
- There is no NFS quota support on the system.
This configuration provides high availability (HA) for the file system and reduces system downtime since a failed node does not result in the requirement to execute the fsck command when failing the NFS server over from one node to another.
- The fsid= NFS option is mandatory for NFS exports of GFS/GFS2. (An illustrative export entry is shown after this list.)
- There is currently an issue with failover and failback when using NFSv3 over GFS with TCP when the following scenario comes into play:
- Client A mounts from server 1.
- The system administrator moves NFS service from server 1 to server 2.
- The client resumes I/O operations.
- The system administrator moves NFS service from server 2 to server 1.
In this situation, the NFS service on server 1 does not get shut down because this would render other NFS services inoperable. Should this situation arise, you should move all NFS services off of server 1 and run the service nfs restart command. After this you can safely migrate your NFS services back to server 1.
- If problems arise with your cluster (for example, the cluster becomes inquorate and fencing is not successful), the clustered logical volumes and the GFS/GFS2 file system will be frozen and no access is possible until the cluster is quorate. You should consider this possibility when determining whether a simple failover solution such as the one defined in this procedure is the most appropriate for your system.
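In the configuration described in this document, the exports are managed by the cluster service and the file system ID is carried by the GFS resource (fsid="39669" in Chapter 9, The Cluster Configuration File). Purely as an illustration of the fsid= requirement for an export managed by hand, a hypothetical /etc/exports entry might look like the following; the client specification and option list are assumptions, not part of this example.

/mnt/gfs    nfsclient1.example.com(rw,sync,fsid=39669)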
Appendix A. Revision History
| Revision | Date |
|---|---|
| Revision 3-17.33.400 | 2013-10-31 |
| Revision 3-17.33 | July 24 2012 |
| Revision 1.0-2 | Thu Jul 21 2011 |
| Revision 1.0-1 | Tue Aug 3 2010 |
| Revision 1.0-0 | Thu Jan 29 2009 |
Index
A
- Allow Recover checkbox, Configuring NFS Client Resources
C
- child resource
- clustat command, Troubleshooting
- cluster nodes, prerequisite, Prerequisite Configuration
- cluster resources, Components to Configure
- cluster service
- adding GFS resource, Adding a GFS Resource to an NFS Service
- adding IP address resource, Adding an IP Address Resource to an NFS Service
- adding NFS client resource, Adding NFS Client Resources to an NFS Service
- adding NFS export resource, Adding an NFS Export Resource to an NFS Service
- adding to cluster, Add a Service to the Cluster
- composition, Components to Configure
- cluster, prerequisite, Prerequisite Configuration
- cluster.conf file, The Cluster Configuration File
- cman_tool command, Troubleshooting
F
- failover domain, Prerequisite Configuration, Add a Service to the Cluster
- Force Unmount checkbox, Configuring a GFS Resource
G
- getenforce command, Troubleshooting
- GFS file system
- adding resource to cluster, Configuring a GFS Resource
- options, Configuring a GFS Resource
- prerequisite, Prerequisite Configuration
- resource, Components to Configure
- GFS resource
- adding to cluster service, Adding a GFS Resource to an NFS Service
- group_tool command, Troubleshooting
I
- IP address
- adding resource to cluster, Configuring an IP Address Resource
- prerequisite, Prerequisite Configuration
- resource, Components to Configure
- IP address resource
- adding to cluster service, Adding an IP Address Resource to an NFS Service
- IP, floating, NFS over GFS in a Red Hat Cluster
K
- kdump service, Testing the NFS Cluster Service
L
- LVM volume, prerequisite, Prerequisite Configuration
M
- Monitor Link checkbox, Configuring an IP Address Resource, Adding an IP Address Resource to an NFS Service
N
- netconsole service, Testing the NFS Cluster Service
- NFS
- failover and failback issues, Locking Considerations
- in a cluster, Locking Considerations
- locks, Locking Considerations
- startup, NFS over GFS in a Red Hat Cluster
- Version 3, NFS over GFS in a Red Hat Cluster
- NFS client
- adding resource to cluster, Configuring NFS Client Resources
- options, Configuring NFS Client Resources
- resource, Components to Configure
- NFS client resource
- adding to cluster service, Adding NFS Client Resources to an NFS Service
- NFS client systems, prerequisite, Prerequisite Configuration
- NFS export
- adding resource to cluster, Configuring an NFS Export Resource
- resource, Components to Configure
- NFS export resource
- adding to cluster service, Adding an NFS Export Resource to an NFS Service
- NFS service
- configuration overview, Configuring an NFS Cluster Service
- testing, Testing the NFS Cluster Service
P
- parent resource
- definition, Configuring an NFS Cluster Service
- POSIX locks, Locking Considerations
- Prerequisite Configuration Components, Prerequisite Configuration
R
- recovery policy, Add a Service to the Cluster
- resource
- adding to cluster, Configuring the Cluster Resources
- GFS file system, Configuring a GFS Resource
- IP address, Configuring an IP Address Resource
- NFS client, Configuring NFS Client Resources
- NFS export, Configuring an NFS Export Resource
S
- SELinux enforcing mode, Troubleshooting
- SELinux nfs_home_dirs value, Troubleshooting
- service
- adding to cluster, Add a Service to the Cluster
- automatic start, Add a Service to the Cluster
- showmount command, Troubleshooting
T
- testing
- NFS service, Testing the NFS Cluster Service
