8.2. Updating NFS-Ganesha in the Offline Mode
Execute the following steps to update the NFS-Ganesha service from Red Hat Gluster Storage 3.1 to Red Hat Gluster Storage 3.1.1 or later:
Note
NFS-Ganesha does not support in-service updates, so all running services and I/O must be stopped before starting the update process.
- Stop the nfs-ganesha service on all the nodes of the cluster by executing the following command:
# service nfs-ganesha stop
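Because this step and several that follow must be run on every node, an SSH loop can save time. The following is only a sketch: it assumes passwordless root SSH and the hypothetical node names nfs1 through nfs4 used in the examples below; substitute your own node names.
# for node in nfs1 nfs2 nfs3 nfs4; do ssh root@${node} "service nfs-ganesha stop"; done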
- Verify the status by executing the following command on all the nodes:
# pcs status
- Stop the glusterd service and kill any running gluster process on all the nodes:
# service glusterd stop
# pkill glusterfs
# pkill glusterfsd
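As an optional verification (not part of the original procedure), pgrep can confirm that no gluster processes remain; if the command prints nothing, all gluster processes have exited:
# pgrep -l gluster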
- Place the entire cluster in standby mode on all the nodes by executing the following command:
# pcs cluster standby <node-name>
For example:
# pcs cluster standby nfs1
# pcs status
Cluster name: G1455878027.97
Last updated: Tue Feb 23 08:05:13 2016
Last change: Tue Feb 23 08:04:55 2016
Stack: cman
Current DC: nfs1 - partition with quorum
Version: 1.1.11-97629de
4 Nodes configured
16 Resources configured
Node nfs1: standby
Online: [ nfs2 nfs3 nfs4 ]
....
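Because pcs communicates with the whole cluster, the standby command can be issued for each node from a single host. A minimal sketch, assuming the same hypothetical node names as above:
# for node in nfs1 nfs2 nfs3 nfs4; do pcs cluster standby ${node}; done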
- Stop the cluster software on all the nodes using pcs, by executing the following command:
# pcs cluster stop <node-name>
Ensure that it stops pacemaker and cman.
For example:
# pcs cluster stop nfs1
nfs1: Stopping Cluster (pacemaker)...
nfs1: Stopping Cluster (cman)...
- Update the NFS-Ganesha packages on all the nodes by executing the following command:
# yum update nfs-ganesha
# yum update glusterfs-ganesha
Note
- This installs the glusterfs-ganesha and nfs-ganesha-gluster packages along with other dependent gluster packages.
- Some warnings related to shared_storage might appear during the upgrade; these can be ignored.
- Verify on all the nodes that the required packages are updated, that the nodes are fully functional, and that they are using the correct versions. If anything does not seem correct, do not proceed until the situation is resolved. Contact Red Hat Global Support Services for assistance if needed.
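One way to perform this verification is to query the RPM database on each node and compare the reported versions against the expected release. This is a suggested check, not part of the original procedure:
# rpm -q nfs-ganesha nfs-ganesha-gluster glusterfs-ganesha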
- Start the cluster software on all the nodes by executing the following command:
# pcs cluster start <node-name>
For example:
# pcs cluster start nfs1
nfs1: Starting Cluster...
- Check the pcs status output to determine whether everything appears as it should. Once the nodes are functioning properly, reactivate each node for service by taking it out of standby mode with the following command:
# pcs cluster unstandby <node-name>
For example:
# pcs cluster unstandby nfs1
# pcs status
Cluster name: G1455878027.97
Last updated: Tue Feb 23 08:14:01 2016
Last change: Tue Feb 23 08:13:57 2016
Stack: cman
Current DC: nfs3 - partition with quorum
Version: 1.1.11-97629de
4 Nodes configured
16 Resources configured
Online: [ nfs1 nfs2 nfs3 nfs4 ]
....
Make sure there are no failures or unexpected results.
- Start the glusterd service on all the nodes by executing the following command:
# service glusterd start
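As an optional sanity check after starting glusterd (an addition, not part of the original procedure), gluster peer status can confirm that all peers are connected before proceeding:
# gluster peer status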
- Mount the shared storage volume that was created before the update on all the nodes:
# mount -t glusterfs localhost:/gluster_shared_storage /var/run/gluster/shared_storage
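To confirm that the shared storage volume is mounted on each node, /proc/mounts can be checked (a verification sketch added for convenience):
# grep gluster_shared_storage /proc/mounts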
- Verify whether gluster NFS (glusterfs-nfs) is running on any of the nodes after the update:
# ps aux | grep nfs
- If gluster NFS (glusterfs-nfs) is found running on any node, disable it by executing the following command:
# gluster volume set <volname> nfs.disable on
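Because nfs.disable is set per volume, a loop over all volumes can apply the setting everywhere it is needed. A minimal sketch, relying on gluster volume list printing one volume name per line; review the volume list first if gluster NFS should remain enabled on any volume:
# for vol in $(gluster volume list); do gluster volume set ${vol} nfs.disable on; done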
- Start the nfs-ganesha service on all the nodes by executing the following command:
# service nfs-ganesha start
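As an optional check (not part of the original procedure), showmount can confirm that NFS-Ganesha is serving the expected exports, assuming the MOUNT protocol (NFSv3) is enabled:
# showmount -e localhost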
Important
Verify that all the nodes are fully functional. If anything does not seem correct, then do not proceed until the situation is resolved. Contact Red Hat Global Support Services for assistance if required.
